Like many metaphysical doctrines of the seventeenth century, the concepts of absolute space and time both served an important scientific function and fostered vigorous and productive philosophical exchange. The preeminent instances of these dual roles are, respectively, Newton's Principia and the Clarke-Leibniz Correspondence. Consequently, these texts have been the main focus for early modern scholars of space and time. The papers in this symposium, while recognizing the centrality of these texts, aim in a number of specific ways to broaden and enrich our understanding of 'the absolutes' in seventeenth-century natural philosophy of space and time. First, several of the papers (Dunlop, Gorham, Slowik) investigate anticipations of the Newtonian absolutes in authors known to Newton, especially Gassendi, Barrow, and the Cambridge Platonists, and explore crucial but lesser-known Newtonian texts such as the unpublished tract De Gravitatione. The concern in these papers is not only with the influence on Newton, but also with the intrinsic nature and justification of the particular forms of absolute space and time proposed in the work of these influential authors. A second major theme of the papers (Dunlop, Futch, Gorham) is the epistemic or methodological status of absolute space and time. A major issue for Newton, and for his contemporary and subsequent critics, was the relation between the absolutes and their 'sensible measures' (bodies and motions). This symposium will show how this epistemic concern was at work in the philosophical precursors of Newton, as well as in the development of Leibniz's alternative to absolute space and time. Third, several of the papers (Dunlop, Futch, Gorham) examine the symmetry, or similarity, between absolute space and absolute time, which was a common preoccupation in seventeenth-century treatments.
While many -- such as Locke -- saw deep analogies between space and time, others -- such as Descartes -- conceived of them as importantly different. The symposium explores the complex role played by considerations of symmetry in arguments both for and against the existence of absolute space and time. Fourth, all of the papers aim to shed new light on the perennial relationalist-substantivalist debate. Slowik and Futch concentrate, respectively, on the original champions in the debate, Newton and Leibniz. But rather than engage in the debate itself, each paper presents a novel reading of the metaphysical presuppositions of the opposing positions, which clarifies what is ultimately at stake. Dunlop and Gorham offer distinctive explanations for notable asymmetries in Newton's treatment of absolute space vs. absolute time. Dunlop illuminates Newton's approach to the ontology and measurement of space and time by contrasting his conception of geometrical practice with that of his teacher, Isaac Barrow, while Gorham argues that early modern conceptions of absolute space (but not absolute time) have their origin in updated versions of traditional thought experiments involving God's absolute power of annihilation.
In a recent overview of early modern metaphysics, Nicholas Jolley observes in passing that “philosophical theories that seem primarily tailored to space are often said to apply mutatis mutandis to the case of time”. I examine this tendency to argue by analogy from space to time in two likely influences on Newton: Pierre Gassendi and Isaac Barrow. In the Aristotelian tradition, space implies body and time implies motion. However, this symmetry is broken by the end of the sixteenth century: void space is widely entertained, but time remains wedded to motion or change. There are many reasons for this asymmetry; but here I emphasize a factor that has been little discussed. Practically every early modern natural philosopher who treats space and time invokes – even if only to refute – traditional metaphysical arguments for endowing space with intrinsic dimensionality apart from body. For example, it is argued that God could annihilate a certain part of the world, leaving a vacated space with the same dimensions as the part destroyed. However, as Leibniz recognized, such arguments simply do not extend to time: "If there were a vacuum in space one could establish its size. But if there were a vacuum in time, i.e. duration without change, it would be impossible to establish its length. ... It follows from this that we cannot refute someone who says that two successive worlds are contiguous in time ... with no possible interval between them." The relative paucity of empirical investigations of changeless time, as compared with local spatial vacua, is unsurprising given the inherent conceptual barrier to gauging such duration. It is precisely in this context that analogical arguments from space to time come to the fore. Drawing on recent philosophical discussions of analogical reasoning in science, I show how time is endowed with an intrinsic dimensionality and measure isomorphic to an already articulated absolute space.
This kind of reasoning is pervasive in the seventeenth century, but Gassendi and Barrow are especially influential and instructive instances. The former develops an elaborate version of the traditional thought experiment for absolute space, but his case for changeless time rests primarily on the otherwise strong analogy between time and his geometrical space: both are extended, continuous, neither substance nor accident, and composed of parts. Given these similarities, Gassendi concludes, time is simply the successive counterpart of absolute space: "there exist two diffusions, extensions, or quantities, one permanent, namely place or space, and one successive, namely time or duration". Similarly, in Barrow absolute time emerges as parasitic on absolute space and inherits spatial features: the 'space of motion' (spatium motus), as he refers to time, is conceived as successive existence stretched out along a single spatial dimension. Finally, I will argue that the Gassendi-Barrow methodology is assimilated by Newton in his early accounts of absolute space and time (e.g. De Gravitatione). From this point of view, I suggest, we can make better sense of certain otherwise puzzling features of Newton’s famous Scholium, especially his comparatively limp defense of absolute time vs. absolute space and motion.
This presentation will investigate the influence of Cambridge neo-Platonic concepts and arguments on Newton’s natural philosophy of space, matter, and motion, with special emphasis placed on the manner by which both Henry More and Walter Charleton may have prompted or informed Newton’s ontology of space. A number of important questions, much discussed in the recent literature on Newton, will be addressed. (1) Did Newton accept a form of “substantivalism”, which (among other things) regards space as a form of substance or entity? (2) Did Newton ground the existence of space upon an incorporeal being (i.e., God or World Spirit), as did his neo-Platonic predecessors and contemporaries? (3) What is the status of the parts or points of space in Newton’s scheme, and does his pronouncement on the identity of the points of space (in his tract, De Gravitatione) undermine his alleged substantivalism? As regards (1), a number of important reappraisals by Howard Stein and Robert DiSalle have concluded that the content and function of Newton’s concept of “absolute” space should be kept separate from the question of Newton’s commitment to substantivalism. In Stein’s contribution to The Cambridge Companion to Newton, he further contends, more controversially, that Newton does not sanction substantivalism, a view that may also be evident in various early articles by J. E. McGuire. Concerning (2), Stein rejects any significant neo-Platonic content, as did McGuire’s early work. Finally, the problem of the points of space, raised by an enigmatic discussion in the De Gravitatione, has brought about several recent reappraisals by Nerlich and Huggett concerning the viability of Newton’s espoused substantivalism. This presentation will examine the ontology of Newton’s spatial theory in order to determine the adequacy of these interpretations and arguments.
As will be demonstrated, Newton’s spatial theory is not only deeply imbued with neo-Platonic speculation, contra (2), but these neo-Platonic elements likewise compromise any strong non-substantivalist interpretation, contrary to (1). Throughout our investigation, however, the specific details and subtleties of Newton’s particular brand of neo-Platonism will be contrasted with the ontologies of his contemporaries and predecessors, especially More and Charleton, and by this means a more adequate grasp of the innovations and forward-looking aspects of his theory of space can be obtained. In short, the spatial theory that Newton advances, especially in De Gravitatione, has much in common with a property view of space, such that space is correlated and coextensive with the existence of an immaterial being, namely God (and where the details of this interpretation differ significantly from the conclusions reached by McGuire’s influential early work). Finally, the ontological implications associated with this picture of Newton’s spatial ontology also render his theory immune to some of the problems raised in the current literature, e.g. (3).
I show how the influence of Isaac Barrow accounts for asymmetries in Newton’s treatment of space and time. In the Scholium following the Principia’s Definitions, Newton contrasts the “absolute” quantities space and time with their “sensible measures”. He airs the possibility that no measure of time is “exact”. Time is measured only by motion. Since the rate of a motion, unlike “the flow of absolute time”, can change, the quantity of absolute time marked out by a motion may vary according to when in time’s “flow” the motion occurs. For Newton, measures of space are not subject to this kind of deviation. Absolute space is measured by “relative spaces”, which can move with respect to it. Newton asserts that the quantity of absolute space marked out by a relative space is the same no matter where (in absolute space) the relative space is located: though relative spaces and the parts of absolute space differ “numerically”, they remain “the same in magnitude”. I argue that Newton’s asymmetrical treatment of the quantities reflects his conception of geometry. He intends to prove the need for and possibility of a science, “rational mechanics”, that measures time as accurately as geometry measures space. In conceiving geometry as the science of sensible measures of spatial quantity, Newton follows Isaac Barrow. Barrow subscribes to an Aristotelian view on which measurement, to count as science, must pertain to the natures of things and involve the senses. On this basis, he concludes that the most basic measurement, determination of equality, must involve comparison (e.g. juxtaposition) of objects in space. Yet Barrow holds that a measure’s magnitude can be compared only with that of objects in space, not regions of space. For on his view space exists, and thus has magnitude, only insofar as it can be filled by objects. Hence, a spatial region’s magnitude cannot deviate from that of its potential measures. 
Barrow’s ontology of space, as the potential for magnitude-bearing objects, thus guarantees the accuracy of spatial measures. For Newton as well, the possibility of geometry guarantees the accuracy of spatial measures, but not on the same epistemological and ontological grounds. Without accepting Barrow’s Aristotelian view of science, Newton agrees that measures must be sensible and thus ultimately spatial. So on his view, like Barrow’s, the accuracy of spatial measures is a condition of all measurement. Yet for Newton, its satisfaction is not ontologically guaranteed. Space, on Newton’s more robust conception, actually has magnitude prior to being filled by objects. So its magnitude can deviate from that of its measures. I suggest that the satisfaction of the condition is instead guaranteed by Newton’s conception of geometrical activity. On Newton’s view, geometry is a practice of measurement involving the movement of objects through space. This practice of comparison would not be possible if measures did not retain the same quantity through changes in spatial position.
Few disagreements are more central to the history of the philosophy of space and time than that between substantivalists and relationalists, and few figures are as central to this quarrel as Leibniz. It was in no small measure because of his relationalism that Hans Reichenbach lauded Leibniz for possessing “insights that were too sophisticated” to be understood by his Newtonian adversaries. Leibniz’s commitment to relationalism is beyond dispute, but less obvious is what he takes to be the consequences of this view, especially with respect to the possibility of empty space. Though most are agreed that Leibniz denies that there is, as a contingent matter of fact, empty space, many scholars have concluded that he allows for the metaphysical and physical possibility of a spatial vacuum in a way that he does not allow for the possibility of time without change. This conclusion is reached on the basis of passages in Leibniz’s corpus where he points to a disanalogy between empty time (time without change) and empty space, averring that the former could not in principle be empirically detected, whereas the latter could. It therefore appears that Leibniz denies that space could exist in the absence of any matter whatsoever, but does not rule out the possibility of interstitial vacua between pieces of matter. This position is consistent with the demands of his relationalism since it holds that the existence of space in general is dependent upon the existence of at least some bodies – no bodies whatever, then no space – while also denying that every part of space must be filled with matter. In this paper, I argue against the above view, suggesting that Leibniz is no less committed to the impossibility of local regions of empty space than he is to the impossibility of changeless time.
In particular, I try to show that (some of) the same reasons that lead Leibniz to opt for relationalism and to disavow the Newtonian view that space is ontologically prior to bodies also provide him with grounds for denying not only the actual existence but also the possibility of empty space. Here I focus on two tenets that are at the core of Leibniz’s philosophy of space and time: the Principle of the Identity of Indiscernibles and the Principle of Sufficient Reason. It is widely recognized that these principles are often employed by Leibniz against his Newtonian adversaries. My objective is to show how he similarly employs them to establish, contra a prevalent view among Leibniz exegetes, that he does not countenance the possibility of unoccupied spatial positions.
The history of philosophy of science in the nineteenth century reveals several interesting parallels with the development of logical positivism in the twentieth century. Philosophers such as Auguste Comte, Charles Renouvier, and Henri Poincaré were educated in mathematics and the sciences and outside mainstream academic philosophy. They were concerned with epistemological questions, but at least the earlier philosophers were also concerned with the role of science in society. Over time, however, these broader concerns appear to have given way to more detailed studies of issues in the foundations of specific sciences. Comte came of age during a time of political instability and rapid change in France, publishing the first volume of his Cours de philosophie positive during the same year as the July Revolution. As Michel Bordeau shows in his paper, “Philosophy of Science and Sociology of Science in Comte,” Comte’s goals were both philosophical and social: to systematize scientific knowledge and to make this systematic knowledge the foundation of a new, orderly, and progressive social system. The means to this end were through the establishment of the new science of sociology, grounded in the previous history of science and directing its future development, thus combining descriptive and normative elements in a way unlike contemporary sociology. Mary Pickering’s concern, in her paper “The Process and Goals of Scientific Discovery in Auguste Comte’s Sociological Vision,” is that, to the extent Comte emphasized the social role of scientists, he undermined their intellectual role. Not only did he emphasize the practical over knowledge for its own sake, but as the sociological synthesis of the sciences was to replace religion as the glue that binds individuals to their society, the scientists would increasingly take on the role of moral educators.
Pickering also exposes the anti-individualist lessons that Comte appeared to have drawn from history, conclusions that proved to be of deep concern to Renouvier. Renouvier was critical of Comte’s positivism both as a philosophy of science and as a political program. In his paper, “Renouvier’s Critique of Comte’s Sociological Philosophy of Science,” Warren Schmaus shows that Renouvier raised three sorts of arguments against Comte: (1) historical arguments against the three-state law, (2) epistemological arguments against Comte’s claims to have achieved certainty, and (3) broader philosophical arguments against Comte’s failure to separate normative from descriptive accounts of the sciences. Contra Comte, Renouvier defended individual liberty as necessary for thinkers like Poincaré to critically examine the foundations of mathematics and the sciences. Poincaré raised an additional challenge to Comte’s philosophy. Where Comte had included mathematics in his six-fold classification of the empirical sciences, Poincaré argued that mathematics is not an empirical science, even if experience does play a role in the formation of mathematical concepts. Michael Heidelberger, in his paper “Poincaré and Sense-physiology,” shows how Poincaré drew on experimental work in the physiology of perception to argue that sensory space is quite different from mathematical space, and thus that geometry cannot be an empirical science.
Renouvier recognized that Comte, in introducing a sociological point of view into the philosophy of science, had blended prescriptive with descriptive claims that needed to be kept separate. Renouvier did not challenge Comte’s empirical claims about such things as the way progress depends on specialization and the role of dogma in science. However, Comte also thought that sociology should play a role in coordinating and organizing the other sciences and directing them towards the satisfaction of human needs. This was a major part of his larger political program of re-establishing society on a scientific basis. Renouvier and his collaborator Pillon were highly critical of the positivist political program of Comte and Littré. Thus Renouvier worked to undermine the prescriptive aspects of Comte’s sociological philosophy of science and developed an alternative view of the social conditions on which the flourishing of science depends. Comte thought that sociology could provide the basis for organizing research and education in the sciences because it was grounded in his three-state law governing the history of the sciences and was thus inclusive of all the sciences. Sociology was the culminating science for Comte, placed at the summit of his six-fold hierarchy of the sciences. Renouvier sought to undermine this foundation for Comte’s program by providing evidence that the history of science did not unfold in accordance with the three-state law and six-fold hierarchy of the sciences. For Comte, to ground social policy in the positive sciences meant to base it on certain knowledge. Renouvier’s second line of critique was that no science, including sociology, could ever achieve certainty. As Renouvier moved away from his youthful Saint-Simonianism and positivism over the course of his philosophical career, he increasingly emphasized the hypothetical character of science. 
Thus science, including sociology, could not provide a firm foundation for the reorganization of society. However, Renouvier also recognized that even if Comte’s claims about the history of science were correct and empirical certainty could be achieved, sociology still could not provide a sufficient basis for organizing science and society. In what is perhaps his most important argument against the positivist program, Renouvier drew a clear distinction between normative and empirical questions. For Renouvier, Comte had failed to recognize that philosophical inquiry into both epistemological and moral issues required the use of methods of analysis beyond those of the empirical sciences. For all of these reasons, Renouvier rejected the idea that sociology could provide the basis for reorganizing science and society. Like Mill, Renouvier took our uncertainty about the affairs of everyday life as a premise from which to defend individual liberty. Renouvier also held that such liberty was essential for the conduct of science itself. Individual liberty of thought is necessary if there are to be scientists like Poincaré, who examine the foundations of the sciences and expose their presuppositions. In Renouvier’s mind, the positivist program had made no provisions for this sort of important conceptual work in science.
Comte is at once the first philosopher of science in the modern sense of the term and the founder of sociology; he was therefore quite naturally led to address the question of the relationship between the two domains. Comte’s idea of sociology is no longer ours. According to him, sociology has a twofold status: it is a science among the others, but it is also the last one and, in this second function, it recapitulates the totality of knowledge. The first lesson of the Cours de philosophie positive (1830) already shows the prominent place given to sociology. Comte says he pursues a double purpose. The first, general one is the systematization of our knowledge; the second, special one is the foundation of social science. This is why sociology takes up half of the entire work, three of its six volumes. From mathematics to biology, the sciences are already there. This is not the case with sociology: though Comte admits some predecessors, the work was still to be done. The second function is more explicitly stated in the general conclusions (1842) of the Cours, because sociology had to be available before we could measure the scope of the changes it introduced. Nevertheless, the idea follows immediately from the principles of the classification: the inferior supports the superior; the superior depends on the inferior without being reducible to it. It is because sociology, for Comte, includes all the previous sciences that it is able to be in charge of the overall development of science in general. For quite understandable reasons, sociologists have given up the second function. Sociology of science is today a branch of sociology, a sub-discipline. But Comte’s position is still attractive for a philosopher. As it was very clearly stated from the start, in the Cours, every science is studied twice: first for itself, then in the sociology lectures, as part of the global history of mankind.
In this way, it is possible to do justice both to internalism and externalism.
My paper will examine Auguste Comte’s conception of the process of scientific discovery. It will explore his emphasis on the importance of social context and the contributions of history, which corresponded respectively to the two parts of his new science of sociology: social statics and social dynamics. In particular, he insisted that one had to learn the historical context of scientific discoveries because intellectual evolution was always subordinated to the social history of humanity. In both his Cours de philosophie positive (1830-42) and his Système de politique positive (1851-54), he covered the history of humanity extensively, in fact so extensively that it seemed to overshadow sociology. Part of sociology and yet not a major science, Comte’s history was baffling. It was a study of social, intellectual, political, and religious movements as well as the tale of the development of feelings and artistic movements, but without much emphasis on the role of individuals. His critique of the liberal discourse of individual rights and of psychology did not help dispel suspicions about his approach. He seemed to foreshadow the anti-humanist structural practices of the French Annales school of historians, such as Fernand Braudel. My paper will explore his attitude towards the accomplishments of individual scientists and investigate how he believed they should work together, considering that he wished to abolish academies and journals. What should motivate them: the pure love of research or more practical, utilitarian preoccupations? How should they behave? How and with whom would they interact? I will argue that as he became increasingly convinced of the tentativeness, “relativism,” and dryness of scientific thought, he began to undermine the role of scientists as pure intellectual adventurers and consider them more as moral educators, ideally devoted to improving the environment, society, and human nature.
At a number of points in his work on the foundations of mathematics, Poincaré employs results from sense-physiology and psychophysics. Indeed, it can be shown that Poincaré was thoroughly familiar with the methodology of measuring sensations which had been developed by Gustav Theodor Fechner, as well as with the controversy it occasioned. For instance, in La Science et l’Hypothèse, he invokes Fechner's demonstration of the intransitivity of comparisons of sensations. According to Poincaré, this demonstration makes it impossible to found the concept of a mathematical continuum on the data of experience alone. In those sections of the book covering the relation between perceptual spaces and the space of mathematics, Poincaré draws heavily on the sense-physiological theories of Fechner, Wilhelm Wundt, and Helmholtz. In this paper, I argue that it was especially the German work on the mathematization of perceptual space during the 19th century that led Poincaré to a new understanding of mathematics. How exactly, one might ask, did Poincaré view the relation between the experiences of the ordinary perceiver or of the sense physiologist and the theoretical science of mathematics? This relation seems to be much narrower for Poincaré than for the later orthodoxy, which neatly separates the context of discovery from the context of justification. For Poincaré, experience plays a necessary role in the genesis of geometry which makes geometry depend on it, although it does not thereby become an empirical science. In general, a mathematical idea must first be given in experience before it can be reworked by the mind to become an object of non-empirical consideration. It follows that geometrical and sensory space differ not only in their structure, but also in kind, and that sensory space cannot be viewed as a special kind of empirically interpreted mathematical space.
Analytic philosophy has long had a vexed relationship with empirical psychology. On one hand, analytic philosophy is inconceivable without the anti-psychologism of Frege and other influential neo-Kantians—that is, without the repudiation of psychology as a basis for logic, ethics, epistemology, and metaphysics (for an overview of varieties of psychologism, see Kusch 1995, 1-12). But on the other hand, by the 1930s at the latest, analytic philosophy also developed a strong preference for empiricism, a tradition that has historically conceived of philosophy and empirical psychology as intimately connected. Recent scholarship sheds light on the development of analytic philosophy by studying founding figures’ reflections on the exact sciences—particularly on physics, mathematics, and logic. But less attention has been paid to the relationship between early analytic philosophy and what was, at the dawn of the twentieth century, an exceedingly controversial science: empirical psychology. This symposium is designed to enrich our understanding of the significance of psychology for early analytic philosophy, and the significance of philosophy for early empirical psychology. The vexed relationship between psychology and philosophy was not new with the analytic movement. During the late 19th century, the German- and English-speaking worlds were both afire with debates over the proper relationship between the two fields. Klein focuses on one such debate from the 1880s—an attack on “mental science” by the British idealists T. H. Green and F. H. Bradley, and a response by the young psychologist William James. Klein portrays this debate as a crucial backdrop to the later controversy over pragmatism, a controversy that pitted James against Bertrand Russell. Franz Brentano shared James’s empiricist commitment to psychology as a cornerstone of philosophy. But Brentano also maintained that certain basic features of thought are necessary presuppositions of any empirical science.
Jacquette shows how Brentano balanced what seem like opposing claims—that the basic principles of psychology are both necessary and empirical. Edgar explores a figure who took a dimmer view of psychology’s significance for philosophy, compared with James and Brentano: the Marburg neo-Kantian Paul Natorp. Natorp had studied Mach’s psychology and determined that it could not account for the objectivity of scientific knowledge. Like Russell, Natorp had a decisive influence on analytic philosophy’s anti-psychologism. These turn-of-the-century debates affected more than just the development of analytic philosophy. In turn, they affected the development of behaviorism in psychology, a school from which analytic empiricists like Ayer and Quine would draw heavily. Wilson traces the role of Hegelian notions of function in psychology’s journey to becoming an objective science. The penultimate landmark on that journey was the emergence of behaviorism, Wilson argues. Two philosophers were crucial in bringing function to psychology: John Dewey and the American new realist E. B. Holt.
In 1908–1909, Russell published two sharp critiques of James, critiques that blackened pragmatism’s reputation for a generation. Though he sometimes read James uncharitably, Russell also raised serious concerns. I will argue that neither Russell’s concerns nor James’s response come into focus, though, unless one understands how their dialectic fits into a then-ongoing debate over psychology. According to Russell, pragmatists tell us that “truth” means “furthering our purposes” (Russell 1909/1966, 98). Their evidence comes from psychology, which shows that we are most likely to hold true ideas that further our purposes. Russell argues that pragmatists here equivocate between two meanings of “meaning.” Consider the difference between “that cloud means rain” and “pluie means rain.” The cloud is a causal predictor of rain. But “pluie” is a word that signifies rain. Pragmatists ought to give the meaning, in the latter sense, of “truth”; but they only give the meaning in the former sense—at best they isolate a causal antecedent of true beliefs, and that falls short of a philosophical account (Russell 1909/1966, 97). This argument turns on an underlying claim which had been hotly disputed in Russell and James’s circles since at least the 1880s: that a legitimate philosophical explanation may not consist of a mere causal description of cognitive processes. Russell hinted at his reasons for endorsing this claim—truth is a normative concept, he suggests, and so cannot be given a purely psychological (read: descriptive) explication (Russell 1909/1966, 92, 96-97). Russell could afford to be elliptical because more detailed arguments for this claim had emerged in an earlier debate in which James himself had participated. Green and Bradley, two idealist icons in the generation preceding Russell, had been severely critical of psychology’s significance for philosophy.
For instance, in “Can There Be a Natural Science of Man?” (1882), Green argued that mental science was incapable of explaining a fundamental feature of our perceptual lives—our capacity to evaluate the veridicality of our perceptions. No purely causal description can explain what it is for a train engineer to “see a signal wrong” on a foggy night, he argued (Green 1882, 9-10). James conceded that psychological subjects must be conceived as having an irreducible capacity to evaluate experiences. But he rejected Green’s further claim that mental science collapses because it uses—without first explaining—philosophically-thorny concepts like evaluation. Just as physics cannot make progress if it gets ensnared in metaphysical debate over concepts like cause and effect, so the psychologist must be allowed to “uncritically accept” philosophically-thorny concepts like evaluation at the outset, James argued (James 1890/1981, 6). Now turn back to Russell. If normative concepts are legitimate and ineliminable in psychology, then it is false that all philosophical explications drawn from psychology commit an is/ought fallacy. Nevertheless, there is a deeper problem here. Jamesean psychology uses normative concepts “uncritically”; so what justifies the pragmatist in re-importing those concepts into philosophy? I try to mitigate this problem by reviewing James’s delicate account of psychology’s relationship to philosophy in (James 1892).
Anti-psychologism, the view that philosophical questions about knowledge cannot be answered by psychological investigations of the mind, was characteristic of both early analytic philosophy and the late nineteenth-century German philosophy out of which analytic philosophy grew. This paper will examine the emergence of one strain of anti-psychologism in the late nineteenth century. In order to do so, the paper will begin by examining Ernst Mach’s conception of the mind, and identifying the features of that conception that motivated the anti-psychologism of the Marburg School Neo-Kantian Paul Natorp. It turns out that for Natorp, an investigation of the mind on Mach’s conception cannot explain the objectivity of scientific knowledge, and so philosophy of science must be anti-psychologistic. This paper will thus seek to show how anti-psychologism emerged in the late nineteenth-century background to analytic philosophy as a response specifically to then-new empirical-psychological conceptions of the mind.
Franz Brentano’s method in philosophical psychology, the cornerstone of all his philosophy, is empiricist, but in a different way than is usually expected of inductive reasoning in the natural sciences. Brentano requires the study of consciousness to be securely grounded in experience of a subjective sort, in innere Wahrnehmung or internal perception. At the same time, Brentano characterizes the fundamental elements of thought as in some sense necessary and epistemically certain a priori, by virtue of being conceptually prior to any empirical science at the philosophical foundations of psychology. The challenge facing Brentano’s scientific methodology is to explain how the basic principles of philosophical psychology can at once be both empirical and a priori. If Brentano’s Psychognosie, developed in the 1890-1891 revision of his 1887-1888 lectures on Deskriptive Psychologie and Deskriptive Psychologie oder beschreibende Phänomenologie, is supposed to be the solution to this difficulty, then everything in Brentano’s philosophy depends on the possibility of establishing a priori principles of pure philosophical psychology from a distinctively a posteriori empirical phenomenological starting place. Psychognosy is supposed to be distinct from and a necessary preliminary preparation for what Brentano calls genetic psychology. Genetic psychology, like contemporary cognitive science, is interested especially in the causal, and, by extension, presumably also behavioral and information processing aspects of a psychological subject, using only hard public scientific evidence, repeatable experiments, and rigorous scientific theory-building protocols. Both psychognosy and genetic psychology are needed for a complete scientific theory of psychology, according to Brentano, but the principles of psychognosy are philosophically foundational.
Against this historical background, I argue that, despite the difficulties in Brentano’s descriptive psychology, Brentano produces a method for a scientific pure philosophical psychology involving an unconventional but not necessarily philosophically objectionable inductive logic, one that permits the inductively correct inference of universal a priori conclusions from particular a posteriori and hence epistemically weaker empirical phenomenological evidence. When Brentano’s psychognosy is clarified, it can be seen to compare favorably with George Boole’s similar method of discovering the universal laws of thought, even from single instances, in identifying the principles of an algebra of logic. The principles of logic and psychognosy are found in the only place and in the only way they possibly could be, in thought and in the disciplined exercise of thought dialectically examining thought. What would ordinarily be inductively fallacious reasoning in Boole’s logic or Brentano’s psychognosy is avoided by the special nature of the case, in which a priori self-justifying propositions are empirically discovered in the experience of and reflection on the content and structure of thought. It is not the mode of empirical a posteriori discovery of Boole’s and Brentano’s principles that justifies them, but rather the character of the principles themselves as revealed in experience and justified by reflection as universal a priori truths.
This study aims to show how psychology was transformed from a science that introspectively analyzed consciousness into an objective science that dealt with behaviour understood as serving functions in a biological organism. Darwin was in the background, but the transition was effected in large part by two philosophers. One was John Dewey, who insisted on the functional point of view, but who understood that perspective in teleological and essentially anti-scientific, Hegelian terms. The other was E. B. Holt. John B. Watson showed how psychology could become objective but as he saw it the science remained atomistic in focus rather than functional. Holt, with an understanding of relations deriving from Russell, showed how psychology could be an objective science of human being which nonetheless eliminated psychological and logical atomism and became a science that dealt with structures and functions of behaviour. James and John Stuart Mill defended at length the idea that psychology was a natural science, and more specifically that it was a science of mental phenomena. Its method was that of the introspective analysis of conscious states into their parts. Psychology remained essentially a science of mental phenomena until Darwin. He conceived of organic forms as wholes which are acted upon by the environment, and in turn act on the environment in ways that make them more or less fit for survival and reproduction. He conceived of consciousness as another organ functioning to generate adaptive behaviour for the whole organism. Two philosophers were also important in transforming psychology from the earlier science of introspection to an objective science of a functioning organism. One was John Dewey who broke with introspective atomism and insisted (in his essay on “The Reflex Arc Concept”) that introspective parts are crucially related into wholes serving various functions. 
His concept of relation and of function was, however, hardly scientific, but was teleological, essentially derivative from Hegel and the British idealists. It was close enough to effect the introduction of Darwinian functionalism into psychology, though. Dewey’s colleague at Chicago, J. R. Angell, took up the functionalism, arguing that consciousness as an organ was a problem-solving mechanism that enabled the biological whole to become a creature better fitted to its environment. But the science was not yet wholly objective; he did not eliminate completely the anti-scientific teleology deriving from Hegel via Dewey. Angell’s student, John B. Watson, showed how to make the science wholly objective and how to eliminate the teleological dross. But in doing that, he reverted to a sort of atomism—now an atomism of bits of behaviour rather than bits of consciousness, but atomism nonetheless. This is just the sort of atomism the Darwinian revolution established as inadequate, and which Dewey, for all his inadequacies, was trying to overcome. It was another philosopher, E. B. Holt, one of the anti-Hegelian “new realists,” who showed how psychology could be both an objective science and one in which the parts were seen holistically, as functioning to effect the ends of a biological organism.
Aristotle distinguishes three areas of theoretical knowledge: ‘first philosophy’, natural science, and mathematics. There are, however, numerous problems in trying to understand precisely how he thinks these fields are to be differentiated from each other, what sorts of investigations belong in each category, and whether each field has its own distinctive methods and principles. In this session we will focus on natural science, with a concentration on the following questions: How does Aristotle distinguish natural science from ‘first philosophy’? How is the study of the soul related to natural science? Can we determine which of the investigations Aristotle carried out himself fall within natural science? What, if anything, unifies these investigations such that he considered them to be at once independent investigations and at the same time contributions to a single science?
A discussion of the PA I 1 claim that nous and dianoia are *not* appropriate subjects for the student of nature (physikos), against the backdrop of De anima I.1 and III.10. The result of this discussion will be used to argue that it is the strong teleological integration of matter and form in Physics II that produces a dilemma about intellect for Aristotle.
A study of how the divine enters into the study of nature. The result of this study will be used to argue that Aristotle's discussion of the divine reflects a keen interest in enforcing a certain view of the science of nature. Essential components of this view are the ideas that the science of nature is a unified science, that it is a distinctly organized body of knowledge, and that there are disciplinary boundaries the student of nature is expected not to trespass.
Although different branches of natural science are each confined to one kind of phenomena, there is just one physical world to study, and the world is an interconnected whole of some type in which complex phenomena are constituted out of simpler phenomena. Quite generally Aristotle thinks that the possibility of natural science depends upon the existence of non-accidental, non-contingent regularities. The claim that there are such regularities is not itself one of the principles of a physical science, nor is it derived from any physical principles. The purpose of this paper is to examine Aristotle’s belief that some kind of explanation can be given as to why there is an orderly domain governed by regularities.
An investigation into Aristotle’s conception of the hierarchical order of the natural world and its relation to the divine, first unmoved mover. The starting point of this inquiry will be an analysis of GC II 10, 336b 27-337a 7, a passage which indicates, I shall argue, the main epistemological justification for the boundaries, internal divisions, hierarchies, as well as unity and basic concepts, of Aristotle’s science of nature.
Philosophical naturalism, the dominant biological research paradigm during and prior to Darwin's time (and one of which he considered himself a member), had as its central preoccupation the discovery and formulation of biological laws. For most people in the field, biology’s future as a science was tethered to whether researchers could successfully formulate true biological generalizations (primarily with respect to biological form). Whence this view of biology's scientific fate? The philosophical naturalists' picture of science was strongly (if not entirely) informed by the enormously influential views of the leading Victorian philosophers of science John Herschel and William Whewell, both of whom had argued that the discovery of (or intent to discover) laws of nature was essential to science. Darwin was deeply committed both to the maxims laid out by Herschel and Whewell and to the philosophical naturalists' biological mission. However, contemporary biologists and philosophers of biology are generally skeptical about the existence of “distinctly biological laws” (Beatty 1995), a skepticism which they trace back to Darwin (in one way or another). I attempt to reconstruct the role of laws in the burgeoning science of pre-Darwinian biological theory and practice and argue that the types of scientific tasks for which putative laws of biology were used by the nineteenth-century philosophical naturalists, Darwin included, are still part and parcel of modern biological practice. This creates a prima facie plausible case for the view that Darwin's discoveries do not destroy the possibility of biological lawhood, and for the view that there are laws in biology.
In this paper, I discuss Darwin’s analogy to artificial selection in the Origin. I consider whether we ought to view it as a search for a vera causa, and conclude that we should not. I then ask whether we should view it as an argument from analogy, and argue that there are reasons to resist this reading as well. On the view I encourage, the analogy does minimal justificatory work overall; I close with a positive account of the pedagogical value the analogy had for Darwin’s readers.
Thanks to the work of a number of HOPOS scholars over the last ten years, we now have a much better appreciation of the political engagement of the philosophy of science community in America in the years leading up to the Cold War. For example, we have a much better understanding of the lively debates over science and values that took place during this time, debates that included such figures as C. West Churchman, Philipp Frank, and Richard Rudner. Despite the progress that has been made in understanding this crucial period, however, more remains to be done. Toward this end, this paper will examine an important participant in the immediate post-WWII debates over science, values, and democracy, one who has received very little attention from historians of the philosophy of science – namely, James B. Conant. Conant is well known as a former president of Harvard University, an influential science policy advisor during WWII and in the beginning of the Cold War, and a commentator on American education. His philosophical writings on science are unfortunately less well-known. In addition to being an influential mentor to Thomas Kuhn, Conant developed a pragmatic theory of science that was firmly in the Deweyan tradition and that drew explicitly upon the epistemological holism of his Harvard colleague, W.V.O. Quine. Furthermore, Conant was concerned about the challenges that the increasing intertwining of science and politics posed for both science and democracy, and he drew upon his theory of science in order to propose corresponding changes in the organization of scientific research. These organizational changes, Conant argued, would raise epistemic standards within scientific research, which in turn would give the U.S. an advantage in the Cold War. This paper is divided into three sections.
In the first, I discuss Conant’s argument against “the scientific method,” an argument that draws upon his work in the history of science and upon his acceptance of Quine’s epistemological holism. Conant’s adoption of Quinean underdeterminationism plays a crucial supporting role in his belief that the “activities of scientists in their laboratories are shot through with value judgments.” It also serves as background for his views on the nature of scientific theories, which is the subject of the second section. Influenced by Dewey’s The Quest for Certainty, Conant argued that theories are “policies, not creeds;” that is, theories are not attempts to represent the natural world with ever-increasing accuracy, but rather are tools for manipulating our environment in effective ways. In the third section, I discuss one of the ways in which Conant put his theory of science to work. His belief that scientific research is inevitably value-laden led him to argue that decision-making in certain areas of scientific and technological research – especially in weapons research – should take place within a system of “quasi-judicial review,” according to which decisions to undertake certain kinds of research should be decided via an adversarial process.
Hume’s 1742 essay “That Politics May be Reduced to a Science” asserted that there are some “eternal political truths” that may be as certain as mathematical propositions. This paper argues that Hume and Smith treated the moral sciences, and political economy more specifically, as on an epistemic par with the natural sciences. It will establish that both Hume and Smith were strong instrumentalists and hence wary of the recent triumphs of Newtonian physics. Hume believed that our ascription of nomothetic patterns applied to both spheres, the natural and the moral, interchangeably. Smith went even further. Citing the system of Cartesian vortices and its prolonged success among the learned, Smith proposed that we are less likely to be deceived about the foundational claims in moral philosophy than in natural philosophy. Moreover, Hume and Smith drew epistemic distinctions between the physical sciences and natural history and embraced the latter when it came to inferential patterns in the moral sciences.
There is an enduring story about empiricism, popularized in the early twentieth century by philosophers like Ayer and Russell, and targeted by other philosophers such as Husserl. It runs as follows: from Locke onwards to Carnap, empiricism is the doctrine in which raw sense-data are received through the passive mechanism of perception; experience is the effect produced by external reality on the mind or ‘receptors’. In addition, empiricism on this view is the ‘handmaiden’ of experimental natural science, seeking to redefine philosophy and its methods in conformity with the results of modern science; it should naturally be allied to the emphasis on experimental confirmation of evidence supposedly developed in the Scientific Revolution. The following papers aim, in different ways, to revise our view of empiricism and its relation to experimental science, focusing in particular on its relation to medicine, by looking both to conceptual developments in medicine and their philosophical ramifications, and to the difference between a specifically ‘medical’ empiricism and a more mainstream empiricist epistemology or doctrine of experiment. If we rethink some key moments in early modern philosophy (including figures such as Harvey, Boyle, Locke, and Leibniz) in light of their medical ramifications, this may give us some historiographic headway in relation to standard positivistic or Kantian histories of modern philosophy and science.
This paper examines the impact of the new theory of qualities on Galenic medicine in mid-seventeenth-century Britain. The new theory of qualities associated with the mechanical philosophy had far-reaching implications for the natural philosophy that underlay the traditional Galenic methodus medendi. In the mid-1660s, reformist physicians, inspired by the new theory of qualities espoused by the likes of Robert Boyle, challenged the theory of medical qualities that was part of Galenic orthodoxy. Traditionally, the Aristotelian primary qualities of hot, cold, wet, and dry were the explanans in the diagnosis of humoral imbalance and in the elaboration of the therapeutic qualities of medicines and regimens. However, chymical physicians in Britain in the 1660s began to analyse the symptoms of disease and the actions of medicines in terms of the theory of qualities that accompanied the new corpuscular matter theory.
Traditionally Berkeley and Hume were thought to have drawn out the glaring skeptical absurdities inherent in Locke’s empiricism by pushing it to its absurd logical conclusion. Although Locke was to be praised for his original insight, he was to be declaimed for failing to see or appreciate the skeptical tangles inherent in it. Medical empiricism traditionally claimed to differ from skepticism precisely in that it limited the scope of skepticism. Its adherents did not allow skeptical arguments to be applied to all synthetic a priori theses or to sensory perception itself, or to be pushed to their logical extremes. These medical empirics allowed for certain kinds of synthetic a priori claims, for certain kinds of commitments to contingent theoretical principles, and certain kinds of non-empirical inferences. For this paper, I will assume the historical thesis that Locke was a medical empiricist, that in other words he was not only inspired by the revival of Hellenistic medical empiricism but that he was intentionally drawing on it, especially in regards to its limitations on philosophical skepticism. This paper addresses the philosophical question of whether such limitations on the application and extension of skeptical argumentation are philosophically supportable for Locke – “Can Locke’s philosophical project be justified given the historical assumption that he was a medical empiricist rather than a logical empiricist à la Hume?” Given the three avenues wherein medical empiricism differs from logical empiricism, this question can be broken down as: was Locke philosophically justified in his commitments to substance, cause, and possibility/necessity; was he justified in allowing for the provisional acceptance of hypothetical theoretical (dogmatic) theses and for the testing of such hypotheses through experience and experiment, which solidifies their provisional epistemic status; and was he justified in accepting a limited yet non-empirical analogical reasoning?
I shall argue that he was philosophically justified since such an extension requires a commitment to “criterion” tropes rather than “relativity” tropes, and that such limitations are perfectly compatible with a “materials” empiricism like Locke’s.
In this paper we suggest a revisionist perspective on two significant figures in early modern life science and philosophy: William Harvey and John Locke. Harvey, the discoverer of the circulation of the blood, is often named as one of the rare representatives of the ‘life sciences’ who was a major figure in the Scientific Revolution. While this status itself is problematic, we would like to call attention to a different kind of problem: Harvey dislikes abstraction and controlled experiments (aside from the ligature experiment in De Motu), tends to dismiss the value of instruments such as the microscope, and emphasizes instead the privileged status of ‘observed experience’. To use a contemporary term, Harvey appears to rely on, and chiefly value, ‘tacit knowledge’. Secondly, Locke’s project is often explained with reference to the image he uses in the Epistle to the Reader of his Essay, that he was an “underlabourer” of the sciences. In fact, Locke’s ‘empiricism’ turns out to be above all a practical project (i.e. ‘moral’), which focuses on the delimitation of our powers in order to achieve happiness, and rejects the possibility of naturalizing knowledge. When combined, these two cases suggest a different view of some canonical moments in early modern natural philosophy.
In his unpublished Directiones ad rem medicam pertinentes of 1677, G. W. Leibniz puts forth a number of proposals for the improvement of public health. There is a persistent concern throughout this programmatic work to find ways to uncover what ordinarily lies beneath the surface. All opportunities should be taken to study bodily fluids. Autopsy on human beings is also useful for penetrating into the body, even if, in contrast to animals, we do not have the convenience of being permitted to cut them open while still alive, and he recommends that as many people as possible be subjected to autopsy after death. Leibniz hopes that soon a flesh-eating liquid might be discovered that will leave the veins and arteries intact, the better to study them. But is it just the inside of the body that Leibniz hopes to get at through his reformed medicine, or is it something deeper? For a neo-hylomorphist such as Leibniz, it is not such a large step from a concern with the inner causes of outer features and symptoms, to a concern with the similarities --and not just analogies-- between medicine as treatment of the body, and religion as treatment of the soul. For Leibniz in the Directiones, these two projects seem to be two sides of the same coin. He thus proposes a number of measures for the organization of the medical profession, and repeatedly draws a parallel between the way the clergy is organized and the way doctors should ideally be organized. In this paper I would like to consider the extent to which Leibniz's proposed reforms of the medical system might not just be seen as helped along by analogy to ecclesiastical orders, but might indeed be seen as proposals for the supplementation, or even surpassing, of the institution that had traditionally been charged with the task of seeing to human well-being.
I would like to consider the extent to which flesh-eating liquids, systematic autopsies, etc., might in light of this institutional shift come to play the role for Leibniz of what earlier had been seen to by philosophical investigators of the soul. Finally, I would like to consider the importance of this text and its proposals for our understanding of the deeper metaphysical problem of corporeal substance in Leibniz.
There’s little dispute that the claims regarding mathematical knowledge and mathematical reasoning forwarded by Kant during his critical period are as novel as they are difficult to interpret. While it’s long been recognized that we can make some headway in cracking the Kantian mathematical code by connecting his sometimes obscure remarks about math to various other discussions in the First Critique (1781/1787) and Prolegomena (1783), much recent scholarship concerning Kant’s math has offered a further and important interpretive lesson: adopting a broader, more historically sensitive perspective can illuminate Kant’s critical approach to the mathematical sciences. For instance, by taking seriously Kant’s peculiar interpretation of Euclidean geometry, Friedman (1992) and Shabel (2004) have shed further light on Kant’s account of geometrical construction procedures and the transcendental status of space. Narrowing their focus on the more immediate context in which Kant was working, Shabel (1998) and Anderson (2005) have, in a similar vein, revealed the importance of Wolff’s mathematics for understanding the role of ‘symbolic construction’ and analyticity in the Kantian critical framework. Our goal in this symposium is to pursue this historically sensitive strategy and bring further clarity to Kant’s critical account of mathematical reasoning and mathematical knowledge by attending to the historical ancestry of proposals and ideas that are central to Kant’s mathematics. Specifically, we aim to shed further light on the distinctively Kantian rendering of the imagination, postulates, and axioms by appealing to the work of some of Kant’s predecessors and contemporaries, including John Locke, Johann Heinrich Lambert, and Johann Schultz.
Beyond highlighting the ways in which Kant modifies and extends the work of these philosophers and mathematicians, we hope to reveal that a more historically sensitive understanding of these peculiar aspects of Kant’s account of mathematics can inform our interpretation of the broader critical project of the First Critique. Mary Domski will explore the role Kant grants the imagination in geometrical construction by comparing Kant’s account with that forwarded by Locke in the Essay Concerning Human Understanding. Attending to the connections between Lambert’s and Kant’s respective notions of postulates, Alison Laywine will examine the role of postulates in the Kantian critical framework and the Transcendental Deduction in particular. Daniel Sutherland will examine Kant’s rendering of axioms, principles, and postulates by appeal to the different ways in which Kant and others construed magnitudes and their relationship to mathematics. Michael Friedman will offer a brief commentary on these papers at the end of the session.
In §3 of the Preamble to the Prolegomena (1783) Kant makes the somewhat curious remark that he found in Book 4 of Locke’s Essay Concerning Human Understanding (1689) a hint of the distinction between analytic and synthetic judgments (4:270). Though Kant makes no explicit reference to Locke’s construal of geometry or mathematical reasoning in this context, Kant’s remark has inspired some recent commentators to examine the possible affinities between Locke’s and Kant’s respective accounts of mathematical knowledge (cf. Wolfram 1978 and Cicovacki 1990). Such treatments tend to take as their starting point the noticeable similarity between Lockean instructive propositions and Kantian synthetic propositions: while Locke claims that mathematical propositions are instructive insofar as such propositions reveal the relationship between an idea and a property “not contained in [the idea]” (Essay 4.8.8), Kant claims, in brief, that mathematical propositions are synthetic insofar as they require we “go beyond” what is given in a concept. Such a general similarity notwithstanding, the differences between their accounts of mathematical knowledge and mathematical reasoning have also not gone unnoticed. Perhaps most notably, whereas Locke claims that we learn our simple idea of space through experience, Kant proposes an a priori form of spatial intuition that grounds the synthetic a priori status of mathematical judgments. In this respect, Kant stands in a class all his own, and as Carson has pointed out in her recent work (Carson 2002, 2006), the important differences between Kant’s and Locke’s portrayal of mathematical reasoning can be traced to Kant’s novel proposal of a pure form of spatial intuition that grounds the necessity and certainty of geometry.
While there’s no dispute that Kant’s proposal of an a priori spatial background distances his critical account of mathematics from Locke’s empiricist account, there remains, I think, an important thread that ties their accounts together: the role of the imagination in geometrical construction. As I hope to show in this paper, attending to the imagination as a basis for comparison grants us a deeper appreciation of the differences between Locke’s account of how we define geometrical concepts and Kant’s critical account of mathematical construction. Whereas Locke, I argue, appeals to the limits of the imagination as that which limits our constructions, Kant appeals to the form of the a priori spatial background as that which determines what we can and cannot construct. Thus, by focusing on the role of the imagination in their respective accounts, we gain further insight into the distinctive character of Kant’s math. For on the portrait I offer, it was only by coupling the a priori form of spatial intuition with the constructions of the imagination and also rejecting the limits of the imagination that play a central role in Locke’s account that Kant was able to “go beyond” what he found in the Essay.
I have argued elsewhere that Kant was almost certainly impressed by Lambert's call to reform metaphysics by introducing into that science constructive postulates like those we find at the beginning of Book One of Euclid's Elements. This led me to make the following suggestion. It would have been natural for Kant to understand the problem (or part of the problem) of the Transcendental Deduction to be to show this: the understanding operates by something like Euclidean postulates to the extent that its pure concepts relate to objects a priori. The purpose of my HOPOS paper will be to put this suggestion to the test and see whether the argument of the Transcendental Deduction in any way turns on the notion of a Euclidean postulate.
Kant maintains that there are axioms of geometry, but denies that there are axioms of arithmetic, or more precisely, axioms concerning magnitude (quantitas). However, while there are no axioms that concern the question “what is the magnitude of a thing,” Kant does allow that there are analytic propositions concerning quantitas, such as “Equals added to equals are equal.” This proposition had the status of a common notion in Euclid, but Kant denies that it is an axiom because it is analytic. On the other hand, he also allows that 7 + 5 = 12 is a self-evident synthetic a priori proposition of numerical relation, but denies it the status of an axiom because it lacks generality. But where does Kant’s position leave him with respect to the commutative and associative laws of arithmetic? As Charles Parsons once put it, Kant could not have denied their truth, so if they are indemonstrable, then they should be axioms; on the other hand, if they are demonstrable, they must have a proof to which he never alludes. Did Kant perhaps think that they are analytic? If so, what would their relation be to particular instances? Kant holds that certain analytic propositions are used in geometry, such as “a = a, or the whole is equal to itself,” and “(a + b) > a, the whole is greater than its part,” the latter of which also corresponds to a Common Notion in Euclid. But Kant states that these propositions are only admitted into mathematics because they can be exhibited in intuition, and he denies them the status of principles. Of course there remains the nontrivial question of how such purportedly analytic principles play their role in mathematics. Kant’s views on the status of these laws are made more pressing by the fact that his student and defender, Johann Schultz, formulates the associative and commutative laws (in terms of magnitudes), argues that they are indemonstrable, and declares that they are the axioms of arithmetic.
He also introduces two postulates of arithmetic, which assert how magnitudes can be generated. Gottfried Martin claimed that the move to arithmetic axioms was prompted by Kant, but this seems unlikely. Does this mean, as Parsons suggests, that Schultz understood the status of the associative and commutative laws better than Kant himself? These issues raise more general questions about the foundation for general claims in mathematics, and I believe that a proper understanding of Kant’s views regarding general claims in arithmetic will require a closer examination of general claims in geometry. Moreover, since philosophers and mathematicians before and during Kant’s and Schultz’s time thought of mathematics as the science of magnitudes, a proper understanding of Kant’s position will require sensitivity to the different ways in which Kant, his predecessors, and his contemporaries construed magnitudes and their relationship to mathematics. By appealing to these different views on magnitudes, I aim to add some clarity to Kant’s account of principles, axioms and postulates in mathematics.
Bergson elaborated his thought at a time when philosophy of science was beginning to be established as an academic specialty. Responding to Comte’s criticism of metaphysics, he endeavored to bring precision to the inquiries he had undertaken and did not hesitate to call on scientific discoveries. In turn, philosophers of science came to define their approach with respect to this new philosophical system, which became one of the targets of criticism for a scientific philosophy. The results of historical study lead us to reexamine Bergson’s involvement with the sciences. He paid particular attention to the work of philosophically inclined scientists, such as Poincaré, Duhem and Cope. Covering various fields — mathematics, physics, as well as biology — he brought their differences into focus. His work drew responses from philosophers of science such as Le Roy, Milhaud, and Hannequin. Then, on the basis of relativity and quantum theories, Russell, Schlick and Bachelard sought to elaborate new approaches. The aim of this symposium is to explore the basic claims entering into Bergson’s conception of the sciences, their classification and their methods. Thereby we seek to provide a reconstruction of a debate that was crucial in defining the agenda of early philosophy of science.
The advent of Bergson’s works was experienced as a tremendous breath of fresh air for French intellectual orthodoxy in the study and teaching of science. Soon after the publication of his works, relativity and quantum theories compelled a reconsideration of the role of philosophers and scientists in debate over the epistemological consequences of those theories, i.e., whether one should simply keep to the traditional positivistic, rationalistic empiricism or, as a consequence of these theories, construct completely new conceptions of science and of the mind. In the first camp were the positivists of the Vienna Circle, Benda, and Einstein; in the other camp were Bergson, Bachelard, Popper, and Bohr. As with all great works that compel a redefinition of their fields, Bergson’s work had a dual impact in France. He had supporters in Le Roy and De Broglie, but also detractors in Poincaré and Bachelard. The case of Bachelard’s reaction to Bergson is most interesting. In La Dialectique de la durée (1936), Bachelard claims that “of Bergsonism we accept everything but continuity” and that the rest of the book will be an attempt to show the possibility of a “discontinuous Bergsonism”. These are intriguing statements, though perhaps ironic, if we take into account Bachelard’s reductionist approach to science, his belief that there is no philosophy beyond philosophy of science, his conception of history as evidence for epistemological ruptures between common sense and science, his position on the negative role of intuition in science, and his allegiance to the discontinuous character of scientific creativity. All of these positions were radically different from those taken by Bergson, who believed that philosophy should stand above science, that biology is methodologically different from physics, that history is evidence of continuity, that intuition gives access to absolute knowledge, and that creativity is an effect of the élan vital.
How could Bachelard possibly think that simply dismissing Bergson’s continuity would allow one to keep the rest of Bergsonism together with his own dialectical approach to scientific thinking? Furthermore, why was Bachelard so interested in ‘capturing’ Bergson for his side while at the same time being so critical of Bergson in virtually all of his epistemological works? And why did Bachelard devote two entire books (L’Intuition de l’instant (1933) and La Dialectique de la durée) to Bergson at a time when Bergson’s popularity within academia was already waning? My project is to explore one of these issues. I will look at the intellectual context in France at the time of the debates about “what to do” with Bergsonism and “what to do” with relativity and quantum theory. I focus on Bachelard’s reaction to some of the works of Bergson, including the Essai sur les données immédiates de la conscience (1889), L’Evolution créatrice (1907), and Durée et simultanéité: à propos de la théorie d’Einstein (1922), and indicate whether a discontinuous Bergsonism is indeed possible.
It is telling of the attraction that philosophy of science exerted in the 1880s that Bergson was at first tempted to embrace this new field of study. As he confides to William James in a backward glance on his formative years: “Up until then I was imbued with the mechanistic theories which my reading of Herbert Spencer had led me to. My intention was to devote myself to what was then called ‘philosophy of science’ and, with this in mind, I set out to examine some fundamental scientific notions. It was the analysis of time as it occurs in mechanics or physics that overturned all my ideas”. The elaboration of his metaphysics is thus preceded by a period centered on the philosophical scrutiny of science. And his mature thought develops claims with respect to space, time, consciousness and life that have implications for philosophy of science. Bergson was a thinker with whom philosophers of science had to contend, as evidenced by the reactions of Bachelard, Russell and Schlick. The turn of the 19th and 20th centuries was a time when philosophy of science was introduced into the academic curriculum and the first chairs in the field were created. Bergson could not help becoming involved in this endeavor. There were those who sought to follow up the consequences of his thought in this direction, such as the mathematician Edouard Le Roy. There were those who opposed his views, such as Poincaré. In Arthur Hannequin’s critique of science, and in particular in his inquiry into the notion of space, Bergson could find an argument in favor of his own philosophy. He also entered into discussion with Gaston Milhaud, who, following Boutroux and Poincaré, had developed a doctrine in which convention and contingency were allotted a role. We thus have several thinkers involved in establishing philosophy of science with whom Bergson came into contact.
The publication of an additional volume of Bergson’s correspondence and recent studies on the debates of the time provide new material, and the aim of this paper is to question Bergson’s relation to science by means of a study of his dialogue with the philosophers of science.
Recent inquiries into Henri Bergson’s work have demonstrated the importance of philosophy of science in both his development and his concerns. Indeed, science and philosophy are often confronted in Bergson’s writings. One of the most prominent results of this confrontation is the bold claim that the science of matter and the science of life differ in their core structure. The former, according to Bergson, is exclusively a product of the human intelligence, whereas the latter requires, in addition, the contribution of intuition (intelligence and intuition together form the human spirit, still according to Bergson). Consequently, the epistemological statuses of physics (and, more generally, the science of matter) and of biology (the science of life) are claimed to be different. What do these differences consist in? Is it, as Jacques Monod suggested in his book Chance and Necessity (1971), merely the classical difference between a vitalist conception of life and a materialist one? Or is it something different and possibly more subtle? In this paper, taking Bergson as a philosopher of science, we will examine the concepts he used to characterize the science of life as opposed to the science of matter. Does the term “science” mean something different when applied to matter as compared to life? Or is there some unity in the concept of a science of nature that encompasses both matter and life? Looking back at the way Bergson reflected on life, we will address these core epistemological issues. We will confront Bergson’s claims with those of the contemporary biologist Ernst Mayr, who devoted a large part of his work to similar questions (see, notably, his 2004 book What Makes Biology Unique? Considerations on the Autonomy of a Scientific Discipline). In doing so, we will show that modern biology echoes, if not always Bergson’s ideas, at least some of the questions those ideas were meant to answer.
The American neo-Lamarckian Edward Drinker Cope (1840-1897) does not fit easily into the major confrontations that Bergson’s Creative Evolution (1907) sets up with biology, confrontations that bear on the following problems: the debate between mechanism and vitalism (Driesch, Reinke), the question of heredity (neo-Darwinism, De Vries, Eimer, the French neo-Lamarckians), and that of individuality (Weismann, Delage). But this does not mean that Bergson attaches lesser importance to him; quite the contrary: Bergson considers Cope “one of the most eminent representatives” of neo-Lamarckism, and that declaration appears in a passage of the book where Bergson pronounces his own metaphysical assertion of a force internal to life in general. That force, which Bergson will call the “vital impulse”, would be an “effort” that could “imply consciousness and will”, and it is this that Cope would have made conceivable. Thus the meaning of Cope’s doctrine, in Bergson’s view, would be above all metaphysical, and that would be why his doctrine seems somewhat subordinate within the biological discussions that Creative Evolution pursues. This thesis can be confirmed by noticing that Bergson borrows elements of Cope’s biological thought in pages of the book where Cope is not necessarily mentioned. Questions that are both scientific and metaphysical arise in each of those cases: the question of the process of growing old, which is the very mark of duration on the living body itself, as opposed to individual consciousness, to matter considered as a whole, and to life in general; the question of photosynthesis, which is the chemical phenomenon on which life in its totality is “dependent”; and above all, the question of the difference between life and matter, which Bergson conceives from the point of view of thermodynamics, a science to which Cope himself paid great attention.
In 1937, the British biologist John Burdon Sanderson Haldane, then widely known as one of the founders of population genetics, began to claim that Marxist philosophy was a useful, if not necessary, tool for scientists. Not the only scientist turning to radical politics during those years, he joined the Communist Party of Great Britain (CPGB) in 1942 and became, with a few others such as J.D. Bernal, one of the spokesmen for a generation of left-wing scientists. During his communist period (from 1937 to 1950, when he left the CPGB over its position in the Lysenko controversy), he took an increasing interest in philosophical issues about science. Unlike Bernal, for example, whose aim was mainly to build a Marxist theory of the history of science, Haldane’s goal was to use Marxist philosophy, dialectical materialism, as a unifying framework for a global understanding of nature (updating Engels’s dialectics of nature), of science (both as a social and historical activity and as the production of objective knowledge), and of general human history (in a classical Marxist way). Studies of the British radical scientists of that time often stress the sociological and political causes of the phenomenon, explaining their Marxist views about science as a way to reconcile their political opinions with their social function as scientists. We believe that relying on these external factors as the sole explanation for this attempt to build a Marxist philosophy of science is one-sided and may be misleading. In our work, we will try to emphasize the interactions between these external, sociological factors and, at least in the case of Haldane, more internal ones. We understand Haldane’s use of Marxism as a philosophical tool for the scientist as the conjunction of his political commitment with his own scientific and philosophical evolution. We will focus on a few theoretical issues raised by Haldane before his turn to Marxism, to which dialectical materialism may have appeared as a solution:
– the question of reductionism: in the context of the vitalism vs. mechanism debate, Haldane had expressed the need for an ontology that refuted idealism without being reductionist;
– the question of the unity of science: being a polymath, Haldane felt the need for a global epistemological view of science and of the relations between the different sciences;
– the relation between science and society: since the end of the 1920s, Haldane had argued for the extension of what he called the scientific point of view to society as a whole, and especially for a scientific politics.
We would like to show that Haldane’s acceptance of dialectical materialism can be understood only as the conjunction of philosophical and sociological factors: it appeared to him as a materialist but not reductionist ontology, as an epistemology emphasizing the unity of science as a specific social activity, as a scientific philosophy of politics, and as being at once an ontology, an epistemology and a general philosophy united by a common logic.
The relationship between Gaston Bachelard’s epistemology and the positions articulated in the context of Logical Empiricism is usually conceived as a ‘radical opposition’. Mutual disinterest and non-communication are frequently assumed between these contemporary philosophies, each of which was concerned with the consequences of the upheavals of quantum physics and the theories of relativity, as well as the subsequent questions about the representation of nature. With this contribution I intend to show that such an account is far too simple with respect to particular notions of Bachelard’s. By investigating the concept of ‘phenomenotechnics’ I shall demonstrate how some of the central elements of Bachelard’s epistemology ought to be seen as concrete answers to, and results of, a close examination of the propositions that Schlick, Carnap, Reichenbach, Neurath and others introduced to the philosophical public during the 1930s. While Bachelard had been clear and explicit in his polemics against conventionalist philosophy since his doctoral thesis of 1927, his critique of the more recent philosophical movement of ‘logicism’ is, by contrast, rather hesitant and put forward in an incidental and indirect manner. Nevertheless, such indications show up repeatedly throughout his work, most often with regard to one of his main objects of interest: the respective dynamics of elaborate experimental techniques on the one hand and highly abstract and general mathematical physics on the other. With the support of a series of reviews that Bachelard wrote in the 1930s on books by Hans Hahn and Hans Reichenbach, his objections can be made even more articulate. Special attention will be paid to what Bachelard postulates as the ‘appropriate’ role of mathematics, in demarcation from the one devised by the – in Bachelardian terms – ‘philosophie viennoise’.
Following Jean Cavaillès, he disapproves in particular of the assumption that an empirical ‘content’ on the one hand and mathematical ‘form’ on the other could be regarded independently. Thus, though mathematical reasoning is a ubiquitous positive reference in his epistemological writings, Bachelard never alludes to the topical subject of its logical foundation. Rather, he is interested in the physical application of mathematics, which he qualifies as the ‘active epistemic and productive centre’ of the contemporary sciences. In a general vein, this contribution counters a recent tendency in philosophy which reduces Bachelard’s epistemology to a ‘precursor’ of the approaches of Georges Canguilhem and Michel Foucault, and which therefore misses its autonomous position in the more confined discourse of philosophy of science of the 1930s.
Ludwik Fleck’s 1935 monograph Entstehung und Entwicklung einer wissenschaftlichen Tatsache (Genesis and Development of a Scientific Fact) focused on a then-widely-applied immunological claim: “the Wassermann reaction is related to syphilis” (1979, xxviii). This historical study of the concepts of syphilis and the Wassermann reaction, grounded in Fleck’s own serological research, culminated in a provocative epistemological thesis: scientific facts emerge gradually from the interplay of socio-cultural factors and ‘resistance’ to collective human effort within a coherent system of shared assumptions (“thought-style”). Scientific facts (the ‘units’ of knowledge on Fleck’s view) thus cannot be assessed for correctness outside the ‘conceptual environment’ of the thought-style in which they emerge and (for a time) persist. Anticipating the vogue of social constructivism by decades, Fleck’s sociological account of scientific knowledge influenced Kuhn (1996, viii-ix) and has been approved by historically inclined philosophers (Löwy), sociologists of science (Knorr Cetina) and postmodernists (Herrnstein Smith). However, in the concluding sections of his monograph, Fleck describes the thought-style of modern science in terms that social constructivists eschew: it is distinguished by “a common reverence for an ideal – the ideal of objective truth, clarity, and accuracy” (142). Unlike many of his present-day supporters, Fleck does not approach this ideal ironically, as disingenuous or misguided ideology. Instead, he characterizes it as an expression of an “intellectual mood” sustained by and reflecting the tripartite social structure of modern scientific practice: peripheral ‘popular science,’ routine vademecum (handbook) science, and progressive journal science (111-125). I examine this feature of Fleck’s social account of science, and argue that it contains materials for a defense of objective scientific knowledge that engages the socio-historical aspects of scientific practice.
The ideal of objectivity, on Fleck’s view, is anchored to the social structure of modern science and “put into effect” by means of three epistemic norms: (1) the obligation of every scientist to subordinate his personal individuality to the democratic community of research; (2) the inclination to objectivize (depersonalize) thought-structures resulting from scientific work; and (3) striving for a maximum of information, to be represented in a coherent formal system (144-145). I examine these three epistemic norms in relation to Fleck’s own immunological practice, critique them in light of more recent studies of science, and propose modifications that yield a more defensible social account of scientific objectivity.
Throughout the 17th and 18th centuries, talk of infinitesimal line segments and numbers to measure them was commonplace in discussions of the calculus. However, as a result of the conceptual difficulties that arose from the misuse of these conceptions, their role became more subdued in 19th-century discussions of the calculus, and they were eventually "banished" therefrom. This is well known to historians and philosophers of mathematics alike. What is not so well known in these communities, however, is that whereas late 19th- and pre-Robinsonian 20th-century mathematicians banished infinitesimals from the calculus, they by no means banished them from mathematics. Indeed, contrary to what is widely believed by historians and philosophers, between the early 1870s and the appearance of Abraham Robinson's work on non-standard analysis in 1961 there emerged a large, diverse, technically deep and philosophically pregnant body of consistent (non-Archimedean) mathematics of the (non-Cantorian) infinitely large and the infinitely small. Unlike non-standard analysis, which is primarily concerned with providing a treatment of the calculus making use of infinitesimals, the bulk of the former work is either concerned with the rate of growth of real-valued functions or with geometry and the concepts of number and of magnitude, or grew out of the natural evolution of such discussions. With reference to the above, in [Ehrlich 2006] we wrote: "In this and a companion paper…we will explore the origins and development of this important body of work in the decades bracketing the turn of the twentieth century as well as the reaction of the mathematical community thereto. Besides helping to fill an important gap in the historical record, it is our hope that these papers will collectively contribute to exposing and correcting the misconceptions regarding non-Archimedean mathematics alluded to above and to shedding light on the mathematical, philosophical and historical roots thereof."
In [Ehrlich 2006], we provided a philosophically sensitive, in-depth historical account of the theory of non-Archimedean systems of magnitudes in the years prior to the development of non-Archimedean geometry (1870-1891), and in the companion paper, which covers the period 1891-1914, we provide an analogous account of the development of pantachies of real functions, non-Archimedean geometries, and non-Archimedean systems of finite, infinite and infinitesimal numbers that were introduced for the analytic representation of these algebraic and geometric structures. It is the author's hope that by drawing attention to this remarkable body of work and to the spectrum of theories of the infinite and the infinitesimal that emerged therefrom, it will become clear that the standard 20th-century histories and philosophies of the actual infinite and the infinitesimal, which are motivated largely by Cantor's theory of the infinite and by non-standard analysis (as well as by the more recent work in smooth infinitesimal analysis), are not only limited in scope but are inspired by an account of late 19th- and early 20th-century mathematics that is as mathematically myopic as it is historically flawed. The proposed talk will provide an overview of the above work. Ehrlich, Philip: 2006, "The Rise of non-Archimedean Mathematics and the Roots of a Misconception I: The Emergence of non-Archimedean Systems of Magnitudes," Archive for History of Exact Sciences 60, pp. 1-121.
This presentation will consider whether Carnap’s philosophical programme of explication is threatened by what many theorists consider a misadventure late in his career, namely, his forays into the ramseyfication of scientific theories. In doing so it seeks (i) to highlight one (under-discussed) aspect of the long-running debate about the propriety of employing the distinction between analytic and synthetic statements, (ii) to distinguish Carnap’s use of Ramsey sentences for expressing the content of the non-observational parts of scientific theories from that of structural realists, and (iii) to present a qualified defense of Carnap’s explicationist programme. The problem at issue is the following. The impossibility of formulating the analytic/synthetic distinction for theoretical statements prompted Carnap to look beyond the arguably defensible criterion of empirical significance he published in 1956. Using Ramsey’s method of replacing descriptive theoretical terms by variables bound by higher-order quantifiers, Carnap, in publications dating from 1958 to 1966, claimed to be able to give a characterization of the cognitive content of theoretical terms so as to distinguish synthetic and analytic statements concerning them. Now according to Newman’s objection—well-known by now but not then—a ramseyfied theory is trivially satisfied once the empirical constraints set down by its observational part are met. This appears to speak not only against structural realists but also against Carnap’s avowed intention to use ramseyfication to exhibit the cognitive content of theoretical terms and to reestablish the analytic/synthetic distinction for theoretical statements. The question arises how much damage ensues for Carnap’s explicationist programme. First it is to be considered whether the Newman objection does, after all, tell against Carnap, and at what cost a stance of staunch resistance would come. Then the strategy of abandoning ramseyfications will be considered.
Two ways of doing so appear to remain open. The first way envisages dropping the wide analytic/synthetic distinction for theoretical languages but retaining the narrow distinction between logical and descriptive terms within them (as Carnap was prepared to do before he hit upon ramseyfication). Accordingly Carnap could still distinguish, in a given logico-linguistic framework, a logical truth from an empirical truth and so retain analyticity in its narrowest sense. Here the question arises whether the bare distinction between logical and descriptive terms can still sustain enough of the explicationist programme. The second way would be that of building on Carnap’s early claim that theoretical terms can be given a direct interpretation and regarding as analytic those theoretical statements that follow from the logical and semantical rules of the language in question. Here the question arises what to make of Carnap’s apparent later dismissal of this idea of providing direct interpretations for theoretical terms. A comparative assessment of these different ways of defending Carnap’s explicationist position will conclude the presentation.
Developments arising from Michael Friedman’s critical appraisal of Carnap’s project in The Logical Syntax of Language fall into three main intertwined themes. The first is logical, and has to do with the possibility of formulating the appropriate syntactic definitions without recourse to stronger metalanguages. The second is epistemological, in that the project is understood within a philosophy of empirical science. The third, more directly historical, stems from a reconstruction of Carnap’s early work in logic and his relation with Gödel. The present paper is about the second theme, and further elaborates on previous interpretations which highlight the fact that Carnap’s Logical Syntax provides a foundational framework for the ‘total language of empirical science’, and not only for controversies within the philosophy of mathematics. Indeed, as Friedman has argued, Carnap is concerned with the Kantian question of how mathematics, both pure and applied, is possible. Consequently, his treatment of pure mathematics is better understood as an account of the mathematical structure of the languages of natural science. I want to stress that the philosophical significance of Carnap’s project survives even granting Friedman’s Gödelian objections. As I will argue, I see this latter point present in Friedman’s own work. The system of Logical Syntax uncovers the relations between languages that are displayed at different levels of conceptual resolution within the total system of mathematized, and mathematizable, science. For Carnap’s system has to allow for a structure rich enough to represent the mathematics of natural science up to physical laws. I suggest some consequences of this for our understanding of Carnap’s Principle of Tolerance and his conception of empiricism. The Principle of Tolerance sets standards of conceptual clarity for alternative systems, in view of the total language of science.
Only thus can the role of mathematization in producing conceptual clarification in empirical knowledge be fully appreciated. Carnap’s liberal attitude towards richer formal languages stems from his recognition that scientific theories are systems of concepts formulated at different levels of mathematical complexity. Logical syntax has to allow for a conceptual reconstruction of empirical science as a mathematical system. The adoption of a system of classical mathematics, for instance, is justified in terms of the possibility of explicating mathematized natural science. Tolerance allows for a comparison of the mathematical structure of empirical languages. In light of this, I will argue that Carnap’s conception of scientific empiricism as a ‘movement’ comprising different converging groups can be understood as an expression of a basic tolerant attitude which places demands for syntactic and methodological clarity.
The principal way that Carnap spoke of logic was as an instrument, a technology. This suggests that metalogic was for him a means of testing logical instruments for their usability as conceptual technologies. Indeed, he often used language like “safety” in discussing matters such as the possible inconsistency of a formal language. There is a branch of physics that tests instruments for usefulness, precision, limitations, etc. It is called—and was called in Carnap’s time—metrology, and the Germans sometimes used the term “Instrumentenkunde.” This brief paper outlines an interpretation of Carnap’s philosophy of science that takes Carnap at his word on these matters and presents metalogic as metrology. I gesture at Carnap’s own history in communication technology, both during the war, when he worked on telephony, and as an advocate of languages like Esperanto. I argue that this reading can avoid many of the problems Carnap allegedly had—the analytic/synthetic distinction is not a dogma of empiricism but a presupposition of conceptual metrology, for example—but also raises other problems. I will argue that Carnap, despite his own vision of philosophy as technology, had an impoverished (and uncritically prosthetic) vision of technology.
I analyze the historical development of Reichenbach’s ideas on causality and probability from his Ph.D. thesis in 1915 until the mid 1930s. This period is characterized, for one, by early versions of conceptions for which Reichenbach would become well-known in the 1950s, among them causal forks and the common cause principle. More interestingly, however, one witnesses a complicated interaction between two basic principles, causality and probability inference, which changed their mutual rank and epistemological status between a priori and a posteriori in basically three steps.
(1) Already in his Ph.D. thesis, Reichenbach developed two ideas that would remain pivotal to his early philosophy. First, there existed no fundamental difference between the theory of error presupposed by any measuring science and the probabilistic theories of physics. This implied that strictly causal and statistical laws were understood as lawful within the same conceptual framework. In his 1944 Philosophic Foundations of Quantum Mechanics, however, Reichenbach abandoned this program and stressed the peculiarities of quantum mechanics. Second, the principle of causality, to become at all applicable to the description of physical phenomena, must be supplemented with a second principle, then called the principle of lawful distribution or the principle of the continuous probability function, which later appeared under the name of inductive simplicity or probability inference. Until 1920, Reichenbach considered both principles as synthetic a priori. In contrast to the categories of space and time, he did not historically relativize causality. Still, the departure from Kant’s original doctrine was substantial because, in virtue of the second principle, all physical laws, at least on the empirical level, were merely probable.
(2) In the mid-1920s, Reichenbach called causality a complex of principles, the common core of which was the inductive principle of causality. This “says that by means of a functional relationship unobserved events can be predicted from observed ones.” Its concrete form presupposes inductive simplicity, applied to the functions governing the data. Unlike descriptive simplicity, which guided the choice of a space-time geometry in relativity theory, inductive simplicity represented a hypothesis about nature. It was therefore possible that physics confronted phenomena that would compel it to abandon causality. In a subsequent paper of 1925, Reichenbach considered the division between both principles as merely formal and proposed a conception based on “the concept of probable determination alone.” By establishing a probability topology, he attempted to ground the direction of time on the microscopic causal order, an approach he would elaborate in his posthumous The Direction of Time.
(3) This microscopic definition of the direction of time prompted a polemic with Moritz Schlick that also concerned the latter’s definition of truth as unique coordination. To Reichenbach’s mind, there existed only higher or lower degrees of probability. Moreover, the coordination of a statistical theory to experience itself involved probabilistic concepts. To counter Richard von Mises’ criticism that these probabilities could not be expressed in terms of relative frequencies, Reichenbach shifted the problem to the most basic level. “Probability logic cannot be squeezed into the Procrustes bed of strict logic”. But it need not be squeezed either. For the statement that probability laws do not hold is self-contradictory because it already presupposes the principle of induction. Reichenbach believed that in this way he had finally dissolved Hume’s problem. To my mind, he in effect treated induction – in the same vein as the principle of lawful distribution a decade before – as a condition for the possibility of experience, the only difference being that no transcendental argument was available to justify it. Since he granted, on the other hand, that the principle of causality could be empirically inadequate, it appears that both principles had changed rank.
We propose to explore the conceptual and explanatory connections between the concepts of chance, mechanism, and design within the context of evolutionary biology. Once a historical perspective on this topic is adopted, one quickly notices a puzzling bifurcation in contemporary philosophy of evolutionary biology. The nature and place of teleological explanation in evolutionary biology is a central topic, as is the nature and place of chance, randomness, and accident. But these topics are often treated as if they had no connection to each other (e.g. Beatty 1984, 1990; Lennox 1992, 1993). The following lines of a letter from American botanist Asa Gray to Charles Darwin, written in the wake of the publication of Darwin’s On the Origin of Species (1859), suggest that this bifurcation has not always been present: “If I get time to turn it over I will say a few words on the last chapter of your Orchid book. But it opens up a knotty sort of question about accident or design, which one does not care to meddle with much until one can feel his way further than I can.” [Gray to Darwin, 9/22/1862] What precisely was it in the last chapter of Darwin’s The Various Contrivances by which Orchids are Fertilized by Insects (1862) that provoked this teaser from Gray? No doubt passages such as the following: “Although an organ may not have been originally formed for some special purpose, if it now serves for this end, we are justified in saying that it is specially adapted for it. On the same principle, if a man were to make a machine for some purpose, but were to use old wheels, springs, and pulleys, only slightly altered, the whole machine, with all its parts, might be said to be specially contrived for its present purpose. Thus throughout nature almost every part of each living being has probably served, in a slightly modified condition, for diverse purposes, and has acted in the living machinery of many ancient and distinct specific forms.”
(Darwin 1862, 283-4) There is an extended discussion of chance and design in their correspondence during 1862-1864, carried on within the context of their studies of adaptations in different species of plants, and of the American Civil War, over which they argued with the same vigor, wit, and intelligence. The philosophical significance of this discussion has yet to be fully recognized. For example, what is the import of Darwin’s use of terms such as ‘contrivance’ and ‘living machinery’, or the significance of invoking ‘ends’ and ‘purposes’ alongside ‘chance’ in these processes? The symposium papers investigate several philosophical aspects of this historical discussion between Gray and Darwin (and others). In addition to generating a deeper understanding of the historical context of this conversation, we will use that understanding as a basis for exploring the advantages and perils of discussing chance, randomness, teleology, purpose, and design as a single, tightly knit conceptual network in contemporary biology.
This paper reviews the core thematic elements in the correspondence between Asa Gray and Charles Darwin. We will start by reviewing the evidence that Darwin was a teleologist (presented in Lennox 1993). But what sort of a teleologist was he, and did his teleological perspective change over time? It will be argued that Darwin’s interactions with Asa Gray played a significant role in modifying his understanding of the relationship between teleological explanation and chance. For both Darwin and Gray, a discussion of these concepts and their relationships had significant religious overtones. Gray’s New England Presbyterianism shaped his understanding of Darwinism and the positions he adopted in his discussion with Darwin about chance and design. Conversely, it will be argued, Darwin’s understanding of teleology and chance changed significantly as a result of his correspondence with Gray; and with that change Darwin’s theological convictions waned.
This paper concentrates on the development of Darwin’s thinking regarding the contingency of evolutionary outcomes, which went hand-in-hand with his struggles to make sense of the theological implications of evolution by natural selection. It will be argued that Darwin’s engagement with religious issues, especially through correspondence with Gray, was productive of further developments in his evolutionary thought. He did not simply derive theological consequences from previously arrived-at evolutionary premises. Darwin came to see considerable contingency in evolution by natural selection, long before subsequent Darwinians began to stress the Mendelian-stochastic sources of unpredictability in evolutionary outcomes. And he came to this highly indeterminist point of view in the process of contemplating a theology that would have made Gray’s hair stand on end, if Gray could have fathomed it, and that would bedevil Christian compatibilists up to the present day.
This paper focuses on how Gray’s understanding of teleology and theology evolved as a consequence of his discussion with Darwin, with special attention to Gray’s later writings (post-1873) and the role of his particular religious perspective (Presbyterian Christianity, in contrast to the British Anglicanism with which Darwin was familiar). Three elements of Gray’s writing are explored: (1) the interpretive framework used by Gray in his “Structural Botany, or Organography on the basis of morphology” (1879), which is significant given his 1874 comment, “let us recognize Darwin’s great service to Natural Science in bringing back to it Teleology: so that, instead of Morphology versus Teleology, we shall have Morphology wedded to Teleology”; (2) the new essay (“Evolutionary Teleology”) written for his collection of previously published essays in Darwiniana (1876), where Gray explicitly distinguishes “purpose” from “design”; and (3) Gray’s mature perspective on natural theology in his 1880 Yale lectures entitled “Natural Science and Religion”. Gray’s correspondence and published writing demonstrate substantial conceptual change in his understanding of both natural science and natural theology, including the famous argument about variation being led along beneficial lines with which he is most associated today. In addition to addressing the historical question of how the communication with Darwin transformed Gray’s thinking about teleology (and how this illuminates the impasse between Darwin and Gray on these topics), the significance of Gray’s final perspective on natural theology and natural selection for continuing philosophical discussions about biology, theology, and design is considered.
What is Neo-Kantianism in the history and philosophy of science? Does Kantianism prescribe necessary and universal presuppositions of our knowledge? Or can Neo-Kantianism make the case for an historical a priori that changes relative to progress in science? How has Neo-Kantianism recast the relationship between theory and empirical evidence since Kant? In the mid-19th century in Germany, 50 years after Kant’s death, the empirical psychology of Ernst Weber and others was given mathematical form by Wilhelm Wundt and Johann Herbart. During the same period, Joule, Carnot, and Clapeyron discovered the conservation of energy in France and England, and Mayer and Helmholtz formulated its principle. Kant had argued that a science of psychology was impossible, and that causal laws and spatial concepts are necessary a priori, not mathematical generalizations from empirical evidence. Kantian philosophy began to evolve, owing in large part to the fact that many of the scientists who made these discoveries were careful readers and critics of Kant’s philosophy. Helmholtz knew Kant and Fichte well (in the latter case personally), and he faults Kant for his idealistic theory of space, while Herbart evaluates Kant negatively and positively in his Psychology Textbook. The encounter between Kantian philosophy and science in the mid-19th century became a movement, wissenschaftliche Erkenntnistheorie or scientific theory of cognition, which re-assessed the elements of Kantian philosophy in response to progress in empirical science. In “In What Ways Was Helmholtz Kantian?”, Gary Hatfield examines Helmholtz’s criticisms of Kantianism over the nature of space and of a general causal law. Hatfield evaluates Helmholtz’s early and late theories of causation and of the relation between visual space and physical space. The Marburg School of Neo-Kantianism tried to reconcile Kant’s work with progress in empirical science, engaging with the work of Erkenntnistheorie. 
In “Empirical Psychology and the A Priori,” Lydia Patton analyzes the reaction of the founder of the Marburg school of Neo-Kantianism, Hermann Cohen, to Herbart’s and Helmholtz’s attempts to define the a priori using empirical psychology and physiology. Patton evaluates Cohen’s response, that our access to the a priori must be historical, as a means of integrating the history and the philosophy of science. In “Empirical Psychology and Marburg School Neo-Kantianism on the Object of Psychology,” Scott Edgar argues that Wilhelm Wundt’s resolution of the problem of defining the object of psychology, a major problem for nineteenth-century psychology, is indebted to the Marburg Neo-Kantian Paul Natorp. In “Function and Symbol in Marburg Philosophy of Science,” Alan Kim evaluates the evolution of the concept of “function” in the work of Natorp and Ernst Cassirer, from Natorp’s view of functions as immanent in the exact sciences to Cassirer’s broader perspective on functions in his philosophy of symbolic forms. Kim’s talk shows how the Marburg School’s reaction to the sciences contributed to their independent philosophical views.
Helmholtz prominently invokes Kant or Kant-like positions in various of his published works, including his Kant memorial lecture (1855), the introduction to the psychological part of his Handbuch (1866), and his lecture on “The Facts in Perception” (1878). At the same time, he avowedly departed from Kant's own teachings concerning the necessity that physical space is Euclidean in structure, and in his precise characterization of the status of a general causal law. Focusing on issues of causation and the relation between visual space and physical space, and taking into account changes in Helmholtz's views over time, I will examine the various ways in which Helmholtz's positions both were and were not Kantian, both in his own estimation and in critical historical perspective. In this way, I hope to help assess whether he was part of a “first wave” of neo-Kantians.
In the 19th century, the empirical psychology of Herbart, Fechner, and Wundt seemed promising as a way of cashing out the a priori. In his Psychology Textbook (first ed. 1816), Herbart argues that the mathematical method of describing the subject’s activity can give an empirical, scientific account of the central Kantian notions of the a priori, most significantly the notions of apperception and of pure intuition. Hermann von Helmholtz made similar arguments regarding the physiology of perception, interpreting the a priori in terms of the physiological conditions of perception. The neo-Kantian Hermann Cohen concedes to Herbart and to Helmholtz that Kant’s original view, that we can give a synthetic and exhaustive proof of the ways to unify our knowledge a priori, is indefensible given progress in empirical psychology and physiology. Cohen concludes that enumerating the a priori principles of our knowledge is a progressive task of reason, revealed in the history of the mathematical method in science. Cohen argues that only a philosophical history of scientific thought can yield an epistemological basis for the a priori. I examine this 19th century dialogue in the context of current debates about the psychological versus the epistemic a priori in science. I evaluate what light the dialogue between Cohen and empirical science can shed on our account of the a priori.
Wilhelm Wundt, like other psychologists in the second half of the nineteenth century, wanted psychology to be independent of non-empirical, metaphysical questions about the nature of mind and its relation to physical bodies. To do this, Wundt proposed that psychology should be a science of “inner experience”, in contrast with the other sciences, whose domain would be “outer experience.” Psychology would describe phenomenal experience without making any assumption that phenomena represent real minds existing, as it were, behind the experience. But this created a problem: having ruled out appeals to anything beyond phenomenal experience, Wundt lost his conceptual purchase on the distinction between “inner” and “outer”. Thus he had no principled account of what psychology is about. This, I argue, is the problem of defining the object of psychology, a major problem for the conceptual foundations of nineteenth-century introspectionist psychology. This paper will present the solution to this problem offered by the Marburg School Neo-Kantian Paul Natorp, and will argue that Wundt's eventual solution owed a great deal to Natorp's. The paper will thus suggest that despite the Marburg School's strict anti-psychologism, Natorp nevertheless made significant contributions to the philosophy of psychology.
A cornerstone of both Natorp's and Cassirer's philosophies is the notion of the “functional concept” (Funktionsbegriff). I explore the meaning and scope of “function” in Natorp's Die logischen Grundlagen der exakten Wissenschaften and Cassirer's Substanzbegriff und Funktionsbegriff, both published in 1910. Because their publication came toward the end of Natorp's long career and the beginning of Cassirer's, they offer an interesting focal point for studying the evolution of the concept of function, from Natorp's concern with exact science as the paradigm for knowledge towards Cassirer's broader notion of function in all human symbolic practices.
In 2009, the Philosophy of Science Association, the foremost international association of philosophers of science, will have seen the 75th year of the publication of Philosophy of Science, and, so far as we know, the 75th year of its existence. The qualifier is required because no one knows precisely when, where, or by whom, the PSA or Philosophy of Science was founded—or, most importantly, precisely why it was founded. This symposium’s participants regard such ignorance as something of an embarrassment to philosophers of science generally and HOPOI particularly, and offer this symposium as a step toward a history of the PSA, Philosophy of Science, and of philosophy of science in the 20th century. Perhaps not surprisingly, it’s our view that attempts to develop the philosophy of science in the 21st century—as a discipline, a profession, or a cultural voice—absent an understanding of the history of the PSA and its journal are hobbled. This symposium proceeds chronologically, each participant taking up a particular time period. We don’t know who started the PSA or its journal, or to what end, but we know that it was not created ex nihilo; it must have emerged within an intellectual community with a certain vision of the philosophy of science. Alan Richardson suspects that that intellectual community had its locus in the sense of philosophy of science that E. A. Singer, the barely-studied University of Pennsylvania philosopher, had been articulating since the 1890s. Certainly Singer’s influence on the early PSA is undeniable: he is not just among the “Advisory Board” announced in the journal’s first issue, but three of his students, William Malisoff, C. West Churchman, and Richard Rudner, edited the journal successively through 1974. With the context thus set, Gary Hardcastle tackles the question of the proximal causes of the PSA—who founded it, when, and how—as well as the vision that this founding was intended to reflect. 
It’s a matter of record that Philosophy of Science began with an editor, William Malisoff, eleven members of the editorial board, and a forty-member Advisory Board. What stands out is the remarkable diversity among them: there are scientists of all stripes, mathematicians, legal theorists, political scientists, and many others. It was, as Malisoff proudly crowed in his inaugural editorial, a “coalition dominated by the unorthodox.” Hardcastle will aim to tell us exactly who put the coalition together, and what they thought philosophy of science was in 1933 such that it needed such a coalition. It’s one thing to work out what the PSA and Philosophy of Science were meant to be, and quite another to work out what they were initially and what, in time, they became. Heather Douglas takes up this latter set of questions, first articulating the inter-disciplinarity of the PSA and its journal in the late 1940s and 1950s, as we find it reflected particularly in successive PSA By-Laws, and then spelling out the exchange of this model for a professional, disciplinary philosophy of science in the early 1960s, reflected again in revised By-Laws and changing relations with, for example, the American Association for the Advancement of Science.
This talk attempts to put some structure on the work of Edgar A. Singer, Jr (1873-1955), arguably the most important founder of Philosophy of Science and, thus, the Philosophy of Science Association. Singer spent his entire career at the University of Pennsylvania, receiving an undergraduate science degree there in 1892 and a PhD in Philosophy two years later. He held positions in the Penn Philosophy Department from 1896, becoming Adam Seybart Professor of Intellectual and Moral Philosophy in 1929 and holding it until his retirement in 1944. His students and colleagues at Penn included William Malisoff, C. West Churchman, Richard Rudner, and Robert Butts, all editors of Philosophy of Science. As Gary Hatfield has pointed out, Singer lectured on philosophy of science at Penn already in the 1890s. During that period he considered writing a five volume compendium of philosophy of science—no significant portion of the proposed work was ever published. His interest in philosophy of science seems to have arisen as a result of his desire to make the human mind a subject of empirical, experimental knowledge—a concern already expressed in his 1894 thesis on “On the Composite Nature of Consciousness” and made explicit in a Journal of Philosophy essay of 1912, “Mind as an Observable Object.” The framework for Singer’s philosophy of science was a sort of Machian voluntarism—many of the themes in geometrical conventionalism made prominent by Poincaré in the first decade of the twentieth century and thematized in the Relativistic setting by the logical empiricists in the 1910s and 1920s are already sketched in Singer’s 1898 lectures in terms of the mind’s choice of the simplest “economy of thought.” The talk will try to draw out the links that connect Singer’s philosophical concerns with mind, his early philosophy of science, his programmatic “philosophy of experiment” of 1930, and his late participation in Conferences on Experimental Method at Penn in the mid-1940s. 
Attention will be given also to Singer’s influence upon scientists interested in philosophy of science—such as Malisoff—and philosophers of science who ultimately became social scientists—such as Churchman and Russell Ackoff.
The journal Philosophy of Science debuted in January of 1934, as the self-described “organ” of the Philosophy of Science Association, the first mention of which we find in the Journal of Philosophy in 1933. The initial eleven-member editorial board for Philosophy of Science includes some of the usual suspects from the philosophy of science—Rudolf Carnap, Herbert Feigl, L. Susan Stebbing, and Morris Raphael Cohen—but a number of others whose presence is somewhat surprising: for example, the mathematicians E. T. Bell, Dirk Struik, and Alexander Weinstein, the physicist-turned-legal scholar Walter Wheeler Cook, the geneticist Hermann Joseph Muller, and the psychologist Karl Lashley. Matters are much the same regarding the Journal’s forty-two member Advisory Board, in which the names of scientists and mathematicians usually dissociated from the philosophy of science outnumber those we retrospectively identify as philosophers ten-to-one. It was, as the journal’s first editor, William Malisoff, wrote in his inaugural editorial, a “coalition dominated by the unorthodox.” There is an easy and obvious explanation for the make-up of these early instantiations of the Journal’s editorial and advisory boards: philosophy of science was young, just becoming established as a field or discipline, there weren’t enough philosophers of science around, and having prominent scientists (or at least their names) on board always helps. The aim of this talk is to examine the supposition behind this explanation, namely, that the Philosophy of Science Association and its journal were created to reify and advance a particular discipline or profession, the philosophy of science. I’ll be interested particularly in laying out the history of the Association and its journal (drawing on archival sources), and in uncovering the motivation and vision of those who created it.
With a grasp on what the Association and its Journal were for, we can evaluate contemporary histories of philosophy of science in the 20th century.
Although the subject of philosophy of science had been of interest to philosophers in North America for more than a generation by the end of World War II, what the boundaries should be for that subject and who was a legitimate practitioner was far from settled. In the first by-laws for the young PSA, the purpose of the organization was stated to be “furthering of the study and discussion of the subject of philosophy of science, broadly interpreted, and the encouragement of practical consequences which may flow therefrom of benefit to scientists and philosophers in particular and to men of good will in general.” (Philosophy of Science, 1948, vol. 15, p. 176) PSA was defended as a society open to multiple approaches and definitions of philosophy of science in the pages of its journal in the late 1940s. And the practices of the organization reflected this ecumenical view. For example, the PSA met annually as part of the AAAS throughout the late 1940s and 1950s, and these meetings often involved the Institute for the Unity of Science, the American Philosophical Association, and the History of Science Society. On the governing board for the PSA and the editorial board for the journal, scientists were heavily involved in the running of the society. And the topics covered in the journal, Philosophy of Science, reflected this broad view of what counted as philosophy of science (as noted in Howard 2003). By the late 1950s, however, arguments had been voiced that philosophy of science as a subject should be more narrowly construed. The PSA became a more professional society partly as a response to a demand for more consistent quality in the journal, partly through more stringent vetting of membership applications, and partly through a marginalization of the voices calling for continued openness and interdisciplinarity.
In this talk, I will trace the activities of the PSA from the post-WWII era to the 1960s, and discuss the pressures that became evident by the late 1950s that would push the PSA to become a more inwardly focused disciplinary organization, including the change of editors in 1958 and the move to biennial meetings by 1968.
This paper discusses two passages from Galileo's Dialogue Concerning the Two Chief World Systems, first published in 1632. The passages are significant in two respects. First, they constitute the first recorded prediction of the inertial deflection of falling bodies and projectiles. Galileo's prediction essentially presumes the diurnal rotation of the earth and the conservation of rectilinear motion, both distinctly modern positions. Yet, secondly, the same passages are strangely ambiguous, since Galileo also predicts the absence of inertial deflections in the same physical circumstances, following the standard medieval presumption that rotational motion is conserved. Galileo is thus seen making different predictions about identical situations based on two conservation principles. The puzzle suggested by the passages can be resolved by examining Galileo's representations of space. In fact, Galileo's kinematic principle of conservation is always consistent: if undisturbed, a moving body will continue moving in the same direction with uniform speed. However, the representation of space by which this principle is interpreted changes depending on the context Galileo is discussing. On the one hand, he uses an Aristotelian, spherical representation of space to describe large-scale phenomena. On the other, he employs a Euclidean, rectilinear representation of space to describe small-scale spaces. Hence, Galileo's 'core' conservation principle means conservation of rotational motion in large-scale situations and conservation of rectilinear motion in the small scale. The ambiguous predictions arise when Galileo uses different representations of space to describe the same middle-scale situations. Galileo himself reconciled his representations of space by appealing to an Archimedean approximation, which allowed a smooth transition between spatial frameworks. 
This approximation, while explicitly privileging the spherical, acknowledged the practical legitimacy of assuming a rectilinear representation of space. In the end, Galileo's conception of space was neither the spherical cosmos of his forebears nor the rectilinear universe of his heirs. By examining Galileo's representations of space, this paper adopts a philosophical tradition of conceptual analysis descended from Kant. The necessity of some kind of a priori conceptual framework for observation, cognition, and understanding has been defended in turn by Reichenbach, Carnap, Friedman, and others. Extended to the physical sciences, this position entails the necessity of an a priori representation of space that coordinates the descriptions of spatial properties and relations that appear in physical discourse with actual features of physical phenomena. A representation of space is needed to interpret the extensional meaning of spatial terminology. Other scholars have shown the usefulness of investigating representations of space, especially in relation to the development of general relativity from Newtonian mechanics. Here, the approach is extended to the period before Newton, and allows us to solve an important historical puzzle related to Galileo's physical principles. At the same time, Galileo's work serves as a clear case study of the constitutive importance of representations of space, in particular, and a priori conceptual frameworks, in general, in physical reasoning. In other words, philosophy provides a profitable method of historical investigation, while the historical account vindicates the philosophical view.
Experimentation has been regarded as an essential part of scientific research at least since the seventeenth century. Opinions may have differed as to the epistemic status of knowledge gained from experiments and its inferential relation to theories, but hardly anyone would have denied that in order to gain knowledge of nature, it was important to make experiments. However, almost all experimentalists, past and present, would also agree that experimental practice is precarious and not always successful. Some even published essays on the “unsuccessfulness of experiments” (e.g. Boyle 1661). The history of methodological concerns with the vagaries of experimentation has rarely been studied. This is unfortunate because these concerns are of great interest for the historian and philosopher of scientific method and methodology. The history of these concerns reflects the development of scientists’ concepts of nature, causation, intervention, and instruments. In my contribution, I examine methodological aspects of experiments with snake poison in the seventeenth and eighteenth centuries. A large number of people were involved in these endeavors, renowned experimental philosophers like Robert Boyle as well as doctors and apothecaries now long forgotten. They performed vivisections, dissections, and in-vitro experiments with snake poison. One motivation for undertaking such investigations was practical and straightforward: finding an antidote for snake poisoning. But these experiments had far-reaching implications. They were expected to shed light on essential phenomena of life, including the nature of blood and its circulation, the function of nerves, and the mechanism of disease. Reports of these experiments are a treasure trove for the historian and philosopher of experimental methodologies because they are full of remarkable reflections about what may go wrong in an experiment and about how to make it work well.
I show that in the late seventeenth and eighteenth centuries, the most conspicuous methodological notion was repetition, the repeated performance of an experimental trial by one and the same experimenter. But repetition came in very different varieties and fulfilled quite different purposes. Moreover, the meaning of repetition changed fundamentally during this period. My paper traces this change and considers the reasons for it.
The questions of meaning and its connections to scientific explanation were clearly on the mind of Robert Boyle when writing his polemical works. Indeed, in his criticisms of the Scholastics in, inter alia, The Origine of Formes and Qualities and the Excellency and Grounds of the Mechanical Hypothesis, Boyle argues that the advantage of the mechanical natural philosophy is that it employs meaningful terms, whereas Scholastic science does not. While this criticism is well known, just what Boyle meant by “intelligible” is not. One of the reasons for this is that his views on the matter changed over his career. Early on, Boyle thought that the intelligibility of the mechanical hypothesis partially accounted for its truth and universal scope. Near the end of his career, however, he was less sanguine about the truth and scope of the mechanical hypothesis, but his claims for its intelligibility and superiority to Scholastic explanations remained intact. In this paper I will lay out an interpretation of intelligibility and how it develops in Boyle’s works that explains why he consistently thought mechanical explanations are both meaningful and better than the Scholastic explanations, even though he changed his mind about the scope and limits of mechanical explanations. My interpretation of Boyle differs substantially from the developmental account in William Eaton’s monograph Boyle on Fire (New York: Continuum Press, 2005), and is also distinct from those of Peter Anstey, Marie Boas Hall and Rose-Mary Sargent. My account will also clearly illustrate the connections between intelligibility and the scope and limits of mechanical explanations in a way that will show why important philosophers like Locke and Leibniz should be interested in Boyle’s philosophical work.
Newton claimed that he did not make the assumption in the Principia that gravity acts unmediated between two bodies, a claim he needed to make lest he be open to criticism from contemporaries such as Huygens who believed that gravitational attraction had to be mediated by an ether. In the Metaphysical Foundations of Natural Science, Kant accuses Newton of having been “at variance with himself,” having presupposed the immediacy of gravity. According to Michael Friedman’s interpretation of Kant’s criticism, Newton had to presuppose that the third law of motion applied directly between the planets in order to characterize a privileged frame of reference for the solar system. And what Newton called the ‘true motions’ of the planets are actually defined relative to this frame of reference, so Newton’s stated aim in the Principia of finding these true motions could only be carried out by assuming that the gravitational attraction is not mediated, thus ruling out an ether a priori. I consider this criticism from Newton’s point of view, in light of recent work by George Smith. Newton himself thought of the methodology of the Principia as a new way of ‘arguing more securely’ in the face of the intractability of natural phenomena, having realized that the planetary motions could be so complex that one could not hope to achieve exact agreement between theory and observation. Newton thus made sure that the argument in the Principia works even if there is only an approximate match between the deduced phenomena and observation. Thus, even if gravity was in fact mediated by an ether, as long as the third law of motion could be applied approximately, one could still find an approximation to a privileged frame of reference for the solar system. Further, Newton realized that the complexity of the actual motions of the planets could become a very strong source of evidence for a theory. 
According to Smith, a key feature of Newton’s methodology in the Principia is that it involves a sequence of progressively more complex idealizations, in which a match between theory and evidence is hard to come by, thus providing very high quality evidence if such a match is achieved. But certain assumptions must be made in order first to create this evidence, and one of these assumptions is that gravity is an unmediated force that works directly between the planets. I argue that Newton is not being inconsistent, however, in leaving open the possibility that gravity could be mediated by an ether, for the method is designed so that if gravity does not, in fact, act immediately between the planets, this would turn up eventually in the sequence of successive idealizations.
In 1835 two well-known mathematicians -- Sir William Rowan Hamilton and Augustus De Morgan -- produced original and little-known remarks about the foundations of algebra. De Morgan published a review of Peacock's Treatise on Algebra, defending "symbolical algebra" from a modern formalist perspective; and Hamilton wrote introductory remarks to a memoir (read in 1833 and 1835; published in 1837), defending a Kantian, intuitionist foundation for symbolical algebra. The two men were friends, and they defended the same mathematics; but their defenses could hardly have been more different. Here I will explicate the two conceptions, showing that they are interesting forerunners of twentieth-century formalism and intuitionism. In addition, I will argue that, like the twentieth-century versions, each defense of algebra has significant merit.
Reflecting on the relationship between geometry and experience at the end of the 19th and the early years of the 20th century, Michael Friedman has recently remarked that “it is especially remarkable, in particular, how seldom a straightforwardly empiricist understanding of the relationship between geometry and experience -- according to which geometry is an empirical theory like any other whose validity is straightforwardly verified or falsified by experience -- was represented” (Friedman 1995, 127). Such an empiricist view is presented in Moritz Pasch’s Vorlesungen über neuere Geometrie (1882), which contains the first axiomatization of projective geometry (considered the most general geometry, from which Euclidean and non-Euclidean geometry could be obtained). When Hans Hahn reviewed the second edition of Pasch’s lectures in 1921, he noted their remarkable historical influence and that they were well known to everybody with an interest in geometry (e.g., they are taken into consideration in Carnap’s dissertation Der Raum, 1922). Moreover, they were regarded as important enough to be reprinted in a revised edition in 1926, 44 years after their original publication, but no English translation was ever made. Nowadays Pasch is remembered -- if at all -- only for his axiomatization of the order relation and ‘Pasch’s axiom’, i.e., as a precursor of Hilbert’s Foundations of Geometry (1899), and his empiricism is either brushed aside completely or considered inappropriate for a mathematical work. What makes Pasch’s lectures on the foundations of projective geometry particularly interesting from a history of philosophy of science perspective is that he held two views that later became cornerstones of logical empiricism, namely that our knowledge rests (a) on empirical origins and (b) on purely logical deductions. For Pasch, geometry is a natural science, whose axioms (“Stammsätze”) are based on immediate observations and justified by our most basic experiences.
Thus, geometric terms originally refer to physical bodies; for example, a ‘point’ is a body which cannot be divided within the limits of our observations, and ‘lines’ are finite segments of points, since these are observable, while infinitely extended lines, as they are usually conceived, are not. Regarding the nature of the inferences, Pasch explicitly states that they must “always be independent of the meanings of the geometric terms” (1882, 98), thus adopting a rather modern view that became generally accepted only later. In this paper, Pasch’s axiomatic approach to geometry and his views on the nature of geometry are critically discussed, with particular focus on the tension between his strong empiricism, aimed at providing an epistemological basis for geometry, and his goal of providing a rigorous mathematical foundation for it. In addition, the influence of this work on further developments is assessed.
Howard (2004) has recently criticized the general attacks against the “Copenhagen interpretation” (e.g. Cushing (1994), Beller (1999)) on the ground that the members of the “Copenhagen school” – Bohr, Heisenberg, Rosenfeld, etc. – never shared a common interpretation of quantum mechanics. While these physicists saw quantum mechanics as a complete theory unearthing the strange behaviour of particles through its notions of ‘indeterminacy,’ ‘complementarity,’ and ‘entanglement,’ they did not all believe that quantum phenomena should be explained through the ‘wave packet collapses’, ‘subjectivism’, and ‘positivism’ nowadays viewed as core elements of the Copenhagen interpretation. According to Howard, while Heisenberg advocated such interpretive tactics, Bohr himself forcefully rejected them. The current belief that Bohr endorsed Heisenberg’s views is, according to Howard, only a “myth” created by Heisenberg publicly christening his own position the Copenhagen interpretation. I here contend that Heisenberg is not to blame for our belief that Bohr and his colleagues shared the view we now label the “Copenhagen interpretation”. True, Heisenberg did introduce this expression in a 1955 paper defending his interpretation of quantum mechanics, but he did so when the belief in the existence of a unitary, dominant view already pervaded the scientific literature. In the early 1950s, researchers commonly referred to the “usual interpretation” of quantum mechanics (e.g., Bohm, 1952) and the “official philosophy of quantum theory” (e.g., Bunge, 1955). One could argue that this early association between Bohr’s and Heisenberg’s interpretations made it especially important for Heisenberg to distinguish between the two views, but this would be a premature conclusion. 
Even if Heisenberg knew his and Bohr’s positions differed (and we lack the historical evidence to prove that he did), his decision to group the two views together would be a legitimate way to shed light on the interpretational debate, as long as his view shared significant interpretational principles with Bohr’s. This argumentative technique, after all, is a standard and useful way to clarify scientific issues (though a sometimes frustrating one for historians and philosophers). The central question is therefore whether Heisenberg really defended, as Howard claims, the positivism and subjectivism nowadays associated with the “standard” interpretation but inconsistent with Bohr’s standpoint. I do not believe he did. Using Heisenberg’s early lectures on quantum mechanics, I argue contra Howard that, like Bohr, Heisenberg did not believe wave packet collapses represented actual physical processes induced by the observer, an idea that would have led him directly to subjectivism. On the contrary, similarly to Bohr, Heisenberg saw the solution to the measurement problem as passing through the contextualisation of observations and an analysis of the entanglement between quantum systems and measuring apparatuses, an approach closely associated with his anti-positivist metaphysics. I conclude that while today our use of the phrase “Copenhagen interpretation” is, as Howard claims, close to that of a myth, physicists legitimately used the expression in the 1950s, especially given that the view it truly referred to -- Heisenberg’s -- was closer to Bohr’s than usually recognized.
Central to Niels Bohr’s philosophy of science is his well-known “correspondence principle.” This principle is typically believed to be the requirement that in the limit of large quantum numbers (n → ∞) there is a statistical agreement between the quantum and classical frequencies. A closer reading of Bohr’s writings on the correspondence principle, however, reveals that this interpretation is mistaken. Specifically, Bohr makes the following three puzzling claims about the correspondence principle, which show this traditional interpretation to be problematic. First, Bohr claims that the correspondence principle applies to small quantum numbers as well as large (while the statistical agreement of frequencies holds only for large n). Second, he claims that the correspondence principle is a law of quantum theory; and third, Bohr argues that the formal apparatus of matrix mechanics (the new quantum theory) can be thought of as a precise formulation of the correspondence principle. With further textual evidence, I offer an alternative interpretation of the correspondence principle in terms of what I call Bohr’s selection rule: a quantum transition between stationary states n and n’ whose separation is the number t is allowed if and only if there is a t-th harmonic in the Fourier series expansion of the electron’s classical motion in the stationary state. In other words, the correspondence principle is the claim that there is a one-to-one correspondence between the harmonic components of the classical motion and the possible transitions between stationary states—not a claim about the frequency of the radiation given off in those transitions. I conclude by showing how this new interpretation of the correspondence principle readily makes sense of Bohr’s three puzzling claims.
This paper analyzes the impact of underdetermination on the philosophy of science. I wish to expound a constructive role for Duhem-Quine underdetermination, which serves to bring to light the non-empirical epistemic commitments that prevail in the scientific community. Scientists faced with empirically equivalent alternatives cannot make their choice by invoking empirical adequacy as a yardstick. Yet the observation is that scientists do prefer one account under such conditions. The non-empirical virtues operating in science are laid open in the choice between Duhem-Quine alternatives. First, I go through phases of the career of the notion of underdetermination in 20th-century philosophy of science. I then present some of the strategies intended to buttress underdetermination and make it more concrete, and outline some of the more far-reaching claims tied to underdetermination. The first part of the historical survey concerns the articulation of the underdetermination thesis; the second part has to do with exploring its consequences. Underdetermination does not open the floodgates to relativism but rather plays a positive and fruitful role in epistemology by pinpointing the impact of non-empirical virtues on theory choice. Rather than being a threat to scientific rationality, it contributes to illuminating what our understanding of scientific knowledge and scientific rationality is.
For the ancient author of the Mechanica, mechanical problems are "neither entirely the same as natural problems nor entirely separate, but share in both mathematical and natural studies.” What exactly is shared and what constraints must be placed on this sharing are questions that have vexed thinkers in the Aristotelian tradition from antiquity to the early-modern period. In this symposium, we examine some ways of articulating this ‘sharing’, the constraints put on it, and how these matters bear on questions in the larger philosophical landscape. In particular, we are interested in how the curious status of mechanics -- as a discipline -- and of mechanical demonstrations -- as instances of valid reasoning -- was justified by, and in turn impacted, Aristotelian (and sometimes non-Aristotelian) ideas concerning abstraction and separability, the distinction between art and nature, theories of demonstration, and the structure of the sciences.
The question of the philosophical impact of the mechanical theories of late antiquity has received little systematic attention, as have the reasons why ancient philosophers were unreceptive to those theories. Aristotelian commentators debated the validity of several assumptions prevalent in the ancient mechanics tradition, including the idea that the relationship between the different parameters involved in causing motion can be extended indefinitely. Simplicius rejects Archimedes' claim to extend indefinitely the power of a mechanical device because of a commitment to the idea of minima in nature, not from any systematic distinction between art and nature, as is sometimes held.
The theory of demonstration set out by Aristotle in the Posterior Analytics dictated that some sciences were ‘subalternated’ to other, ‘higher’ sciences. These subalternated sciences borrowed principles from the higher, subalternating sciences and applied them in their own domain, thus crossing, or at least straddling, disciplinary boundaries. Although the exemplars for this sort of disciplinary straddling were traditionally taken from those natural sciences that seemed to borrow from mathematics (i.e., astronomy, optics, harmonics, and mechanics), several early-modern commentators asked whether all the natural sciences (mathematical or not) were subalternated to metaphysics. Although answers to this question differed, in this paper I argue that conceptualizing the relationship of metaphysics to natural philosophy in this way offers a way of understanding the relationship of the disciplines in Descartes’ Principles. Descartes was well aware of the philosophical background to mixed mathematics, and there is reason to believe that he had encountered discussions regarding the subalternating role of metaphysics shortly before composing the Principles. I offer this evidence and reconstruct the foundational role of Cartesian metaphysics in relation to natural philosophy in terms of subalternation.
In this paper I examine Giovanni de Guevara’s 1627 commentary on the Quaestiones Mechanicae in relation to the scientific method Descartes employs in the Meteorology he published with his Discourse on the Method in 1637. Historical evidence reveals that Descartes came into contact with Guevara’s work during his years in Paris through his discussions with Marin Mersenne and other members of his circle. I show that Guevara’s attempt to reconcile the kinds of demonstrations employed in Aristotelian mechanics with the theory of demonstration Aristotle lays out in the Posterior Analytics leads to a reconceptualization of both the object and proofs of this mixed mathematical science. While squarely within the Aristotelian tradition, Guevara’s conciliatory efforts point in the direction of early modern approaches to scientific demonstration. In particular, they allow us to make sense of the form of demonstration Descartes employs in his Meteorology, which he labels ‘mathematical’ despite its apparent lack of features that we associate with mathematical proofs.
How should we place Guidobaldo del Monte in the changing landscape of late sixteenth- and early seventeenth-century knowledge? At once a faithful Aristotelian and an early patron of Galileo, Guidobaldo seems to defy some naïve conceptions about the nature of the so-called scientific revolution. As is well known, one of the ways in which the mathematician Guidobaldo can be considered to have been a faithful Aristotelian is exactly that he is almost completely silent on philosophical issues, thus respecting the disciplinary boundaries which had become deeply engrained in the field of knowledge. But this leaves us with little to go on if we want to ascertain how he would have understood his own endeavours, and possibly what connected or separated them from those of his younger friend, Galileo. In my paper I will focus on Guidobaldo’s detailed exposition of the Archimedean Equilibrium of Plane Figures to fill in part of this gap. I will especially try to ascertain how Guidobaldo’s commentary implicitly positions the science of mechanics with respect to the conditions for legitimate sciences that were elaborated in scholastic philosophy. It will turn out that Guidobaldo’s sophisticated understanding of the Archimedean proof of the law of the lever brings him to a view on the conditions for a successful mathematization of the balance which actually transgresses important peripatetic boundaries and in doing so prepares the way for Galileo’s new philosophy of nature.
A dominant historiography employing the dualism of analytic and continental philosophy (of science) ignores the concrete historical development since the beginning of the 20th century, with its interactions between philosophers, philosophers of science, and scientists. One striking counterexample is the significance of Neokantian philosophy for the emergence of Logical Empiricism, and vice versa. Another exception is the convergent development and communication between Central European “Wissenschaftslogik” and North American pragmatism, from the Mach-James interaction, via the Logical Empiricism of the Vienna Circle and the Berlin School, up to the Unity of Science movement from the 1930s on (with the participation of John Dewey, Charles Morris, Rudolf Carnap, and Otto Neurath). The papers of the double symposium will explore the mutual influences and the neglected scientific communication between these “schools” and two continents from a historical and/or theoretical point of view: dealing with Bertrand Russell’s impact on Neokantianism (esp. on Cassirer); focussing on Carnap’s early contacts with pragmatist philosophers (Quine, Morris, Nagel), his elaboration of inductive logic, and his late visits to Europe from 1964 on (rereading Dilthey and Heidegger); the Austrian College/Forum Alpbach as an institution for pluralist philosophical encounter after World War II (with Rudolf Carnap and Herbert Feigl vs. Ernst Bloch, Philipp Frank, Karl Popper, Wolfgang Stegmüller, and Paul Feyerabend); leading up to the 14th International Congress of Philosophy, held in the context of a (new) Cold War in 1968. This event marked the emergence of an already transformed (analytic) philosophy of science, the result of the delayed intellectual return of Logical Empiricists and Popperians to their former home countries, Austria and Germany, before the historical and pragmatic turn.
The emigration of Rudolf Carnap to the United States at the end of 1935, and with it the change of his cultural and philosophical environment, brought him into closer contact with pragmatist philosophers, who had played only a marginal role in his earlier European writings. A dialogue between Carnap and philosophers close to pragmatism had already been initiated in Prague (with Quine, Charles Morris, and Ernest Nagel), and in America Carnap was explicitly influenced by pragmatist positions and responded to pragmatist criticisms of logical empiricism. It is the purpose of this paper to show how Carnap reacted to this interaction with pragmatism, especially concerning his response in Testability and Meaning (1936/37) to pragmatist criticisms of verificationism. From 1934 on, Carnap took more and more into consideration the criticisms of verificationism that were formulated by American philosophers who, through the mediation of Herbert Feigl, discussed the positions of logical empiricism. Of special interest here are the criticisms by Clarence I. Lewis, Ernest Nagel, Charles Morris, and Curt Ducasse, to which Carnap responds in Testability and Meaning. Between 1934 and 1936 all these philosophers analyzed the relation in logical positivism between meaning and verification, with special reference to the danger of formulating too restrictive a criterion of meaning. It is in Carnap's early position on verification that they saw such a too restrictive criterion. During the debates on protocol sentences and on physicalism, Carnap abandoned his earlier view that all sentences can be reduced to propositions about an observational basis and can therefore be definitively verified. Although the Viennese debates are essential to understanding and explaining this shift, it took Carnap until Testability and Meaning to formulate “a more liberal criterion of significance” (Carnap, 1963).
The paper will show the role that pragmatism and the criticisms of these American philosophers played in Carnap's passage from verification to a theory of confirmation, in his passage from the requirement of definability of all concepts in an observational language to a new form of reduction set out in his reduction sentences and chains, and in his acceptance of the dispositional concepts introduced and permitted by these reduction sentences.
At first glance, in the classic program of Logical Empiricism there is no place for induction. Though Reichenbach presented an account of inductive logic already in the early 1930s, in the mainstream of the logical empiricist movement the attitude towards induction was either skeptical (Neurath, Carnap) or radically negative (Popper). Thus it is quite surprising that in the early 1940s Carnap started developing his own account of induction, a project that would occupy him for the rest of his life. In my talk I shall argue that Carnap’s program of an inductive logic is not only compatible with Logical Empiricism but is also a crucial improvement of it. Roughly speaking, without induction the rational reconstruction of the sciences is necessarily incomplete. In a sense, scientists who consider only the classic hypothetico-deductive method of the Logical Empiricists are simply not aware of this background. I shall describe the roots of such a position as we find them in the earlier development of Logical Empiricism, especially in the years before 1941.
Only one week after the violent suppression of the Czech reform movement by the troops of the Warsaw Treaty states, the large XIV International Congress of Philosophy, with some 3000 participants and around 1000 speakers, took place at the University of Vienna, 2-9 September 1968. The context was a new wave of the Cold War as well as the path-breaking year 1968, with its worldwide rebellion of students, intellectuals, and workers against the established political and academic elites and against authoritarian and martial policy in East (Communism, Marxism-Leninism) and West (anti-Vietnam and anti-capitalist movements). Although the printed program addressed the problem of cultural freedom and political liberty only in general terms, heated disputes, controversies, and protests on the function and role of philosophy occurred during the Congress, at the edge of a decisive world-political event. It was at the same time the first critique of the war generation and the scientific community by students in Europe and America. These unexpected circumstances irritated and strongly influenced the traditional academic setting of the Congress as projected by the Viennese organizers, and provoked dialogue, but also a great deal of confrontation, between philosophers from all over the world (such as Ernst Bloch, F.W. Konstantinow, Gabriel Marcel, and Adam Schaff, whereas G. Lukacs and B. Russell declined). In addition, the participation of renowned and younger analytic philosophers and philosophers of science (A.J. Ayer, Y. Bar-Hillel, M. Black, M. Bunge, R. Chisholm, J. Hintikka, T. Kotarbinski, A. Naess, W.V.O. Quine, K. Popper, N. Rescher, G. Ryle, G.H. von Wright, amongst others) marks a starting point for the emergence of philosophy of science in Central Europe from the 1970s on and, incidentally, for the convergence of “analytic” and “continental” philosophy.
This process will be analysed and interpreted with reference to the Proceedings (5 volumes) of the Congress and to unpublished sources as well as press releases and newspaper articles. A special focus is placed on the contributions of philosophers of science and their subsequent comments (e.g., Max Black, Bela Juhos, Karl Popper, and Hans Lenk) as a reference frame for the late return and emergence of history and philosophy of science in the German-speaking European countries after the banishment of the Vienna Circle and the short renaissance of the “Third Vienna Circle” around Viktor Kraft in the 1950s (cf. my contribution to HOPOS 2006). A continuing divergence between (school) philosophy and (analytic) philosophy of science without a socio-cultural framing can be traced back to this so far neglected Congress, a forum and indicator for the cultural meaning of (scientific) philosophy and philosophy of science.
Philosophy of biology as we know it now, represented by its graduate schools, its chairs, its academic societies like ISHPSSB, and journals like Biology and Philosophy, emerged in the 1960s. Over four decades philosophers of biology have handled various topics, among which the major issues relate to evolution and genetics: adaptationism, units of selection, the concept of species, the reduction of classical genetics to molecular biology, and lately the integration of development within evolutionary theory. The importance of evolutionary theory in this field indicates that understanding modern evolutionary theory might be of relevance to the foundations of the discipline. It is indeed proper to the philosophy of biology that its main issues and theories have been devised by philosophers as well as by biologists interested in understanding the foundations of evolutionary theory, like Mayr, Lewontin, Williams, Dawkins, or Gould. Our hypothesis is that the various ways biologists themselves made sense of the emergence of neo-Darwinism as a unifying framework for biology widely impinged upon the constitution of philosophy of biology as a distinctive academic discipline. The Modern Synthesis was initiated by the population geneticists Fisher, Haldane, and Wright, and pursued by Ernst Mayr, G.G. Simpson, E. Ford, and T. Dobzhansky, among others. This synthesis combined Darwin’s theory of evolution by natural selection with Mendelian genetics, understood as the sole theory of inheritance able to provide concepts of heredity and variation on which natural selection could rely. Genes were identified as the substrate of heredity, and evolution was precisely defined as the change of allelic frequencies in populations -- rather than the transformation of the form of organisms, as Darwin had considered it.
The nascent philosophy of biology inherited the specific neo-Darwinian interpretation of evolutionary theory, largely written by some of the Synthesis contributors like Mayr or Simpson, and elaborated on it – for example, Hull and Ghiselin’s thesis on the individuality of species is supposed best to account for Mayr’s and Dobzhansky’s work on speciation. How did important concepts such as the biological species or population thinking, used to characterize the epistemological specificity of the Modern Synthesis, contribute to setting the context for philosophy of biology? On the other hand, recent trends developed by today’s evolutionary biologists consider new theoretical options, such as the specificity of development and a rehabilitation of non-orthodox – possibly “typological”, as suggested by Amundson (2005) – approaches, a reevaluation of the status of organisms as a meaningful causal level in evolutionary explanations (e.g. Odling-Smee, Laland and Feldman 2003; West-Eberhard 2003), etc. Philosophers of biology who address those issues may therefore not share the classical neo-Darwinian framework within which their discipline emerged. This suggests that such a reorientation of philosophical questioning might benefit from a reexamination of the classical readings of the Modern Synthesis and their role in the constitution of philosophy of biology as we know it. This workshop aims at understanding the close ties between the constitution of philosophy of biology as an academic field and the self-understanding of the Modern Synthesis by the evolutionary biologists involved in it.
By 1985 it was clear that the major advocates of the Evolutionary Synthesis believed that developmental biology was irrelevant to the mechanisms of evolution. Methodological arguments against the relevance of development were clearly stated by mainstream Synthesis evolutionists. Advocates of developmental constraint were regarded as iconoclastic opponents of mainstream views. In later years, historians and philosophers on the developmentalist side of the debate (including the present author) asserted that the bias against development had been present from the very inception of the Evolutionary Synthesis. There is a case to be made, however, that the constraint-versus-adaptation debate was (during the 1980s) an artifact of recent historical work, not of the science of the 1930s. Indeed, it may have been an artifact of the very historical reconstructions that produced Mayr and Provine's canonical The Evolutionary Synthesis (Mayr and Provine 1980). This paper will attempt to isolate the evidence on both sides. What is the evidence that (a) the Synthesis was biased from its inception against the relevance of embryological development, versus (b) the debates blossoming in the 1980s were generated by the methodological reconstructions of later years? Was "constraint versus adaptation" an implicit tension within the Synthesis itself, or an innovation of the Vietnam generation?
A persistent divergence exists between American and British conceptions of the modern evolutionary synthesis (Sterelny 2001; Depew and Weber 1995; Grene and Depew 2004; the difference is not about how many partisans were born on one or the other side of the Atlantic). The American tendency (Dobzhansky-Mayr) is organism-centered, focused on speciation, and was protective of an autonomous role for culture in human evolution; the British (Fisher-Ford) is genocentric, focused on adaptation, and was quick to extend the sway of genes to human evolution (Sterelny 2001, 4-5). In this paper, I review evidence for and against a stronger claim: that there have been two rival, if successive, syntheses, not two tendencies within a single synthesis. The implication is that historians and philosophers, who most of the time amplify one of the traditions, should question the very notion of a single synthesis open to two interpretations. I begin by reviewing lines of argument against my claim. The most important is that the synthesis unified biology by moving from a developmental to a populational conception of adaptation, speciation, and other phenomena. This may be true, but not true enough. The exclusion of development was conducted largely by means of name-calling about “essentialism,” “vitalism,” and “Lamarckism.” My main point, however, is that the American synthesis was forged only in part when Julian Huxley brought the Columbia/Museum of Natural History school to Britain in the 1942 book whose title named the modern synthesis; the penny dropped when Huxley’s framing allowed the empirical work of Ford and others in turn to bring a stronger adaptationism to America. It was only with the consolidation of the revolution in molecular genetics that the Fisher-Ford legacy was able to frame a synthetic theory of its own.
Americans reacted negatively; Gould construed the earlier ‘hardening of the synthesis’ that I have treated as integral to the American synthesis as an imposition on the native tradition. Nonetheless, in spite of retrogression on the issue of speciation, game-theoretical genocentrism has proved able to put the American synthesis into eclipse. It is today’s Darwinism. Differences between the two syntheses appear in matters of social and human evolution. The American synthesis incorporated these topics by bringing anthropology, with its culture concept, under its sway (in the work of Washburn and Tax). The British tradition, after inconclusively flirting with sociology, is built on cognitive psychology, with its functional-modular conception of the brain. Far from being peripheral, these differences paradigmatically exemplify more general differences about what organisms are, what kinds and degrees of agency they have, what phenomena evolutionary theory is to explain, and what methods and tools are needed. Ontological, epistemological, and methodological controversies about these issues, largely waged by philosophical surrogates for the contending sides, generally have the effect of defining a preferred orientation as the only possible synthesis by portraying the opposite orientation as incoherent. But the possibility that the idea of a single synthesis to some degree follows from such a priori tactics is sufficient to call that idea into question.
According to Betty Smocovitis, philosophy of science played a major role in the development of the Modern Synthesis. Conversely, a number of the biologists and paleontologists who developed the Modern Synthesis wrote a rather impressive number of "philosophical" papers and books. The aim of this paper is to provide an overall account of these philosophical productions. They will be divided into two categories: those contemporary with the period of the emergence and stabilization of the synthesis (1930-1950), and those of the period that followed (1950 to now). The first category testifies to the kinds of philosophy favored by the pioneers of the synthesis. The second category is crucial for understanding how much modern "philosophy of biology" was influenced by the syntheticists' concerns. Authors considered will be: Fisher, Haldane, Wright, Huxley, Dobzhansky, Rensch, Mayr, Waddington, and Simpson.
Besides his activity as a naturalist, Ernst Mayr devoted a great part of his work to a systematic understanding of the epistemology and methodology of evolutionary theory. He pinpointed several major themes proper to the new evolutionary view of biology: the biological concept of species, defined by the possibility of fecund hybrids; population thinking, namely the view of a set of organisms as varying individuals rather than copies of a same type (Population, animal species and evolution, 1954); and a specific understanding of causality at several scales, illustrated by the difference between ultimate and proximate causes (“Cause and effect in biology”, Science, 1961) – the proximate causes belonging to the lifetime of the organism and being studied by “functional biology” (molecular biology, cell biology, physiology, etc.), and the ultimate causes belonging to a population of individuals which are ancestors of the present organism, and being studied by “evolutionary” disciplines such as ecology, paleontology, population genetics, etc. Those concepts were used to illustrate both the specificity of evolutionary biology within biology, and its utmost importance as the field of biology most irreducible to physics. He used those concepts successively to vindicate the irreducibility of the systematist’s and paleontologist’s contribution to the Modern Synthesis (as compared to the population geneticist’s), and later (after the discovery of DNA) the specificity of evolutionary theory against molecular biologists’ pretensions to discover the “essence” of life (and receive the most important funding).
I claim that those concepts have been widely used by philosophers, somehow justifying the distinctive importance of evolutionary issues for philosophers of biology, and explaining the persistence of an “anti-reductionist consensus” (K. Waters) in the discipline when it comes to evolutionary theory – for example, about the relation between classical genetics, which treats genes that code for traits, and molecular genetics, which treats DNA sequences. Finally, I will analyze how those concepts might be at odds with recent advances in evolutionary theory: trends in Evo-Devo that could make sense of a “typological” view, for example in the studies of across-species entities (e.g. the “tetrapod limb”); and trends in ecology that emphasize the evolutionary role (i.e. “ultimate”, sensu Mayr) of causes pertaining to the lifetime of the organisms (hence “proximate” causes sensu Mayr), as in niche-construction or ecosystem-engineering theory (Jones et al. 1994; Odling-Smee, Laland and Feldman 2003). Those latter theories actually consider and model the changes that the activities of individual organisms in their environment bring to the selective pressures exerted on themselves and other species, whereas Evo-Devo investigates how changes in developmental pathways – which are by definition proper to the individual organism’s lifetime – can bring about major evolutionary changes, hence challenging the same ultimate/proximate distinction. Taken together, those reasons call for a reevaluation of the long-lasting Mayrian framework for evolutionary theory.
A common belief concerning the Mach-Boltzmann debate on atoms is that the new experiments performed in microphysics at the turn of the nineteenth and twentieth centuries disproved Mach’s view and confirmed Boltzmann’s. In the first part of the paper, I argue that this belief is partially unjustified. Mach’s view on atoms can be spelled out in three points (see Mach 1883, 1886): (1) atoms do not exist in themselves (i.e. as ontological building blocks of the world); (2) atoms are “thought-symbols” for “complexes of sensations”; and (3) the hypothesis of atoms is “artificial”. Yet the fact that all the new experiments of microphysics can be accounted for by means of the concept of atom does not contradict point (1): it does not imply, strictly speaking, that atoms exist in themselves. Neither does it deny point (2): Mach could reply that atoms are thought-symbols for a large set of complexes of sensations. Nevertheless, since the hypothesis of atoms makes it possible to account for many experiments, it can no longer be considered “artificial”, refuting point (3). This partially unjustified disproof of Mach’s view on atoms hides another criticism, as I attempt to show in the second part of the paper. Mach tacitly assumes that the sensations to which the concept of atom is related are pure (i.e. independent of any prior knowledge), and likewise for any scientific concept. Based on this assumption, the logical positivists of the Vienna Circle first advocated a sensualist foundation of science. However, as is well known, it is impossible to describe our sensations in a neutral manner, that is, without presupposing some theories (e.g. of how our sensory organs work).
Furthermore, reducing scientific concepts to sensations leads to solipsism (Mach’s rejection of this consequence is far from clear and convincing), or at least suggests a spurious image of the practice of physics, as if physicists could do research alone by passively collecting sensory data and assigning symbolic labels to them. As a matter of fact, physicists don’t work alone but collectively; they don’t behave like spectators but like actors: they get information about the world by making experimental manipulations, that is, by interacting with it. In the third part of the paper, I propose to replace Mach’s sensualist conception of atoms with a pragmatist one (in the spirit of Peirce, James and Dewey): the meaning of the term “atom” is given by the set of practical effects that can be deduced from it by physicists when they use it in their research. My argument in support of this conception goes as follows: since the term “atom” was coined in the physicists’ practice, it is in the physicists’ practice that we can ground its meaning. More concretely, I stress that each kind of “atom” enables physicists (i) to conceive the link between preparations and measurements (see Hughes 1989), and (ii) to refer to a set of equivalent preparations (see Peres 1995).
In spite of the historical complexity and the (partly consequent) recurrent logical imprecision which surround this topic, the problem of (in)determinism has not lost its currency. Current debates seem to concentrate around two major “poles”: (1) scientific (in)determinism and quantum mechanics, and (2) (in)determinism and the free will problem. Edgar Zilsel’s (Vienna 1891 – Oakland 1944) more philosophical writings – from his doctoral thesis on Das Anwendungsproblem (1916) up to his later articles in the 1940s – can still be of some historical and philosophical interest in both of these interrelated sub-domains. Furthermore, paying attention to them could shed some light on a quite neglected figure within the Vienna Circle. Indeed, with respect to (1), even if Popper’s idea of reducing the problems of the interpretation of quantum mechanics to the interpretation of probability may be the expression of an overly strong thesis, the significance of the latter for the former can hardly be denied. In this respect, even Zilsel’s earliest inquiries into the relationship between mathematical probability and empirically observed frequencies – written before the development of quantum mechanics and the debate around it – can turn out to be interesting both from a historical and from a philosophical point of view. The question of indeterminism in relation to quantum mechanics would later (1935) be addressed by him directly. With respect to (2), Zilsel’s earlier analysis of the law of large numbers in Das Anwendungsproblem (1916) already contains some epistemological assumptions that are very important for the interests he developed later on as a historian and a sociologist. Thinking of the law of large numbers as governing both social and physical macro-systems in which the individual behaviour of the components cannot be predicted was an important first step towards putting the human and the physical world into a common frame.
In this regard, he was moving from the nineteenth-century problem of statistical determinism towards a unifying perspective which was meant to bridge the gap between the natural and the social sciences (and which is an important assumption in regarding scientific determinism and the free will problem as two sides of the same question). In his own way, Zilsel pursued that unity of science which was of the utmost importance both to him and to the Vienna Circle. This unifying intent by itself indicates the relevance of Zilsel’s quite neglected philosophical and epistemological inquiries and ideas for his better known historical and sociological work. Elisabeth Nemeth wrote in 2000: “though many surprising features of the Vienna Circle’s philosophy have been re-discovered and re-appreciated during the last quarter of the last century, Zilsel has remained relatively unknown among philosophers until now”. Since then, unfortunately, not much work has been done in this direction: my contribution intends to take up Nemeth’s line of thought and to provide a new appreciation of the philosophical side of Zilsel’s work.
Neurath’s economic writings have been re-discovered during the last twenty years (for instance: Martinez-Alier 1987; O’Neill 1993; Uebel 2004). Today we know that Neurath suggested a deeply heterodox, pluralistic version of socialism (O’Neill: „associational socialism”) in which ecological questions were to play an important part. His theory of life-conditions prefigured some essential features of Amartya Sen’s welfare economics (see Lessmann 2007). But not only economists, sociologists, and ecologists have good reasons for taking a closer look at Neurath’s economic writings (Neurath 2004). From the viewpoint of philosophy of science, too, those writings are highly interesting. Neurath’s economic thought took shape during the first two decades of the twentieth century, when German and Austrian economists were still involved in the well-known controversies over methods and value statements in social science. In Neurath’s view, it was wrong to conceive of the debate as an irreconcilable dichotomy between the German Historical School and the Austrian School. He suggested developing a theoretical framework in which the kind of historic-empirical investigation the “Historians” were interested in could be related to some of the highly general theoretical considerations the “Austrians” were concerned with. In opposition to the Austrians’ “methodological individualism”, though, Neurath held that the question “how the total situation of a group of people is conceived of” ought to be the central concern of economic theory. In this regard his views were rather close to the Historical School. However, Neurath was aware of the conceptual and methodological deficiencies of the Historians’ “holistic” approach. His aim was to transform that approach into one that would meet the highest scientific standards of his day. The way he tried to achieve that aim is a remarkable case of philosophy of science.
For it was Mach’s epistemological reflections that Neurath applied to the question of how to conceive of the subject matter of economics in a more comprehensive way. In a letter to Mach, Neurath explicitly addressed the manner in which Mach’s Mechanics had influenced his economic thought: „It was your tendency to derive the meaning of particulars from the whole rather than the meaning of the whole from a summation of the particulars, which has been so important. It is in value theory in particular that these impulses have benefited me through indirect paths.“ In this paper I would like to explore only one facet of Mach’s deep influence on Neurath’s economic thought, namely what Mach called the “method of variation”. So far as I can tell, Neurath never explicitly used that term. But his project of “comparative economics” can be viewed as an attempt to apply Mach’s ideas on variation to economic theory.
Recent scholarship on Quine indicates that “Truth by Convention,” composed in 1935, should not be read as a direct assault on the very idea of analytic truth. This prompts a further historical question: what caused the radicalization of Quine’s attack on analyticity? That is, how did we get from “Truth by Convention” to “Two Dogmas”? I offer a partial answer to that question here. Richard Creath argues that we should not read the radical criticism of analyticity found in “Two Dogmas” back into “Truth by Convention,” written fifteen years earlier (Creath 1987). One piece of evidence for Creath’s position comes from Quine’s “Homage to Carnap,” in which Quine states that he was “very much Carnap’s disciple” from 1932 to 1938 (in Creath 1990, 464). Since “Truth by Convention” was written during that period, it is probably not a radical attack on one of Carnap’s cherished doctrines. Quine’s claim in the “Homage” immediately prompts a further question: what happened in 1939 to end Quine’s discipleship? Later in the “Homage,” Quine says: “In 1939 Carnap came to Harvard as visiting professor. These were historic months: Russell, Carnap, and Tarski were here together. Then it was that Tarski and I argued long with Carnap against his idea of analyticity” (ibid., 466). Quine’s memory is inaccurate: Carnap, Tarski, and Russell arrived at Harvard in fall 1940. Nonetheless, this ‘historic’ meeting may well have caused Quine to reconsider his discipleship under Carnap—especially since Quine recalls arguing with Carnap over analyticity then. This suggests the following hypothesis: an important part of Quine’s transition from “Truth by Convention” to “Two Dogmas” can be traced to these conversations at Harvard during 1940-41. Fortunately, we know what was discussed then, because Carnap took detailed dictation notes, which have recently come to light. This talk presents two factors that could have pushed Quine during 1940-41 towards the “Two Dogmas” view.
First, in the 1934 Logical Syntax of Language, Carnap’s preferred explication of logico-linguistic notions, including analyticity, was extensional and syntactic; by 1940, however, Carnap had switched to an intensional and semantic account of analytic truth. Quine, on the other hand, harbored throughout his career a deep and vocal antipathy towards intensional notions, especially modality. Additionally, Quine preferred syntactic approaches to semantic ones—though he was never as hostile to semantics as he was to modality. (Thus, the Quine-Carnap debate is not simply Quine breaking away from a static Carnapian position, but rather both men moving away from a position they shared in the mid-1930s.) Second, in these notes, Quine is presented with a philosophically motivated rationale for considering certain statements of arithmetic—paradigmatic analytic truths for the logical empiricists—empirical. For in these conversations, Tarski pushes a strictly nominalist project on which many arithmetical claims would be empirical. This lines up with Quine’s suggestion, at the close of “Two Dogmas,” that logic and mathematics should be considered empirical enterprises. In sum, though these two factors do not entirely account for Quine’s radicalization, the 1940-41 discussions likely played a critical role in his development.
Something momentous happened with phrenology regarding the ‘nature’ of women during the course of the nineteenth century: whereas generations of physicians used to maintain that ‘Tota mulier in utero’, the new ‘cerebral physiology’ of Gall and his associates claimed that ‘Tota mulier in cerebro’. This shift did not escape the notice of Auguste Comte, whose own rationale for the subjection of women relied heavily on what he took to be phrenological justifications of female intellectual inferiority. Nor did it escape the notice of one of Comte’s one-time correspondents, namely John Stuart Mill. For Mill perfectly grasped the crucial role phrenology played in Comte’s case for women’s subjection, which is why the discussion of the scientific status of phrenology cropped up in their correspondence, a fact hitherto unaccounted for by most commentators. In fact, Mill intended to defuse Comte’s sexist argument by demonstrating that phrenology, the allegedly scientific basis on which that argument was grounded and which conferred on it some sort of naturalistic prestige, did not deliver what Comte needed. So, when Mill tackled phrenology with Comte in their early-1840s correspondence (and incidentally in his 1843 System of Logic), he had a twofold aim in mind. Firstly, Mill wanted to show that the claims of phrenology had not been substantiated (hence one could not rely on it as Comte did). Secondly, Mill wanted to prove that one could not explain “moral” phenomena, and especially individual differences in mental endowments, by reference to biological facts alone, as Comte did in the case of sexual equality. It is to the first aspect of Mill’s critique of phrenology that I will pay attention in my paper.
Now, whereas Comte’s estimate of phrenology is easy to analyze, since it appears explicitly in various of his works (most notably in his Cours de philosophie positive [1830-1842]), Mill’s judgment on the new “physiology of the brain” is more difficult to assess, since he never broached the topic directly. However, it is likely that Mill was familiar with the basics of phrenology, given that the first years of his intellectual career (the 1830s) coincided with an intense period of phrenological diffusion in England. Accordingly, in the first part of my paper, I will document the sources of Mill’s phrenological knowledge, a review that will enable me to draw at least two conclusions: on the one hand, Mill was far from totally ignorant of the main tenets of phrenology when he started corresponding with Comte; on the other hand, his reluctance to accept phrenological conclusions was due to the fact that what he read convinced him that the scientific status of phrenology had not been established, and that none of its specific claims had yet been vindicated. Secondly, I will show that what was at stake in Mill’s criticism of phrenology was not the cogency of the phrenological hypothesis itself, but the absence of adequate justification for it. Mill argued that most of the claims of phrenology were not empirically vindicated, owing to the unreliability of the methods used to determine the most elementary faculties, the irrelevance of the majority of the correlations established between mental capacities and their alleged material substratum, and the absence of precise knowledge about nervous states themselves. In brief, the phrenological hypothesis was not borne out by the facts. Accordingly, no support could be drawn from phrenology as evidence for the settlement of the sexual equality issue, pace Comte.
Finally, I will conclude by claiming that Mill’s scientific culture was, contrary to what many scholars assume, neither poor nor superficial, as his acquaintance with pre-1850 brain science in the context of his discussion with Comte about sexual equality illustrates. To be sure, Mill was no practitioner, historian or philosopher of biology, but when he came across issues related to biological knowledge, he was no tyro either.
In 1877, the British philosopher and adventure novelist Grant Allen published his only full-length theoretical work, Physiological Aesthetics. This work, explicitly modeled on the philosophy of Allen’s mentor Herbert Spencer, expounded a view of artistic creation and appreciation based upon the physiological details of human sensation and perception – treating not only visual, but also tactile, auditory, olfactory and gustatory phenomena. Allen also advanced this position in the early volumes of the journal Mind, the most active forum for debate over mental philosophy in late Victorian Britain. Allen’s work, at one level, represents a conceptual contribution to a then-active program of psycho-sensory research more commonly associated with figures such as Mach, Helmholtz, and others. However, as I shall emphasize in this talk, it also indicates important interactions between nineteenth-century theories of knowledge and theories of art. While epistemology and aesthetics are often treated as divergent domains of discussion in this period, the models of Spencer and Allen demonstrate significant ongoing crosstalk – both affinities and tensions – between these fields. As one element of the broad tradition of mental philosophy in late Victorian Britain, their program understood human perceptual modalities and the qualities associated with them as a common basis for knowledge and aesthetic preference, as well as for a range of other mental capacities. This perspective, as revealed through Allen’s discussion of the various senses and their idiosyncrasies, also presents a conspicuous challenge to the visualistic bias common in contemporaneous theories of both science and art.
In a previous HOPOS paper, I traced the revival of a Platonist philosophy of science at early nineteenth-century Harvard and showed how this revival prepared a triumphal mid-century acceptance for Louis Agassiz’s transcendental biology. In this paper, I will connect this same revival of a pre-modern philosophy of science at Harvard with a larger revival of Renaissance ideas in New England culture, and connect both with nineteenth-century trends in the history of psychology. While Harvard students and faculty were rejecting the hypothetico-deductivism and linguistic nominalism of the official college texts by Stewart and Brown, they were drawing instead a pre-modern philosophy of science from authors such as Cudworth, Norris and Malebranche. At the same time, a notion of mental power was playing an increasing role in psychological theories. Concurrent with both developments, there was a widespread revival of Renaissance theosophy in New England culture that held the human ability to understand nature to be an intuitive ability to grasp archetypes in the mind of the creator. Furthermore, the power by which this was done, called “imagination” in Renaissance theosophical literature, was also the power by which God formed the archetypes. As a result, this dual power gave humans a divinely granted ability both to understand and to control nature, and it was through this mental power that humans exercised their will over nature and each other. The apparent success of Agassiz’s biology, together with contemporary demonstrations of animal magnetism, seemed to confirm both sides of the theosophic system even for some academic leaders, such as Brown University president Francis Wayland.
Given these historical connections between philosophy of science, psychology and Renaissance theosophy in nineteenth-century New England, I will conclude that the soul power that dominated mid-century psychological theories was also central to the pre-modern philosophy of science that triumphed in Agassiz’s biology and vanished in Darwin’s. Please note that in my presentation I will be using the term “philosophy of science” somewhat anachronistically to mean philosophical ideas about the proper study of nature, and “psychology” to mean theories of mind and soul, even before these were gathered into separate disciplines.
In the first of four arguments in his “Metaphysical Exposition of the Concept of Space,” Kant reasons that space can’t be an empirical concept derived from outer experience because one can represent objects as spatially distinct only if they have “the representation of space as their ground” (B38; Guyer & Wood translation). Though its brevity -- and particularly the fact that Kant says little to clarify the precise meaning of the lone premise -- has long made it a source of puzzlement for Kant scholars, the last twelve years have seen significant advances in our understanding of this argument. Most notably, Lorne Falkenstein (1995) and Daniel Warren (2000) both showed that one could understand the premise of this argument in a way that was textually grounded while still answering Strawson’s charge that it was nothing more than a pernicious tautology. Though they differed in the details, their basic interpretive strategy was to suggest that Kant’s point was the following: objects can be represented as spatially distinct only if they are situated in space (i.e., a spatial order or medium that is independent of the objects situated in it). According to both Falkenstein and Warren, Kant’s point was that our representation of space could not ultimately be a mere concept derived from our experiences of spatially related objects, for the simple reason that such experiences are only possible when grounded in space. While the interpretations of Falkenstein and Warren have improved our understanding of the lone premise of Kant’s argument, significant work remains to be done on how Kant might have justified this premise. This paper addresses this further issue by formulating an argument for the premise that is consistent with Kant’s epistemology and philosophically viable in its own right.
In outline, the argument proceeds as follows: 1) it is impossible to imagine the conditions under which the premise of Kant’s argument would be false and 2) this modal constraint on the imagination reveals a corresponding modal constraint on outer sense; therefore, 3) it is impossible for this premise to be falsified by experience. Premise 1) is defended by an introspective argument that is in turn justified via the specification and defense of criteria for the legitimate use of introspection to reveal modal constraints on the imagination. Specifically, it is argued that our failure to complete an imaginative task reveals a modal constraint on the imagination provided the imaginative task is a) clearly defined, b) simple and c) merely organizational. With respect to premise 2), it is noted that there is reason to doubt the general claim that all modal constraints on the outer imagination reveal modal constraints on outer sense; nonetheless, it is argued that the correlation holds for imaginative tasks satisfying conditions a) – c).
Isaac Newton (1643-1727) is a towering figure in the history of physics. There has also been increasing recent recognition of his importance as a philosopher, an unsurprising fact given the inseparable relations in the early modern period among physics, theology and the then-extant discipline of metaphysics. The slow separation of these threads into distinct disciplines occurred throughout the seventeenth and eighteenth centuries with the birth of modern science as we know it occurring sometime in the late eighteenth to early nineteenth centuries. The adoption, dissemination, and adaptation of various aspects of Newton’s natural philosophy through the long eighteenth century contributed much to this transformation in the intellectual scene in Europe. The papers in this symposium address aspects of this history. This panel examines the reception of Newton and propagation of Newtonianism in the eighteenth century – both on the Continent and in Scotland – through a consideration of the natural philosophies of Pierre Louis Moreau de Maupertuis (1698-1759), Colin MacLaurin (1698-1746), and Émilie le Tonnelier de Breteuil, marquise Du Châtelet-Lomont (1706-1749). Specifically, we address the relation between physics and metaphysics in these three thinkers, and how the relation between metaphysics and physics is shaped in all three by prior epistemological and methodological commitments which are, to varying degrees, Newtonian in spirit. For example, we examine Du Châtelet’s sophisticated account of the importance of hypotheses in physics which uncovers her interpretation of Newton as one who (rightly, by her lights) did not eschew the use of hypotheses at all. Similarly, MacLaurin takes issue with Spinoza’s methodology (shared by rationalists and empiricists alike) of inspecting ideas, championing something much more in line with what he claims to be Newton’s methodology. 
For his part, Maupertuis has an especially nuanced understanding of Lockean skepticism regarding the knowledge of real essences, which is brought to bear in his consideration of whether or not attraction might be an inherent property of matter. These concerns with epistemology and method have interesting consequences for metaphysics. Perhaps most notable among these consequences is that while there was a sharply constrained role for metaphysics in the natural philosophy of these eighteenth-century Newtonians, this does not mean there is no role for metaphysics in their philosophies. We are therefore also interested in elucidating the kind and limits of their metaphysics, as well as the relation between metaphysics and physics in their natural philosophies. The goals of this panel are thus to contribute to an understanding of: (a) how Newton was interpreted in the eighteenth century, including the fact that he was not always taken to reject wholesale the use of hypotheses in physics, as might be suggested by a crude reading of the hypotheses non fingo doctrine; (b) the degree of Newton’s influence in the eighteenth century, including the hitherto ignored fact that MacLaurin’s Newtonian interpretation of Spinoza was known to the Scottish Enlightenment thinkers; and (c) the progressive separation of metaphysics from physics in the eighteenth century, even while metaphysics continued to play some role in physics, as in the sophisticated account offered by Maupertuis.
Pierre Louis Moreau de Maupertuis' famous and influential Discours sur les différentes figures des astres, which represented the first public defense of attractionism in the Cartesian stronghold of the Paris Academy, often suggests a metaphysically agnostic, regularity-based defense of Newton reminiscent of 'sGravesande's straightforward claim that gravity should be treated simply as a law of nature, and laws of nature simply as regularities. However, Maupertuis' final position in the essay, I argue, is considerably more subtle. The Discours, in the end, veers closer to a genuine dynamicism or realism about attraction than at first appears. And, while it maintains that physics can function separately from metaphysics, it suggests that each may still have implications for the other. This paper undertakes to analyze Maupertuis' position, while showing how it is generated by an extended consideration of the possibility of attraction as an inherent property and fuelled by a reading of Lockean skepticism about knowledge of real essences that is more nuanced than 'sGravesande's, and perhaps even than Locke's own.
In this paper I discuss the philosophic and historical significance of Colin MacLaurin’s brief but stinging attack on Spinoza’s metaphysics in his posthumously published An Account of Sir Isaac Newton’s Philosophical Discoveries (London, 1748). The main point of the paper is to illustrate how Newton’s challenge to the independent authority of philosophic reflection was perceived at the start of the 18th century. In his Account, MacLaurin argues from the (perceived) empirical inadequacy of the consequences of Spinoza’s doctrines to the claim that the Cartesian method of inspecting “true” ideas (MacLaurin cites Ethics Ip8s2 and the Treatise on the Emendation of the Intellect) leads to absurdity even in the context of an otherwise coherent and intelligible system. Incidentally, the existence and particular details of MacLaurin’s treatment undermine a widely accepted historiographic myth that members of the Scottish Enlightenment (Hume, Adam Smith, Reid, etc.) knew and thought of Spinoza only through Bayle’s treatment of him (cf. N. Kemp Smith). For MacLaurin was probably the most influential and widely read Scottish Newtonian of the first half of the 18th century. This paper thus follows the recent trend to re-integrate the history of philosophy with science in the early modern period (e.g., G. Hatfield, D. Garber, M. Friedman) and contributes to a better understanding of the background assumptions necessary to recover a proper interpretation of Hume’s reception of Newton (and Spinoza). The paper is divided into two main sections. First, I analyze the nature and context of MacLaurin’s arguments against several elements of Spinoza’s system. MacLaurin’s criticism offers a series of inferences to the best explanation which all rely on the empirical success of Newton’s physics. (MacLaurin is explicit that he is using his refutation of Spinoza -- and related criticisms of Leibniz -- to undermine Cartesian philosophy.)
In so doing, MacLaurin aims to show what he perceives to be the absurdity and bankruptcy of a) a philosophical methodology accepted not just by Rationalist followers of Descartes but also by their Empiricist critics, that is, the inspecting of ideas as objects of the mind; and b) the metaphysical position that the universe can best be compared to a machine in which some general quantity is conserved (in Spinoza, the proportion between rest and motion). Second, to make the full implications of MacLaurin’s treatment (of a and b) clear, it has to be understood in light of two contexts: 1) MacLaurin’s debate with Berkeley over Berkeley’s criticism of the lack of proper foundations for Newton’s mathematical physics; 2) MacLaurin’s attempt to dismiss the Spinozistic attack on final causes in order to defend the legitimacy of natural religion (probably most familiar to the reader through the character Cleanthes in Hume’s Dialogues). For MacLaurin, Berkeley and Spinoza illegitimately privilege first principles and norms of enquiry beyond those implicated in the practice of experimental philosophy.
Émilie du Châtelet is often read as an early French advocate of Newtonian natural philosophy, one who eventually abandoned aspects of her Newtonianism in favor of certain elements of Leibnizian thought. But Du Châtelet herself saw Newton’s physics and Leibniz’s metaphysics as compatible -- she saw her project in her Institutions de physique as one of reconciliation of these two thinkers. In this paper, we examine this supposed project of reconciliation through a consideration of (a) her theory of the role of hypothesis in physics, including how she interprets Newton on this question, (b) the consequences of this conception of hypotheses for metaphysics and the relation between metaphysics and physics, and (c) how these first two points are in evidence in her position in the vis viva controversy. Du Châtelet’s ideas on the role and use of hypotheses in physics are remarkably similar in form to Descartes’ ideas. Nonetheless, she diverges from Descartes on what she takes to be the rationally intuited first principles that set initial limits for any hypothetical thinking in physics. Specifically, while Descartes takes such principles to be metaphysical, Du Châtelet takes them to be methodological: for instance, one of her crucial first principles is her specific version of the Leibnizian principle of sufficient reason. As a result, Du Châtelet diverges notably from Descartes on the degree of metaphysics permitted within her natural philosophy, and on the fact that she, unlike Descartes, takes metaphysical principles to be merely hypothetical. She likens her approach to hypotheses to what she takes to be Newton’s attitude. Despite the fact that she takes most metaphysical principles to be merely hypothetical, she nonetheless believes that one must suppose a metaphysics in order to ground Newtonian physics. In that regard, she prefigures a crucial aspect of Kant's critical philosophy later in the century.
We see Du Châtelet’s approach to hypotheses at work in her defense of active forces in the material world. Indeed, her approach to hypotheses leads her to a broadly Leibnizian account of material substance, for she thinks that substance must be inherently active. Moreover, she believes that this was the direction Newton himself suggests in the last query of his Opticks. Thus, with respect at least to the vis viva controversy, Du Châtelet holds a Leibnizian metaphysics of active substance both to ground certain aspects of Newtonian physics as presented in the Opticks, and to be compatible with Newton’s own suggestions in that work. If this reading of Du Châtelet is correct, then she proves to be an exceptionally astute interpreter of Newton, exhibiting enough insight into Newton's thought to realize that he, too, would take activity to be a basic criterion for substance-hood. This is precisely the argument that he makes in “De Gravitatione”, a work she could not have read given that it was unknown until the twentieth century.
Although Logical Empiricism was no longer a topic at Austrian universities after the banishment of the Vienna Circle, there were a few occasions for a brief comeback by some of its representatives. The “Forum Alpbach”, founded in 1945 as a platform for discussing contemporary problems of the sciences and politics, also turned into a stage for dialogue among all kinds of philosophical directions. Karl Popper, who made his first postwar visit to Austria in 1948, came to Alpbach and met young scholars like Paul Feyerabend and Wolfgang Stegmüller. This and further meetings, also with visiting logical empiricists like Philipp Frank (1955), Herbert Feigl (1961 and 1964) and Rudolf Carnap (1964), were highly influential for a second generation of philosophers shaped by the philosophy of science that had developed in the countries of its immigration. Herbert Feigl was in Austria on another occasion in 1964: he was invited as a lecturer, together with Popper and Carnap, at the recently founded Institute for Advanced Studies (IHS) in Vienna, a project subsidized by the Ford Foundation and based on ideas of Paul Lazarsfeld and Oskar Morgenstern to develop empirical social studies in Austria. That year Alpbach was the location of a famous controversy between the Marxist Ernst Bloch and Herbert Feigl on the relevance of “positivism”, shortly after the legendary Positivismusstreit had occurred in Germany with Adorno and Popper as its main antagonists. Bloch, who had also emigrated to the U.S., returned to Germany after a stay in the GDR, where he had been one of the hardliners defending Marxism before taking a dissenting position following the events in Hungary in 1956. This contribution will provide insights into the occasions of this temporary return of Logical Empiricism and will show the main local resistance to it.
Almost three decades passed before Rudolf Carnap visited old Europe again, after his last stay there in 1937. Only after the death of his wife Ina did he go to see his former family and old friends such as Wilhelm Flitner, Franz Roh and others, whom he met in the Tyrolean Alps at Alpbach in the summer of 1964. These encounters with his fellow students from his Jena years may have contributed to a rereading and reassessment of some prominent German philosophers like Dilthey and Heidegger. But surely that process was mainly prompted by Arne Naess’s book Four Philosophers, with its chapter on Heidegger, and by Günther Patzig’s “Nachwort” to a new edition of Carnap’s Scheinprobleme in der Philosophie (Pseudoproblems in Philosophy), with its comments on the influence of Dilthey’s “Lebensphilosophie” on the young Carnap. Judging from the correspondence with Naess and Patzig and from the remarks in the margins of Carnap’s copies of these books, it seems that he now, for the first time in his life, started a serious reading of these typically continental thinkers. I will work out in some detail whether this led to a more sympathetic reaction to Dilthey and Heidegger or to a confirmation of Carnap’s old polemics.
C. S. Peirce has had a diffuse but powerful effect on the philosophy of science. His introduction of the type/token distinction and concept of abduction, to cite two examples, have had enormous influence in philosophy of science and also in fields as wide-ranging as linguistics and computer science. No less, Peirce’s stance on the nature and use of scientific concepts as well as the fixation of belief and the truth of scientific theories was central to the development of pragmatism in the philosophy of science. This session will reflect on the variety of ways in which Peirce influenced the philosophy of science, each paper focusing on a specific aspect of his work.
Peirce introduced the term “abduction” to refer to a unique type of inference, one he considered distinct from either induction or deduction. He wrote, “A hypothesis, then, has to be adopted which is likely in itself, and renders the facts likely. This step of adopting a hypothesis as suggested by the facts is what I call abduction…” and, “Abduction is the process of forming an explanatory hypothesis….Deduction proves that something must be; Induction shows that something actually is operative; Abduction merely suggests that something may be.” (Peirce, 1932-63, Sections 7.202, 5.171) Though Peirce changed his views over time, today it is common to understand his notion as referring to inference to the best explanation. Clearly, such inferences are important in the history of science; from Kepler’s inferences regarding the motion of Mars to Darwin’s inferences about common inheritance, it seems that inference to the best explanation is a common method of reasoning in both science and ordinary life. However, it has proven extremely difficult to explicate this method. Three papers by G. Harman in the 1960s introduced a controversy over abduction: was it, as Peirce had claimed, distinct from induction? Another controversy in the literature concerns the criteria for “best” explanations. Clearly the best explanation available is not necessarily the true one! What criteria do scientists use in IBE, and are these the criteria they ought to use? Most recently, formal epistemologists and “explanationists” such as Lipton have attempted to join forces in giving an account of IBE. Yet IBE has proven extremely difficult to formalize. This paper reviews the history of these controversies and attempts to reconcile them with what appears to be the extremely common practice of abduction in the sciences.
What is the transmission path for a commonplace metaphysical concept in twentieth century philosophy and science? Peirce first distinguishes between types and tokens in a 1906 article in Mind, and the concept is next picked up by Ramsey in his 1923 review of the Tractatus—complete with attribution—whereupon Moore employs the distinction in his 1925-26 Cambridge lectures on “Propositions and Truth”. From there, the diffusion of the distinction grows significantly. Unsurprisingly, use of the concept rapidly moves from philosophy of language into linguistics, specifically general semantics (Zipf 1932). The path in linguistics had already been cleared by Sapir (1925), who exploited a similar distinction between variants and norms. Perhaps more surprising is the fast transmission of type-token analysis to behavioral psycho-linguistics (Skinner 1937 and Carroll 1938), speech pathology (Johnson 1939), and psychology of perception (Brunswik 1944). Three points are of interest:
1. The rise of type-token ratio (TTR) as a significant measure in psychology and linguistics statistics.
2. The parallel development of type/token analysis in ecology in the species-area problem, beginning in the 1920s.
3. Recent use of the concept in philosophy of science, via philosophy of language and mind.
The mere distinction between abstract types and their instances is ancient and entirely common in the histories of philosophy and science. Yet the Peircean formulation represents an attempt to lend formal rigor to the distinction, a formulation that found favor and was elaborated upon in the sciences as a methodological tool -- coming full circle in the philosophy of science of the last three decades.
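The type-token ratio named in point 1 above is easy to state concretely: the number of distinct word forms (types) divided by the total number of word occurrences (tokens). A minimal illustrative sketch in Python; the whitespace tokenizer and lowercasing are simplifying assumptions for the example, not part of the historical measure:

```python
def type_token_ratio(text: str) -> float:
    """Distinct word forms (types) over total word occurrences (tokens)."""
    tokens = text.lower().split()  # naive whitespace tokenization
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

sample = "the cat sat on the mat"
# 5 types ("the" occurs twice) over 6 tokens
print(round(type_token_ratio(sample), 3))  # 0.833
```

A higher ratio indicates greater lexical diversity; this is the sense in which TTR served as a statistical measure in the psychology and linguistics literatures mentioned above.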
Peirce is known to have argued that truth is what would be agreed upon by a society of inquirers after all empirical and methodological questions have been answered. Putnam has argued similarly, but argues that this supports internal realism rather than metaphysical realism. Peirce, for his part, was a metaphysical realist with Scholastic inclinations. This creates a puzzle: either Peirce was mistaken, or else his conception of truth was more subtle than the one Putnam adopted. I argue for the second possibility. Peirce, unlike Putnam, believed that meaning was subject to empirical input, and was not something that we ourselves determine. He viewed the ideal theory not as some end state determined in its own terms, as characterized by Putnam, but as the perhaps unreachable result of a process of inquiry involving an interplay of empirical evidence, methodology and meaning, in which advances in each could affect the others. I maintain that this openness of the process of inquiry always leaves truth outside of our control, and that the ideal theory is still in principle falsifiable, as is every step along the way. I show how Einstein’s adoption of relativity theory fits the Peircean model of inquiry, though Einstein was not directly influenced by Peirce. I use this demonstration to show how Peirce’s approach to inquiry avoids internalist and instrumentalist views of truth, and why Peirce was right in claiming that the truth is what is SO.
This paper examines the idealist philosophy of science of Hugo Muensterberg and its largely unsympathetic reception by philosophers such as William James, G.E. Moore, Max Weber, and Hans Vaihinger. Muensterberg, the German-born Harvard philosopher and psychologist, built a philosophy of science on the principles laid out in Hermann Helmholtz’s late epistemology, combining motor physiology, the sign theory of sense perception, and Fichtean idealism into an account of laboratory experimentation in the “causal-mechanical” sciences. Yet Muensterberg went further, joining his account to a neo-Kantian philosophy of values, and insisting that science without an apriorist framework was empty and philosophy without the laboratory blind. Although many German philosophers were sympathetic, Anglophone philosophers mostly opposed Muensterberg’s philosophy. William James, in particular, calibrated key elements of his pragmatism in opposition to Muensterberg’s views.
The early twentieth-century continental biologist Jakob von Uexküll played a crucial role in the development of ethology as a new branch of science. He was the source of foundational concepts and methodological innovations in ethology, some of which helped to define the research program subsequently formulated by Konrad Lorenz and Niko Tinbergen. Perhaps best known for his Umwelt theory, Uexküll sought to create a science (Umweltforschung) wherein the world could be investigated and described as it appears to animals. Of central importance to this task is his observation that animals respond in very specific ways to a small subset of the total stimuli available to them. Moreover, such stimulation appears to be coupled to whole sets of behavioral patterns that emerge and are directed in a non-arbitrary manner toward these sources. In his major work Theoretical Biology, published in 1920, Uexküll attempted to lay the groundwork for his new science of behavior. Here he argued for an anti-reductionist, holistic approach to biology, stressing the view that animals should be investigated as subjects, each of which inhabits a qualitatively unique world. The starting point for his argument is the philosophy of Immanuel Kant as it appears in the Critique of Pure Reason. The presence of Kant’s thought can be seen throughout Uexküll’s work, and it is crucial to the success of Umweltforschung as conceived in Theoretical Biology. Specifically, Uexküll appropriates Kant’s theory of intuition and argues that it allows us both to posit non-human forms of intuition (constituting the Umwelt of animals) and to investigate them. However, Kant explicitly denies that his results can be applied to non-human beings (CPR, A26-27/B42-43). Surprisingly, Uexküll acknowledges this problem only implicitly and dedicates no space to its resolution.
This poses a devastating methodological problem for Umweltforschung, one for which Uexküll’s treatment is at best unclear and at worst inconsistent with Kant’s theory. Kant’s theory of the a priori played a pivotal, yet problematic, role in the formulation of Umweltforschung. It also played an important role in shaping the formal structure of classical ethology. Konrad Lorenz, for whom Uexküll’s views were immensely important, saw ethology as the science explicating Kant’s theory of the innate structure of experience. However, Lorenz differed from Uexküll on the extent to which Kant’s actual results could inform the biology of behavior, especially in light of evolutionary theory. These differences can be readily seen in the subsequent establishment of ethology as a robust research program. Lorenz, for example, incorporated Uexküll’s theory of sign-stimuli, as well as that of the innate-releasing mechanism, without cashing them out in terms of animal subjectivity. In this and similar ways, classical ethologists appropriated a rather benign version of Umweltforschung, one that did not rely upon such a faithfully Kantian line and, therefore, avoided the methodological difficulties inherent in Uexküll’s research program.
This paper is part of an emerging trend to revive naturalistic interpretations of Hegel from the early German tradition (Siep, Habermas, Wildt) to counteract the hegemony of the American rationalist approaches (Pippin, Pinkard, et alia). To resist the prevailing trend to domesticate Hegel by reading out of him the themes of speculative Naturphilosophie and the metaphysics of infinite spirit, I trace the way in which naturalistic elements of Hegel’s thought, in particular his organic view of life and his naturalized dialectical method, have a scientific orientation toward the empirical sciences of his day. To ensure that the naturalism of his scientific orientation is not written out, I attend to the developments in natural science in his neglected Naturphilosophie. However, I do not wish to secure his naturalistic orientation at the cost of losing what’s most radical and interesting about his metaphysics. One of the main problems confronting a naturalistic reading of Hegel is the idea that the excesses of his metaphysics have to be read out of him in favor of a strictly non-metaphysical or bare naturalistic reading. I argue that a naturalistic turn can accommodate Hegel’s organic view of life and concepts without significant cost to what’s most radical about his metaphysics. This need to steer a middle course between Hegel’s naturalism and his metaphysics arises more generally as a concern that has afflicted transcendental philosophy as a whole. The tensions afflicting a naturalistic reading of Hegel turn out to be closely related to the middle ground provided by Kant’s transcendental idealism.
This paper is meant to serve as a pilot, testing on a small scale some ideas concerning the revival of the naturalistic turn in transcendental philosophy that will be explored on a much grander scale at a conference I’m organizing at Concordia University in Montréal on the special topic “Nature, Naturalism, and Naturphilosophie” (October 11-12, 2008), as part of the “Von Kant bis Hegel” conference series. Guest speakers will include Michael Friedman, Rolf-Peter Horstmann, Paul Guyer, Ludwig Siep, Sally Sedgwick, and Frederick Neuhouser: http://alcor.concordia.ca/~shahn/kant-hegelconference/main.htm.
Although it is well known that Jean Piaget was interested in epistemological questions as well as the psychological issues he worked on, the connection between the two areas of interest has yet to be understood adequately. This connection can be made clear if we examine a notion that Piaget himself explicitly regarded as the cornerstone of his genetic epistemology -- namely, the philosophical principle of psycho-physiological parallelism. Although parallelism (in its various forms) was a highly influential position with respect to the mind-body problem from the late nineteenth century until the Second World War, its influence within psychology and philosophy has often been overlooked. According to its first proponent, Gustav Theodor Fechner, parallelism is a heuristic principle according to which one should be able to find a physical concomitant for every mental event. This formulation does not address the issue of causality. A subsequent version proposed by Fechner seeks to interpret the principle by denying any causal relation between the mental and the physical and espousing a dual-aspect theory, which explains the difference between the mental and the physical as resulting from a difference between an inner and an outer perspective. In his major theoretical work “Introduction à l’Épistémologie Génétique”, Piaget cites Théodore Flournoy and Harald Höffding as authorities on parallelism. He was in all likelihood also exposed to the notion while working under Théodore Simon in the lab founded by Alfred Binet, who subscribed to the version of parallelism espoused by Ernst Mach and Ewald Hering -- namely, psycho-physiological parallelism. Piaget advocates parallelism definitively as a heuristic for psychology, and also offers a provisional interpretation of it. According to Piaget, motor-schemas of action are supplemented during the course of development by operational schemas, and these in turn by abstract-formal schemas.
Since the latter are built upon and abstracted from motor-schemas, they are structurally isomorphic with them: the implicative relations at the psychologically characterizable level of the formal-abstract schemas are parallel to the causal relations at the physiologically characterizable level of the motor-schemas. Since the motor-schemas are assimilated to the rational structures characterizable only in psychological terms, physiology cannot do without psychology when it comes to explaining actions. But psychology must also take into consideration the physiological origin that shapes higher-order thought processes. Piaget in fact goes so far as to regard science in general as rooted in thought processes that are progressively abstracted from motor-schemas. This link between scientific thought and physiology, according to Piaget, “closes the circle of the sciences” and provides the basis for his genetic epistemology.
The idea of organizing knowledge transfer and communication among scientists and the public has always been a crucial element in Otto Neurath’s (1882-1945) work. However, he rejected the term popularization and preferred instead the idea of “humanization.” The democratic process of “humanization” of knowledge, as he put it, stood in sharp contrast to the (traditional) approach of popularization from the top to the bottom. While popularization represents a kind of translation from the complicated to the simple, humanization, on the other hand, means proceeding from the simplest to the most complicated. One of its central aims is to avoid an “inferiority complex” and all kinds of frustrations that, as he pointed out, often appear when people try in vain to grasp some piece of knowledge. Therefore, “humanization of knowledge” represents not only a model of communication between science and the lay public but also a democratic and even participatory process. Neurath’s committed work in the field of visual education in general and his well-known pictorial statistics approach in particular can be said to have constituted the centerpiece of his pedagogical efforts. Especially during his years at Oxford (1941-1945), he often participated in projects that aimed to translate scientific knowledge as well as social, economic and political information into the languages used in popular media such as exhibitions, books, booklets and even films (e.g. with Paul Rotha). During this period, he even considered the question of which kind of (political) education would be needed during the post-fascist era in countries such as Germany and Austria after their eventual defeat in World War II. The presentation focuses on three points in an effort to analyze and to contextualize Neurath’s project. First, it differentiates among the several (scientific, pedagogical, political etc.) elements that formed his approach.
Among other things, I will demonstrate that the conceptualization of the project was deeply influenced by the respective countries in which Neurath worked and lived. Austria, Germany, the Netherlands, and England formed not only the background of his activities; in addition, their different scientific and political cultures influenced Neurath in a number of ways, as he himself repeatedly observed. Second, it focuses on the details of Neurath’s political approach. I will trace a process of change from his rather apolitical youth, through the Marxist decades, to a more liberal approach in his final years. Third, I will discuss whether the Neurath project has any current political importance or relevance. It seems to me that whereas his status in fields such as the philosophy of science and visual education has been widely accepted, his political approach has rarely been seen as an important part of his intellectual work. Is it even appropriate to refer to Neurath as a political thinker? To answer this rather difficult question, I will finally discuss Neurath’s “democratization of knowledge” project in the context of political science and the theory of democracy.
In this paper John von Neumann’s (1903-1957) standpoint on the methodology of science will be described. It is well known that he was a young but prominent mathematician in Hilbert’s school during the 1920s. In 1930 he attended the Königsberg Congress, organised by the Gesellschaft für empirische Philosophie of Berlin, to present Hilbert’s ideas on the foundations of mathematics and the sciences. He was struck by Kurt Gödel’s announcement of the incompleteness theorems: he reached the second incompleteness theorem independently and declared the end of Hilbert’s Program (Sieg 2003; Sieg 2005). After Gödel’s theorems he began a reconsideration of the idea of mathematical rigor, which led him to set out the principles of an opportunistic methodology of science in his later foundational reflections. This methodology was also applied in his scientific investigations: e.g. the foundations of quantum mechanics (von Neumann 1932) and the axiomatization of game theory and economic behaviour (von Neumann, Morgenstern 1944). All of this has been clearly shown in the recent literature (Stöltzner 2001; Rédei 2005; Stöltzner, Rédei 2006). Nevertheless, there is something this literature has missed: all the principles of this new methodology of science can be traced back to the ideas of the axiomatic method and the foundations of the mathematical sciences pursued in Hilbert’s school since the beginning of the twentieth century. We will claim that: (i) von Neumann’s standpoint originates in the attempt to reconsider Hilbert’s axiomatic method in the light of Gödel’s incompleteness theorems; (ii) von Neumann’s Program in the foundations of mathematics and the mathematical sciences – heuristic and pragmatic in character – is conceived entirely in Hilbert’s spirit.
We will briefly argue in favour of these theses, taking into account five pieces of evidence: (a) the relevance of mathematics and the axiomatic approach in von Neumann’s scientific investigations (Halmos 1973); (b) von Neumann’s consistently conservative attitude in mathematical research (Hilbert 1922; von Neumann 1925; von Neumann 1954); (c) success as an epistemological criterion in von Neumann’s later foundational reflections (Zermelo 1908; Hilbert 1925; von Neumann 1947; von Neumann 1955); (d) two principles in von Neumann’s methodology of science (Hilbert 1922; von Neumann 1947; von Neumann 1955); (e) the empirical and abstract character of mathematical research (Hilbert 1900; von Neumann 1947).
The Vienna Circle and the Berlin Group (Kurt Grelling, Walter Dubislav, Hans Reichenbach) were twin schools of scientific philosophers who fought common enemies: philosophical idealism, religious obscurantism, and political reaction. Some members of the Berlin Group — Carl Hempel and Richard von Mises, for example — also attended sessions of the Vienna Circle. Nevertheless, there are considerable theoretical and organizational differences between the two. The influence of Wittgenstein on the philosophy of the Vienna Circle helps to explain the central role played in it by such problems as the demarcation of science from metaphysics and the dismissal of the latter, the principle of verification, and the unity of the sciences. These problems were scarcely discussed in Berlin, where Wittgenstein’s influence was rather limited. This explains why, in contrast to Carnap, Reichenbach was not shy about calling himself a “philosopher”. Another difference between the two philosophers was that Reichenbach showed a willingness to engage with “speculative”, or idealistic, philosophers (Adorno, for example), something unthinkable for Carnap. Reichenbach had exactly these differences with his Vienna friends in mind when he claimed that the members of the Berlin Group “avoided all theoretical maxims like those set up by the Vienna school and embarked upon detailed work in logistics, physics, biology, and psychology.” (Reichenbach 1936: 144) Besides the theoretical differences between the two schools, there were also important organizational differences. The Vienna Circle was, even in the “public phase” of its development after 1929, primarily a closed community around Moritz Schlick. In contrast, the Berlin Group was an open society: it organized ten to twenty sessions a year, each attended by 100 to 300 people. Leading members of the Group published reports and short papers in the local press; Reichenbach even delivered lectures on Radio Berlin.
The differences between the Vienna Circle and the Berlin Group can be explained by referring to their pre-history. Whereas the founding fathers of the Vienna Circle insisted that they followed the ideas of Ernst Mach, the members of the Berlin Group had no sympathy with this philosopher. Reichenbach, in particular, claimed that the philosophers who most influenced the Group included Ernst Cassirer and Leonard Nelson. (Cf. Neurath 1930: 312; these notes were written by Reichenbach.) This avowal shows the deep roots of the Berlin Group in German philosophy, and more specifically in the philosophy advanced by Kant in an attempt to bring together science and philosophy. In the nineteenth century this approach was developed by philosophers like Fechner, Fries, Herbart and von Helmholtz. The neo-Kantian Cassirer and the neo-Friesian Nelson belonged to the younger generation of philosophers pursuing the same direction at the beginning of the twentieth century.
It is fairly well known that Emmy Noether credited Dedekind with much of the background for her own mathematical work, for instance in ideal theory, famously remarking “Es steht schon bei Dedekind,” or “It is already in Dedekind.” Noether's formulation of ideal theory surpassed and generalised Dedekind's, mathematically speaking. But we can also find reflected in Noether's more general methodology a correspondingly more sophisticated way of conceiving of mathematical objects, characteristic of a general change in mathematical ontology that Dedekind had not yet fully embraced. This is a transition from objects to structures. Mathematical structuralism can be characterised as a view that mathematics is primarily concerned with the investigation of formal structures, and also as a view that mathematical objects have no properties other than those they possess in virtue of being part of a formal structure. If we see Noether as a kind of successor to Dedekind, we might ask to what extent she also employed structuralist methods in her mathematics. To that end, we notice that where one of Dedekind's key insights in ideal theory was the definition of an ideal as a set of a certain kind, Noether's was to articulate a particular condition on a system of ideals, treating that system as itself an object of study. In other words, even as a methodological approach, Dedekind's version of structuralism is one in which mathematical objects are structurally defined entities, and Noether's is one in which we study systems of objects, or structures. Dedekind may have laid the conceptual foundation for Noether's work in ideal theory. But her great contribution was in the more abstract approach to mathematics required to abstract away from that foundation, moving towards a more general theory of ideals.
Further, the difference between their methods shows that two different ways of characterizing mathematical structuralism---that mathematics is concerned with formal structures, and that mathematical objects are only structurally defined things---can be taken as descriptions of views that are in fact distinct.
Newton developed a version of infinitesimal calculus in the early 1670s but abandoned it because it used procedures that could not be justified. They were black boxes: one put in the premises and generated the right results, but had no grasp on what was going on in the middle. In fact, both Newton and Leibniz agreed that infinitesimal calculus required justification in terms of limit procedures, which were geometrical and open to inspection at every stage. The difference was that Newton believed that this meant that any procedure using infinitesimal calculus had to be translated into geometrical limit procedures, whereas Leibniz believed that only the general technique had to be justified in these terms. Leibniz’s approach is not driven by pragmatic concerns, however, but rather by a view that the calculus extends human capacities in new ways: it goes beyond our natural faculties and hence we cannot expect them to be able to legitimate it. I show how the philosophical dispute (as opposed to the priority dispute) over the calculus raises profound and to some extent intractable questions about the nature and limits of human reasoning.
Interest in Ernst Cassirer's philosophy has increased significantly over the last decade. One reason is that his collected papers and large parts of his Nachlaß have been made available, in print and electronically. In continental Europe, he has also been rediscovered as a theorist of culture and politics, particularly through his philosophy of symbolic forms and his work on Enlightenment thought (see, e.g., Oswald Schwemmer, Ernst Cassirer: Ein Philosoph der europäischen Moderne, 1997). In the English-speaking world, it is primarily his philosophy of science that has found renewed attention (Thomas Ryckman, The Reign of Relativity, 2005; in German, compare Karl-Norbert Ihmig, Grundzüge einer Philosophie der Wissenschaften bei Ernst Cassirer, 2001). In addition, he has come to be seen as a mediating figure between the analytic and continental traditions in twentieth-century philosophy (Michael Friedman, A Parting of the Ways: Carnap, Cassirer, and Heidegger, 2000). And some comprehensive studies of Cassirer's work and legacy have appeared, although so far mostly in other languages (Massimo Ferrari, Ernst Cassirer. Dalla Scuola di Marburgo alla Filosofia della Cultura, 1996; H.J. Sandkühler & D. Pätzold, eds., Kultur und Symbol. Ein Handbuch zur Philosophie Ernst Cassirers, 2003). Recent historical and philosophical work on Cassirer's views on science, or on his neo-Kantian "critique of knowledge", has largely concerned one particular area of his work: the philosophy of physics (and more specifically, the emergence and justification of relativity theory). However, Cassirer was impressively well read, and unusually perceptive, concerning various other parts of the sciences too, including mathematics and logic.
In this symposium, we will focus on three relevant cases: first, his early, and surprisingly sympathetic, reception of the new logic developed by Frege and Russell, including their logicist commitments; second, his insightful interpretation of, and strong support for, a Dedekindian structuralist position in mathematics, as tied to his own investigation of the conceptual development of pure mathematics; and third, his holistic understanding of scientific theories in general, closely related to, and influenced by, similar views by both Duhem and Goethe. All of these cases have received only cursory attention so far, if any at all, especially in the English literature. But each is interesting, not just from a historical point of view, but also with respect to the continuing relevance of the themes at play in them; or so we will argue. (For further details, see the three attached abstracts.)
In his groundbreaking book, A Parting of the Ways, Michael Friedman writes that "Cassirer’s outstanding contribution [to Neo-Kantianism] was to articulate, for the first time, a clear and coherent conception of formal logic within the context of the Marburg School." Indeed, Cassirer’s first paper in the philosophy of mathematics—"Kant und die moderne Mathematik" (1907)—argued not only that the new relational logic of Frege and Russell was a major breakthrough with profound philosophical implications, but that the logicist thesis itself was a "fact" of modern mathematics. This early and very strong enthusiasm for the new logic (and the new logicism) is especially surprising since the other Marburg Neo-Kantians were so strongly opposed to the very idea of "formal logic", and thought little of its new incarnation in Frege and Russell. In this talk, I explain why Cassirer thought that the new logic was so important, and I trace his sometimes ambivalent attitude toward the logicist thesis from his early 1907 paper to his last writings from the late 1930s. The new logic, Cassirer argues, gives us two things: it provides mathematical proofs that make it clear that pure mathematics does not rely on or concern empirical space and time; and it provides a richer, non-Aristotelian model of the structure and formation of concepts. We need both of these ingredients, Cassirer thinks, to justify the "freedom" of mathematics in the roughly Dedekindian way that Cassirer prefers. (And philosophy needs to justify the freedom of modern mathematics, since the "transcendental" method of Marburg Kantianism is unrepentantly "naturalistic": it implies that philosophy has no independent grip on human understanding or on the nature of objects of experience, and so no leverage by means of which it could condemn an established result of mathematics as false or meaningless.)
Cassirer’s philosophical appropriation of the new logic differed from that of Frege and Russell in a number of significant ways. First, unlike Russell, he sees no point in giving a characterization of logic and arguing that the new "logic" is really "logic" in that sense. Second, for Cassirer, we need a philosophical account of how and why mathematics is applicable to objects of experience, and he thinks that Russell’s and Frege’s platonisms completely fail on this score (although there are some good hints, he thinks, in Duhem). And, third, Cassirer thinks that we need a richer epistemology of mathematics—more attentive to details of mathematical practice and to the history of mathematics—than that provided by Russell or the logicist thesis. This epistemology would make clear why certain mathematical concepts, proofs, theories, etc. are better than others and why certain sets of conceptual tools are essential for understanding some set of mathematical phenomena. It would make clear in what way mathematics is a synthetic and progressive science.
It is widely known today that, among the neo-Kantian philosophers, Ernst Cassirer was the one most well informed about philosophically relevant developments in the sciences. This is true not only for the natural sciences, but also for mathematics and logic. As to the latter, Cassirer was very knowledgeable about developments in nineteenth century geometry, including the emergence of projective and various non-Euclidean geometries; he was especially impressed by Felix Klein's Erlanger Program for reorganizing and reuniting the study of geometry, as emphasized repeatedly in his writings; he also commented, approvingly, on Frege's and Russell's new logic; and he followed, closely and explicitly, the foundational debates between Hilbert and Brouwer, as is clear from his later writings. Nevertheless, Cassirer's views about the foundations of mathematics, in particular of pure mathematics, remain a relatively neglected part of his work. It is a part that deserves to be reexamined, both for historical and systematic reasons. Especially worthy of further examination is Cassirer's reception of Richard Dedekind's works, which played a role for him that was almost as important as Klein's. In this talk, I will bring two aspects of that reception into sharper focus: Cassirer's understanding and adoption of Dedekind's view that mathematics consists of the study of relational structures; and his more specific analysis of Dedekind's re-conceptualization of the foundations of arithmetic and analysis. These two aspects are closely related to each other—both concern the significance of the structural turn that took place in nineteenth century mathematics. They are also intertwined with more general themes in Cassirer's work, including: his thesis that a deep transformation of all the mathematical sciences took place in the modern period, indicated by the shift from "substance" to "function" as the guiding notion; and his "logicist" reworking of Kant's epistemological project.
But my main focus will be on Cassirer's reception of Dedekind's ideas itself. Along such lines, I will argue for the following: Cassirer's response to Dedekind's works was philosophically subtle and far ahead of its time—I would go as far as saying that, among twentieth-century philosophers, he was the most perceptive interpreter of Dedekind's views and their significance. Especially illuminating, also in relation to current debates about structuralism in the philosophy of mathematics, are Cassirer's clarifications of what is meant by the "free creation" that, according to Dedekind, characterizes modern mathematics and by the resulting "ideal" nature of mathematical objects. In addition, Cassirer's analysis of the innovations at the core of Dedekind's work, concerning the conceptual foundations of arithmetic and analysis, points towards an issue that contemporary philosophers have only started to rediscover recently; namely, the question of how exactly to think about the gains in understanding that, according to Cassirer, make certain conceptual changes central to mathematical progress.
In her seminal piece, “Leibniz’s Dynamics and Contingency in Nature,” Margaret Wilson drew attention to three signature theses of Leibniz’s mature natural philosophy, intriguingly suggesting that all three are interrelated and supported by his own scientific discoveries:
Contingency: The laws of nature are paradigmatically contingent; they serve as exemplars of contingency within the Leibnizian system.
Providence: The laws of nature provide the basis for a new argument from design, and show how reflection on God’s ends can be useful in the practice of natural philosophy.
Entelechies: The laws of nature presuppose the existence of active, goal- directed powers or “final causes” reminiscent of Aristotelian formal natures.
Since the publication of Wilson’s article almost thirty years ago, there has been no challenge more central to understanding the emergence of Leibniz’s mature natural philosophy than providing an account of how he came to hold these three theses, how he sees them as being related to one another, and how he thinks they are supported by his scientific discoveries. Wilson’s own account of Leibniz’s embrace of the theses of Contingency, Providence, and Entelechies has profoundly shaped the way in which this challenge has generally been approached. In the broadest terms, Wilson proposes that all three are to be understood as being rooted in Leibniz’s work in “dynamics,” and more specifically in his discovery of the conservation of vis viva and his concomitant rejection of Descartes’s geometrical physics. Thus she suggests that whereas Descartes appears to be committed to the necessity of the laws of physics, Leibniz embraces Contingency. Whereas Descartes bans consideration of divine ends in deriving the laws of motion and impact, Leibniz adopts Providence. Whereas Descartes’s laws of physics are supposed to govern bodies whose whole essence is passive extension, Leibniz insists that the laws of nature must be grounded in active, goal-directed entelechies. Although much important work has been done extending and developing Wilson’s original suggestion, her guiding assumption that Leibniz’s mature views in natural philosophy are rooted first and foremost in his studies in physics has only become more deeply entrenched, and today may fairly be counted as orthodoxy among his commentators. In this essay I propose to challenge the standard Wilsonian account of the emergence of Leibniz’s mature natural philosophy.
The essay itself falls into three main sections, each of which takes up one of Leibniz’s signature theses and argues that it is best understood as arising from his increasingly sophisticated attempts to show that the laws of optics can be thought of as selecting one uniquely determined actual path from an infinite family of possible paths. The intended moral of the three sections taken together is that while it has been tempting to suppose that Leibniz forges the central theses of his mature natural philosophy in the domain of physics and opportunistically carries them over to the domain of optics, such a story gets things essentially the wrong way around. The crucial nexus of views lying at the heart of Leibniz’s mature natural philosophy has its deepest roots in his optical discoveries, which in turn pave the way for the emergence of his mature views in physics more generally. Optics the horse, as it were, physics the cart.
At the heart of Leibniz’s philosophy is the familiar claim that the actual world is just one among many possible worlds. God has chosen to actualize this world because it is the best; nevertheless, according to Leibniz, God could have chosen differently in some meaningful sense. While this much about Leibniz’s philosophy is familiar, less familiar are the conditions that a set of possible substances must satisfy in order to qualify as a world. A world, as Leibniz defines it in his Theodicy and elsewhere, is a totality of substances belonging to a common spatio-temporal and causal order. This definition, as I shall argue, places Leibniz squarely at the start of a cosmological tradition that runs through later German philosophers such as Christian Wolff and Christian August Crusius and is taken up and transformed by the mature Kant. Despite significant disagreements about the nature of causal interaction, the attributes of monads, and the ontological status of space, one finds substantially the same conception of a world as a maximal set of substances connected spatially, temporally and causally in Christian Wolff, Alexander Baumgarten, Christian August Crusius, and the Pre-Critical Kant. However, though the latter take Leibniz’s conception of a world as a starting point for cosmology, which Wolff describes as a “science previously unknown to philosophers,” they modify his conception in subtle ways to satisfy a different set of explanatory demands than those originally envisaged by Leibniz. In this paper, I trace the evolution of the notion of a world from its original context in Leibniz’s theodicy to the role it comes to play in what Wolff, Baumgarten, Crusius, and Kant conceive of as cosmology.
The development of mechanics went hand in hand with its mathematization, axiomatization and formalization, culminating in Joseph Louis Lagrange’s “Mécanique analytique” (1788). Here, mechanics was reduced to the algebra of ordinary and partial differential equations and the calculus of variations. In his recent study “Axiomatik und Empirie. Eine wissenschaftstheoriegeschichtliche Untersuchung zur Mathematischen Naturphilosophie von Newton bis Neumann,” Helmut Pulte (2005) argues that this development did not result in providing foundations for mechanics as was originally intended. On the contrary, this development is characterized as a change from certism to fallibilism. Whereas certism posits the possibility of an apodictic, i.e. absolutely certain, knowledge, fallibilism teaches us the fallibility, corrigibility and provisional character of knowledge. By focusing on Émilie du Châtelet’s “Institutions physiques” (1742) I will argue that Pulte’s thesis should be qualified. The debate about fallibilism concerns not only the axiomatic-deductive method and the epistemic status of principles. Du Châtelet presented an architecture of mechanics pursuing two aims: firstly, to guarantee the secure foundation of mechanics on the basis of the principle of contradiction and the principle of sufficient reason; secondly, to offer a methodological framework for the construction of theories. Hypotheses, du Châtelet argues, play a key role in this construction: they cannot and need not be proven; but they are and must be falsifiable. Du Châtelet compares the first task with the foundations of a building, and the second task, i.e. setting up hypotheses, with the scaffolding of a building. Du Châtelet’s “Institutions physiques” are commonly interpreted as an episode of her lifelong wrestling with the theories of Newton and Leibniz, searching for a way to combine both.
Perhaps more importantly, du Châtelet’s architecture of mechanics aims not only at a synthesis of Newton and Leibniz, but also at a synthesis of certism and fallibilism. By restricting certism to the principle of contradiction and the principle of sufficient reason, and fallibilism to empirical propositions, du Châtelet demonstrates the compatibility of certism and fallibilism and shows the possibility of a secure foundation as well as the need for falsifiable hypotheses in physics. In conclusion, du Châtelet’s architectural draft of mechanics made an important contribution to the further development of mechanics in the 18th century. In my talk I will give some examples of du Châtelet’s influence on this development. These examples refer to the controversies over the “vis viva” (living force) and the Principle of Least Action, including the correspondence between du Châtelet, Leonhard Euler and Pierre Louis Moreau de Maupertuis.
Aristotle’s natural philosophy is often characterized as qualitative and non-mathematical. However, Aristotle is clearly aware of the mathematical treatment of natural phenomena constitutive of Greek astronomy, optics, harmonics and mechanics. In Physics II.2 Aristotle calls these sciences “the more natural branches of mathematics”. Aristotle is concerned with these sciences, which have come to be called mixed, middle or subordinate sciences, for a number of reasons. In Physics II.2 and Metaphysics XIII.2-3 Aristotle appeals to these sciences in arguments against a platonic philosophy of mathematics; in the Posterior Analytics Aristotle is concerned with characterizing these sciences because of their implications for his account of demonstrative science. Though somewhat fragmentary, his treatment in the Posterior Analytics is worth careful examination because it reflects Aristotle’s understanding of the relationship between mathematics and scientific knowledge of the natural world—an important topic in the history of philosophical reflection on natural science. Interestingly, in the De Caelo and in Metaphysics XII, Aristotle again considers one of these sciences, astronomy. Here Aristotle’s treatment is not in a methodological or second-order mode; it is substantive, motivated by the aim of providing an account of the heavens. In this paper I will (1) provide a careful account of Aristotle’s understanding of these sciences based on his methodological treatment, particularly in the Posterior Analytics; and (2) examine how this understanding is reflected in and illumines his treatment of astronomy in the context of his account of the heavens in the De Caelo and Metaphysics XII. This will provide us with a more accurate account of Aristotle’s understanding of the place of mathematics in the study of nature.
I argue that though Aristotle does insist that the subordinate sciences belong to mathematical and not natural science, he nonetheless sees them as essential to a complete scientific knowledge of the natural world.
Aristotle’s handling of what we would call laws of nature is rarely tackled directly by contemporary scholars. In this paper I draw on relevant passages in Parts of Animals II, Generation of Animals V and especially Meteorology IV in order to give prominence to the defining aspects of Aristotle’s approach to the laws of nature. The notion of natural law in Aristotle’s works may not be articulated in theoretical terms as neatly as it will be in later authors, but its importance in his ‘chemistry’ and in some sections of his biological corpus is quite remarkable. Conditional accounts often take the following form: if a uniform body consists of certain ingredients (present in it dunamei) in a particular proportion and if the right external conditions obtain (e.g. if sufficient – dry or moist – heat is applied to it), then a certain property will emerge, or (if already existent) it will be manifested. Conversely, Aristotle will also use conditionals to show that, if a body exhibits a certain behavior under specific conditions (e.g. when affected by heat or cold in such and such a way), then it is bound to have this or that composition. A conditional analysis of properties and processes does not have to be, for Aristotle in any case, an elegant strategy for doing away with dispositions. Conditional analyses can be used in principle to weaken the status of dispositions; in assuming that ‘if factors X1, X2… obtain, then result Y will be produced’, one can bypass the ascription of dispositions to a certain thing. Such a conditional account can be taken simply to (causally) link a set of categorical factors to an actual event. Aristotle, I believe, shuns this temptation.
In this context one might wonder how the laws of nature can conceivably hold in the realm of “for the most part.” Granted that phenomena in the sublunary sphere occur with less regularity than in the outer spheres and among the celestial bodies, necessity and ‘for the most part’ are not exactly mutually exclusive concepts: if/when the right conditions are in place, a particular effect will be produced of necessity. It is just that those conditions for the emergence and then for the manifestation of a certain disposition are not present with unfailing regularity and sheer predictability. My study of the place of laws of nature in Aristotle’s applied science aims to contribute to a fuller understanding of his effort to find order, based on causal connections, in what might otherwise look like a variegated slew of phenomena. Aristotle’s treatment of laws of nature is closer to what we might call dispositionalism than to actualism, to use a deliberate anachronism. In other words, Aristotle did not take laws of nature to be just summary descriptions of strictly actual events (an idea that the Megarians would have found perhaps palatable, just as the positivists would have found it so in more recent times); rather, he used law-like formulations to ascribe dispositions to things (especially organic and inorganic uniform materials) in the sublunary sphere.
Although early debates concerning Aristotle’s commitment to prime matter centered on the interpretation of particular passages in which Aristotle supposedly refers to prime matter, the disputants now are agreed that the question hangs on Aristotle’s philosophical requirements. The current debate is centered on the part of Aristotle’s philosophy that seems most to demand prime matter—his theory of elemental substantial change. I argue that an appeal to Aristotle’s theory of substantial change is not required in order to establish his commitment to the existence of prime matter. Instead, I draw on synchronic considerations at work in Physics II.1’s conception of what it is for an element to have a nature to present the following argument for Aristotle’s commitment to prime matter:
1. Nature is a principle and cause of being moved and of coming to rest in that to which it belongs primarily, in virtue of itself and not accidentally. (Physics II.1 192b21-3)
2. Something cannot be in itself primarily. (Physics IV.3 210b23)
3. There must be a difference between that which has a nature and the nature it has. (1, 2)
4. If that which has a nature were simply form or simply matter, there could be no difference between that which has a nature and the nature it has.
5. Therefore, that which has a nature cannot be simply form or simply matter. (3, 4)
6. If that which has a nature is neither simply matter nor simply form, then it is a composite of matter and form.
7. That which has a nature is a composite of matter and form. (5, 6)
8. Each of the elements has a nature. (Physics II.1 192b8-15)
9. An element is a composite of matter and form. (7, 8)
10. An element is the lowest-level thing that has a nature.
11. The matter of an element is prime. (9, 10)
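The deductive skeleton of steps 3 through 11 can be checked mechanically. What follows is a minimal propositional sketch in Lean; the proposition names and premise labels are ours, introduced purely for illustration, and they compress several of the steps above:

```lean
-- Propositional skeleton of the argument (labels ours, not Aristotle's):
--   D : there is a difference between what has a nature and the nature it has (3)
--   S : what has a nature is simply form or simply matter
--   C : what has a nature is a composite of matter and form (7, 9)
--   E : each element has a nature and is the lowest-level thing with one (8, 10)
--   P : the matter of an element is prime (11)
example (D S C E P : Prop)
    (h3  : D)               -- step 3, obtained from premises 1 and 2
    (h4  : S → ¬D)          -- premise 4 (contraposed: simplicity excludes the difference)
    (h6  : ¬S → C)          -- premise 6
    (h8  : E)               -- premises 8 and 10
    (h10 : C → E → P) :     -- the final inference from 9 and 10 to 11
    P :=
  h10 (h6 (fun s => h4 s h3)) h8
```

The sketch certifies only that the inferential steps are truth-preserving given the premises; the philosophical weight, of course, lies in premises 1, 2, 4, 6, and 10 themselves.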
If Aristotle’s commitment to prime matter were to rest solely on the details of elemental substantial change, the question of whether or not he is committed to it would perhaps be of only antiquarian interest. My argument for Aristotle’s synchronic justification shows there to be a philosophically rich a priori matter at stake in his commitment to prime matter—whether something that has a nature should be distinguished from the nature in it. I argue that this issue divided Aristotle from his foremost predecessors, Plato and Parmenides, and reveals the core of Aristotle’s conception of causation and explanation.
Recently, Ronald Giere, Philip Kitcher, and Janet Kourany, among others, have argued that philosophers of science should contribute directly to the crucial topic of science in the public interest. In this paper I discuss how these arguments can be strengthened by historical considerations. Although Otto Neurath believed that philosophers of science should be involved in social policy, his vision was eclipsed by the detached stance of Logical Empiricism as it developed during the twentieth century. Analytic philosophy contributed to this trend as well, as can be seen by Bertrand Russell’s rejection of John Dewey’s contention that philosophers ought to address issues concerning the applications of science. What resulted was a philosophy of science that focused on the explication of the semantic and logical properties of theories, which in turn led to an emphasis on pure science and an analysis of the methods by which a standard of disinterested objectivity could be achieved. Debates over science in the public interest were thus largely banished from professional philosophy of science. As recent writers suggest, however, the standard of disinterested objectivity is not feasible and the mere presence of interests need not be a direct challenge to the legitimacy of science. While this position is a departure from twentieth-century philosophy of science, it actually represents a return to a long tradition in Anglo-American natural philosophy that was dedicated to advancing an ideal of useful knowledge, which joined together an analysis of the epistemic and social aspects of science. This ideal formed an integral part of the concept of science for the common good advocated by Francis Bacon and Robert Boyle and it found expression in subsequent centuries in the works of Benjamin Franklin, Count Rumford, Humphry Davy, John Stuart Mill, William Whewell, and Charles Sanders Peirce. 
Although they put forth somewhat different conceptions of useful knowledge, all appreciated the fact that extra-scientific interests were relevant for guiding and assessing research. The history of the philosophy of science thus teaches that the standard of disinterested objectivity was not necessary to the development of modern science but was rather a historically contingent product of the twentieth century. In turn, arguments from the useful knowledge tradition can be used to further an analysis of the way in which social and epistemic interests may combine to form not only a more robust image of scientific method, but one that is socially responsible as well.
In 1960, N.R. Hanson became the founding chair of Indiana University’s Graduate Program in the History and Philosophy of Science, the first program of its kind. From the very beginning, however, the program was a subject of dispute (to the extent that the Philosophy Department would not allow the word “philosophy” to appear in the fledgling program’s title), and Hanson’s own connection with it did not last long. In addition to his historical and philosophical work, Hanson devoted considerable effort to determining how the relation between history and philosophy of science should be understood. Despite his having been one of the brightest luminaries of the early days of HPS, interest in Hanson’s philosophical and historical work over the past forty years has been confined to a few select areas, such as his analysis of observation, his arguments for a logic of discovery, and his defense of the Copenhagen Interpretation. Hanson shared the positivist view that the function of philosophy of science is to examine and clarify the conceptual foundations of science, though he differed from the positivists in thinking that science – both historical and contemporary – provides guidance for philosophy. In a sense, Hanson can be seen not so much as a critic of logical positivism as extending the field of conceptual analysis to areas the positivists had considered off-limits, like the context of discovery and the conceptualization of perception. He also believed that philosophical accounts of science should be concerned not with the static frameworks of the “catalog sciences” or with the type of science done within such frameworks, but with the most dynamic and formative stages of scientific development. Hanson’s approach to HPS is perhaps best appraised by stressing the many areas of disagreement between his views and those of Kuhn. 
I will argue that Hanson’s more philosophical orientation allowed him to make sense of both the logical and interpretive aspects of science. Hanson was also one of the first philosophers to use the findings of cognitive psychology as the basis of his epistemology, though his having pushed Kuhn in this direction has somewhat obscured Hanson’s own position. While Hanson’s conception of science was certainly shaped by the Oxford analysts and, to a lesser degree, by Peirce, it was his study of history that was decisive in producing his mature views. At the core of Hanson’s philosophical approach was the desire to make sense of the rationality of science, in all of its forms, and it was this emphasis on the normative aspects of science that held his philosophical theory together. This paper explores and defends Hanson’s contention that it is the supposition and testing of normative criteria for science that allows the analysis of case-studies to go beyond the cases themselves. Hanson uses Galileo’s discovery of the law of free-falling bodies to show that while facts inexpressible in a given notation are not impossible to grasp, the practical obstacle of grasping them is conceptually important for understanding the growth of science. Hanson’s emphasis is on how the successful conceptual framework for free fall was rationally constructed; his disbelief in flashes of inspiration separates him from Kuhn and follows directly from his commitment to a normative framework. Normative criteria are the philosophical elements that allow us to learn from, and abstract away from, case-studies.
Arthur Pap was not quite a Logical Empiricist. He wrote his dissertation in philosophy of science under Ernest Nagel and he published a textbook in the philosophy of science at the end of his tragically short career, but most of his work would be classified as analytic philosophy. More importantly, he took some stands that went against Logical Empiricist orthodoxy and was a persistent if friendly critic of the movement. Pap diverged most strongly from Logical Empiricism in his theory of a “functional a priori” in which fundamental principles of science are hardened into definitions and act as criteria for further inquiry. Pap was strongly influenced by the pragmatists C. I. Lewis and John Dewey in developing this alternative theory of a priori knowledge. Using Poincaré’s conventionalism as a springboard, Pap attempted to substantiate these views with examples from physics, and this was his largest foray into philosophy of science topics. In this talk I consider the position of Pap in relation to the Logical Empiricists and the pragmatists. Pap, and through him Lewis and Dewey, constituted an alternative philosophy of science in the 1950s that never quite took hold, despite the fact that their views on the a priori are very intriguing and similar to Michael Friedman’s recent work on the constitutive a priori.
The paper’s main thesis is that it was Albert Einstein who, in 1946, first adopted the term ‘incommensurable’ from its precise use in geometry into the philosophy of science, to mean that there is no common measure between universal theories in physics. Paul Feyerabend and Thomas Kuhn have often been credited with independently introducing the incommensurability thesis into the philosophy of science in 1962. The notions of incommensurability at the center of their philosophies of science were initially treated with much skepticism, as they were believed to imply that science is irrational. Feyerabend, in particular, was often dismissed as promoting overly radical theses that clash with the idea that science is rational. This paper takes a closer look at Feyerabend’s introduction of ‘incommensurable’ in his landmark ‘Explanation, Reduction and Empiricism’ (1962), and situates it historically within the wider development of the notion of incommensurability. First, the basic idea of incommensurability is explained. Then, based on archive materials such as Feyerabend’s personal copy of Duhem (1906) and his unpublished (1951) doctoral thesis, the origins of the basic idea are traced back to Duhem’s remarks about meaning change in the natural sciences. Taken together, these materials establish Duhem as the source of Feyerabend’s basic idea. Duhem, however, did not use the term ‘incommensurable’. Yet, prior to Feyerabend, ‘incommensurable’ had been used to describe the relations between the concepts of psychology and physics by Köhler in 1920, and the relations between concepts in medicine and science by Fleck in 1927 and 1935. Köhler heavily influenced Feyerabend, just as Fleck heavily influenced Kuhn. Even so, it was none other than Albert Einstein who, in 1946, first used the term ‘incommensurable’ to describe the relation between theories in physics. 
Moreover, Einstein explicitly restricted his discussion of weighing the comparative merits of incommensurable theories to those that talk about the entire universe, or what Feyerabend later called ‘universal theories’. This criterion also marks the main difference between Kuhn’s and Feyerabend’s incommensurability theses: for Feyerabend, but not for Kuhn, only such universal theories can be incommensurable. Lastly, although Preston (1997) has popularized the myth that Feyerabend was initially a scientific realist who believed that science discovers objective truths about a mind-independent reality, the metaphysical position Feyerabend explicitly delineated in his 1962 paper was not scientific realist, but Kantian — except without necessary, unchanging categories. Peter Lipton later dubbed such positions ‘Kant on wheels’. This is the same sort of metaphysical position to which Einstein explicitly subscribed when he used the term ‘incommensurable’ in 1946. Taken together, the historical evidence indicates that although Feyerabend was interpreted by the community of philosophers of science to be promoting radical, irrationalist theses, these basic claims were in fact merely repetitions of Einstein’s. To some this may come as no surprise: in the preface to the second German edition of Against Method (but not in any of the English editions), Feyerabend claimed that none of his theses were new, and that they would all have seemed trivial to Einstein.
A majority of scholars interested in Feyerabend’s development holds that an earlier, more scientifically minded Feyerabend became skeptical of the scientific tradition in the late sixties owing to his so-called “Berkeley experience”. Confronted with students from different cultures, he realized the plurality of worldviews and refused to initiate his students into the imperialistic Western tradition. My paper argues that this reconstruction, though supported by Feyerabend’s own account in Killing Time, is misleading. Rather than stemming from personal reasons triggered by political experiences, Feyerabend’s relativism towards traditions was a rationally justified result of his work on early Greek thought. To establish my claim I will explore the relevant stages in Feyerabend’s reception of ancient thought and its impact on his philosophy of science. Besides his published work I will also refer to a still unpublished book, Einfuehrung in die Naturphilosophie [Introduction to the Philosophy of Nature] (1972-76), which he was working on while Against Method appeared. Feyerabend was interested in “the 'rise of rationalism' in ancient Greece” (Farewell to Reason, 1987, p. 65) throughout his career, and increasingly so in his later works. This is evident from early papers like Knowledge without Foundations (1961), the generally underestimated chapter 16 of Against Method (1975), and several papers during the eighties, up to his posthumous book Conquest of Abundance (1999). Although this fact has been widely neglected, Feyerabend’s discussion of early Greek thought can clarify his philosophy of science, namely his relativism towards science and other traditions. To Feyerabend the Greek intellectual transition from myth to reason is fundamental, because it marks the historically, rather than rationally, caused beginning of a critical tradition and naturalistic metaphysics. 
Feyerabend’s early account of this transition, in 1961, betrays significant similarities with Popper’s idea of critical progress from closed to open worldviews. His later departure from these views results from two fundamental ideas. 1. In his discussion with Smart, Sellars, and Putnam on the meaning of scientific terms, Feyerabend argues in Reply to Criticism (1965), as well as in an unpublished letter to Smart (1963), that conceptual schemes frame our worldview and change historically. Inspired by Nietzsche and by Dodds’ The Greeks and the Irrational, he takes ancient Greek myth as an example of a different conceptual framework. The transition from mythical to scientific thought then no longer appears as a breakthrough to a more adequate understanding but as a change from one functional worldview to another. 2. General non-instantial theories or fundamental universal frameworks like Homeric myth and Pre-Socratic cosmology are incommensurable when "the main concepts of the former [...] can neither be defined on the basis of the latter nor related to them via a correct empirical statement" (Feyerabend, Explanation, Reduction and Empiricism, 1962, p. 76). The transition from myth to reason exemplifies incommensurability at the very foundation of Western science. This is why Feyerabend (unlike Kuhn) took the values and metaphysical assumptions of science to be culturally relative.
Over the last fifteen years the relationship between Kuhn’s work and Logical Empiricism (LE) has come under closer scrutiny, resulting in a far more conciliatory image of 20th century history of the philosophy of science than the traditional one. This revisionist trend, which started with stressing the elements of convergence between Kuhn’s central philosophical tenets and those of Carnap, the generally acknowledged leading representative of LE, has more recently focused on the affinities between Kuhn’s position and Frank’s Neurathian philosophy of science. The main thesis of this paper is that these points can be made even more forcefully and significantly in the case of Feyerabend. It is well known that, in contrast to Kuhn’s, Feyerabend’s philosophical education was deeply influenced by former members of the Vienna Circle, and that his intellectual development was characterized by personal acquaintance with various logical empiricists; this did not prevent him from being at least as critical of LE as Kuhn was. In particular, Feyerabend’s assault on LE, sustained during the 1950s through a series of both public and private attacks against Carnap’s double-language model of logical reconstruction of scientific theories, climaxed in the early 1960s with his famous incommensurability thesis, officially directed against Hempel’s theory of explanation and Nagel’s theory of reduction. It is less known, however, that in deploying his most pointed criticism of LE, Feyerabend elaborated upon Frank’s examples of the application of what Frank himself called the neo-positivists’ “pragmatic theory of meaning”. 
More specifically, Feyerabend first extended to paradigmatic cases of inter-theoretic reduction Frank’s thesis that the terminological ambiguity and conceptual disparity between Special Relativity and Newtonian Mechanics are brought about by the fact that the former implies the negation of certain empirical laws assumed in the latter to ensure the uniqueness of the definitions of central theoretical terms. Then, in order to unveil the theoretical nature of ordinary language, Feyerabend developed his own pragmatic theory of observational language (which originated in his 1951 doctoral dissertation devoted to the protocol sentence debate and which was admittedly inspired by Neurath’s and Carnap’s physicalism of the early 1930s) by using Frank’s examples illustrating the thesis that established philosophical principles implicitly assumed in common sense are just petrified physical hypotheses. Thus, relying on Frank, Feyerabend purportedly showed not only the formal irreducibility, at the theoretical level, of rationally reconstructed scientific theories, but also their linguistic incomparability at the observational level, thereby exposing some relevant drawbacks of the logical empiricist approach and paving the way to a legitimate appeal to “external” factors in (accounting for or making) theory choices. In the light of this connection, Feyerabend’s semantic holism and his later historical and sociological stance can be understood as attempts at reviving Neurath’s and Frank’s Gelehrtenbehavioristik: an approach to the philosophy of science active within the Vienna Circle but progressively marginalized by the emergence of LE from Carnap’s Wissenschaftslogik, an approach which emphasized the ultimate irreducibility of linguistic practices to formal calculi and the pragmatic, conventional, social, and ethical dimensions of the scientific enterprise.