Introduction to the Philosophy of Science.
Doreen Fraser
Email: dlfraser@uwaterloo.ca
Office Hours: Wednesday 1pm-2pm, Friday 10am-11am
Office: HH330
Science arose from philosophy. The core areas of philosophy are metaphysics (the study of the nature of reality) and epistemology (the study of the theory of knowledge). The philosophy of science is part of epistemology. For example, it examines the differing frameworks of mathematical physicists vs. experimental physicists, with theoretical physicists somewhere in between.
Our goal is to obtain high quality knowledge. The philosophy of science looks into the methods and procedures we use to obtain that knowledge, and how we can improve the quality and quantity of the knowledge we obtain this way. Later on, we will discuss bias, objectivity, and impartiality, and how science is affected by sociological phenomena.
The textbook is Theory and Reality (2003) by Peter Godfrey-Smith. This is required for the pre-class readings.
Pseudoscience is fake science. A common example of this is astrology - the horoscopes we see in things like newspapers. Some issues with astrology as a science include the fact that it cannot make accurate predictions, and that its predictions are so general that we cannot prove them false. In other words, astrology is not a science because it is not capable of making precise, accurate predictions.
Science includes physics, chemistry, biology, economics, psychology, and so on as paradigm cases. There are also gray areas such as anthropology. The paradigm cases are paradigm cases because they have been around for a very long time.
Descriptive accounts of science focus on how science works historically, attempting to generalize from case studies and observations of existing science.
Normative accounts of science focus on how science should work to help us better obtain data, figure out which methods are better than others, and so on. For example, whether a study should be double blind or not.
In other words, descriptive accounts focus on how science is, and normative focuses on how science should be.
Science is empirical - experience is the best source of real knowledge, as opposed to rationalism, which says that the best source of knowledge is from reasoning. However, science uses both empirical evidence and reasoning together to obtain the best information we can.
The observable-theoretical distinction is the distinction between what we directly observe and what is somewhat farther from what we observe. For example, atoms are observable using STM (scanning tunneling microscope) technology, but neutrinos are theoretical, since we have never seen a neutrino with our instruments or senses, only their effects.
The manifest image is the image we see with our senses and our instruments. The scientific image is the image we see through our scientific theories. These can often differ, as more of our scientific conclusions come from indirect observations. For example, quantum mechanics tells us that particles can simply wink into and out of existence, which goes against what we directly observe with our senses.
The scientific revolution, which occurred around 1550-1700, was a huge upheaval in the way humans understood how the world works, with impacts on everything from science to medicine. Before it, we had Aristotelian physics and Ptolemaic astronomy. After, we had Newtonian physics, with contributions from Galileo, Descartes, Newton, Copernicus, Kepler, and so on.
Ptolemaic astronomy was geocentric, with perfectly circular orbits of planets and stars around the earth. There is only perfect motion in the heavens, since the heavens are perfect. Beyond the moon, the stars are perfect and made of quintessence.
It was observed that occasionally planets would appear to go backwards in their orbits (retrograde motion). In response to this, Ptolemaic astronomy introduced epicycles - small circles carried along deferents, the larger circles around the earth; planets move on the epicycles while the epicycles' centers move along the deferents.
Ptolemy's model actually did a decent job of predicting the motion of the planets. In fact, it was used in naval navigation to quite a bit of success.
Copernicus' heliocentric model was a radical shift, putting the sun at the center of the solar system, though the orbits were still perfectly circular. The advent of these new models also resulted in changes to physics, such as Kepler's calculations for orbital parameters like velocity and orbital radius.
This received massive amounts of backlash. Since the dominant religion at the time was Catholicism, it was believed that humans had a special place in the universe, and so were at the center. Further, since the heavens were the domain of the Catholic God, the heavens must be perfect.
Galileo Galilei made big contributions to observational astronomy and physics, known for his improvements to the telescope and his experiments with falling objects. His observations of the heavenly bodies being imperfect resulted in his prosecution, and later on he was placed under house arrest for the rest of his life.
In his "Dialogue Concerning the Two Chief World Systems", he discusses geocentrism vs. heliocentrism as a dialogue between three people - Salviati, a mouthpiece for Galileo; Simplicio, an Aristotelian, and Sagredo, an educated layman/inserted audience member. Interesting, by being written in Italian, this was more widely accessible to the wider Italian speaking population, which angered the Catholic Church, despite Galileo being a Catholic.
The first argument in the excerpt is that the moon is not perfectly smooth: a flat mirror is bright only when viewed from one particular location, while a diffuse wall is bright from whatever angle we observe it, and the moon behaves like the wall. The rejoinder is that the moon is not flat, but a sphere.
The second argument is that a spherical mirror would only reflect a single point of light to any observer, since only a tiny point would be directly reflecting toward the observer. As a result, to us the moon would appear as so small a point as to be invisible. This point is emphasized by an experiment with the brightness of the wall not changing when introducing a spherical mirror.
Galileo at this point says that experiments and the experiences of the senses are more important than conclusions derived from plausible assumptions. This was an important part of the scientific revolution - the empirical discovery of new knowledge about the world, by doing rather than thinking about what seems correct.
The other sections of the book deal with the earth moving, and the sun being motionless at the center of the world.
Robert Boyle, among his contributions to physics (Boyle's law), also set some ground rules for what counts as a scientific explanation - a phenomenon must have a mechanical explanation.
Boyle introduced the concept of qualities. For example, primary qualities include size, bulk, shape, motion, and so on, while secondary qualities are the interactions between primary qualities, such as color, taste, solubility, and so on.
Empiricism is the view that all real knowledge is empirical - the only source of real knowledge is experience.
Rudolf Carnap was a philosopher (trained in physics) and the author of "The Elimination of Metaphysics Through Logical Analysis of Language". In this article, he states that all statements in metaphysics (including philosophy of ethics and moral philosophy) are entirely meaningless, and that he will justify this using logical analysis of the language used by metaphysicians.
This ties in to science in that analysis of the language used by science would potentially improve the quality of our knowledge - that we can make sure all our statements are meaningful.
For example, he cites this passage from Heidegger: "What is to be investigated is being only - and nothing else; being alone and further - nothing; solely being, and beyond being - nothing. What about this nothing? [...] Does the Nothing exist only because the Not, i.e., the Negation, exists? [...] What about this nothing? - the Nothing itself nothings."
Carnap says there are meaningful and meaningless uses of words. For example, "God" could refer to certain mythological beings, which would be meaningful, while a meaningless use would be something like a thing that is beyond experience. This leads to the verifiability theory of meaning: a sentence about the world has meaning if and only if it is possible, in principle, to make observations that count as evidence for the truth or falsity of the sentence. Sentences that are not about the world can also be meaningful or meaningless, but whether they are does not depend on observations about the world.
In other words, sentences about the world have meaning if they are verifiable by observation, and sentences that are not about the world are meaningful if they are analytic judgements, or contradictions of analytic judgements (statements that are false because of their meaning alone, like "One equals two.").
A pseudosentence is a sentence that is meaningless as per Carnap's definition. Meaning is the result of a sentence being somehow connected to something that we can observe.
Sentences can be meaningless because they don't make sense, like "Caesar is a prime number", or because they are not verifiable, like "God is a being with power beyond observation." - there are no observations we could make to verify this. A meaningful sentence might be "Arthropods are animals with segmented bodies and jointed legs." - it is easy to look at animals and see whether they are arthropods. Meaningfulness is time dependent - the observations that are theoretically possible to make can change over time; for example, many statements became meaningful after the invention of the microscope allowed new types of observations to be made.
Some sentences are meaningful but not directly observable, like "gravity makes objects fall". Although we cannot directly observe gravity, it is connected to and defined in terms of observations that we can make, which gives it meaning. This is the observational-theoretical distinction - the difference between what we can directly observe and what we derive from it. All meaningful words of the language can be reduced to other words, and eventually to the words of observation sentences ("protocol sentences"). Sentences that are the result of deduction are theoretical, while sentences reporting what we directly observe are observational. All theoretical terms must somehow be connected to the observational.
A theoretical sentence might be "atoms are composed of protons, neutrons, and electrons". While it is not possible to directly observe the protons, neutrons, and electrons, they are part of our scientific theory, which is heavily grounded in observation. The sentence is not a direct result of the observation, but also of scientific theory, which makes it theoretical.
The line is actually rather ill-defined - we use deduction even to assume that what our eyes see corresponds to what is in the world. For the purposes of analysis, we will give observation a very broad definition: anything connected to our five senses in some way. For example, what we see through a microscope or telescope cannot strictly be directly observed, but we count it as an observation anyway. Evolution, by contrast, is a theoretical concept, since no person has been alive long enough to directly notice it - it is the result of deduction performed on observations.
The analytic-synthetic distinction is the distinction between sentences that are true or false because of their meanings alone, and those that are true or false because of the way the world is. Analytic sentences are true solely due to their meaning and form - they say nothing about reality, and are transformations of factual statements rather than factual statements themselves. In order to gain knowledge about the world, we need to combine them with observations. Analytic sentences include all the formulae of mathematics and logic, like "one plus one equals two", as well as definitional truths like "a bachelor is an unmarried man". Synthetic sentences are true or false because of both their meaning and the way the world is, like "I ate toast for breakfast this morning".
According to Carnap, the statements of metaphysics are actually pseudostatements (meaningless), and rather than describing the way the world is, simply express the general attitude of a person towards life. What he is saying is that we should not treat the statements of metaphysics as factual, and not that the field of metaphysics itself should be eliminated.
There has been some criticism of this view. For example, the idea that something is meaningful only if it is connected to observations is a pretty big assumption, and one that rationalists are apt to reject.
There are also scientific statements that come out meaningless on this criterion, like statements about what is inside the event horizon of a black hole (we cannot observe beyond the event horizon). On the other hand, it could be said that such statements are meaningful because they are connected to the observations we have made outside of the event horizon.
The three types of meaningful sentences are therefore analytic statements, contradictions of analytic statements, and empirical sentences connected to observation.
Logical Empiricism is based on finding meaningful statements that are also true, based on the criteria specified above. It is one possible account of science.
Basically, logical empiricism is based on confirmation of hypotheses about the world.
The philosophy of science, according to Carnap, should focus exclusively on the logic behind science, not things like scientific history. In other words, philosophers should be focusing on the analytic rather than the synthetic, which should be the domain of the scientists.
This is a normative approach to the philosophy of science, since Carnap is stating that philosophers of science should seek to analyze statements to make sure they are meaningful, and to discard the meaningless theoretical terms and statements - correcting the logical structure of science. In other words, they should focus on validity rather than soundness.
The normative-descriptive distinction is the distinction between how science should be, and how it is.
Logical Empiricism is always based on the observations, at its core. For example, saying that a phenomenon is caused by an electron does not mean that there is actually something out there in the world that is an electron, but rather that there appears as though there is a thing that acts like an electron theoretically would causing the phenomenon. An electron is simply a good way to organize our observations.
This is analogous to the aether that earlier scientists believed in. A logical empiricist would say that this is not appropriately related to the observations that later scientists made, and so it has become meaningless in the light of our new observations. As a result, a logical empiricist would discard the meaningless term aether from science, the goal being to trim down science so that scientists can focus on making meaningful statements.
Carnap's logical empiricism is pragmatic, but does not take into account things like ethics, context of discovery, and much of the descriptive portion of the philosophy of science.
In practice, it is almost impossible to actually satisfy the verifiability principle for many core theoretical principles. In many cases, we end up stretching the "in principle" clause of the verifiability principle - stretching what it is theoretically possible for us to observe.
Some criticisms of logical empiricism include its failure to account for statements like "metals expand when heated or the nothing nothings". This disjunction counts as verifiable even though its second part is totally unverifiable, because an experiment about metals expanding gives evidence in favor of the statement as a whole being true, since it is an OR statement.
Logical empiricism is also criticized for looking only at individual sentences, not at a theory as a whole. As a result, it misses the bigger ideas in the theory, and even ideas that span just a few sentences.
While Carnap is distinguishing between meaningless and meaningful sentences, Popper further distinguishes between good science and bad science.
Basically, Popperian science is based upon falsifying hypotheses.
Quine presented one of the most well known criticisms of logical empiricism. Quine says that even the supposedly analytic parts of science are subject to revision in light of empirical evidence - so no statement is immune from empirical revision.
As a result, the analytic-synthetic distinction is more of a continuum, with logic and mathematics lying further along it than direct observations.
Truth always depends both on language and on facts outside the language - the truth of a sentence depends on the meanings of its words and on how the world actually is.
The goal of the philosophy of science, on this view, is to have a framework into which we can fit our observations and obtain a degree of certainty for any given hypothesis.
Science is based on induction - inferring generalizations using particular observations. Inductive logic is the generalization of induction into argument forms that have known properties. This contrasts with deductive logic, which infers specific cases from generalizations.
Deduction is nice because if the premises are true, then the conclusion must necessarily be true. However, we don't find it in science, because we don't have any generalizations that we are certain are true.
Induction is the primary tool we have. In general, it cannot provide certain knowledge; inductive arguments only become stronger with better and more premises.
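To make the contrast concrete, here is a sketch of the two forms side by side, borrowing the "metals expand when heated" example that appears later in these notes. The deductive form:
All metals expand when heated
This object is a metal and is being heated
------------------------------------------------------------
This object will expand (guaranteed, if the premises are true)
And the inductive form:
Every one of the many heated metals observed so far has expanded
------------------------------------------------------------
All metals expand when heated (supported, but never guaranteed)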
Most importantly, David Hume proposed the problem of induction - how do we know that future experiences will resemble past and present experiences? Hume states that objects have a secret nature, and our mental models of these objects, based on observations, might not be a good representation of it. On this view, it is only habit that leads us to conclude from all our past experiences of the sun rising that the sun will rise tomorrow.
It is the role of the philosophers to check the form, and the scientists to check the content.
Hypothetico-deductivism is the idea that an observation is deduced from the hypothesis, and is then confirmed by actually making the observation. A confirmation lends inductive support for the hypothesis. Basically, scientists form a hypothesis, make a prediction based on the hypothesis, try to observe the predicted observation, and then confirm the hypothesis with that observation (or reject it if the observation disconfirms it).
Recall the statement "metals expand when heated or the nothing nothings". Hypothetico-deductivism has a related problem: if M \implies O is a valid deductive argument, then so is (M \wedge S) \implies O, so the same observation "confirms" the hypothesis with an extra clause tacked on. This is something Carnap seeks to avoid, since the added clause is meaningless.
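To make the worry explicit, here is a minimal sketch, writing M for "metals expand when heated", S for "the nothing nothings", and O for an observation of a heated metal expanding:
M \implies O (the meaningful hypothesis predicts the observation)
(M \wedge S) \implies O (still valid - the extra conjunct S does no deductive work)
M \implies (M \vee S) (anything entailing M also entails the disjunction)
Observing O therefore lends hypothetico-deductive support to M \wedge S and to M \vee S just as much as to M alone, even though S is meaningless.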
However, logical equivalence presents a problem: "all ravens are black" is logically equivalent to "all non-black things are non-ravens". Therefore, in inductive logic, any evidence that confirms the latter also confirms the former. So finding a white shoe confirms that all ravens are black, which seems absurd.
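In symbols, the equivalence is just contraposition - writing R(x) for "x is a raven" and B(x) for "x is black":
\forall x (R(x) \implies B(x)) \iff \forall x (\neg B(x) \implies \neg R(x))
The two sentences are true in exactly the same circumstances, so any evidence confirming one confirms the other.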
However, this is in fact perfectly valid. If we looked at all non-black things and determined that they are not ravens, then it stands to reason that ravens must be a part of everything else, the black things.
In this particular example, it is simply a matter of degree - seeing a white shoe is in fact confirmation that all ravens are black, but it is much less confirmation than actually seeing a black raven, since the set of all non-black things is much larger than the set of all ravens.
The idea is that there are different degrees of confirmation - that confirmation is not a binary value, but rather a continuum.
Consider the following inductive form: "All of the many X observed, in many circumstances, prior to (date in the future), have been Y. Therefore, all X are Y". Now consider the following instantiation of the form: "All of the many oranges observed, in many circumstances, prior to (date in the future), have been frob. Therefore, all oranges are frob", where "frob" means that an object is orange if first observed before (date in the future), and blue if first observed after.
By this argument, oranges first observed after (date in the future) will be blue, which is probably a bad result. A rebuttal would be that we don't actually have any evidence that oranges are frob: genuine evidence that oranges are frob would be evidence that oranges first observed after that date will in fact be blue, and no such observation can be made now. That means that since we can't observe any evidence that oranges are frob, we can't confirm or disconfirm the hypothesis.
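As a sketch, writing t_0 for the future date, the gerrymandered predicate can be spelled out as:
Frob(x) \iff (x is first observed before t_0 \wedge x is orange) \vee (x is not observed before t_0 \wedge x is blue)
Every orange observed so far satisfies Frob exactly as well as it satisfies "is orange", so the same body of evidence inductively supports both "all oranges are orange" and "all oranges are frob" - yet the two generalizations contradict each other about oranges first observed after t_0.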
Goodman says that in this case, frob is not a valid predicate. Only predicates that have a history of use in our language are valid predicates. However, this prevents us from adding new predicates to our language, which is not an acceptable situation.
In response to this, he says that only natural-kind predicates are allowed - predicates that describe the real properties and qualities of the world. However, this goes against logical empiricism, since it invalidates the analytic-synthetic distinction (we can't do the logic in a purely formal way, and have to consider the real world).
Sir Karl Popper's contribution to the philosophy of science is the idea of falsificationism - that scientific theories can only be falsified, not confirmed. He was closely associated with the logical empiricists, though he rejected their verificationism. The main idea is that instead of trying to find evidence that a theory is true, we should try to find evidence that a theory is false.
A theory is bold if it makes many predictions (there are lots of ways to disconfirm it). It is made strong by surviving tests that were likely to disconfirm it but failed to do so.
By Popper's model, the new scientific method is to make a bold conjecture, then attempt to refute it.
Popper agreed with Hume's problem of induction. Popper thinks that inductive logic has no role in science - falsification is deductive, and is therefore capable of giving us certain knowledge (that a theory is false).
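A sketch of why falsification is deductive while confirmation is not, writing H for a hypothesis and O for an observation it predicts:
H \implies O
\neg O (the predicted observation fails)
------------------------------------------------------------
\neg H (modus tollens - deductively valid)
By contrast, reasoning from H \implies O and O to H is the fallacy of affirming the consequent, which is why confirmation can never be deductively certain.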
Theories need to be bold, which means that they make predictions that are not obvious, and they should not be ad hoc - theories where the original problems are patched up without making new predictions. In other words, theories should be as risky as possible.
Basically, we have a bunch of theories that are competing with each other and eliminated with the best efforts of scientists - the fittest theories survive.
An example of an ad hoc theory is when Galileo tried to falsify the Aristotelians' view of a perfectly spherical moon by showing them the moon through his telescope. In response to seeing a cratered moon, the Aristotelians, rather than admitting that their theory was falsified, said that the moon was enveloped in an invisible substance that cannot be detected.
This is ad hoc because it makes no new predictions - if this invisible substance was detectable in some way, it would be fine since it made new predictions we could test.
In the falsification model, all scientific changes are rational. This account of science is intended to be both normative and descriptive.
Popper originally wanted to justify the difference between things like psychology and physics - he saw physics as a prime example of what real scientists should be doing, and the field of early psychology to be a prime example of pseudoscience. He wanted to explain why these were so different, and why one was science while the other was not.
Popper found that it wasn't the ability to confirm a theory that led him to doubt that something is science, but the inability to disconfirm it.
An example of psychology as pseudoscience is Adlerian psychology, in which feelings of inferiority are the driving force behind all human actions. A man who saves a drowning child does so to prove to himself that he can; a man who pushes a child into the water does so to prove to himself that he dared to. This is not science because the theory can accommodate any situation whatsoever.
Falsificationism is the idea that scientific theories are genuine only if they are refutable by some possible observation.
A possible problem with this is that there is a lot of genuine science that is difficult or almost impossible to falsify, like string theory (where we have no practical ways of testing certain things) or biology (where certain experiments would be unethical). Because of this, some people simply say that things like string theory are not science yet.
There are also non-scientific theories that are falsifiable, like homeopathy/alternative medicine. In response to this, Popper says that homeopathy was a science before it was falsified, and that the theories that exist now are no longer science, since they are simply ad hoc changes to the old theories made to avoid falsification.
As a result of this, falsificationism is not a perfect way to demarcate science and pseudoscience - there is pseudoscience that is falsifiable and science that is not.
All scientific statements are falsifiable - even observation statements that are derived from our senses. The problem with this is infinite regress - we can't get anywhere since we have to try to falsify everything. In practice, we would have a convention in which we agree to assume that certain statements are true, but we must keep an open mind when it comes to the possibility of them being false. For example, when we are recording the pH value of a solution using a litmus test, by convention we simply assume the color corresponds to the pH as it should, and not that our senses are deceiving us. However, we must be open to the possibility that secondary reactions are causing the color to change.
One of the criticisms of Popper's account of science is that nothing ever gets confirmed, yet in real life we treat hypotheses as confirmed all the time. In response, Popper might say that this account is normative, not descriptive - this is the goal we must strive toward.
Another criticism, proposed by Quine, is that falsification doesn't actually yield a clean deductive argument against the hypothesis alone. The original formulation is that if we have a hypothesis and a disconfirming observation, the hypothesis is false. However, in reality it is more like: if we have a hypothesis together with a large number of auxiliary hypotheses, and a disconfirming observation, then either the hypothesis is false, or one of the auxiliary hypotheses is. For example, if we hypothesized that a solution is acidic and the litmus test came out blue, it is possible that the litmus paper is wrong, or we are temporarily color blind, or something else - the hypothesis depends on the auxiliary hypotheses that the litmus paper actually works correctly and that our senses aren't deceiving us.
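In sketch form, Quine's point is that the falsifying argument really has the following shape, with A_1, ..., A_n the auxiliary hypotheses:
(H \wedge A_1 \wedge ... \wedge A_n) \implies O
\neg O
------------------------------------------------------------
\neg H \vee \neg A_1 \vee ... \vee \neg A_n
The deduction only tells us that something in the bundle is false - it cannot single out H itself.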
Kuhn was a historian, not a philosopher. Instead of focusing on the justification behind discovery, he looks at the context of discovery - where theories come from and how changes in science happen. Kuhn's account of science is descriptive - it describes how science in real life is practiced.
In Kuhn's account, there are multiple stages in scientific change: normal science, the accumulation of anomalies leading to crisis, revolution, and then a new period of normal science.
Kuhn criticizes Popperian falsificationism by saying that in practice it would not work, since people would waste all their time trying to falsify the foundations of everything. To make progress, he says, researchers need to be on the same page.
An example of this is the changing view of light in physics: light was treated as particles in Newton's optics, then as waves in nineteenth-century physics, and then as photons, with both particle-like and wave-like behaviour, in quantum theory.
Normal science is research that is firmly based on a past scientific achievement, which the community as a whole acknowledges as supplying the foundation for the further practice of the science. Basically, we solve puzzles within the framework of the current paradigm, while there is consensus on the paradigm's basic principles and concepts.
A paradigm is essentially the framework in which we perform science. A paradigm might be something like the heliocentric model of astronomy, or relativistic physics. A paradigm in the narrow sense is a specific experiment, a law, or solution to a particular puzzle - a specific scientific achievement. In the broad sense, it is a package of ideas and methods that make up a view of the world and a way of doing science.
A crisis occurs when unresolved anomalies start to pile up - puzzles that scientists try to solve, but cannot - the problems that resist solutions, like experiments that result in something unexpected. When these pile up, scientists start to lose faith in the paradigm and begin to look at alternatives.
This differs a lot from Popper's model, because Popper's model says that even if a single prediction is falsified, we should reject the entire theory and cannot patch it up, since that would be ad hoc. In Kuhn's model, scientists allow some anomalies to pile up first, even if there are no known solutions to certain problems, and possibly modify the theory to keep up (Popper would say that this is ad hoc). An example would be how Newtonian mechanics couldn't account for certain anomalies in the orbits of celestial bodies, such as the precession of Mercury's orbit. Although this couldn't be explained, scientists still used Newtonian mechanics until relativity was developed.
Another example would be the epicycles introduced by scientists to explain strange motion in the geocentric model - an anomaly. Although the motion disconfirmed the original theory, scientists did not reject the entire theory, instead opting to fix the theory instead. Another anomaly would be comets and mountains on the moon proving that the heavens were not an entirely different, perfect realm.
When a crisis occurs, an alternative paradigm is suggested and scientists decide to replace the old paradigm with the new one. This can take a very long time - the Scientific Revolution, for example.
When a revolution occurs, the new paradigm is not based on the old one. In fact, the field is rebuilt entirely, with many of the core assumptions and fundamentals shaken up. The transition is not cumulative - in other words, when we get a new paradigm, we have to start the entire field over from the basics. There is usually a large overlap between the problems that can be solved by both paradigms.
When a paradigm shift occurs, we can't transfer our research over from the old paradigm to the new - this is the incommensurability of research. The paradigm itself defines what a legitimate scientific question or answer is - this is the incommensurability of standards. In other words, a paradigm shift changes the questions in addition to the answers. For example, "what is the size of the epicycle of the moon?" is a legitimate question in Ptolemaic astronomy, but doesn't make any sense in Copernican astronomy. Also, "why do some substances gain mass when burned?" is a problem in phlogiston-based chemistry, but not in oxygen-based chemistry.
There is also incommensurability of meaning - the meaning of terms themselves change. For example, the meaning of "space" and "mass" between Newtonian and Einsteinian mechanics, or the meaning of "species" before and after Darwin's theories.
It is also possible for scientific standards to change. For example, Aristotelian physics says that things fall because their nature drives them to the center of the universe, while modern science shows, through its own methodology, that falling is caused by the nature of an object's component particles. When the methodology itself changes, we cannot simply use the results of the old methodology in the new paradigm. Since scientists have rejected basically all of the core assumptions of the old paradigm, it is no longer valid to use the results available in the old paradigm.
Scientific revolutions do not happen because the new paradigm can better solve problems in the field - in practice, this does not occur, and problems in different paradigms are incomparable anyways.
Basically, Kuhnian science is based on solving problems within the framework of paradigms, and changing paradigms when there are too many anomalies.
An example of a paradigm shift given by Kuhn was in chemistry. Before modern chemistry, chemistry made use of a substance called phlogiston, which was an element used to explain combustion. A log contains phlogiston naturally, and burning it would release it into the air. Of course, modern chemistry moved over into the oxygen based model, where oxygen combines with carbon in the log to release carbon dioxide.
The shift to the oxygen paradigm was pretty significant, and radically changed our view of what air actually was. A good example of an anomaly is the fact that certain materials gain mass after being burnt - phlogiston, being lost by burning, should decrease the mass, not increase it.
One hallmark of the coming paradigm shift was the emergence of many new phlogiston-based theories, as chemists tried hard to keep the paradigm in the face of anomalies piling up. Another was the inadequacy of phlogiston-based methods for performing experiments.
Discovery is also not an instantaneous process, as evidenced by Lavoisier and his colleagues taking years to finally get their discovery right. The old theory was not simply falsified at some particular instant in time - the discovery of a new entity is not just a matter of observation, and may require significant reconceptualization over a long time. Lavoisier had to recognize that air was not the only gas, and in doing so change his entire way of thinking.
What Kuhn is saying is that observation isn't something that can be done independently of theory - where Lavoisier sees oxygen in the experiment, Priestley sees dephlogisticated air. Though they watch the same experiment, they observe different things. The idea that observations are affected by the mindset of the observer is known as the theory-ladenness of observation.
A revolution is a change in world view, and we cannot keep any of the research or even data that we have collected before the new paradigm, since it is tainted by the biases of the old paradigm.
An example of this occurring in real life is the fact that Western astronomers observed changes in the heavens (sunspots, comets, new stars) after Copernicus' theory was published, while Chinese astronomers saw it much earlier, since their paradigm allowed for changes in the heavens in the first place. In other words, it is hard for us to see evidence that is outside of the paradigm.
In other words, even our senses are not objective and neutral - they are interpretations of the raw data streaming into our senses by our brains, which are embedded in a paradigm.
During a revolution, persuasive speaking and rhetoric have a role in which new paradigm is chosen. For example, Galileo's Dialogue uses a conversation between three people to be more rhetorically forceful than just presenting the theory and supporting evidence directly. The result of revolutions can depend on such things as research grants and debating techniques. This is analogous to a political revolution, according to Kuhn.
Kuhn's account differs from that of the logical empiricists and Popper in that observations are theory laden, and there are social/psychological factors that will affect how science is done and the products of science. Kuhn also focuses on science as a community rather than an individual activity, and how the best way to organize the community is to agree on some basic assumptions first rather than trying to confirm or falsify everything.
Also, for Kuhn, normal science is rational like the logical empiricists and Popper would say, but revolutions have essential irrational elements.
Note that Kuhn's account of science does not give us any certain information. Also, we lose a lot of work in paradigm shifts, and we can't tell if we are making scientific progress or getting better knowledge, since there is no objective body of evidence to compare against. Additionally, Kuhn's account is only descriptive, not normative - it doesn't tell us anything about how we should be doing science, or whether any of this is good or bad. It doesn't solve the demarcation problem, and doesn't account for the life sciences, which have not had paradigm shifts. Also, there are sometimes multiple paradigms in a single field - like physics, with quantum mechanics and relativity - which goes against the idea that there is only one paradigm per field.
Imre Lakatos was a student of Popper, and claimed that Kuhn's account of science reduced science to just mob psychology, and that science should be a rational activity. Lakatos' account is based on Popper's, but takes into account a few points from Kuhn. This account is meant to be normative, but also to be somewhat descriptive and to take real-world issues and examples into account.
Lakatos says that a healthy scientific field must contain several competing research programs, and each program has scientists that try to defend their views against falsification while trying to falsify other research programs. Programs that defend themselves poorly (in an ad hoc way, or in a way that doesn't lead to new knowledge) are abandoned.
A research program is a group of scientists sharing a common set of ideas: a hard core of fundamental assumptions that are not questioned, and a protective belt of auxiliary hypotheses around it that can be modified in response to falsification.
A progressive research program responds to falsification by changing the protective belt in a way that leads to new discoveries and predictions - a progressive problemshift.
A degenerating research program responds to falsification by changing the protective belt in a way that does not lead to new discoveries and predictions - a degenerating problemshift. This is not necessarily a sure sign that scientists should abandon it, and sometimes the right course of action is to stay with the program.
It is up to the scientists to decide whether their program is degenerate, and whether they should abandon it and join another.
One criticism of Lakatos' account of science is that it doesn't actually tell scientists when is a good time to abandon a degenerating research program - only that they should if degeneration happens too often.
Another criticism is that things such as the availability of funding will affect the choices scientists make when deciding which research program to join, which means that this part of science is not rational as Lakatos wanted all of science to be.
On the other hand, this account of science supports multiple paradigm-like communities within a single field, which better describes certain fields of science, and also supports subcommunities within fields very well. Further, it has universal standards that apply to all research programs in all fields, unlike in Kuhn's account where every paradigm has its own standards.
This account gets around mob psychology because it allows a single scientist to start their own research program and defend it themselves. Even if the rest of the programs are opposed to the new one, it is still allowed to compete with them.
Paul Feyerabend was an anarchist, and had an anarchistic view of science. He was of the opinion that there is no such thing as the scientific method - "anything goes" should be the only rule in science. Feyerabend follows the views of Kuhn, but with a more extreme embrace of mob psychology - fewer rules, and less structure imposed.
According to Feyerabend, science consists of multiple paradigms, which are fundamental frameworks that are not tested. Ideas and standards in different paradigms are also incommensurable, like in Kuhn's account. However, this account doesn't require normal science to have only one paradigm. The more paradigms that exist at once, the more healthy the scientific field is, because we want to have as many new, interesting, novel ideas at once as possible.
Science should be guided by the principle of tenacity, which means that scientists should feel free to stay with a theory, even if there are problems, and the principle of proliferation, which means that scientists should be encouraged to try diverse and varied approaches.
An example would be Galileo, who created his own paradigm when Aristotelian astronomy was the dominant one. Rather than giving better observational evidence, Galileo changed the definition of what counted as good evidence, and used rhetorical and persuasive techniques to get people to embrace his new paradigm.
Modern science behaves like the church in the Middle Ages did - it has a lot of authority over what kind of science can get done, and holds a social position it should not have. Feyerabend says that all sciences should have equal access to funding and the education system, and people should be able to choose a science as they want.
A criticism of this account is that it removes all the ability for science to allow us to obtain high quality knowledge.
Also, the recommendation that all programs should get equal funding is not very practical since funding is not unlimited, and gives people with more wealth or political influence more power over which sciences people engage in. People should not be learning magic in schools, but in this account this would be perfectly fine.
This account is also criticized for using exceptions to justify throwing out rules entirely, rather than creating more flexible rules or rules that are more like guidelines.
This account of science is descriptive and normative. It also accounts very well for political, social, and rhetorical forces in science, like what gets taught in schools. It can also explain why revolutions sometimes occur without a crisis at all, unlike Kuhn's account.
In this account, science would be an art that results in imaginative and lively discussion, rather than a tool that helps us solve practical problems.
Underdetermination of theory by observational consequences is the idea that for any theory with some observational consequences, there also exists another, different theory with the same consequences - the observational consequences of any theory are never unique. For example, in Newtonian mechanics, the center of the observable universe could be at rest with regards to the absolute reference frame, or it could be moving at a constant velocity. Both theories result in the same observations, so it is theoretically impossible to figure out which theory is correct.
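A minimal sketch of why the two Newtonian pictures are observationally identical: let each body i have position x_i(t), and compare with a copy of the universe boosted by a constant velocity v:
x_i'(t) = x_i(t) + vt
x_i'(t) - x_j'(t) = x_i(t) - x_j(t) (all relative positions are unchanged)
\ddot{x}_i'(t) = \ddot{x}_i(t) (all accelerations, and hence forces via F = ma, are unchanged)
Assuming every measurement ultimately records relative positions and their changes, no observation can distinguish a universe at absolute rest from one moving at constant velocity.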
Does the disproportionate lack of women researchers in scientific fields affect the quality of knowledge obtained by science? Do gender roles have an effect on which hypotheses are formed and accepted?
A bias is a disposition toward a particular kind of reasoning or judgement - a skew toward a specific sort of interpretation. Biases can be conscious or unconscious, and are often unconscious. They are an important part of human cognition, but we want to be aware of when they tend to lead to unreliable reasoning.
Sex is the biological category in which a person falls, while gender is a person's culturally shaped expression of sexual difference (masculine and feminine behaviour).
In the 1990s, the theory that the SRY gene was the master gene determining sex was replaced by a model in which the SRY gene is one among many genes determining sex.
The previous, "master gene" theory is an androcentric one - based around males. In this theory, all fetuses start off as female, and a gene turning on eventually turns it male. However, the androcentric nature of the theory means that the researchers looked only into testes development rather than ovarian development, and that the researchers assumed that sex is a binary value determined solely by genetics.
Fausto-Sterling concluded that assumptions about gender have prevented the creation of a coherent model of sex determination. The strength of the researchers' gender biases caused them to overlook evidence such as the SRY gene being inconsistently transcribed and poorly conserved, which disconfirms the predictions that the gene should be consistently transcribed and well conserved.
The fact that most of the researchers were male contributed to their studying only the Y chromosome rather than the X chromosome.
Spontaneous feminist empiricism is the use of a feminist viewpoint to revise biases and other problems in science, but in a way that doesn't challenge its ideals, methods, or standards. For example, this might include bringing in people from diverse viewpoints to review hypotheses for biases that the researchers may have missed.
Philosophical feminist empiricism is the use of a feminist viewpoint to revise traditional ideas about science and knowledge, like its institutions and structures, but in a way that keeps its empirical, objective core - revision is allowed, within certain constraints. For example, this might include socially restructuring the scientific community in order to improve open critical discussion of alternative assumptions among scientists of diverse backgrounds.
Feminist postmodernism is the idea that science should embrace relativism - allowing for the idea that people of different genders, ethnic groups, and so on see the world in fundamentally different ways, and the idea that there is one true description of the world is a harmful illusion. However, this throws out the objective and empirical nature of science, and does not allow us to obtain any meaningful, high quality knowledge.
Standpoint theory is the idea that there are certain facts that are only visible from a special point of view, such as that of minorities. These people can criticize the basics in ways that others cannot. This is not a relativist view, because these people actually do have access to facts that others do not.
All of these, so far, apply only to fields like psychology, sociology, and biology, and don't really apply to things like physics and chemistry.
Scientific realism is the idea that it is reasonable to believe (i.e., we know) that the world is approximately the way in which our best scientific theories say it is. Scientific anti-realism is the idea that it is not.
Does particle physics describe particles? This is a metaphysical and epistemological question - it's a question about what our state of knowledge is, and what the world is like.
For now, we will do our analyses from the perspective of scientific realists. That means that our perspective is one in which we believe that particle physics accurately describes the world.
Particle physics is currently dominated by quantum field theory, which combines quantum mechanics (which governs small scale phenomena) and special relativity (which governs high speed phenomena), though not general relativity. The standard model of particle physics is a quantum field theory. It has particles like the electron, neutrinos, quarks, etc. that make up matter, and force-carrying particles like photons, gluons, and the W and Z bosons. The idea that there are many different types of particles is central to the standard model.
The discrete packages in quantum field theory are known as quanta. Quanta are particle-like in that they are discrete and countable, and also have the correct relativistic energies. However, they can't be individually tracked/labelled or localized to a finite region (we can't say with certainty that a quantum is within any finite region of spacetime - it could be anywhere in the universe, though often with very small probability). Now the question is, "are quanta the particles in particle physics?", and the answer seems to be yes.
Ordinary quantum mechanics has quanta, but quantum field theory only has quanta in the absence of interactions (an idealization), which never occurs in the real world. As a result, quantum field theory doesn't describe quanta at all - it actually describes fields.
So particle physics doesn't actually describe particles - our best theories actually describe fields.
What is the Higgs boson? The LHC found a particle that corresponded to the Higgs boson, and CERN later confirmed that it has the properties we would expect the Higgs boson to have, which allowed us to say that we had found a Higgs boson. There is a background field called the Higgs field; particles distort it, causing it to bunch together around the particle, and the more the field distorts, the more mass results. The Higgs boson itself is just a bunching together in the Higgs field, not a separate physical thing.
The idea is that figuring out the picture of reality that particle physics describes is not always straightforward - just trying to figure out whether particle physics describes particles requires us to look into many different aspects of the theory.
Carnap is neither a realist nor an anti-realist - he doesn't say that we can or can't trust that our scientific theories match what is in the world. In fact, Carnap says that this entire debate is metaphysics and is meaningless. Theories, according to Carnap, don't describe the world at all, and are only useful for making predictions.
Godfrey-Smith, the author of our textbook, is a proponent of realism - "common sense realism naturalized". The actual goal of science is to accurately describe reality, including parts of reality that are unobservable, and the idea is that scientists can reasonably hope to successfully do so at least some of the time. The success of a theory is how well we can use it to explain phenomena, or how well it makes predictions.
Godfrey-Smith emphasized theory realism, which says that our best scientific theories describe the world in an accurate way.
Most philosophers are realists, and try to argue that our theories correspond to the world, since that way we can solve more problems and better represent the world.
Hacking is a realist. Hacking's entity realism says that a large number of theoretical entities, like photons, protons, black holes, and so on, actually exist in the world, and are not just abstractions of something else. Real world science is based on realism, and Hacking says there is likely no theoretical argument that can settle the question either way.
Hacking distinguishes between representation and intervention - the theoretical and the experimental. Representation is the idea of theoretical entities representing real objects, while intervention is the idea of actually being able to manipulate and use these objects. Hacking is not drawing on scientific theory to say what exists, but on whether the entity can be used in scientific experiments.
Hacking says that the criterion for believing a theoretical entity exists is experimental: an electron exists because we can manipulate it in the lab and use it as a tool. However, this assumes that the theories give approximately correct descriptions of what is in the world, and does not consider the fact that theories are used to build experiments, and experiments are used to formulate and support theories (Hacking simply avoids considering these types of experiments).
Some realists use the No Miracles argument: if the world were not even approximately similar to our theories, then it would be a miracle that our theories were so successful; since there are no miracles, the world is approximately the way our theories say it is. This is an instance of an inference to the best explanation - the best explanation for something is taken to be true. By "best", Hacking refers to the amount of success people have using the theory in experiments.
However, the definition of success changes things. It is possible that the theory that makes the best predictions and works best might not be the most accurate picture of the world. For example, the geocentric model of the world was an excellent predictive model that was widely used for navigation to great success, and at first the heliocentric model was not as accurate. However, although the heliocentric model was not as successful, it is a more accurate model of what is actually in the world.
Bas van Fraassen is a famous example of an anti-realist. According to him, something is observable only if there are circumstances under which we could observe it using our unaided senses. So if something can only be seen with a microscope, it is not actually observable (even eyeglasses technically aid the senses, which raises the question of where exactly to draw the line - see below). In this view, things like phlogiston never existed at all in the real world, but were only a useful way to organize observations (sort of like logical empiricism).
A criticism of this view is that scientists who do not commit to accepting the existence of unobservable things will form less useful hypotheses and experiments. Also, the observable/unobservable distinction isn't very important in practice, since we will need to use claims about unobservable things in any case.
Hacking and van Fraassen disagreed on many points regarding realism vs. anti-realism.
For example, seeing red blood cells using optical and electron microscopes would, according to Hacking, be good enough evidence that red blood cells actually exist - it would be absurd to believe that we could view the same visual configuration from two entirely different media, without there being real structures that we are viewing.
However, van Fraassen says that the way we actually build and calibrate our microscopes has the goal of making their results look like the results of previous microscopes, which could emphasize similarities that might not be relevant. In other words, we have tuned our microscopes to show us similar features, and then assumed that those similar features actually exist. For example, our electron microscopes might be tuned to match the results of optical microscopes, and as a result would not provide additional, independent evidence that what we view in optical microscopes actually exists.
The distinction between aided and unaided senses is a continuum, and where to draw the line is a community standard. van Fraassen draws the line between aided and unaided after glasses, but before microscopes. However, we could also argue that the unaided senses are themselves quite dependent on the situation, and that in many situations our senses can easily deceive us.
For example, optical fluorescence microscopy has only a 10%-20% success rate in preparing samples. van Fraassen would ask why we are discarding so many samples - is it because the discarded samples aren't actually representative, or is it because what we are expecting to see isn't actually what is in the world?
Pessimistic meta-induction is an argument for anti-realism. In the past, we have had theories that were successful but didn't describe the world accurately (phlogiston for combustion, caloric for heat as a fluid, the ether as the medium for electromagnetic waves, etc.). Our current theories are successful, but it is likely that eventually we will find that they don't describe the world accurately either.
In the 1700s, heat was said to be a fluid known as caloric, which flows from hot bodies to cold ones. By the 1800s, we knew that heat is an effect of the motion of particles in bodies - the kinetic theory. However, caloric made for good explanations of a lot of phenomena for a long time.
Pessimistic meta-induction directly conflicts with the No Miracles argument. In response, an anti-realist might argue that the best theories available still aren't good enough to accurately describe reality, while a realist might argue that current theories avoid making the same mistakes as previous theories, and that current theories are far more successful than previous ones.
There is also a compatibilist view of the realism vs. anti-realism debate. Structural realism is the view that mathematical equations give an approximately true description of the structure of the world, but not of its entities.
When we ask someone to explain something, we ask them to find a satisfying answer to the question, "why does this thing take place?". There are four major philosophical views of what an explanation is.
The first is the received view, or deductive-nomological view, associated with Hempel. Explanations under this view break the phenomenon down into laws and statistical regularities of nature: they are deductive arguments whose conclusion (called the explanandum) is the phenomenon being explained, and whose premises (called the explanans) must include at least one law or statistical regularity of nature. People tend not to hold this view anymore; instead, it serves as the basis for most of the current views of explanation.
For example, using the received view, "Why is Mars at its particular position at this time of the year?" would be explained using something like the following argument:
Newton's laws of motion
past positions of Mars
mass of Mars and the sun
------------------------------------------------------------
Mars is at its particular position at this time of the year
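In general, the deductive-nomological schema can be sketched as:
L_1, ..., L_n (laws of nature)
C_1, ..., C_m (particular facts and initial conditions)
------------------------------------------------------------
E (the explanandum)
where the argument is deductively valid and at least one of the laws L_i is actually needed for the deduction. In the Mars example, Newton's laws are the L_i, and the past positions and masses are the C_j.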
The problem of asymmetry is an issue with the received view of explanations. The received view can easily explain "why is the length of an object's shadow what it is?" using things like the height of the object to deduce the shadow length. However, we could use an almost identical argument to explain the height of the object using the length of the shadow. This doesn't feel like a good explanation, because it doesn't really make sense to explain the height of a pole in terms of the length of its shadow. In other words, we want a phenomenon to be explained only in terms of the phenomena that cause it.
What is a law? How do we know if something is a law or regularity of nature? In the past we thought that matter cannot be created or destroyed, and that light cannot bend. However, we don't know if such statements are actually laws, because we can't confirm that they apply everywhere and at every time. What a law actually is remains a little unclear, so we can't always know whether an explanans contains a law or not.
Another is the causal-mechanical view, associated with Salmon. This is what we do when we take things apart to figure out how they work - how the gears and levers in a clock let it keep time. Explanations under this view are descriptions of how the parts of a thing make it work (constitutive/etiological causal-mechanical explanations), where we look at the causal processes and interactions that produce the phenomenon. In this view, we explain things by looking at the actual phenomena - these explanations give us assurance that something works in a certain way.
Basically, an explanation of something is a listing of its causes and how those causes actually caused it.
This gets around the problem of asymmetry because it requires the explanans to cause the explanandum - the object's height causes the length of the shadow to be what it is, but not the other way around.
However, what is a cause? If we hit a baseball with a bat, we saw the bat connect and we saw the baseball fly, but how do we know that the bat caused the baseball to fly? Causation is rather unclear and hard to define. Also, not all explanations in science are causal - in quantum theory, for instance, causes might not exist.
There is also the unification view, associated with Kitcher. This is sort of like the received view. Explanations in the unification view are those that fit into an argument schema (a template for an argument - basically an argument with blanks we can fill in). Argument schemas come from laws, like how Newton's laws give us many argument schemas about motion. These explanations give us assurance that something fits into our worldview.
In this view, the stronger the argument schema (the more things it can explain and the fewer laws it adds), the stronger the explanation. For example, Newtonian mechanics is better than Aristotelian mechanics because it has one set of laws that applies everywhere, rather than one set for objects on earth and another for heavenly bodies.
For example, Darwinian evolution provides an argument schema: we fill in the properties of past generations of a creature, apply the premise that the properties of a creature are inherited from past generations, and conclude that the current generation has those properties. But if schemas are stronger when they add fewer laws, does this mean that using no laws at all would give the best possible explanation?
However, this view has trouble with unique events, because we can't use it to take something apart and ask how it works. It doesn't look at the actual phenomena - it infers what the causes must be from the argument schemas, rather than looking for causes directly by taking the phenomenon apart into its parts.
The final view is the pragmatic view, associated with Bas van Fraassen, and it is somewhat pluralistic. This view basically includes both the causal-mechanical and unification views, saying that we should use whichever one results in the more satisfying answer - the right relevance relation for a question is the one that gets the best explanation.
This view includes a theory of questions, sort of like a meta-schema (a schema for schemas). In this theory, P_k is the topic of the question, X is the contrast class (the context - additional information about what specifically we are looking for, like which field or time period we are talking about), A is the answer, and R is the relevance relation (what makes an answer to the question a good answer - the kind of answer we are looking for).
The relevance relation is what makes this powerful - it could be a causal relation, in which case correct answers are causal-mechanical, or it could be laws and regularities, in which case correct answers are the laws and regularities of nature that result in the phenomenon. Different contrast classes let us have entirely different types of answers to the same question, all of which are correct.
For example, "Why did he rob the bank?" could be answered with "Because the money that was there." (if we use the contrast class "as opposed to some other place"), but also with "Because it was easier than other professions" (if we use the contrast class "as opposed to getting another career"), or "Because nobody else would do it" (if we use the contrast class "as opposed to someone else robbing the bank").
This view is very practical and works well in practice. Helen Longino notes that different fields of science often have different relevance relations, and that different relevance relations often give different results across fields. In light of this, the pragmatic view fits science best overall.
Realists have the Hacking criterion for existence and the no miracles argument. Anti-realists have the underdetermination argument, the limited scope of inference-to-the-best-explanation reasoning, and pessimistic meta-induction.
Examples of explanatory questions are "why is the sky blue?" and "why did the stock market crash in 1929?".
The covering law model of science is the basis of the received view of explanation. Explanations in this model must take the form of an argument:
statement of antecedent conditions
general laws of nature
-------------------------------------
empirical phenomenon to be explained
For example, "Why will a total solar eclipse occur on August 2017?" might be explained with something like "Because of the current positions of the sun, moon, and earth, and the laws of celestial motion".
The explanation-symmetry thesis is the idea that any correct explanation under the covering law model could serve as a scientific prediction, and any scientific prediction is a correct covering law explanation.
According to Hempel, a law is a statement that has a universal form (statistical laws excepted), applies everywhere and at all times, does not reference particular objects, and contains only qualitative predicates (predicates like "spherical" or "nonmoving", which don't pick out particular places or things). In this model, a law might be "no signal travels faster than light".
However, "no gold sphere may have a mass greater than 100000kg" is not a law, while "no enriched uranium sphere may have a mass greater than 100000kg" is. What is the difference between these two? The uranium sphere is physically impossible since a nuclear reaction would spontaneously start from neutron flux, so it is physically impossible to make that sphere. However, checking whether statements like these are laws or not seems to be a difficult problem.
This model, like the received view, is also subject to the asymmetry problem:
An eclipse will occur on August 21, 2017
Laws of celestial motion
-----------------------------------------
The current positions of the sun, moon, and earth
According to the covering law model, this is a valid explanation, but it doesn't feel like a good explanation. In response, Hempel says that the premises of the argument must rely on past or present information, and not future information.
The different views of scientific explanation aren't necessarily mutually exclusive. We can use multiple different approaches to explanation in science, and some work better in certain situations than others.
While Carnap thought the goal of philosophy of science is to clarify the methods of science using inductive logic, Popper thought that science should use exclusively deductive logic. However, they both thought that science should be purely empirical.
Hume raised the problem of induction (how do we know that the future resembles the past?), and Goodman raised the new riddle of induction.
Kuhn says that the choice between competing theories can't be resolved by proof - we often pick theories based on things like simplicity and practical usefulness rather than on how correct they are.
We want to work with induction in a more formal way. Bayesianism is a modern account of the relationship between evidence and theory.
How much more likely is it that a hypothesis h is true after a piece of evidence e is taken into account?
Bayes' theorem says that the answer is P(h \mid e) = \frac{P(e \mid h) P(h)}{P(e \mid h)P(h) + P(e \mid \overline h)P(\overline h)}.
This theorem is a model for scientific reasoning, and is a rational account of how evidence supports theories, as derived from probability theory.
Suppose we have a test for HIV that is 99% accurate, and we test 1000 people. Given one person who tested positive, what is the probability that the person actually has HIV?
Here, the hypothesis is "the person has HIV", and the evidence is "the person tested positive, and the test is 99% accurate, plus we tested it on 1000 people".
We do not have enough information to give an accurate answer, because we don't know and can't calculate P(h) - the base rate, the prior probability that a given person has HIV. If everyone has HIV (P(h) = 1), then the person definitely has HIV; if nobody does (P(h) = 0), then the person does not, and the positive result was a false positive.
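A small sketch makes the base-rate dependence concrete. Assuming "99% accurate" means P(e \mid h) = 0.99 and P(e \mid \overline h) = 0.01, and trying several made-up values for the prevalence P(h):

```python
# Posterior P(h|e) for a positive test, as a function of the unknown base rate P(h).
# Assumption: "99% accurate" means P(e|h) = 0.99 and P(e|not h) = 0.01.
def posterior(prior: float) -> float:
    numerator = 0.99 * prior
    return numerator / (numerator + 0.01 * (1.0 - prior))

for prior in [0.001, 0.01, 0.1, 0.5]:
    print(f"P(h) = {prior:<5} -> P(h|e) = {posterior(prior):.3f}")
# P(h) = 0.001 -> P(h|e) = 0.090
# P(h) = 0.01  -> P(h|e) = 0.500
# P(h) = 0.1   -> P(h|e) = 0.917
# P(h) = 0.5   -> P(h|e) = 0.990
```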
Consider the eclipse experiment being used to confirm Einstein's theory of general relativity. Our hypothesis is "general relativity is true", and our new evidence is "light was observed to be bent by gravity".
Suppose our prior probabilities are P(h) = 0.5 (so P(\overline h) = 1 - P(h) = 0.5) - before seeing this evidence, we thought the hypothesis had a 50% chance of being true.
Suppose our likelihoods are P(e \mid h) = 0.8, P(e \mid \overline h) = 0.1 - we think that if the hypothesis is true, then it is 80% likely that we would see the evidence, and if it was false, then it would only be 10% likely (there is a 10% false positive rate and a 20% false negative rate).
So calculating this, we get P(h \mid e) = \frac{0.8 \cdot 0.5}{0.8 \cdot 0.5 + 0.1 \cdot 0.5} = 0.89.
Now for future experiments, P(h \mid e) becomes the new P(h) - future reasoning takes this evidence into account when considering how likely the hypothesis is to be true. This is the Bayesian update.
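Here is the eclipse calculation as a minimal sketch, with one hypothetical repeat of the same evidence to show the update in action (the function name is mine):

```python
# One Bayesian update for the eclipse example, then a second (hypothetical)
# repeat of the same evidence to show how the posterior becomes the new prior.
def update(prior: float, like_h: float, like_not_h: float) -> float:
    """P(h|e) = P(e|h)P(h) / (P(e|h)P(h) + P(e|not h)P(not h))"""
    numerator = like_h * prior
    return numerator / (numerator + like_not_h * (1.0 - prior))

p = 0.5                  # prior: P(h) for general relativity
p = update(p, 0.8, 0.1)  # evidence: light observed to bend around the sun
print(round(p, 2))       # 0.89, matching the calculation above

p = update(p, 0.8, 0.1)  # a second confirming experiment (hypothetical)
print(round(p, 2))       # 0.98 - the earlier evidence is already baked into the prior
```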
What do these probabilities mean? For epistemology and decision theory, the probabilities are often subjective.
An objective probability is something like a fair coin flip resulting in heads always having probability 0.5 - it has this value regardless of our beliefs. A subjective probability is something like an estimate that someone will enjoy a class - it depends on that person's degree of belief in the statement.
We can measure subjective probability with something like "willingness to bet" - we ask someone what they'd be willing to wager that the statement is true. In decision theory we assume people are perfectly rational and have complete information, so a person considers a wager with expected value w equivalent to being paid w.
So for the above example, we would ask someone how much (or at what odds) they'd be willing to bet that the eclipse experiment would confirm the hypothesis.
We need to ensure that our subjective probabilities obey the laws of probability, like how the probabilities of mutually exclusive, exhaustive options must sum to exactly 1.
If we have a set of bets where the probabilities add up to more or less than 1, then we have a Dutch book - a set of bets where, if someone takes all of the bets in the set, the bookmaker always gains something.
For example, suppose there are three horses in a race that exactly one of them will win, and we think each horse has a 50% chance of winning - a total probability of 150%, which breaks the axioms of probability. At a stated probability of 50%, the fair bet is even money. Suppose person A offers such bets on each horse, and person B stakes x_1, x_2, x_3 on the three horses: if horse i wins, A pays B the stake x_i; if horse i loses, B pays A the stake x_i.
Clearly, exactly one horse must win. So B wins one bet i, and A wins the other two bets j and k: A is paid x_j + x_k by B and pays x_i to B. With the right stakes (like x_1 = x_2 = x_3), it is guaranteed that x_j + x_k > x_i no matter which horse wins, so A always comes out ahead and B always loses.
B doesn't even have to be one person - the bookmaker could be up against multiple people, each betting on one horse. Although an individual bettor may gain from their own bet, the bettors as a group lose money and the bookmaker earns money. The opposite is true if the probabilities add up to less than 1.
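A quick simulation of the three-horse book above, with even-money bets and equal stakes of 1 (purely illustrative):

```python
# Three mutually exclusive horses, each (incoherently) believed to win with
# probability 0.5, so B accepts an even-money bet on every horse.
stakes = [1.0, 1.0, 1.0]  # B's stake on horses 0, 1, 2

for winner in range(3):  # whichever horse actually wins...
    # A pays out the winning bet and collects B's stakes on the two losers.
    a_profit = sum(s for i, s in enumerate(stakes) if i != winner) - stakes[winner]
    print(f"horse {winner} wins -> A's profit: {a_profit}")
# A profits 1.0 in every case: B is guaranteed to lose.
```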
One criticism of Bayesianism (from Kuhn) is that different people can have very different degrees of belief for the likelihoods and prior probabilities. In other words, there are still rhetorical and political factors influencing the probabilities.
The answer to this is to apply the theorem many times: regardless of our prior probabilities (as long as they are not 0 or 1), we get closer and closer to the same posterior probabilities as we take more evidence into account (assuming the likelihoods are agreed upon). This is known as convergence.
In other words, if an Aristotelian and a Copernican astronomer both applied Bayes' theorem and were at least the tiniest bit open to changing their minds (prior probabilities not 0 or 1), then with enough evidence, they would tend toward the same conclusions.
This seems to successfully solve the problem posed by Kuhn - even with different starting beliefs, people will eventually agree.
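A minimal sketch of convergence, reusing the eclipse likelihoods (0.8 and 0.1) as values both parties agree on, and two very different made-up priors:

```python
# Two scientists with very different priors updating on the same stream of
# confirming evidence, with agreed likelihoods (the eclipse values, 0.8 / 0.1).
def update(prior: float, like_h: float = 0.8, like_not_h: float = 0.1) -> float:
    numerator = like_h * prior
    return numerator / (numerator + like_not_h * (1.0 - prior))

p_believer, p_skeptic = 0.9, 0.01
for n in range(1, 6):
    p_believer, p_skeptic = update(p_believer), update(p_skeptic)
    print(f"after {n} confirmations: {p_believer:.4f} vs {p_skeptic:.4f}")
# Both sequences approach 1 with enough evidence, regardless of starting prior.
```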
Hume's problem of induction states that we don't know for sure that the future will be like the past - how do we know for a fact that the sun will rise tomorrow? Unfortunately, Bayesianism does not solve the problem of induction - we do not know that the world won't arbitrarily change at some point, making our calculated probabilities totally invalid.
In other words, before evidence comes in we might have one coherent set of priors and likelihoods, and after evidence comes in we might have a different set. If the world changes, we could end up with a different set of likelihoods, even if the priors stayed the same.
Hume's problem of induction can be represented simply: the prior probability that the sun rises is some value, and given that the sun does rise today, we update that probability to a slightly higher value. Every day we repeat this, and every day our certainty that the sun will rise tomorrow increases.
The quiz is on Monday at the start of class, and covers everything up to but not including chapter 15.
The final paper is due April 10 at 5pm; topics are on LEARN - either explain and then defend one of your favourite accounts of science from one criticism, or do the same for either realism or anti-realism. 12 point font, 4 pages max, 1 inch margins, cite sources.
Godfrey-Smith's position on science is one of empiricism, naturalism, and scientific realism.
Naturalism is the idea that philosophical problems should be approached scientifically, starting from a scientific picture of ourselves and our place in the world. We should see scientific knowledge as our best source of knowledge, and therefore scientific methods as the best ways we have to obtain knowledge.
The main problem with empiricism was posed by the rationalists - how do we know our senses correspond to what is in the world, or that there is a world for us to sense at all? Empiricism pictures the mind as floating around by itself, with only tenuous connections to the world through the senses.
This is difficult to reconcile with scientific realism - how can we believe that our theories are approximately true when we don't even know that our senses correspond to the world? However, as a naturalist, Godfrey-Smith says that science itself should inform our picture of how our senses relate to the world, and so from a naturalistic standpoint our senses actually do correspond to the world.
Foundationalism is the idea that we should build all our knowledge on a foundation of indubitable truths to get the best knowledge, and that we should find this foundation. Coherentism is the idea that knowledge that fits together coherently is the best knowledge, and that we should try to fit all our knowledge together in the most coherent way.
Related courses to look into: PHIL 216 (Probability and Decision Making) and PHIL 271 (Quantum Mechanics for Everyone).