Lecture Notes by Anthony Zhang.

PHIL350

Theories of Knowledge.

Wesley Buckwalter
Section 001
Office hours: Monday, Wednesday 2:30 PM-3:30 PM, Hagey Hall 325

14/9/15

Articles are all posted on LEARN. Articles marked with "(M)" are for the Monday class, and "(W)" are for the Wednesday class - make sure to read these before class.

Best contact method is emailing wesleybuckwalter@gmail.com with "PHIL 350" in the subject.

Epistemology is the study of knowledge.

Epistemology courses often focus on very abstract, disconnected concepts. This course focuses on the more practical, concrete aspects of modern epistemology - the study of what we know and how we know it, and what we think other people believe or know.

Knowledge tells us how to act, who to trust, which beliefs to form, and how to act virtuously. One good example of the importance of epistemology is in the law - the knowledge and beliefs of the defendant have a significant impact on the proceedings. A defendant who does something unknowingly will be treated differently from one who acts knowingly.

In the law, ignorance is not knowing something, and not knowing that one doesn't know. Willful ignorance is not knowing something, knowing that one doesn't know, and intentionally avoiding knowing it. Knowledge is awareness and strong belief.

16/9/15

Readings: Quine, Kim

Naturalized and Normative Epistemology

Willard Van Orman Quine proposed naturalized epistemology, in opposition to "old epistemology". In "Epistemology Naturalized", Quine complains about various writings in old epistemology. For example, pragmatically speaking, if Descartes doubts the senses then all the sciences are suspect - doubting the senses doesn't leave us with anything practical.

According to Quine, old epistemology views itself as superior to science and as justifying the existence of science, allowing us to rationally generate discoveries. Additionally, old epistemology cares only about certainty - the things that must be true beyond all doubt (the foundationalist view).

Quine criticizes Descartes' arguments by asking where the guarantee of truth comes from. The commonly accepted answer is that the meanings of the words themselves ensure that the conclusion follows. Quine then asks where the meanings of words come from, to which it is generally agreed that they are simply convention.

Circular arguments are those that assume what they're trying to prove, and circularity is generally considered a negative feature of an argument.

In old epistemology, we cannot say we have knowledge until we have certainty, and certainty is only guaranteed by subjective experiences, valid deductive arguments, and the meanings of words. Science, however, is based on inductive arguments and observation. Epistemology is supposed to justify science, but we actually need to assume things happen in order for science to explain them. Quine criticizes this as being circular - "the world exists because the world exists".

The inductive arguments that are so essential to science cannot give us certain knowledge. To justify induction, David Hume adds the key assumption known as the Principle of the Uniformity of Nature - that the future will resemble the past. Hume justifies this inductively: the future has resembled the past in the past, so the future will resemble the past in the future. However, this is obviously a circular argument. According to Quine, circularity is unavoidable in epistemology, and maybe that's just fine.

Quine is essentially criticizing the superiority of old epistemology over science, its reliance on certain knowledge, and its avoidance of circularity. In Quine's naturalized epistemology, science comes first, and epistemology is a branch of science, specifically related to cognitive science and psychology. Here, the new goal of epistemology is to understand how science and our practices of science actually work, rather than justifying the existence of science.

Additionally, science is the best way of getting information about the world, so we might as well use science to study science. It is circular to some extent, but this is fine since we no longer care about circularity - it is circular, but not in a way that blocks progress. Progress can be made without certainty, by incrementally gaining confidence in our fallible beliefs as we make more discoveries and scrutinize our beliefs publicly.

Kim's criticism of Quine's naturalized epistemology starts off by agreeing with Quine's criticism of old epistemology, but then argues that naturalized epistemology cannot replace old epistemology. According to naturalized epistemology, individuals take sensory experiences as input and produce beliefs as output. However, Kim says that there is more to believing and knowing things than this.

Kim says that the central tenet of epistemology is justification, that epistemologists actually care about how justified beliefs are formed. In other words, justification is normative, while science is descriptive, so naturalized epistemology cannot answer these normative questions - science cannot tell us how we should form beliefs, only how we do.

For example, naturalized epistemology cannot tell us what makes evidence good, what beliefs count as being justified, what counts as a fallacy, what is rational to believe, and so on. According to Kim, belief isn't even possible without normative rules - to believe something, one needs a baseline of what counts as believable or not.

If a person believes that it's raining when it's sunny outside, naturalized epistemology would only tell us that the person is wrong, or what caused the wrongness. Epistemology must also be able to tell us what the person should believe instead, and why it is unjustified to think that it is raining.

Fundamentally, there is some egalitarian aspect to this sort of epistemology - beliefs are the same regardless of who thinks them. This is another one of the underlying assumptions in old and naturalized epistemology, though modern epistemology is more likely to take the individual into account.

Knowledge is valuable because it's a foundation to base our science on, according to old epistemology. According to Kim, what counts as justified is normative and cannot be determined by science.

21/9/15

Readings: Kvanvig, Lackey

Assertion

According to Kvanvig, knowledge is not valuable, so we should stop studying it.

Most philosophers agree that knowledge is better than opinion/belief, even if that belief happens to be true. One reason is that beliefs can be false, while knowledge cannot - knowledge requires truth.

Guessing the right answer on a multiple choice test is different from and worse than knowing the right answer. According to Socrates, belief is "untethered", while knowledge is "tethered" - knowledge requires justification.

This leads us to the Justified True Belief (JTB) account - a working definition of knowledge as a belief that is both true and justified.

However, this definition is challenged by the Gettier Problem, which proposes situations where someone has a justified true belief, but obviously doesn't have knowledge. These situations are Gettier situations.

For example, you walk past a farm, see things that look exactly like sheep, and believe that there are sheep on the farm. Getting closer, you see that what you saw were actually dogs with long hair. However, there are actually sheep in another part of the farm. Arguably, you did not know that there were sheep on the farm, yet you had a justified true belief.

This implies that there is something more to knowledge - that it requires truth, justification, belief, and even more things. Trying to figure out what these criteria are is an ongoing problem, but Kvanvig says we shouldn't need to care about it.

Kvanvig says that the goal of epistemology is to figure out what the ultimate cognitive state for humans is, and knowledge is not that state. The ultimate state, he says, is being able to tell immediately and easily whether any claim that affects one's life is true or false, without making any cognitive mistakes.

Kvanvig argues that it is possible to be in that state without knowing or even understanding what knowledge is. In the example Gettier situation, you had no knowledge that there are sheep, but ended up correct anyway. Kvanvig says that ideal cognition is possible without knowledge - you could accidentally get the right answer, and it's just as good as knowing the right answer.

However, many object to this by saying that there are certain virtues to knowledge that accidentally true beliefs don't have. Are these virtues important in themselves? Answering a question correctly because of your own knowledge feels better than answering correctly by guessing.

An assertion is a claim that something is true. An objection to Kvanvig is that knowledge is needed as a basis for assertion. The knowledge rule for assertion says that one should only assert what one knows. Assertion is very practically useful, and if knowledge is the basis of assertion as per the knowledge rule for assertion, then knowledge is not useless, and is worth studying.

The knowledge rule for assertion is supported by real-world usage of assertions. Intuitively, what we want asserted as true are things that are true - when we ask someone what the current time is, we want the right answer. We might also want to ask them how they came up with that answer - how did they get the current time? We can challenge assertions by presenting evidence that the assertion is false, or by saying that there is no evidence that what was asserted is actually true. In other words, assertions are improper not merely when they turn out to be wrong, but when we don't have knowledge of what we assert.

Lackey presents selfless assertion - when someone makes an assertion without having any knowledge about it, but the assertion is still proper. In the case of asking someone what the current time is, Lackey says that the mind of the person being asked doesn't matter.

For example, consider a pediatrician who, grieving over his daughter's autism diagnosis, starts to believe that vaccines cause autism, even though he also respects the teachings of science. He tells his patients that vaccines do not cause autism, because he thinks that this is most likely to be true, and because it is in the best interests of the patients. The argument is that the pediatrician does not believe what he asserts, and therefore does not have knowledge of it, but his assertion is still proper, even if the patients know that he doesn't have knowledge of what he is asserting. Basically, he believes one thing, but asserts another in the best interests of his patients.

In other words, selfless assertion challenges either the knowledge rule, or the idea that knowledge requires belief (the latter is supported by the pediatrician thinking that "no connection" is most likely to be true). Perhaps the pediatrician believes both (even though they conflict), or the "best interests" qualifier has some significance.

Another example is a Catholic teacher who believes in creationism, but still teaches students the evidence presented by science. Although the teacher's faith leads her to believe one thing, she asserts another, without having knowledge, and this seems to be a proper assertion. However, the examples provided so far have been quite complex, and it is important to consider other factors in play that could influence how we view the assertions or knowledge.

23/9/15

Readings: Turri

There are different types of speech acts, and each one has its own norms and rules. For example, guessing has different norms from speculating, conjecturing, asserting, promising, or guaranteeing. The norms and rules are what we are trying to define for each relevant speech act.

There are different types of normativity. Constitutive normativity consists of the internal rules that constitute a practice - the rules that basically define the practice. Moral normativity consists of rules that say when a practice is morally permissible - the rules that define when we should do the practice. Prudential normativity consists of rules that say when a practice is prudent/wise - when it would be a good idea to do the practice.

For example, the rules of chess are constitutive normativity, while going easy on new chess players is moral normativity. If you have a bet that you will lose a chess game, it would be prudent to lose the game in order to win the bet - this is prudential normativity.

What is the constitutive norm of assertion? In other words, what is the fundamental rule of assertion? Do assertions need to be factive (do assertions require truth)?

One factive account of assertion is the truth rule - one should assert something only if it's true, and knowledge/justification/belief is not necessary.

Another factive account is the knowledge rule, which we looked at earlier - one should assert something only if one knows it, so it needs truth, belief, and evidence.

A non-factive account is the belief rule - one should assert something only if one believes it, so the assertion is still fine even if it is totally false.

Another non-factive account is the justification rule - one should assert something only if one can justify it, so it needs belief and justification.

Lackey's selfless assertion account is non-factive, and modern literature tends to favor non-factive accounts. It feels like when people make well-justified assertions that turn out to be false, they didn't do anything wrong and are not to blame. In other words, there are reasonable false assertions.

For example, if we have 99.9% certainty that something will happen, it seems proper to assert it as true, even though we could turn out to be wrong - we should not be blamed for making such an assertion.

Turri's account is a factive one, and he argues for the knowledge rule. According to Turri, if assertion is rule governed, then we should look at how people are actually acting out these rules in real life, particularly people who are competent at using the language - we should obtain empirical evidence for rules governing assertion.

Turri conducted social psychology experiments, testing how people react to assertions in a game - whether people approve or disapprove of well-justified assertions, both when they turn out to be true and when they turn out to be false.

As it turns out, people think false but justified assertions should not be made, even when the justification is very good. However, various objections to these results were raised, such as that the experiment also measured prudential/moral normativity rather than just constitutive normativity. Also, there was a sliding scale from "very wrong" to "very right", to eliminate the possibility of a false dichotomy.

In response, Turri conducted further experiments with different situations in order to eliminate confounding variables. The reasoning involved was excuse validation - the maker of the false assertion has a reasonable excuse for being wrong, so people excuse them. Turri concludes that factive accounts of assertion better explain people's reactions to assertions.

Interestingly, blame depends on character judgements of the person - when people perceived as good broke a rule, they were blamed less, or were more often judged not to have broken the rule at all, compared to people perceived as bad. In other words, people will simply say someone did not break the rule if they are not to blame, because blame is associated with punishment, and a blameless person doesn't deserve punishment.

28/9/15

Readings: Hawthorne/Stanley, Fantl/McGrath

We now look at the act of asserting itself. Do the practical consequences of acting affect justification?

For now, we will say that one is justified in believing something if one has good enough evidence to know it. We will assume that justification is required for knowledge.

Evidentialism is a theory of what justification is like, which states that whether one is justified in believing something depends only on how much evidence is available - the person believing doesn't matter, so two people with the same evidence should always be equally justified in the same belief. Evidentialism ignores motives, pre-existing beliefs, and preconceptions in favor of following the evidence, and only the evidence.

However, Fantl and McGrath deny evidentialism - there are things other than evidence that can affect whether beliefs are justified and whether one has knowledge. The main idea is that if you know something is true, then it is fine to act on it - it is rational to act on your knowledge. Conversely, if it is a problem to act as if X, that suggests you didn't really know X after all. Also, if you're justified in believing something is true, then it is also fine to act on it.

However, how much is at stake affects how rational it is to perform certain actions - what is the cost of getting something wrong? For example, suppose we are on vacation and ask someone whether a train stops in Foxboro, and we don't really care much whether the answer is right. Since their answer is pretty good evidence that the train does stop there, we would believe the person and get on the train. However, if we were on the way to a critical job interview, we would probably double check just in case. In that case, the testimony alone is not good enough evidence to believe the person, and we would seek more evidence instead.

In other words, in one case it is rational to believe on the given evidence, and in another case it is not - so in one case your belief is justified, and in the other it is not, even though the evidence is exactly the same in both cases. This implies that there is more than just evidence at play in belief and action. This is at odds with evidentialism - perhaps each person actually has their own personal threshold of justification.

The rationality of acting, according to Fantl and McGrath, sets the standard for how much evidence is needed in order to know or be justified in believing something. In their example, the stakes of the situation influence whether a belief is justified.

30/9/15

Readings: Buckwalter/Turri

The accepted norm of assertion is knowledge - you should assert something only if you know it. This is supported by empirical studies of assertion in the real world, as discussed earlier. In the real world, showing is more costly than telling - so if knowledge is important for telling (asserting), it should also be important for showing.

In other words, if knowledge is the norm of assertion, is knowledge also the norm of showing? The readings seem to suggest that yes, this is the case - we should only show something if we know it.

If knowledge is the norm of both showing and telling, knowledge is also the norm of instruction - the criteria by which we should decide whether to transmit information.

An interesting thing to consider is muscle memory - if muscle memory is knowledge, then it is possible to have knowledge without belief, justification, or truth. For example, muscle memory might store the relationship between the gas pedal and the current acceleration, but one need not even believe that the gas pedal affects acceleration to act on it.

"Descartes' Schism, Locke's Reunion" looks at whether things like the train thought experiments held up empirically. Descartes' view is that knowledge is completely separate from action, while Locke states that they are intricately linked.

Locke basically says that knowledge is truth, evidence, belief, and practical factors - practical factors directly influence knowledge. Descartes basically says that knowledge is truth, evidence, and belief, all of which are affected by practical factors - practical factors are removed from knowledge itself. We want to test these views empirically, using psychological and sociological experiments.

These experiments tested things like how knowledge was affected by two identical situations with different stakes, as well as many other variables. Each of the 600+ participants was given situations with different stakes, and then questioned regarding belief, truth, evidence, action, knowledge, and importance.

As it turns out, stakes had a negative correlation with action, truth, and evidence, and these in turn had a positive correlation with knowledge. So if the stakes are high, actionability, confidence in truth, and amount of evidence all decrease, and if the stakes are low, they all increase. Acting had the most significant impact on having knowledge, truth a little less, and evidence much less.

So essentially, action does have a direct impact on knowledge, like Locke's model, but the stakes are also somewhat removed from knowledge, like Descartes' model. So the ability to act on something had the largest impact on whether one thinks one has knowledge. Note that since this experiment only measures correlation, it doesn't tell us anything about which way the knowledge/action link goes - does knowledge result in actionability, or does actionability imply knowledge?

5/10/15

Readings: Fricker, Lackey

Testimony

We want to investigate the relationship between knowledge and the person hearing it - in particular, what is testimony, when should we believe it, and when does it count as knowledge?

Justification is necessary for knowledge. Justification, in turn, is done through good evidence - from memory, observation, reasoning, and testimony.

In legal systems, testimony is rigidly defined and is given on the witness stand. For us, giving testimony is making an assertion or claim, or presenting something as true. Where we previously looked at the speaker's side (when we should say something), we will now look at the listener's side (when we should believe something).

A testimonial belief is a belief formed by testimony. Most of our beliefs actually seem to be testimonial beliefs. When we believe things people say, we often seem to gain real knowledge, often without thinking about it much at all. We also sometimes seem to gain false beliefs, and our goal is to find a criterion for belief that helps us gain only real knowledge.

One informal criterion people often use is the speaker's track record - how good their testimony has been in the past. Others include the stakes, the relationship with the speaker, and the motives of the speaker.

According to reductionism, testimonial beliefs are justified in the same way as any other source of evidence, like induction, past observations, and reasoning - testimony is not special. Anti-reductionism is the opposite view - that testimonial beliefs are justified by a special principle.

Fricker investigates the reductionist/anti-reductionist distinction, and argues for reductionism.

The PR ("presumptive right") thesis is presented as a principle for justifying testimonial beliefs, in favour of anti-reductionism: hearers of testimony can assume, without other evidence, that the speaker is trustworthy, unless there are special circumstances that prevent this (like the speaker not being in the right state of mind, or being known to often say false things). In other words, listeners can just assume speakers are trustworthy, unless given a good reason not to.

Fricker criticizes this by saying that hearers are never justified in simply trusting speakers - according to Fricker, hearers of testimony should always assess the speakers for trustworthiness. In other words, gullible/blind beliefs are not justified. Instead, hearers should critically assess the testimony, be able to explain the testimony in their own words, and be able to defend the beliefs formed from the testimony. This is called the NC thesis (negative claim).

This is a very internal view - all the criteria are judged in the mind of the hearer.

One objection to this is that it is often not possible for hearers of testimony to independently confirm that speakers are trustworthy - it is unrealistic to expect hearers to evaluate all these criteria. Fricker responds that it is realistic - people actually are always subconsciously evaluating speakers for trustworthiness. Counterfactuals are thoughts about hypothetical situations - "what would happen if X happened instead of Y?". According to Fricker, testimonial justification requires counterfactual sensitivity - thinking about "what if the speaker is untrustworthy?" and "if they were untrustworthy, would I have spotted that?". Basically, testimonial belief is justified when the hearer would be able to tell if the speaker were being untrustworthy.

Fricker's evidence for the NC thesis is mostly common sense, ordinary/everyday experiences, and normative linguistic theories - is this good enough evidence to support the thesis? Also, it doesn't tell us how much evidence we need, or how trustworthy speakers must be, before we form beliefs.

One interesting case is self-knowledge (assertions about the speaker's own experiences). In these cases, hearers generally simply trust what the speakers say, and it isn't really possible to critically assess trustworthiness here. Also, should we assess strangers more heavily than trusted friends? Are there some roles where the critical assessment is already done for you (for example, when a doctor tells a patient to do something)?

Lackey investigates the properties of testimony transfer - is testimony like passing a baton? In other words, can hearers get knowledge from the speaker even if the speaker doesn't have that knowledge?

Basically, Lackey talks about whether testimony simply transmits knowledge, or whether it can actually generate new knowledge, and argues for the latter. For example, a creationist teacher who teaches evolution (despite not believing it) doesn't have knowledge of the material (she is simply following orders), yet it seems like the students can obtain knowledge from her, even though she doesn't have that knowledge herself.

7/10/15

Readings: Hazlett

A defeater is something that defeats evidence - it takes away confidence in evidence that one has. When people give testimony, sometimes the defeaters don't get transferred - people may not transmit counterarguments, conflicting facts, and other things that might throw the evidence into question.

An interesting case of testimony: person A has good vision, but the person's doctor tells them that their vision is unreliable. Person A sees X, and tells person B as such, but does not tell person B that the doctor said person A's vision is unreliable. Person A's evidence of seeing X is defeated by the doctor's statement. However, from person B's perspective, there is nothing wrong with the evidence.

Lackey concludes, from these and other situations, that testimony can actually generate knowledge, in contrast to memory, which can only preserve knowledge.

Does the way testimony works depend on what you're talking about? Hazlett investigates this by looking at several case studies.

If someone knows a lot about football, and you ask them who won a particular game, you would probably accept the answer on the basis of testimony alone. If someone knows a lot about ethics, and you ask them whether military intervention in some country is right, you would probably not accept the answer on the basis of testimony alone.

These two situations have the same form, with only the content changed. Perhaps this is because there are so many possible considerations that need to be weighed to make a decision in the latter case, or perhaps there is a difference when there are multiple plausible answers.

If someone knows a lot about art, and you ask them whether a piece of art is beautiful, accepting the answer on testimony alone doesn't really make sense. If someone knows a lot about metaphysics, and you ask them whether god exists, you would probably not accept the answer on the basis of testimony alone.

Hazlett says there are different types of testimony, including factual testimony ("what is this thing?"), ethical testimony ("is this thing right?"), aesthetic testimony ("is this thing beautiful?"), religious testimony ("does god exist?"), and so on. Each type of testimony seems to have different norms. When the content of testimony is about ethics, religion, or aesthetics, there seems to be something wrong about forming beliefs based on just that testimony - there is testimonial asymmetry.

Hazlett assumes testimonial asymmetry exists, and attempts to diagnose the cause. For children, moral/religious/aesthetic testimony seems to be fine - parents telling children what's right or wrong seems to be good testimony.

This view is non-reductionist in that the good/bad judgement is for the testimony itself, not about the beliefs that cause the testimony. Hazlett proposes a few theories of the causes of some testimony being different.

By understanding theory, moral/religious/aesthetic knowledge requires understanding in addition to just facts, and understanding cannot be transferred via testimony.

By acquaintance theory, moral/religious/aesthetic knowledge requires acquaintance with the subjects - perception of and experience with the things being talked about - which also can't be transferred via testimony.

By virtue theory, moral/religious/aesthetic knowledge requires virtues, such as the virtue of figuring this out for oneself, a virtue that cannot be transferred via testimony.

Hazlett rejects all of these, and proposes a social theory instead: moral/religious/aesthetic knowledge is socially valuable, and testimony cannot transfer that social value - beliefs are not worth as much when they were formed just by testimony, since they reduce diversity of knowledge. In other words, testimonial beliefs are bad when they are bad for society.

One objection to these theories is that they make knowledge asymmetric. For example, why is moral/religious/aesthetic knowledge more socially valuable?

14/10/15

The second critical response is due next Wednesday.

Readings: Huemer

Memories

When are memory beliefs justified? When should we believe in our memories?

Just like through testimony and observation, memory is a way to get justification, which is important for obtaining knowledge.

The majority of what we believe comes from our memories. Often, our knowledge comes from information stored in our brains. However, we can sometimes remember something very accurately as true, when it's actually false.

We believe that the sun is around 93 million miles from the Earth. However, most of us don't remember what our original reason for memorising this was.

Huemer considers three possible answers to when we should believe our memories - the inferential, foundational, and preservative theories - and then rejects them all in favor of a dualistic view.

In the inferential theory, memory beliefs are justified by that memory being reliable in the past - if it worked out in the past, then we are justified in believing it. However, this theory is rather circular - the only things that tell us that a memory was reliable are our other memories. If we use only our present experiences and insights, it's unlikely we could judge whether memories are reliable under this theory.

In the foundational theory, memories are the foundation of justification, and just as we usually trust perception, we should also usually trust memory - remembering something automatically gives a good reason to believe it. However, the passage of time alone shouldn't make unjustified beliefs justified - otherwise it would be easy to form a memory of an unjustified belief, and then say it's justified because it's a memory.

In the preservation theory, memory preserves the original justification of the belief - whatever the justification for the original belief was is also the justification for memory beliefs. Even if one does not remember what the original justification was for believing something, the belief is still justified. However, if someone gets cloned, the clone remembers doing all the things the original did, without actually having done those things - the clone should have justification to believe it did those things, since it has all the memories of the original, but under this theory it does not.

The dualistic theory takes parts of all of these theories. It focuses on a few issues with the other ones: beliefs can't increase in justification just by becoming memories, and justification should depend only on the current state of the person - if clones have the same memories, they should also have the same justification.

According to Huemer, the justification for believing a memory belief is the justification for forming the belief in the first place, combined with the justification for being confident that the memory was retained correctly over time. These two justifications, both numbers between 0 and 1 (where 0 is certainty that something is false and 1 is certainty that it is true), are multiplied together to get the overall justification.

This is somewhat similar to what the foundational and preservation theories say, but it also takes memory retention into account.
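
As an illustrative calculation (the numbers here are invented for the example, not Huemer's): suppose the justification for originally forming a belief is 0.9, and the justification for being confident that the memory was retained correctly is 0.8. The overall justification for the memory belief is then 0.9 × 0.8 = 0.72 - lower than either factor alone, reflecting the idea that retention is one more place where justification can degrade.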

19/10/15

Readings: Myers-Schulz/Schwitzgebel, Murray/Sytsma/Livengood

Belief

Does knowledge require belief? So far we have assumed this is the case - traditionally, we need belief, truth, and justification in order to have knowledge. The assumption that knowledge requires belief is called belief entailment, or the entailment thesis.

We usually accept belief entailment because it captures how minds try to build a mental picture of the world from the information they receive - belief is what ties knowledge to minds. Basically, books contain information, and a person believing that information is what is needed to make it knowledge.

If knowledge is an achievement, then beliefs determine who gets credit for having it - belief is required to get credit for having knowledge. Also, it is hard to think of any knowledge of a topic when there are no beliefs about it. We will examine several situations where it seems like we have knowledge without belief.

Radford proposes a thought experiment: suppose a student feels certain they don't know a subject, but when asked questions about it, suddenly remembers and answers correctly. The student then concludes that they actually did have the knowledge, even though they didn't believe so beforehand. Basically, the student believed they didn't have the knowledge, but it was stored in their brain nonetheless as knowledge. It seems that if they had knowledge even before believing they had it, then belief is not required for knowledge. In other words, the student forgot that they had previously learned the fact, and so didn't have the belief.

Intuitively, this case is controversial because of the question of whether the student actually had knowledge of the subject. It seems like, due to circumstances and a temporary memory deficit, access to the belief was blocked temporarily. Myers-Schulz and Schwitzgebel also try to find counterexamples like this.

Suppose a teacher has a subconscious bias against a certain group of people in her class, treating them as less intelligent, even though she has studied biases, has examined their intelligence and found it to be the same as the rest of the class, and consciously avoids being prejudiced. Even though she knows they are just as intelligent, she still believes deep down that they are not. Though the belief is irrational, it is still a belief, at odds with her knowledge.

Suppose a person likes to watch horror films that scare them, even though they know the plots are very unrealistic. When a situation in real life resembles one in a movie, the person momentarily believes it will play out like in the movie and feels fear, even though the movie was unrealistic and the person knows it would never happen.

As it turns out, in all of these cases most people thought that the person had knowledge without belief. In other words, it empirically does not seem that belief is required for knowledge.

Also, phobias are a good example - with arachnophobia, a person has knowledge that spiders will generally not harm them if they act correctly, yet they still believe the spiders will hurt them, as shown by their fear.

Myers-Schulz and Schwitzgebel propose the capacity-tendency account - knowledge requires the capacity to reach the truth, even if one is not able to at a particular time. This accounts for things like forgetfulness, emotional effects, and personal biases/phobias. Instead of having belief, we simply need the capacity to reach the truth in ideal circumstances.

21/10/15

Readings: Buckwalter/Rose/Turri

Murray, Sytsma, and Livengood provide additional examples of belief not being required for knowledge, and try to address some of the criticisms of Myers-Schulz and Schwitzgebel's idea that knowledge does not require belief.

One objection is that there are two different types of belief: dispositional belief (what one is prone to accepting - one's natural inclination), and occurrent belief (a consciously entertained belief, happening at a particular time rather than being a natural inclination). Although knowledge may not require occurrent belief, according to this objection, knowledge does require dispositional belief, and there is empirical evidence to back this up (would the person accept it even if they were asleep and dreaming about it?).

Murray responds to this by presenting situations in which the distinction between dispositional and occurrent belief isn't clear. For example, most people would say god knows but does not believe that 2 + 2 = 4. Another situation is a dog specially trained to do math - most people say the dog doesn't believe that 2 + 2 = 4, but in many cases will say that the dog does know it.

A student is told by her parents that the earth is the center of the solar system, but learns in school that the sun is. Although she still believes that the earth is at the center, she answers on tests and assignments that the sun is. This seems to imply that the student has knowledge that the sun is at the center, but believes that the earth is. However, this might be explained by saying that the student doesn't know that the sun is at the center, but rather knows that to obtain a good grade, she needs to answer that the sun is at the center - she believes that the earth is at the center, but answers otherwise because she wants a good grade.

Murray finds that in the presented situations, a significant fraction of people ascribe knowledge but not belief, which implies belief may not necessarily be required for knowledge. One objection to these new cases is that they involve things like dogs and deities, and therefore don't accurately represent beliefs and knowledge in humans.

According to Murray, belief requires conviction, while knowledge doesn't - without conviction, there can be knowledge without belief.

Readings: Buckwalter/Rose/Turri

A pro-attitude is the urge or desire that drives an action.

Besides dispositional and occurrent, we can further distinguish between thick and thin beliefs. A thin belief means that you think something is true, but without much pro-attitude toward it (no urge to act on it). A thick belief is one where you not only think something is true, but like that it is true, and emotionally endorse and assert it - there is a lot more pro-attitude (a strong urge to act on it). Thick belief requires not only the belief itself, but also a lot of higher-level activity such as willpower and desire.

This paper defends the idea that knowledge requires belief, by saying that knowledge only requires thin belief, while all the studies so far have been measuring whether people have thick belief. It justifies this by saying that knowing something means taking it to be true, and taking something to be true is by definition a thin belief.

Consider that "you don't believe that" and "you don't know that" are essentially equal ways of challenging assertions - challenging belief means that you challenge knowledge as well.

Buckwalter/Rose/Turri replicate the Murray experiments, presenting participants with various situations and asking questions about them, but in addition to the usual belief questions (which track thick belief), they also ask "at some level, does the person think that X is true?" to test for thin belief.

The results imply that while people think thick belief is not always present when knowledge is, they do think thin belief is. In other words, knowledge seems to require thin belief, though not thick belief - to know something, we need to think it is true.

26/10/15

Readings: Conee/Feldman, Goldman

Justification

What are the things that justify belief? Internalism is the view that the factors that determine whether a belief is justified are all internal to the believer's experience (contained in one's mind - thoughts, memories, dispositional beliefs, inference, etc.), while externalism is the view that there are also external factors (evidence/facts that one has forgotten or never learned).

Internal experiences also include things like reflexes, muscle memory, instincts, and other non-conscious skills. External factors also include things that cause beliefs, like reading something in a newspaper or a book (externalism says there is a distinction between beliefs formed from good newspapers vs. bad newspapers).

Accessibilism says that justification is determined by factors that a person can consciously access. Mentalism says that justification consists in one's mental states - something can count as justification even if one cannot consciously access it (for example, the student who forgot something while under the stress of an exam still has justification, even though they can't access it).

Conee and Feldman support mentalism and defend it against both accessibilist internalism and externalism. Mentalism says that if two people are the same mentally, they must have the same justification for their beliefs - when people have different levels of justification, they must have different mental states.

For example, two people read a newspaper indoors which says it is sunny outside, and then one of them goes outside and sees that it is sunny. While both people believe it is sunny outside, the one who went outside seems to have more justification. The mentalist view says that the internal experiences and memories of seeing that it is sunny account for the difference in justification.

For example, an expert birdwatcher and a novice birdwatcher both see a woodpecker, but it seems like the expert has more justification for believing that the bird is a woodpecker than the novice does. Though the situation is the same, the justification depends on the mental state of the believer. In other words, the justification changed due to internalizing an external fact, or a purely internal difference in the person holding the belief.

Conee and Feldman say that these cases, as well as four more given in their paper, are representative of how justification works in all situations.

One objection to this is that some beliefs can be justified even without a supporting internal mental state - for example, forgotten justifications, logical/probabilistic relations, stored beliefs, and impulsive beliefs. For example, if one forgets where they learned that broccoli is healthy, yet continues to believe that broccoli is healthy, they are arguably still justified in believing it, or at least more justified than someone who believes broccoli is healthy for no reason at all. A response to this might be that the person still has a memory of there being evidence in the world that broccoli is healthy, and is justified in believing that their memory is generally accurate.

Goldman responds by saying that in the externalist view, justification depends on a belief's etiology - where the belief comes from, as well as the belief's history. For example, a person who reads a fact in an unreliable newspaper and forgets where they read it from seems to have less justification than a person who reads a fact in a reliable newspaper and forgets where they read it - this seems to imply that there are external factors in justification, since both people have the same mental state yet seem to have different levels of justification.

Also, consider the broccoli example again, but where the person believes that it is onion rings that are healthy, misremembering having read "onion rings" rather than "broccoli". Although the person's mental state is relevantly the same as in the original example (a confident belief with a forgotten source), it seems like in this case the person is not as justified.

Some beliefs are justified by logic or probability - logical/probabilistic relations. Although these are not mental states, it still seems like they are valid justifications.

28/10/15

Readings: Cohen

Final exam is on December 15 at 9AM in HH 2107. The third critical response is due on November 4.

Are there justifiers (things that make beliefs proper) that are not internally available to the believer? Internalism says no, while externalism says yes.

Justifiers must make your belief likely enough to be true ("likely enough" being an arbitrary threshold). The role of justifiers is to define our intellectual duties - reaching truth, avoiding falsehoods, etc.

Accessibilist internalism says that justifiers are those things that we have access to and are aware of, while mentalist internalism says that justifiers are always mental states. The evidentialist theory of justification says that your justification depends on your own evidence.

Justification is normative - it defines what people should believe and when beliefs are okay to hold. Internalism says that, as a result, fulfilling the duty of justifying one's beliefs is an internal matter - one can't fulfill this duty if the relevant matters are outside of one's head. In other words, evidentialism is an internalist view.

According to externalism, mental states are important, but don't cover all cases. For example, one can obtain information from a source, and the trustworthiness of the source is a justifier that is external - even if one isn't aware of the trustworthiness, it still affects the justification. Therefore, trustworthiness is an external justifier.

Interesting side effect: small children and animals can have justified beliefs if externalism is true, but not if internalism is true.

One externalist theory is reliabilism, proposed by Goldman. In this theory, one should believe things that are likely to be true, so justification is simply whatever is likely to lead to true beliefs - one doesn't need particular mental states to hold a justified belief. Justification is based on process reliabilism - beliefs are justified when they are formed by reliable cognitive processes, like sensory perception, good reasoning, and clear memories.

A reliable cognitive process must tend to produce more true beliefs than false beliefs. Some unreliable cognitive processes include hunches, guesses, and biased reasoning. Process reliabilism is external because whether a process is reliable can often depend on the environment or other external factors, and one doesn't have to realize a belief is justified in order to have justification.

For example, if one's memory is often correct, then memory is a reliable cognitive process. If one's memory is often incorrect, then memory is not a reliable cognitive process.
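
As a rough numerical illustration (the numbers are invented): if one's memory produces 85 true beliefs out of every 100 beliefs it generates, its truth ratio is 0.85, comfortably above the break-even point of 0.5, so it counts as a reliable process; a process with a ratio of, say, 0.4 would not.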

Consider a person who is running and a brain in a vat that is deceived into thinking it is running. Both of their internal states are the same, and both believe they are running, but the brain in the vat is not actually running. According to internalism, the justification is the same for both the runner and the brain in a vat - they both believe they are running and are equally justified in believing so. However, by externalism, the brain's belief that it is running is not justified, because the process producing that belief is unreliable - it is being systematically deceived.

According to Cohen, a proponent of internalism, the brain in a vat is justified in believing it is running. If we were all brains in vats, then under externalism no beliefs would ever be justified, which definitely seems false. If we have every reason to believe something, then the fact that our cognitive processes are unreliable without our knowing it shouldn't matter - justified beliefs are those produced by good reasoning, regardless of whether that reasoning happens to be reliable. Cohen suggests that there is no necessary connection between justification and truth.

One objection to this is the following example: if we see a red object, we should be justified in believing that there is a red object in front of us. However, suppose we notice that the room is lit with red light. If justification is disconnected from truth, then even after we realize the room is lit with red light, we are still justified in believing the object is red. However, it seems like we should actually not be justified in believing the object is red in this case, since it could just be white and look red due to the lighting.

2/11/15

The third critical response is due this Wednesday.

Readings: Zagzebski, Greco

Virtue Epistemology

Virtue epistemology says that knowledge is the result of acts of intellectual virtues.

It seems that moral virtues are hard to define. For one, moral virtues seem to depend on culture. There do seem to be universal moral virtues, however - dignity, respect, and courage seem to be present in all cultures, even if different cultures express them in different or even conflicting ways.

On the other hand, intellectual virtue seems to be the tendency to have a love of understanding or the truth. Zagzebski says that a virtue is a deep or lasting excellence, while a vice is a deep or lasting defect. Here, "deep and lasting" means something that comes close to defining who a person is - a part of a person's character.

One acquires virtues through hard work over a long period of time, resulting in the tendency to have that excellence. Moral virtues motivate you to do the right thing, while intellectual virtues motivate you to seek the truth - virtues are emotional dispositions to try to accomplish things.

In addition to disposition, virtues must actually result in accomplishment to some extent - they have a reliability component. A virtuous person must not only be motivated to do something, but must also be successful at it. One cannot be intellectually virtuous if one isn't successful in finding the truth. Likewise, an honest person must be motivated to be honest, but also actually succeed in being honest.

So a virtue, according to Zagzebski, is a deep/lasting excellence that involves motivation to produce a certain outcome and reliable success in doing so.

A good example of an intellectual virtue is courage - defending the view best supported by evidence regardless of its popularity. Another is responsibility - thinking through the consequences of holding views or doing one's research. Another is fairness - considering all views on their merits alone. Another is curiosity - enthusiasm for intellectual ideas. Another is open-mindedness - a lack of prejudice.

Galileo is a good example. Despite being prosecuted by religious authorities, he demonstrated intellectual courage by advocating heliocentrism - he defended the view best supported by the evidence, was motivated to do so, and was successful in seeking the truth.

Zagzebski says that knowledge is the result of acting on intellectual virtues - this is her virtue theory. Intellectual virtue being the basis of knowledge seems to be a form of reliabilism - virtue is a reliable mechanism, and knowledge is generated by reliable mechanisms.

However, there is more to reliability than just the ratio of true beliefs to total beliefs. Some virtues, like creativity, are stepping stones to getting true beliefs rather than things that reliably generate true beliefs themselves.

Greco supports a different virtue theory, combining this externalist view with internalist views. In his view, virtues are simply abilities - how able you are to reliably achieve results.

Greco criticizes Zagzebski's view by saying that people can have justification even without virtue. Also, it's possible to have reliable processes without justification - if one doesn't have any internal access to the mechanisms that produced a belief, how can we say that the belief is justified?

According to Greco, knowledge must be formed from intellectual virtue, and on the basis of rules of belief formation that, from your perspective, you endorse (norm internalism). In other words, you need to have intellectual virtue and follow your own rules for good beliefs (this is called "countenancing one's norm for belief formation"). Basically, knowledge is the result of applying intellectual virtues for the right reasons.

For example, someone who often guesses numbers correctly might have intellectual virtue, but they aren't following their own rules for forming good beliefs.

Consider a baseball pitcher and an automatic pitching machine - they can both throw the ball at a high speed very reliably. However, the pitcher also countenances the norms for throwing balls fast - the pitcher follows their own rules for what they consider a good way to throw a baseball, while the pitching machine does not.

In Greco's view, the intellectual virtue requirement ensures that the belief is objectively reliable, while countenancing one's own rules for belief formation ensures that the belief is subjectively responsible. For knowledge, one has the virtue in the first place because one follows one's own rules for belief formation. Interestingly, Greco's virtue theory also allows the brain in a vat to have justified beliefs, even though all of its beliefs are false - it can have intellectual virtues and follow all of its rules of belief formation.

4/11/15

Readings: Elgin

Truth and Epistemology

Is truth really that important in epistemology? Elgin argues that it is not as important as we make it out to be.

In truth-centred epistemology, truth is the central goal/objective, and obtaining truth is valued above all else. However, there's no real epistemic reason for it to be.

Elgin proposes that instead, we should simply strive for obtaining something that is "true enough" for all the purposes we are interested in - epistemic statements are proper/acceptable if they are true enough.

Elgin proposes that understanding is more important than truth, and sometimes false beliefs can lead to good understanding. For example, thought experiments might be impossible/absurd, but they can help shed light on an idea and help people understand certain situations.

Felicitous falsehoods are these "true enough" statements that aid understanding - helpful fictions. For example, traditional epistemology says that in science, only true beliefs are acceptable. However, real science relies on false beliefs all the time and seems to do just fine. Newtonian mechanics isn't how physics actually works, but engineers use it all the time because it's true enough for most purposes.

There are also simpler felicitous falsehoods common in science - for example, simplifying models in order to focus on the relevant variables, fitting curves to data to make sense of trends, or just using approximations. Even though the results are technically not correct, they are true enough and help a lot with understanding, which is why we should use them.

An interesting case is when geocentric models were the dominant view of the world. These models gave results good enough for sailors to navigate by, and in fact were even better than early heliocentric models. The geocentric models were practically useful, even though the understanding they gave rested on a false view of the world.

The tortoise and the hare fable probably never actually happened, but is still useful as a moral lesson on the value of being reliable. As a moral lesson, it is true enough to bring about understanding.

One objection to this view is that the understanding given by felicitous falsehoods may be harmful to understanding true views - consider creationism or ancient aliens. In response, Elgin suggests that felicitous falsehoods must also take facts into account to some extent - they must be based on reality.

Basically, felicitous falsehoods should result in a theory that is true as a whole, even if individual parts of it are false.

9/11/15

So far we've been talking about how to tell when beliefs are proper/responsible/acceptable. However, does it even make sense to say a belief is proper? Can one be responsible or criticized for improper beliefs? According to Alston, we cannot control our own beliefs.

Deontological justification is justification in terms of obligations - it is proper to hold a belief when it is formed without breaking any of the rules of belief formation. In other words, deontological justification is a freedom from blame for believing.

The deontological justification rules should forbid beliefs likely to be false, and permit beliefs likely to be true. The model is patterned on deontological views of action - actions are acceptable if they follow the rules of acting.

However, Alston argues against this by saying that while we can choose our actions, we cannot choose our beliefs. Therefore, we can't be blamed or judged for our beliefs, and can't say they are proper or improper, if we can't even control them. If we ought to do something, then we should at least be able to do it - obligation requires ability.

Ginet's deontological justification model says that one justifiably believes something if and only if they followed the rules of belief formation and could not be blamed for believing it. Doxastic voluntarism is the view that people control their own beliefs; doxastic involuntarism is the view that they cannot.

Descartes says that being able to control one's beliefs is one of the most fundamental abilities that one has. Most philosophers in the past shared a similar view, but modern philosophers generally hold the opposite view - that beliefs can be controlled no more than one's automatic bodily functions.

According to Alston, in addition to perceptual beliefs being involuntary (we can't control what we believe we perceive), inferential beliefs are as well - we can't make ourselves arbitrarily believe that 1 + 1 = 3.

Alston admits that we do seem to have some long-term control over our beliefs - pretending to believe something for a long time could eventually result in actually believing it, though the success rate probably isn't very good.

11/11/15

It is commonly accepted that what we ought to do is limited by what we can do - "ought implies can" (OIC). If we have a moral obligation to do something, then we must have the ability to do it - promises are not binding if fulfilling them is impossible. This is widely accepted in modern philosophy and appears in everything from arguments about free will to moral dilemmas.
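
In deontic logic, OIC is often written with the obligation operator $O$ and the possibility operator $\Diamond$ (this notation is the standard one, not something from the readings):

$$O\varphi \rightarrow \Diamond\varphi$$

That is, if $\varphi$ is obligatory, then $\varphi$ must be possible - nothing impossible can be obligatory.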

However, most arguments defending OIC appeal to intuition or common sense, or just say that it seems obvious. Do perceptions of someone's ability actually affect whether we judge them to be morally obligated?

Consider someone who says they will do something, but cannot because of a traffic accident - they are physically unable to do it. Now consider a similar situation, where someone says they will do something, but cannot due to clinical depression - they are psychologically unable to do it.

Study participants were asked, for each situation, whether the person was able to do the task, obligated to do the task, and to blame for not doing the task. People tended to say the person was obligated in both situations, but unable in the car accident situation and able in the depression situation. Additionally, people tended to ascribe significantly more blame to the person with depression than to the person in the car accident.

This seems to imply that people think a person can be obligated to perform a task yet unable to do so - evidence against OIC. Notably, blame was assigned mainly in the psychological case.

In other words, people can still be obligated to do something even when they don't have the ability to do it, and blame is independent of obligation. This held across a variety of situations and durations/scopes of inability, implying that "ought implies can" is not supported experimentally.

Informal psychology is very good at belief attribution (telling when people have a belief) - humans tend to have a lot of success in their predictions. Therefore, we can run experiments that ask people whether they think someone has a belief, in order to test whether beliefs changed. Designed in a certain way, such an experiment can test whether people can choose their beliefs, and in so doing test whether doxastic voluntarism is the case.

Someone hears that believing something will result in them having a more satisfying life, so they choose to believe it. When study participants were asked whether the person now actually believes that thing, they said that the person does in fact believe it.

Someone sees that there is a particular chance of rain at an outdoor event the next day, and as an optimist chooses to believe it will not rain. When study participants were asked whether the person actually believes that it will not rain, people said that regardless of whether the chance of rain was 5% or 95%, the person actually does believe that it will not rain. In other words, people attributed belief even when the person has good evidence that the belief is false.

Another experiment tested perceptual evidence vs. inferential evidence, and seemed to imply that perceptual evidence makes beliefs less voluntary - when the person actually saw that it was raining, participants were less likely to think that the person believes it will not rain.

Overall, the experiments implied that beliefs can be voluntary, and that the strength of evidence doesn't significantly affect whether voluntary beliefs are formed.

16/11/15

Readings: Snowdon, Cath

Types of Knowledge

Are there fundamentally different types of knowledge? What is the difference between "knowing how" and "knowing that"?

"Knowing that" is often called propositional knowledge - knowledge that something is true, like a fact, sentence, or a proposition. "Knowing how" is often called procedural knowledge - knowledge of how something is done, as well as the skills necessary to do it.

Intellectualism is the view that knowing-how is a specific type of knowing-that - that knowing how to drive a bus is the same as knowing that "turning the wheel left causes the bus to veer leftward", and so on. In other words, one knows how to X if and only if they possess a certain sort of propositional knowledge about X - knowing how to do something is always reducible to knowing that a related set of propositions are true.

Anti-intellectualism is the opposite view - that knowing-how and knowing-that are distinct types of knowledge. One knows how to X if and only if they possess a certain ability to X.

Additionally, anti-intellectualism says that ability is necessary for know-how (if you know it, then you can do it), while praxism says that ability is sufficient for know-how (if you can do it, then you know it).
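
To make the logical relationships explicit, the three positions can be sketched as follows (writing $KH(S, X)$ for "S knows how to X", $KT(S, p)$ for "S knows that p", and $A(S, X)$ for "S has the ability to X" - this shorthand is mine, not the readings'):

$$\begin{aligned}
\text{intellectualism:}\quad & KH(S, X) \leftrightarrow KT(S, p) \text{ for some suitable proposition(s) } p \text{ about how to } X\\
\text{anti-intellectualism:}\quad & KH(S, X) \rightarrow A(S, X) \quad \text{(ability is necessary)}\\
\text{praxism:}\quad & A(S, X) \rightarrow KH(S, X) \quad \text{(ability is sufficient)}
\end{aligned}$$

Snowdon's counterexamples below target exactly these two conditionals: the cake and bike cases attack the necessity arrow, while the locked door and push-up cases attack the sufficiency arrow.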

Ryle's argument for anti-intellectualism is a regress: if knowing how required knowing a proposition, then applying that proposition would require contemplating it, contemplating a proposition is itself something one must know how to do, and that know-how would in turn require contemplating another proposition, and so on, ad infinitum.

This view basically says that knowledge must be grounded in knowing how, in the form of our reflexes, instincts, and innate knowledge. Knowing how to do something requires additional things over knowing-that, such as abilities, capacities, and dispositions. You can know everything about skiing, but if you're on a mountain and can't ski down, then the anti-intellectualist view is that you do not know how to ski.

Consider someone who doesn't know anything about skiing, yet up on the mountain easily skis down - it seems that even without any propositional knowledge, the person knows how to ski.

Consider someone who knows all about skiing, but has no legs - it seems that even in the presence of propositional knowledge, the person doesn't know how to ski.

Snowdon criticizes anti-intellectualism by arguing that ability is neither necessary nor sufficient for knowing-how.

Suppose someone knows how to bake a cake, but then the world's egg supply disappears overnight. It seems that the person still knows how to bake a cake, but the person doesn't have the ability to bake the cake since there are no eggs. Suppose someone who knows how to fix bikes but loses their arms - the person could still teach others how to fix bikes, but can't do it themselves.

Suppose someone is in a room with an unlocked door, but the person thinks that it is locked. It seems that the person doesn't know how to leave the room, since they think the door is locked, yet they do have the ability to simply open the door and leave. Or suppose one person can do 50 push-ups while another can only do 20. It seems that both people know how to do push-ups equally well - one is simply stronger - so they both know how, but have different levels of ability.

18/11/15

Readings: Stanley, Bengson/Moffett/Wright

Cath, in response to Snowdon's criticisms, defends anti-intellectualism by criticizing intellectualism. In addition to know-how stored in reflexes, instincts, and muscle memory, Cath says that it is possible for people to know how to do something, but not know any propositions about it, and provides several examples.

Suppose there is someone who reads a book that had incorrect instructions to change a light bulb, but just happened to have a printing error that made the instructions correct (like in the Gettier cases, the person has true belief but no knowledge, because the book was only right by coincidence). Arguably, the person knows how to change the lightbulb, but doesn't have any propositional knowledge of it, since the evidence was only correct by coincidence. It seems that the know-how is more in tune with the ability to change the light bulb than having knowledge about changing light bulbs.

Intuitively, lucky incidents of correctness, or accidentally doing something right, seems to indicate knowing-how but not knowing-that - knowing-how is less sensitive to luck than knowing-that.

What do psychological experiments say about knowing-that vs. knowing-how? We want an experiment where people are presented with a situation in which someone coincidentally learns the right thing (so they have true belief but not knowledge) while still having know-how.

Know-how seems to have a non-cognitive nature. Experimentally, Bengson/Moffett/Wright try to test whether people think knowing-that is different from knowing-how.

People were presented with a case where someone is able to teach others how to do stunts without ever being able to do the stunts themselves. When asked, most participants judged that the person knew how to do the stunts despite lacking the ability - evidence against the anti-intellectualist claim that ability is necessary for know-how.

23/11/15

Readings: Longino, Okruhlik

Bias

Previously, Quine and Kim said that epistemology is a branch of science similar to cognitive science and psychology - objective science is the best way to obtain objective knowledge about the world. Science relies on observations that everyone can evaluate, and hypotheses that the public can scrutinize.

Longino and Okruhlik disagree that science is an objective activity. Longino says that objectivity depends on how scientific communities are organized, while Okruhlik says that in some areas we aren't objective.

Outcome objectivity is the extent to which science accurately and truly describes the world, while methods objectivity is the extent to which the methods and processes of science are not arbitrary.

According to Longino, people often think science is outcome objective because it is methods objective. But is science actually methods objective? Are the processes used in science free of bias and prejudice? Since methods objectivity seems to imply outcome objectivity, we will look at methods objectivity.

The positivist scientific method involves discovery (coming up with ideas), followed by justification (evaluating how well evidence supports those ideas). Discovery is notably not objective, since the ideas we pursue are heavily influenced by what we're interested in and what funding is available - contextual values. In contrast, justification is supposed to be entirely objective, free of psychology or personal preference.

Longino says that scientists have different ideas as to what counts as evidence, depending on their background beliefs and assumptions. In other words, evidence itself is subjective and contextual, and this subjectivity influences which hypotheses get confirmed.

Okruhlik talks specifically about contextual gender values - how gender influences background assumptions and values, and how this affects science. One example she gives is the "sleeping beauty/prince charming" model of human reproduction, where the egg cell waits around for sperm cells to reach it and activate it - textbooks subtly describe the way reproduction works in a way consistent with gender norms.

In other words, the stereotype that women are somehow more "passive" and men more "active" influences what we see under the microscope. The language in textbooks seems to reflect this; consider "the sperm cell penetrates the egg cell", versus "the egg cell draws the sperm cell in".

If we want to fix this, we need to recognize that scientific methods are subject to contextual values, and that sexist beliefs play a role in research. Does this make all of science subjective? Do these cases indicate that science itself is sexist, or are they just isolated incidents?

According to feminist empiricism, science is objective by virtue of its methods - following the rules of science leads to good science, and although the discovery phase is subjective, the justification phase is not.

According to standpoint epistemology, contextual values are an essential part of science, and having the right ones leads to good science - scientists that have certain beliefs and values, like valuing explanatory power, do more objective science.

According to feminist post-modernism, science is not objective, and no view is more valid than another - science is just one possible story, and one cannot judge either way.

Longino and Okruhlik promote a position between feminist empiricism and standpoint epistemology: science is not objective, because contextual values are always present, but we can improve objectivity by changing the social organisation of science. Social organisation can change contextual values when they are bad - we can organize science to encourage transformative criticism of bad contextual values (such as values that carry too many background assumptions).

In other words, science can be more objective by organizing it to best promote criticism of beliefs, theories, and contextual values. Objectivity is a property of a community of scientists that have different viewpoints - a greater number of more varied viewpoints mean more objectivity. Okruhlik says that the deficiencies of different contextual values cancel each other out when put together.

25/11/15

The final exam is basically a subset of the questions from the study guide (posted on LEARN) and from the questions at the beginning of each set of slides. On the final we choose a subset of those questions to answer.

The empiricists argue that science is in fact objective and value-free, as long as the methods are followed correctly. In other words, the quality of science done doesn't depend on the beliefs/experiences/values/etc. that one has.

If this is the case, the beliefs and values of individual scientists shouldn't matter - so why does diversity improve the objectivity of science? There is some evidence suggesting that it does.

According to Sandra Harding, women have an advantage over men because they hold fewer distorting contextual beliefs, and their beliefs are more in tune with reality - so if we did science from the standpoint of women, we would have more objective science.

According to Longino, this is because the women's standpoint values more complex relationships, applicability to human needs, novelty, and more.

Longino and Okruhlik reject feminist empiricism, but still think science can be objective. They say that viewpoints do matter, but that it is impossible to eliminate contextual values from science. Instead, we add more values by increasing diversity, allowing existing values to be criticized.

Longino and Okruhlik say that contextual values often affect us involuntarily, without our being consciously aware of them - these are our implicit biases. Popular questions in psychology are how to detect these implicit biases and how they cause people to behave differently.

The implicit association test (IAT) is a well-known psychological test that is quite successful at determining people's implicit attitudes. It measures people's reaction times when linking together different concepts. For example, one version showed pictures of objects and several categories that participants could sort them into. Initially the categories would be harmless vs. dangerous, then African-American vs. European-American, and finally the combined categories "African-American or harmless vs. European-American or dangerous" and "African-American or dangerous vs. European-American or harmless". The combined categories either conform to certain social stereotypes or don't, and measuring the reaction times in sorting into them tells us about one's implicit biases - empirically, people have lower reaction times when the pairings are consistent with stereotypes, like "European-American or harmless".
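
As a rough illustration of how such a test is scored, here is a minimal sketch - the reaction times are made-up numbers, and real IAT scoring uses a more involved algorithm (the D-score) with error penalties and trial filtering:

```python
import statistics

def mean_rt(trials):
    """Mean reaction time (in ms) over a block of trials."""
    return statistics.mean(trials)

# Simulated reaction times in milliseconds for the two combined blocks.
# "Congruent" pairs categories consistently with the stereotype
# (e.g. "European-American or harmless"); "incongruent" pairs them
# the other way (e.g. "European-American or dangerous").
congruent_rts = [612, 590, 645, 601, 630, 598]
incongruent_rts = [742, 710, 795, 688, 760, 725]

# The core IAT logic: a positive gap means the stereotype-consistent
# pairing was faster (easier), indicating an implicit association.
gap = mean_rt(incongruent_rts) - mean_rt(congruent_rts)
print(f"mean congruent RT:   {mean_rt(congruent_rts):.0f} ms")
print(f"mean incongruent RT: {mean_rt(incongruent_rts):.0f} ms")
print(f"implicit association effect: {gap:.0f} ms")
```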

Implicit association tests for race showed that 88% of white Americans and 48% of black Americans have an implicit bias in favor of white people. This manifests in the real world in often unpredictable ways: resumes with stereotypically white names get about 50% more employer responses than resumes with stereotypically black names, even when the resumes are otherwise identical. When bus riders asked to ride for free due to lack of funds, bus drivers allowed white/Asian people to do so 72% of the time, Indian people 50% of the time, and black people only 36% of the time.

That means that there are significant differences in opportunity based on race, even after we control for factors like explicit beliefs/biases.

Also, biases influence scientists studying biases. Scientists were sent fake studies, some finding gender bias and some not, and were asked to evaluate them. Women scientists tended to rate the research finding bias as slightly higher in quality than the research finding none, while men scientists tended to do the opposite.

30/11/15

Readings: Call/Tomasello, Drayton/Santos

The final exam questions are picked out of the questions from the study guide.

Knowing about our implicit biases allows us to consider them more critically, and counteract their effect by thinking in terms of other viewpoints.

Do these implicit attitudes/biases reveal anything about actual beliefs? Alston thought that normal beliefs are just as uncontrollable as our implicit ones, but are the two related?

How can we have knowledge about ourselves and the world given all our implicit biases/attitudes? Do these implicit biases undermine justification for our beliefs?

Are we morally responsible for actions caused by these states of mind, or having these biases? Should we blame the bus drivers that showed unconsciously racially biased behaviour towards people asking to ride for free?

Primate Knowledge

To get a better picture of our implicit biases, we want to look at how animals see the world.

Humans are seemingly unique in their cognitive capacity to explain people in terms of their mental states and perspectives, and in their ability to explain behaviour by interpreting those mental states. We can look at what a person is doing and immediately get an idea of what they're feeling and thinking - their state of mind. Additionally, we can understand that other people have their own beliefs, and that those beliefs can even be false. These abilities are collectively called the theory of mind.

Do non-humans have a theory of mind? Call/Tomasello try to answer this question using experiments on chimpanzees.

One experiment had two cases, where food was placed in front of a chimp, and the experimenter either could or couldn't see the food. As it turns out, the chimps would take the food more often when the experimenter couldn't see it, and could even differentiate when experimenters covered just their mouth vs. just their eyes. This seems to imply that chimps can at least understand visual perception in others.

The best test for a theory of mind is the ability to understand that others have beliefs and that they can potentially be different from one's own, and the ability to predict behaviour based on those others having false beliefs. The false belief test detects this. Suppose the subject is observing two agents in a room. Agent 1 places a marble in box A, and then leaves the room. Agent 2 then moves the marble to box B, and after that Agent 1 comes back into the room, seemingly unaware of this. The subject is then asked which box Agent 1 will try to look in if they're trying to find the marble - if the subject says box A, then they have a model of false beliefs in others and can predict based on that model.
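
A toy model makes the logic of the test explicit. The sketch below is entirely my own illustration (not the experimenters' code): it tracks what each agent has observed, and an observer with a theory of mind predicts that Agent 1 searches where Agent 1 last saw the marble, not where the marble actually is.

```python
# Toy model of the false belief test. An agent's belief about the marble's
# location is simply the last location they observed; this can come apart
# from the true location once they leave the room.

marble_location = "box A"        # Agent 1 sees the marble placed in box A
agent1_belief = marble_location

# Agent 1 leaves the room; Agent 2 moves the marble. Agent 1's belief
# is not updated, so it is now false.
marble_location = "box B"

def predict_search(has_theory_of_mind):
    """Predict which box Agent 1 will search for the marble."""
    if has_theory_of_mind:
        return agent1_belief     # reason from Agent 1's (false) belief
    return marble_location       # assume everyone believes the truth

print(predict_search(True))      # box A - the subject passes the test
print(predict_search(False))     # box B - the subject fails it
```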

Testing young children suggests that they do have this model. Nonverbal versions of this experiment on monkeys suggest that monkeys do not.

One experiment tested this on chimps. Food was placed in front of a chimp in one of two boxes, and it was made clear that the experimenter knew which box the food was in. Then, the box contents were switched while the experimenter couldn't see them. Under the experiment's assumptions, the chimps' reaction times when allowed to get the food would show which box each chimp expected the experimenter to look in. The results showed that the chimps took equally long either way, which implies chimpanzees don't have a model of false beliefs in others.

Basically, the experiments found that chimps could understand what others see/hear/know, but not what others believe - they can represent others as having beliefs about the world identical to their own, but not beliefs that differ from their own.

Some other animals like scrub jays are capable of deception - intentionally generating false beliefs in others. This is a subject under active research.

This raises questions about whether the theory of mind is even applicable to other animals - perhaps the concept itself only really applies to humans. Also, if animals can know without beliefs, then perhaps knowledge doesn't require belief.

Review

This work by Anthony Zhang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Copyright 2013-2017 Anthony Zhang.