Language and the Content of Belief

If language is a core feature of consciousness, our conscious thoughts, expressed in language, should accurately reflect our belief states, and we should be able to accurately determine the contents of at least our own beliefs. Further, we should be able to freely affect what our belief states are through rational analysis. It is this ability that creates in us a sense of moral agency and responsibility. Through rational analysis and argument, we can form beliefs that are appropriate and honorable. If we assume other humans are more or less like us, we may also be able to extend this ability to other humans through inference and analogy. Ascribing content to the beliefs of non-human animals would be riskier business, unless we found animals that could use our language. If language is a core feature of consciousness, then a machine that could use human language as a human might use language would have achieved human consciousness. On the other hand, if language is a more distal feature of consciousness, ascribing content to our own beliefs might be as risky as ascribing content to the beliefs of other humans, animals, and machines. Our moral decisions may be determined by something other than rational analysis. Our moral views may be the product of evolution, not reason. I will argue that many of our beliefs and thoughts are unconscious, and we attempt to ascribe content to our beliefs by the same inferences we make to ascribe content to others. To say we know our own minds is only to say that we are aware of our minds, not to claim that we know the specific content of our beliefs.

Human language brings clarity and understanding to human thoughts and beliefs. In fact, many have argued that without language, humans have no capacity for thought or belief. Descartes expresses a firm conviction that language is necessary for any thought:

There has never been an animal so perfect as to use a sign to make other animals understand something which bore no relation to its passions; and there is no human being so imperfect as not to do so. . . . The reason animals do not speak as we do is not that they lack the organs but that they have no thoughts. It cannot be said that they speak to each other but we cannot understand them; for since dogs and some other animals express their passions to us, they would express their thoughts also if they had them. (CSMK 575)

While the idea that language is necessary for the emergence of belief has been accepted for centuries, philosophers and others have begun to use the term “belief” more permissively, making the assertion much less obvious. While to say a cow had beliefs may once have implied the cow subscribed to some creed or doctrine, the claim has a much more mundane connotation in contemporary philosophy. For example, using the language of belief/desire psychology, we might say that a group of cows and humans gathering under a cover after hearing a thunderclap share a common belief that it is about to rain. We will also say they desire to stay out of the storm. Cows do not need the ability to express their beliefs to want to avoid a storm that appears to be imminent. In this case, it is easy to describe the cow’s behavior using the language of belief/desire psychology, but it is also easy to imagine that the humans under the cover are in a far different position from the cows; they understand their position, have plans and fears for the future, and have a sense of what it is right and wrong to do. We want to say the humans are conscious, and the cows are not. We know the humans are conscious because we assume them to be more or less like us, and we are conscious. Language expresses our thoughts and beliefs, and we assume that other humans use language and experience consciousness as we do.

Language does more than provide evidence of consciousness, though; it is the structure of consciousness. A sophisticated study of human language and behavior should produce a powerful and accurate psychological theory. If language sets humans apart from machines and animals, then language is quite likely the feature of human consciousness that produces moral agency and responsibility. If animals and machines are not capable of beliefs and thoughts, then humans are the only known creatures to have any concept of moral responsibility. However, if consciousness is not unique to humans, or if language is not the stuff that makes consciousness, then we may not be able to construct an adequate description of beliefs and desires, much less moral agency.

Language of Machines

Daniel Dennett argues that we can use language, through the “intentional stance,” to describe the beliefs of people, animals, or artifacts including a thermostat, a podium, or a tree (Brainchildren 327). It is easy to construct sentences to describe the beliefs of these objects (“The thermostat believes it is 70 degrees in this room”). If the thermostat is working properly and conditions are more or less normal, we should be able to predict the temperature based on the actions of the thermostat, or we should be able to predict the actions of the thermostat by knowing the temperature in the room. We recognize the possibility of error, however. As the thermostat may be broken, we are likely to say, “According to the thermostat, . . .” If the room does not feel warmer or cooler than the thermostat indicates, then we assume all is well. If we want to know the true nature of belief, being able to describe the beliefs of a thermostat is outrageously unsatisfying. Unless the thermostat is able to describe its own beliefs using language, we are loath to even suggest it has beliefs.

But given the capacity for human language, machines might appear to have beliefs and desires similar to human beliefs and desires. In fact, if a machine could use human language in a manner indistinguishable from human use, it is difficult to see how the consciousness of the machine could be denied with any certainty. Of course, the claim that such a machine is impossible goes back at least to Descartes, who wrote, “It is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do” (CSM II 140). Surely Descartes did not imagine 21st century computer programs when he provided this early version of the Turing Test (in which a computer is held to be conscious if it can master human conversation), but so far his challenge has not been met.

In John Searle’s Chinese room argument, we are challenged to accept that even a computer that could pass the Turing Test would not thereby be proved conscious. Although Searle does not deny that machines could someday be conscious, a language program would not be proof of it (Searle 753-64). Our best reason for believing the machine is not conscious is that it is not similar enough to a human to be considered conscious by analogy. Even if we can’t deny beliefs and desires to a machine with certainty, we are equally ill-equipped to accurately ascribe beliefs and desires to machines, or trees, or stones.

Beliefs of Non-Human Animals

We are more likely to feel confident ascribing beliefs to non-human animals for several reasons: they share at least part of an evolutionary history with humans, they share genetic material with humans, they exhibit behaviors similar to those of humans, and they share a physiological structure similar to that of humans. As a result, many humans feel comfortable making inferences about non-human animal experience and consciousness based on analogy with humans.

David Hume claims that we can make many inferences about animals based on the assumption that animals are analogous to humans in many respects. Similarly, we can make inferences about humans based on the observation of animals. For Hume, this is compelling evidence that humans are not as rational as we like to think. Animals make many of the same inferences as humans without the benefit of scientific or philosophical reasoning. Our philosophical arguments are used only to support beliefs we share with less rational animals. While we may think we are using reason, we are only providing explanations for beliefs built by habit or biology. He says,

It is custom alone, which engages animals, from every object, that strikes their senses, to infer its usual attendant, and carries their imagination, from the appearance of the one, to conceive the other, in that particular manner, which we denominate belief. No other explication can be given of this operation, in all the higher, as well as lower classes of sensitive beings, which fall under our notice and observation.

Hume clearly feels we can ascribe beliefs to non-human animals. In particular, we can assume that animals believe in cause and effect. In contemporary terms, our beliefs may be formed by evolution or experience, but our own understanding of those beliefs is expressed through rational explanation. Hume’s assumption that it is possible to infer anything at all about humans based on an analogy with animals is, of course, unproven. However, his description brilliantly illustrates the possibility that beliefs we hold to be founded in reason are merely the result of habit, while reason is only our way of expressing those beliefs. This is enough to warn us of the perils of ascribing content to beliefs based on our descriptions of our own beliefs. It is at least possible that there is a great divide between what we believe and what we think we believe.

In his paper “Do Animals Have Beliefs?,” Stephen Stich examines the difficulty of ascribing content to animal beliefs. For Stich, the problem of ascribing content to animal beliefs is serious enough that we may fear ascribing content to any beliefs at all. Stich offers two possible accounts of animal belief and belief ascription, ultimately rejecting both (Animals 15-28).

The first possibility is that animals do have belief, and we can ascribe content to those beliefs by observing animal behavior (in the manner of Hume). Stich contends, “When we attribute a belief to an animal we are presupposing that our commonsense psychological theory provides . . . a correct explanation of the animal’s behavior” (Animals 15). Indeed, desires and beliefs can provide a foundation for describing the causes of animal behavior. Assuming they are analogous to humans, animal beliefs are formed by perception and inference. Seeing, hearing, and smelling food in a dish, the dog comes to believe there is food in the dish, just as there is every morning. This belief results in a desire to gain access to the dish. Once an animal has formed beliefs, these beliefs can generate other beliefs.

For example, some dogs have a desire to chase squirrels. Upon seeing a squirrel in the back yard, such a dog will bark at the door, because this particular dog believes barking at the door will cause a human to come and open the door. (We could describe an infinite array of beliefs. For example, dogs believe squirrels should be chased. Dogs believe humans should open doors for dogs. Dogs believe barking at doors is more effective than scratching them.)

According to Stich, the appeal of a view based on beliefs and desires is that it is the most intuitive explanation for human behavior. Further, it is hard to imagine that we could explain human behavior through belief/desire psychology without being able to explain animal behavior in the same way. If folk psychology fails in one case, it appears to fail in the other.

The second possibility is that animals do not have beliefs. If it is impossible to ascribe content to animal beliefs, then it is meaningless to talk about animals having beliefs at all. If a dog has no concept of what a bone is, then it is impossible to say that the dog has any beliefs at all about bones. Without language, it is impossible to ascribe belief to animals. This raises the question, of course, of whether language actually enables us to ascribe content to beliefs accurately. Still, if we can’t ascribe content to the beliefs of animals, then we may run into trouble ascribing content to the beliefs of humans.

Stich poses the solution offered by David Armstrong. According to Armstrong, although animals lack the concepts we have, we can ascribe content to animal beliefs in a “referentially transparent” (de re) manner. A dog may respond to a bone in the same manner we would expect it to respond if it had our concept |bone|. Armstrong acknowledges that we cannot talk about animal beliefs in a way that is “referentially opaque” (de dicto); to do so, we would have to know that the dog had a concept analogous to our concept |bone|, which is impossible. Armstrong claims, however, that the dog does have beliefs with de dicto content, and enough research in animal psychology might eventually give us insight into animal concepts. For Armstrong, our de re discussions of animal concepts presuppose that there are correct de dicto beliefs on board the animal that correspond to our de re descriptions. If no correct de dicto beliefs exist, then our efforts are only a way of describing animal behavior, not a way of understanding animal belief (19-21).

On Armstrong’s view, eventually we will gain enough knowledge of animals to accurately ascribe content (de dicto) to animal beliefs. Stich’s most serious objection to Armstrong’s argument is that we can only ascribe contents of beliefs to subjects that “have a broad network of related beliefs that are largely isomorphic to our own” (27). We cannot ascribe content to the beliefs of any being that does not share our concepts, and we have no way of knowing what concepts animals share. For example, even if we understand all the conditions necessary for a dog to react to a bone in front of him, it will make no sense to say, “Fido believes a bone is in front of him,” unless we assume Fido has a concept for “in front of,” among others. Following Armstrong’s suggestion, it may be possible to determine exactly how a dog would react to a bone or bone-like object in every conceivable situation. We might then predict the dog’s behavior with complete accuracy. We may identify all the properties of the human concept |bone| and all the properties of the dog concept |bone’|. We are not out of the woods, though, for the concept |bone’| is not the dog’s concept but our concept of the properties of the dog’s concept. We still don’t know what concept the dog has on board.

For Stich, a larger problem may be that we do not know what concepts other humans share. If we follow the reasoning that we can only claim beings have beliefs if they have specifiable content and that content is only specifiable if they have concepts isomorphic to our own, we are in a position of implying that humans with concepts radically different from our own have no beliefs at all. Examples of such humans would include people from different times or cultures. Indeed, anyone from a different language community would be in danger of being declared to be wholly without beliefs.

Stich concludes that it is impossible to decide whether a belief without specifiable content is a belief at all, and it is impossible to verify content for either human or non-human animals. He claims, “If we are to renovate rationally our pre-theoretic concept of belief to better serve the purposes of scientific psychology, then one of the first properties to be stripped away will be content” (27). Folk psychology, based on the attribution of content to beliefs and desires, is inadequate for a scientific account of belief.

Belief and Other Minds

If there is any possibility of accurately attributing belief to any other minds, it would seem that human minds, with a capacity for human language, would be the best hope. We recognize that a human can have a mind full of desires, beliefs, and rational arguments without ever expressing them. In Kinds of Minds, Daniel Dennett points out that we often have beliefs and desires that go unexpressed, and we can imagine never expressing any of them, or at least misleading people as to what they are. Actually ascribing content to the beliefs of humans is risky business, then, but at least we feel confident that humans are generally able to communicate beliefs and desires roughly isomorphic to our own beliefs and desires. We believe humans have minds, and their use of language is the best evidence of it (Kinds 12).

Because humans use language, we show them greater moral concern than we show other animals. The closer their language is to our own, the more concern we show them. Wittgenstein famously said that if a lion could talk, we couldn’t understand it. Dennett suggests that this lion would be the first in history to have a mind and would be a unique being in the universe. We assume that any animal that can use language in the manner of humans has a mind (Kinds 18).

The problem with this assumption is that we might be easily fooled. Another human may use language in exactly the same way that I do, express all the beliefs I have, exhibit all the behavior I exhibit, and perhaps be acting deceptively or robotically. When serial killers and pedophiles are arrested, interviews with friends, family members, and coworkers generally reveal that these people had made grossly mistaken ascriptions of beliefs and desires to the criminals. It is the trust we place in members of our language community that enables us to be duped in such horrendous ways. We should perhaps be less confident that members of our language community have beliefs and desires isomorphic to our own.

But even if some members of the language community are deceptive, surely they at least have minds—at least have some beliefs and desires, even if we can’t know the content. If we encountered a robot with a human appearance and the ability to use human language effectively (something like the fictional Stepford Wives), would we assume the robot to have a mind? Such robots are being developed, but none yet exists (see Dennett’s discussion of Cog[1] in Kinds of Minds, page 16), so the question can’t be answered empirically. While developing such a robot, we may come to understand exactly how a mind develops and comes into being. On the other hand, it is possible to imagine such a robot existing with no mind and no human feeling at all. If we can imagine a robot as an automaton, why not imagine that at least some humans are automata? Perhaps their use of language is as unconscious as our basic reflexes. Their bodies simply produce language naturally with no self-awareness and no beliefs and desires. While we assume this is not the case, it is impossible to determine this with any certainty.

What We Know of Our Own Minds

If nothing else is certain, we must know the contents of our own minds. Descartes was unable to doubt the existence of his mind, and it seems quite impossible for me to doubt the thoughts I am thinking right now. As I produce thoughts, I am aware of them, and it is impossible for me to escape them. My thoughts, formed by language, express the contents of my beliefs and desires precisely, because that is how I have intended to express them to myself. I can’t imagine I am deceiving myself or that I am an automaton. I am a thinking being immersed in my conscious life. If the language I use in thinking expresses my beliefs accurately and rationally, then this is what enables me to develop moral principles and behave in a morally responsible manner.

But what of our “unconscious” thoughts? Hume demonstrated that our belief in cause and effect seems to exist in a precognitive state. We don’t use language and reason to develop a belief in cause and effect—in at least some cases, language merely expresses what is built into us. Our moral reasoning, though, is supposedly based on careful consideration and painstakingly crafted arguments. Surely our language is not expressing a precognitive instinct or intuition. In Kinds of Minds, Dennett quotes Elizabeth Marshall Thomas saying, “For reasons known to dogs but not to us, many dog mothers won’t mate with their sons” (10). Dennett rightly questions why we should assume that dogs understand this behavior any better than humans understand it. It may just be an instinct, produced by evolution. If the dog had language, it might come up with an eloquent argument about why incest is wrong, but the argument would seem superfluous—just following the instinct works well enough.

By the same token, human moral arguments may do nothing more than express, or at best buttress, deeply held moral convictions instilled by evolution or experience. In a Discover magazine article titled “Whose Life Would You Save?” Carl Zimmer describes the work of Princeton postdoctoral researcher Joshua Greene. Greene uses fMRI brain scans to study what parts of the brain are active when people ponder moral dilemmas. He poses various dilemmas familiar to undergraduate students of utilitarianism, the categorical imperative, or other popular moral theories.

Greene found that different dilemmas trigger different types of brain activity. He presented people with a number of dilemmas, but two of them illustrate his findings well enough. He used a pair of thought experiments developed by Philippa Foot and Judith Jarvis Thomson. Test subjects were asked to imagine themselves at the wheel of a trolley that will kill five people if left on course. If it is switched to another track, it will kill one person. Most people respond that they would switch to the other track, sacrificing one life to save five, apparently invoking utilitarian principles. In the next scenario, they are asked to imagine they can save five people only if they push one person onto the tracks to certain death. Far fewer people are willing to say they would push anyone onto the tracks, apparently invoking a categorical rule against killing innocent people. From a purely logical standpoint, the two questions should have consistent answers.

Greene found that some dilemmas seem to evoke snap judgments, which may be the product of thousands of years of evolution. He notes that in experiments by Sarah Brosnan and Frans de Waal, capuchin monkeys that were given a cucumber as a treat while other monkeys were given grapes would refuse to take the cucumbers and would sometimes throw them at the researchers. Brosnan and de Waal concluded that the monkeys had a sense of fairness and the ability to make moral decisions without human reasoning. Humans may also make moral decisions without the benefit of reasoning. It appears evolution has created in us (at least in those who are morally developed) a strong aversion to deliberately killing innocent people. Evolution has not prepared us for other dilemmas, such as whether to switch trolley tracks to reduce the total number of people killed in an accident. These dilemmas call instead for logical analysis and problem solving. Zimmer writes, “Impersonal moral decisions . . . triggered many of the same parts of the brain as nonmoral questions do (such as whether you should take the train or the bus to work)” (63). Moral dilemmas that require one to consider actions such as killing a baby trigger parts of the brain that Greene believes may produce the emotional instincts behind our moral judgments. This explains why most people appear to have inconsistent moral beliefs, behaving as a utilitarian in one instance and as a Kantian the next.

It may turn out that Hume was correct when he claimed, “Morality is determined by sentiment. It defines virtue to be whatever mental action or quality gives to a spectator the pleasing sentiment of approbation” (Rachels 63). His claim is that we evaluate actions based on how they make us feel, and then we construct a theory to explain our choices. If the theory does not match our sentiment, however, we modify the theory—our emotional response seems to be part of our overall architecture. The work of philosophers, then, has been to construct moral theories consistent with our emotions rather than to provide guidance for our actions.

Language gives us access to our conscious thought. Language permits us to be aware of our own existence and to feel relatively assured that other minds exist as well. It is through language that we make sense of ourselves and the world. We may be deceived, though, into thinking that thought is equivalent to conscious thought. Much of what goes on in our mind is unconscious. Without our awareness, our mind attends to dangers, weighs risks, compensates for expected events, and even makes moral judgments. Evolution has provided us with a body that works largely on an unconscious level. However, humans, and perhaps some nonhuman animals, have become aware of their own thoughts, and this awareness has led to an assumption of moral responsibility. This awareness should not be taken to prove that we are aware of the biological facts that guide our moral decisions.

Stephen Stich explores the development of moral theory in his 1993 paper titled, “Moral Philosophy and Mental Representation.” In the essay, Stich claims that while most moral theories are based on establishing necessary and sufficient conditions for right and wrong actions, humans do not make mental representations based on necessary and sufficient conditions. He says, “For if the mental representation of moral concepts is similar to the mental representation of other concepts that have been studied, then the tacitly known necessary and sufficient conditions that moral philosophers are seeking do not exist” (Moral 8). As an alternative, he suggests that moral philosophers should focus on developing theories that account for how moral principles are mentally represented. He writes:

These principles along with our beliefs about the circumstances of specific cases, should entail the intuitive judgments we would be inclined to make about the cases, at least in those instances where our judgments are clear, and there are no extraneous factors likely to be influencing them. There is, of course, no reason to suppose that the principles guiding our moral judgments are fully (or even partially) available to conscious introspection. To uncover them we must collect a wide range of intuitions about specific cases (real or hypothetical) and attempt to construct a system of principles that will entail them. (8)

On this view, moral theories represent beliefs that are not only unconscious but are unavailable to the conscious mind. In order to make a determination of the content of our own moral beliefs, then, we must examine our own moral decisions and infer the content of our beliefs. In this approach, we find that humans are deciphering their own beliefs in much the same manner that Brosnan and de Waal determined the moral beliefs of capuchin monkeys. Not only does language fail to give a full accounting of our belief states, but our conscious thoughts may be an impediment to determining our actual beliefs, so that we must consider prelinguistic or nonlinguistic cues to discover what we actually believe.


When we ascribe content to the beliefs of other beings, including human beings, we assume those beings have mental experiences roughly isomorphic to our own. Based on our own experiences and beliefs, we make inferences about the beliefs of other beings. The more a being resembles us, the more confident we are in making such inferences. As a result, we are most comfortable ascribing contents to the beliefs of humans who speak the same language we speak. We are even more comfortable if the person is of the same gender and social class. Even in these cases, though, we may be too optimistic. Our own beliefs may be as inaccessible to us as the beliefs of our distant neighbors or monkeys or lobsters. Ascribing content to beliefs may be futile. On the other hand, we seem to survive quite well assuming that we know our own beliefs and that others have beliefs that are more or less transparent to us. We may be able to use the language of belief/desire psychology as a heuristic to help us understand, manipulate, and cope with our behavior and the behavior of others. Although language is a distal feature of consciousness and may not accurately determine the content of our beliefs, language may enable us to gain a community of thinkers and form successful relationships with other beings.

Works Cited

Dennett, Daniel C. Brainchildren. Cambridge: MIT P, 1998.

—. Kinds of Minds. New York: Basic Books, 1996.

Descartes, René. The Philosophical Writings of Descartes: Volume II. Trans. John Cottingham, Robert Stoothoff, and Dugald Murdoch. Cambridge: Cambridge UP, 1985.

—. The Philosophical Writings of Descartes: Volume III. Trans. John Cottingham, Robert Stoothoff, Dugald Murdoch, and Anthony Kenny. Cambridge: Cambridge UP, 1991.


Hume, David. An Enquiry Concerning Human Understanding. Vol. XXXVII, Part 3. The Harvard Classics. New York: P.F. Collier, 1909–14; 2001. [May 11, 2004].

—. “Morality as Based on Sentiment.” The Right Thing to Do: Basic Readings in Moral Philosophy. Ed. James Rachels. Boston: McGraw Hill, 2003.

Searle, John. “Is the Brain’s Mind a Computer Program?” Reason at Work. Eds. Steven Cahn, Patricia Kitcher, George Sher, and Peter Markie. Wadsworth, 1984.

Stich, Stephen P. “Moral Philosophy and Mental Representation.” The Origin of Values. Ed. Michael Hechter, Lynn Nadel, and Richard E. Michod. New York: Aldine de Gruyter, 1993. 215-28.

—. “Do Animals Have Beliefs?” Australasian Journal of Philosophy 57.1 (1979): 15-28.


Zimmer, Carl. “Whose Life Would You Save?” Discover Apr. 2004: 60-65.

[1] Dennett is working with Rodney Brooks, Lynn Andrea Stein, and a team of roboticists at MIT to develop a humanoid robot named Cog. Dennett says, “Cog is made of metal and silicon and glass, like other robots, but the design is so different, so much more like the design of a human being, that Cog may someday become the world’s first conscious robot” (Kinds 16).
