The Personal is Political

For decades now, feminists have been telling us that what goes on in the private sphere affects the public sphere. The rallying cry of “The personal is political!” was heard by many. Some, such as Susan Okin, even predicted the problems this shift would cause for men: in order for women to enter the public sphere, men would have to enter the private sphere. If women were paid less and given less respect because family obligations diluted their commitment to their jobs, employers were likely to be even harsher with fathers who wanted to be part of family life.

Though the warnings were unheeded, they were not unjustified. Katherine Reynolds Lewis has just published an article describing the struggles modern fathers face. It was assumed in the past that fathers would rather not take responsibility for changing diapers, taking sick kids to the doctor, and going to meet with teachers. This assumption turned out to be false. Fathers in the past were afraid that if they were more involved in the private sphere of home and family, they would be punished by their employers. Their fears have been realized. Fathers have been passed over for promotions and even fired after insisting on taking leave to be with their children.

Liberating women for equal pay will require liberating men as well. As a society, we should assume that all parents love their children and want to be with them to ensure their healthy development. Some fathers and mothers are not good parents, to be sure, but rewarding rather than punishing those who are will benefit us all.

How free can we be?

I’m a little behind the curve on this, but a Jan. 2 article by Dennis Overbye in The New York Times deals with free will and the latest developments in cognitive science on the subject. Overbye cites the work of Benjamin Libet, who demonstrated (to his satisfaction, anyway) in the 1970s that people act before becoming consciously aware of their choices. Consciousness and apparently free choices seem to follow the mechanism we call our body rather than direct it. Overbye compares it to a monkey riding the back of a tiger and making up a story about how the monkey directed the tiger’s actions.

To some extent, I guess we all believe that actions are caused by physical laws and past events. Whenever someone commits a horrible crime, we ask, “What would cause someone to do such a thing?” We believe there is an answer, and scientists seek the answers. Those who argue most strenuously for free will generally back off when confronted with their own shyness, depression, impatience, or some other trait they’ve tried for years to modify.

A simple test for free will involves the compulsion to crunch on ice. For reasons I don’t understand, people with an iron deficiency will crunch ice compulsively, annoying co-workers, family members, and passers-by. Give them iron, and suddenly they “choose” to stop crunching ice all the time.

So, is this cause for despair or optimism? Understanding the causes of our actions gives us more tools to help control them (giving iron supplements, for example). At the same time, knowing our actions are caused makes us doubt the free will of the soul (or mind, if you prefer). We feel a loss of dignity, for some reason. Daniel Dennett argues consistently and persistently that recognizing and understanding causal relationships gives us more freedom, not less. When he says “more” freedom, though, he really means more than none, which isn’t comforting to the hard-core indeterminists in the world.

One problem is that punishment becomes meaningless if people are not free, or so it is claimed. Baruch Spinoza answered this by saying that you would control the actions of a rabid dog in the same manner regardless of whether the dog chose to be rabid. The same, he claimed, should apply to humans. Punishment is no longer retribution, though; it is now simply a necessary condition of life.

On the other hand, William James claimed that we are forced to believe in free will because we are forced to make choices every day. If we do not believe in free will, we cannot make any choices, so we are paralyzed. From a practical standpoint, we feel we are free and must act as if we are free.

This may be as good as it gets.

What can ethics courses achieve?

In a Houston Chronicle commentary titled “Where’s Right and Wrong in Ethics?,” Donald Bates explores why required university courses in ethics fail to produce ethical business practices. Bates lists many familiar examples of unethical behavior in public life (Enron and WorldCom, for example) and conveniently blames them on the separation of church and state.

Bates claims that ethics is taught from a position of Utilitarianism (the greatest good for the greatest number), egoism (what most benefits the long-term interest of the individual), rights (deontological forms of duty to others or entitlements for oneself), or abstract principles of justice. This is his first mistake. University ethics courses do teach the theories Bates lists, though his list is far from exhaustive, but ethics instructors are not wont to teach “from a perspective.” To understand the study of ethics, students must be familiar with competing theories, but universities provide education, not indoctrination.

Bates goes on to say, “Trying to teach ethics without a religious underpinning means absolutes do not exist, everything is situational.” This is his second mistake. The fact that many competing ethical theories (and religions, for that matter) have emerged over the centuries is not evidence that absolutes do not exist. It is evidence only that absolutes are extremely difficult to discover and agree upon. To teach ethics from the standpoint of a “religious underpinning” is to teach from a standpoint of absolute knowledge of right and wrong, good and bad, which would require professors to claim to know the mind of God, a claim that would be met with suspicion for good reason.

Expecting ethics courses to make the world more ethical is a little like expecting professional athletes and pop stars to be good role models. Ethical solutions and agreement are not easy to come by. Claiming that the state should enforce morality founded in religion begs the question of which religious perspective is correct and who will decide on the proper perspective. Rational people of good will disagree on ethical practices every day, and this disagreement is part of the foundation of a pluralistic society.

If we are lucky, we might be able to teach a few students a little humility and respect for the efforts of others to discover right from wrong. Many students will claim that ethics is just a matter of common sense. Oddly enough, Bates seems to agree. In each case he presents, he believes there is a universally accepted opinion of what is right and wrong. If he is correct, students do not need to be taught what is right; they need to be prevented from being evil. It is unlikely that the leaders of Enron and WorldCom made a mistake in ethical reasoning. More likely, they decided to do something that showed no concern for the harm it caused others.

An ethical society requires skeptical humility from its leaders and educators, recognition of the humanity of others, and a desire to limit harm to all. This lesson is not easily taught, but it is easily shared by the way we live.

Future of Bioethics

Many problems of bioethics revolve around the value of life. Many bioethicists accept the Judeo-Christian view that human life, and human life only, has great intrinsic value. As a corollary, it is taken that anything that is both alive and human possesses a right to respect and continued life.

These assumptions are powerful and pervasive, but they go against the intuitions of many people. The assumption that human life has great value and is even sacred would lead one to conclude that it is proper to create as much human life as possible, but only a few people actually believe this. The prevalence of contraception and the encouragement of abstinence betray an underlying belief that perhaps not every human life is of great value simply because it is possible for it to exist.

Similarly, rights are not granted uniformly to all that are human and alive, although many pretend that they are. When consciousness ceases to exist or fails to begin in living human tissue, many people will regard this being as perhaps worthy of dignified treatment, but the idea that it has the same value as all other human life is not reflected in the everyday actions of most people.

Concern for the “right to die” in some circumstances also implies a rejection of the view that life is sacred in all cases. Alternative views of the value of life can be useful in resolving the apparent contradiction between the actions many people take and their declared respect for life and individual rights. Not all people see life as sacred and valuable. The first noble truth of Buddhism, for example, is that life is suffering. We seek continued existence as a result of desire, which intensifies our suffering. Life becomes valuable, then, because it fulfills a desire that is itself irrational. Other views see life as the inevitable consequence of physical laws or nature. The fact that humans exist and desire life is a brute fact that is morally significant only because of the suffering generated by the desire for life.

We may recognize that life is valuable for reasons that are not metaphysical. A pre-embryonic collection of cells may be of great moral significance to a certain man who is hoping, with a bit of desperation, to become a father and see his child before he succumbs to a life-threatening disease himself. For this man, these human cells are not morally significant because they are endowed with rights and dignity at their first creation. He is not concerned with the metaphysical status of the cells. He is concerned, instead, with their ontological status. They exist and he wants them to survive because he is interested in their continued existence. In this case, we may feel morally obliged to take great measures to ensure the survival of these cells because they mean so much to this hopeful father. We are concerned for this father and he is concerned for his progeny. The moral commitment arises from concrete human relationships.

For similar reasons, non-human life may become of great moral concern to us. Police officers who have worked with service animals for many years will often refer to a deceased animal as a “partner” and such animals sometimes receive funerals and memorials. Few would claim that service animals are accorded respect and value because of the sanctity of life.

In both of the cases I’ve given above, it can be claimed that the duties accorded to life are indirect duties to the ones who care about the life. While that is true, the moral commitment could also arise from a direct concern for a life. An individual may value her own life because she enjoys being alive and wants to continue her existence. Her own concern for her life makes her life something of value. Out of a concern to reduce her suffering at the thought that her life may not be preserved, medical professionals will devote themselves to preserving her life.

In cases such as those outlined above, it is compassion, sympathy, empathy, or care that creates moral demands for the preservation of life. This view of the value of life will not appease the demanding vitalist, but it may be accepted by many people from different faiths and philosophical backgrounds. It helps us reconcile the strong drive to preserve and extend life with our beliefs that some people have a right to die, that some non-human life deserves extraordinary care and respect, and that some human cells are precious while others are less so.

Becoming familiar with death

Death be a Stranger No More

Although every human is ultimately successful at achieving death, most of us experience profound anxiety over the event. When pressed, some of us will claim that we do not fear death as much as the process of dying, but philosopher Thomas Nagel points out that the worst thing about dying is that it is followed by death (3). Simone de Beauvoir adds lucidity to the human experience, writing, “All men must die; but for every man his death is an accident and, even if he knows it and consents to it, an unjustifiable violation” (526). Of course, we can give many philosophical and spiritual reasons for fearing death and dying, but our lack of familiarity with the process must play a crucial role in our anxiety. Philippe Aries points out, “any discourse on the subject of death becomes confused and expresses one of the many forms of pervasive anxiety” (Reversal 134). He claims that we moderns have moved death into the shadows out of fear, but we’ve only intensified the anxiety. Eliminating the fear of death and dying is not an option for humans, but it is possible to stop denying the existence of death and to face death head on and in close proximity.

Albert Camus describes the desire to control one’s own death in A Happy Death. The protagonist, Mersault, wants to be conscious when he dies, both to experience the last part of life and to have some will in his death. He faces death in paradoxes: “Conscious yet alienated, devoured by passion yet disinterested, Mersault realized that his life and his fate were completed here and that henceforth all his efforts would be to submit to this happiness and to confront its terrible truth” (140). This may not be the kind of happy death most of us would imagine, but it has features that seem common to what most people want: the desire to manage dying with dignity and autonomy. A change in how and where people die could help more people experience their own version of a happy death. In fact, I assert that a hands-on approach to the dying and the recently dead would offer many benefits for both the dying and their caregivers.

Confronting one’s own death gives one a clearer sense of identity and purpose. It is cliché to say that we should live every day as if it is our last, but planning for our final days focuses our attention on who we want to be and how we want to be remembered. A constant recognition of the certainty of death is now seen as morbid and even psychologically harmful, but the person who is prepared to die is not rejecting life. Rather, such a person is likely enhancing an appreciation for life and experiencing a deeper connection with family, friends, and other loved ones.

In “Dying in a Technological Society,” Eric J. Cassell argues that death in the past was primarily a moral matter. When one was clearly about to die, the task at hand was to care for spiritual matters. Dying, he says, is now a technical matter of rescuing patients from the hands of death. Death has become a technological event, he says, in part because “death has moved from the home into institutions—hospitals, medical centers, chronic care facilities and nursing homes” (43). He notes also that the nature of death has changed as a result of changes to family structure. Notably, the desire of the elderly to live independent lives is part of the reason death has moved from the moral to the technological realm. This creates quite a quandary. Cassell says, “To die amidst his family he must return to them—reenter the structure in order to leave it. Reenter in denial of all the reasons he gave himself and his children for their separation, reasons equally important to them in their pursuit of privacy and individual striving and in their inherent denial of aging, death and fate” (44). On his view, the free choices of older individuals have denied them the care they desire at the end of life. Death must now be removed from the technical sphere and restored to the sphere of morality and family.

The first step toward realizing the best deaths possible for patients is to recognize that dying is a natural process that does not require medical intervention. Of course, those who are dying may have medical needs such as pain management or comfort care, but in this respect they are no different from the living, as we all need pain management and comfort from time to time. To change how we die, death must not be seen by medical professionals as the dark enemy to be kept at bay for as long as possible but as the final visitor we must all meet at the end of life. By permitting families and friends to participate in the care of the dying, we may also help the living better prepare for the process of dying and the inevitability of death. In his seminal work, The Patient as Person, Paul Ramsey said, “‘The process of dying’ needs to be got out of the hospitals and back into the home and in the midst of family, neighborhood, and friends. This would be a ‘systemic change’ in our present institutions for caring for the dying as difficult to bring about as some fundamental change in foreign policy or the nation state” (Ramsey 135).

This systemic change is difficult to bring about because it must overcome profound changes in the way families are structured, the way care is provided, and the way society perceives death. Care in hospitals is often synonymous with technology. In the home, “care” implies being with a paid caregiver. Many, if not most, people would prefer to die at home with loved ones, but loved ones are rarely home, and few can afford to take off months or sometimes years to care for a dying person, no matter how strong the bonds of love. What’s more, the dying person is often caught between the medical urging to postpone death at all costs and the caregivers’ discomfort with death. Jack Coulehan captures this tension well:
The term invisible death sounds rather benign, but its invisibility ultimately carries with it a lack of preparation and inability to cope with the savage beast. Savagery emerges from its lair in many guises, among which is the alluring face of medical technology. Closely bound up with the reclusion of death from social life is the embarrassment that the living feel in the presence of dying people (Coulehan 37).

The embarrassment could be relieved by a program of death education, support for home death, and greater acknowledgment and discussion of death in our society. Often, the natural processes of death can be shocking to those who are with a loved one at the time of death. A few short conversations with caregivers about the processes dying people experience would lessen the anxiety and shock of the caregivers when the dying person begins to gag, wheeze, cough up fluids, and so on.

Narratives are filled with evidence that death, distant and medicalized, is not what patients desire. Poet Donald Hall was married to the much younger poet Jane Kenyon. To the surprise of both, it was Kenyon who died first, of leukemia. Hall was in a position to care for her in their home. He was strong enough physically to lift her and strong enough mentally to face death with her, and he narrates his experience in excruciating detail in his book of poetry, Without. Even given his ability to care for her and her desire to die at home, she almost died in a hospital. Hall writes,

When she couldn’t stand, how could she walk?
He feared she would fall
and called for an ambulance to the hospital,
but when he told Jane,
her mouth twisted down and tears started.
‘Do we have to?’ He canceled.
Jane said, ‘Perkins, be with me when I die’ (Hall 41).

Ultimately, he had a change of heart, and she died as she stared at him with eyes full of “dread and love.” This was her desire, and he fulfilled it out of love and devotion for her; surely he benefited as well. Besides the knowledge that he honored the dying wish of his beloved wife, he also had an experience of death that was excruciating but also filled with care and valuable to him. The exquisite pain of the experience shows through his words. Death will always be unwelcome, but, oddly, we may learn that we can “survive” death in the sense that we know everyone will pass on successfully and that we move beyond pain rather than toward it.

Activists have made an effort to educate parents about the dangers of “medicalizing” birth. They claim that birth is a much richer experience when done as naturally as possible in the presence only of the family and a birth attendant, rather than in a hospital room full of strangers and exotic technology. The movement for home death echoes many of the arguments for home birth. Indeed, those who favor home birth are more likely to favor home death as well. Researchers also observe a correlation between areas where home births are common and areas where home deaths are common. These correlations may reflect shared social values, but they may also be a result of the proximity of individuals to hospitals or other care facilities.

A study published in the American Journal of Public Health by Silveira, Copeland, and Feudtner in 2006 attempted to analyze the contribution of social values to home death, as opposed to other factors such as the availability of hospital beds and income. In part, the conclusion stated, “Although we found that hospital bed availability was associated with hospital death at the individual level, the relationship became insignificant at the aggregate level” (6). The study notes that about 90 percent of patients state a preference for dying at home, but only about one-third are able to do so. The correlation with home birth seems to reflect some shared social values, but the analysis is extremely difficult. Still, the authors have suggestions for increasing the number of deaths that occur in the home. They say, “Reducing the proportion of people who die where they had not wanted to die is likely to require programs that address individuals, their society, and its cultural values, and the health system in which they reside” (7). Any attempt to increase the number of home deaths will require a concerted effort to educate both the public and health care professionals, in addition to more alternatives to hospital care.

Cultural Considerations

Those educated to be culturally competent will recognize that the desire of a family to wash and prepare the body of a deceased loved one is common to many cultures. With a few exceptions, the more faith a society places in technology, the more distant its citizens will be from the process of death. A number of obvious reasons present themselves. Many people believe death can be kept at bay longer if their loved ones are in a hospital receiving the best care modern technology can provide. This, of course, implies an unwillingness to accept the inevitability of death and a deep fear of the process of dying, which on this view should be left to the experts.

In an essay published in 1975, Jack Goody describes Western attitudes toward death:
Only the bare bones of death are seen today in Western societies. With smaller households and low mortality, each individual experiences a close death very infrequently, if we understand close in both a spatial and social sense. In childhood, one is often kept away from the immediate facts of death, either by parents (if it is a sibling) or by relatives and friends, if it is a parent. Grief is suppressed rather than externalized (7).

In the last century, we have become more and more distant from death, especially in the United States. Many adults have never seen a corpse. This enables denial of death for a time, but it prepares one poorly for the fact of death when it occurs. In pre-industrial societies it is impossible to avoid the reality of death. The fact that technology and affluence have enabled us in the West to isolate ourselves from death does not mean it is good to do so, for death has not been eradicated, only hidden from view. The psychological and spiritual benefits (however one defines “spiritual”) of experiencing the deaths of others in a loving and close manner would serve us all as a society. We would become habituated, to put it in Aristotelian terms, to care and grieve with greater immediacy and efficacy.

In the past, Americans were much more familiar with death in every aspect, and I would not want to return to the conditions of pre-Civil War America. In this era, as described by Lewis O. Saum in “Death in Pre-Civil War America,” death was so ubiquitous that almost no one had failed to be in the presence of a dying person, often a child. He describes a society in which every letter was opened with dread because it was sure to have news of more death. Letters generally contained graphic details of the effects of disease and dying, and the general populace knew well the signs of impending death. They realized, also, that no one was immune from death. All the same, death was recognized as a chance to behave morally. To die well was to accept the fate of Providence. In addition, most felt that the proximity of death gave an opportunity for spiritual growth and reflection. Saum says, “Philosophy has been referred to as the learning to die, and insofar as humble Americans philosophized they did indeed learn to die” (39). Those who are constantly aware of death tend to choose their actions more carefully than those who deny the existence of death. They lead deeper spiritual lives. Although it may happen, it is not necessary for modern American culture to experience a pandemic or massive loss of life from violence or war, such as existed in pre-Civil War America, to regain familiarity with death. More care for loved ones and more frank discussion about the process of dying could help us regain some of the benefits of our earlier experience with death without reviving the horrifying conditions that provided them.

Philippe Aries describes the transition of death from the home to the hospital. He notes, “Rapid advances in comfort, privacy, personal hygiene, and ideas about asepsis have made everyone more delicate. Our senses can no longer tolerate the sights and smells that in the early nineteenth century were a part of daily life, along with suffering and illness” (Hour 570). Advances in hygiene, comfort, and privacy were certainly goods freely chosen by most Americans. Again, we have no one to demonize; few could foresee the consequences of this shift to what was thought to be life-prolonging and medically superior care (and indeed the hospital was life-prolonging and medically superior). Aries notes that the burden of care had been shared by extended families and neighbors, but the circle began shrinking in the twentieth century. He says, “This little circle of participation steadily contracted until it was limited to the closest relatives or even to the couple, to the exclusion of the children. Finally, in the twentieth-century cities, the presence of a terminal patient in a small apartment made it very difficult to provide home care and carry on a job at the same time” (570). In contemporary America, the burden of care frequently falls to one person, a spouse or a single son or daughter. Perhaps society did not set out to remove itself from death; this removal is merely one of the more dire consequences. Many think it morbid to talk openly of the need to be present at the death of a close relative or friend. Death has been placed behind the privacy curtain of a hospital. Aries notes, “The hospital is the only place where death is sure of escaping a visibility—or what remains of it—that is hereafter regarded as unsuitable and morbid. The hospital has become the place of solitary death” (571).

It is not bad manners to discuss death openly. My teenaged son is interested in mortuary science and funerary practices. He has read books on the subject, visited funeral museums on two continents, and become something of an expert. As a result, his school counselors called his parents into a meeting to discuss the possibility that he was psychologically disturbed or suicidal. It was incomprehensible to the mental health professionals of the twenty-first century that an interest in and knowledge of death could be expressed in a psychologically healthful manner.

Just how closeness to death enriches our lives is as difficult to define as exactly how art or the study of the humanities enriches our lives. What we take away from a death experience may be as varied as what audience members take away from a tragic drama or a moving symphony. Simone de Beauvoir describes her mother’s death, saying, “Cancer, thrombosis, pneumonia: it is as violent and unforeseen as an engine stopping in the middle of the sky. My mother encouraged one to be optimistic when, crippled with arthritis and dying, she asserted the infinite value of each instant; but her vain tenaciousness also ripped and tore the reassuring curtain of everyday triviality” (526). Certainly some of us would rather stay behind the curtain of everyday triviality and enjoy a greater distance from death.

In 1987, my grandmother’s brother died of bone cancer. Within six weeks, my grandfather had succumbed to lung cancer. Only weeks later, a fire caused by lightning destroyed her home. My uncle, a country preacher, told her how wonderful it was that God had been with her throughout this horrible ordeal. He seemed desperate to regain some “everyday triviality,” but death has a way of forcing a deeper meaning on us. Indeed, the conversation we have with death throughout our lives informs all that we do, and we harm ourselves when we cough politely and dismiss ourselves at the earliest convenience. A meaningful life demands more of us. In the words of Eric Cassell, “In the care of the dying, it may give back to the living the meaning of death” (48). Although we know that death is still inevitable, we want to deny its existence. Aries says, “The tears of the bereaved have become comparable to the excretions of the diseased. Both are distasteful. Death has been banished” (580). It would be difficult not to celebrate our success at pushing death a little further away. No longer do parents withhold attachment to their children until they feel more certain the children may live. No longer is calamity floating to us on every breeze. But this tide of great accomplishment separates us from our humanity and meaning, and death, still, is not vanquished but merely held at bay.

Patient Autonomy

Most patients express a wish to die at home rather than in a hospital among strangers, yet most people in America die in a hospital. Many people die in emergency situations, and dying in a hospital in these cases presents ethical qualms for virtually no one. Others, however, desire to die at home but get caught up in the fight against death rather than in care for the dying. Medical interventions and efforts to prolong life take precedence over providing the care the patient has requested.

Jeffrey Stout provides a typical narrative of how death occurs in America:

My maternal grandfather, for all his traditional skill in carrying out his own dying, did not die in his bedroom at home. Like the vast majority of Americans today, he died in a hospital, which he experienced as a sprawling bureaucracy, run by managers, staffed by technical experts, and clogged with advanced technology he could neither understand nor do without . . . . After days of frustration, he finally called a couple of doctors into his room and vented his moral outrage (Stout 275).

The death of Stout’s grandfather is what Philippe Aries refers to as the bad death or ugly death. In his description of the bad death, he says,

This is always the death of a patient who knows. In some cases he is rebellious and aggressive; he screams. In other cases, which are no less feared by the medical team, he accepts his death, concentrates on it, and turns to the wall, loses interest in the world around him, cuts off communication with it. Doctors and nurses reject this rejection, which denies their existence and discourages their efforts (587).

The usual demon of bioethics, the paternalistic physician, is not the problem in this case. Any doctor would be reasonable in assuming that patients brought to the hospital were brought there for care. By pushing death into the hospital, we have created an untenable situation for medical teams. Any given professional providing care for Stout’s grandfather would probably agree that a home death would be preferable. This is true for most patients with long-term, terminal illnesses. Generally, patient autonomy is not taken away by paternalistic hospital staff; it slips into a bureaucracy created to fight death, not accommodate it. Patients themselves or their caregivers may voluntarily check into a hospital when a medical crisis occurs without foreseeing that they are in effect asking the doctors to treat their condition rather than to allow death its natural progression. Indeed, when a patient is presented to hospital staff, they must assume that the patient is seeking treatment to prolong life. To withhold treatment in this case could lead to a charge of negligence.

Patient autonomy may also be limited by external factors. To take the extreme and most obvious example, the wish to die at home can only be accommodated for those who have a home. The wish to die among family members can only be afforded to those with loving family members. The patient’s wishes can be respected but cannot be fulfilled any more than the common wish to marry a billionaire with eternally youthful good looks. The mere act of wanting something does not make it possible or obligatory. Patients may, of course, refuse treatment. The easiest way to avoid treatment, however, is to stay away from treatment providers. It is difficult to break old habits, though, and most of us are in the habit of going to doctors and hospitals when we feel bad.

In some cases, patients may not only express a preference for where they will die but also for how their body will be prepared once they have passed. William May says, “While the body retains its recognizable form, even in death, it commands a certain respect. No longer a human presence, it still reminds us of that presence which once was utterly inseparable from it” (139). Does respect for autonomy extend beyond death? Philosopher Jeremy Bentham asked that his remains be preserved and kept at University College London as his “Auto Icon.” Perhaps he would not be thrilled with the results of the original efforts at preservation, but he was preserved, and his remains are still displayed at the university. Would we be violating Bentham’s autonomy by destroying or burying his remains? More to the point, can caregivers be accused of violating the autonomy of the dead, recently deceased or otherwise? The first impulse is to say any reasonable request should be respected beyond death, but often our selfish ends make us think differently. Franz Kafka asked that all his manuscripts be destroyed when he died, but his friend Max Brod never carried out the request. As a result, Kafka has become a renowned author, and we may feel Brod did him a great favor by failing to carry out his dying wish. We could also take the view that Kafka was harmed by having his wish ignored.

Thomas Nagel gives some justification for respecting the wishes of the dead. He says, “When a man dies we are left with his corpse, and while a corpse can suffer the kind of mishap that may occur to an article of furniture, it is not a suitable object of pity. The man, however, is. He has lost his life, and if he had not died, he would have continued to live it, and to possess whatever good there is in living” (7). The person who has died is still of value to us. Our obligations do not evaporate on the occasion of death. Decisions about whether a patient’s autonomy can be violated after death need clarification, but most families try to honor the free choices of their loved ones. This is more easily done when the patient is not handed over to strangers in a hospital. The exception is when the recently dead wished to donate usable organs. In cases of lingering illness, this is usually not a concern or even an option, but if donation is desired and possible, a hospital death may be recommended.

Caregiver Autonomy

Medical staff sometimes suggest (and on occasion the suggestions may feel like coercion to caregivers and patients) that patients be transferred to long-term critical care or hospice in spite of the preference of both the caregivers and the patients to have the death occur at home. Indeed, caring for a dying patient can be extremely traumatic and physically demanding, so the concerns of medical staff for the caregivers are understandable. One aspect of care for demented patients that is often overlooked is reduced inhibition. Having to lift and bathe an adult patient is physically demanding, and cleaning feces and urine and changing diapers can be psychologically disturbing, but watching one’s parents engage in extremely inappropriate and embarrassing sexual behavior can be completely demoralizing. Given these realities, it is easy to understand why doctors and nurses would advise caregivers to consider long-term care or hospice over home care for the dying patient. Nonetheless, a beneficent denial of autonomy is still a denial of autonomy.

A 2005 study published in Palliative Medicine examined the predictors of a home death. Not surprisingly, it found that home deaths were most likely to occur when the dying person wanted to die at home, when the physician visited the home during the last month of life, and when the care recipient had a healthy caregiver. The authors note that ethicists tend to focus on individual autonomy but that the autonomy of the caregiver cannot be ignored in this context. The article says, “The emphasis on individual autonomy overlooks the communal nature of death. When considering who is responsible for meeting the needs of the dying person, the informal caregiver plays a significant role. The choice of dying at home has profound consequences on informal caregivers, typically the wife or daughter” (497). Efforts to increase the number of home deaths must consider the need for support for caregivers and acknowledgment of the autonomy of caregivers. Sometimes the autonomy of the patient or the caregiver must be compromised, so solutions should be sought that respect both. Some examples might include in-home hospice or palliative care, caregivers’ day-out programs, or even temporary out-of-home services where a patient could be cared for outside the home for short periods to provide some brief respite for the caregiver. Ideally, families could work together to provide such solutions themselves, but that is not always possible, especially with smaller families. In some cases, a hospital death may be the best option. In such a case, the authors of the study mentioned above suggest that we work to improve “the environmental qualities of institutions to enable them to offer the same things that people value about home deaths” (498). The worst any dying person should have to endure would be a death in a hospital, in a quiet room, with familiar and consistent caregivers. Proper education and social support should help caregivers cope with the disturbing behavior of dying and demented patients. Home death is unlikely to be successful when the needs of the caregivers are ignored.

In 2004, Cindy Cooley published an essay in the International Journal of Palliative Care exploring why patients in the United Kingdom are often unable to choose where they die. Among other reasons, caregivers have difficulty obtaining hoists and other equipment that would enable them to care for the dying in the home. Providing equipment for home care would be cheaper than admitting patients to hospitals, but, again, bureaucracy works against the care team, whether it is made up of family members or palliative care nurses.

I recently interviewed a woman in Galveston, Texas, who cared for her father for three years as he died of Alzheimer’s disease. He died in September 2006. During one medical crisis, she took him to the hospital for help. When he was to be released from the hospital, the staff told her he would need to be moved to a long-term care facility. The woman demurred, explaining that she would care for him at home and that she had power of attorney. She was told that she would not be able to provide appropriate care and that it would be too much of a strain on her. She explained that she would prefer to decide for herself when she was overtaxed rather than leave the decision to strangers. A few months later, her father died in her arms at home, as both he and she had wished. The medical staff members at the hospital were correct, though, that the strain was tremendous, and the process took a visible toll on her physically and mentally. Not only was the strain of caring for him sometimes surprisingly difficult, but she was not prepared for the bodily occurrences at the end. She had imagined holding him as he gently passed over to a state of calm, peaceful death. She was surprised by his convulsions, gasping, and expelling of mucus and other fluids. She struggled to comfort him in his last moments while also cleaning and restraining him. In spite of all this, she says she would do the same thing all over again.

One reason such situations occur is that medical emergencies such as the one described above create chaos. Giving further reasons that patients do not die in the place of their choice, Cooley writes, “Relatives may panic; they may be elderly or on their own and unsure if the end is imminent when the patient is gasping or panicking. Fear at the end is often enough to galvanize the relative into calling the emergency services . . . Doctors who are unfamiliar with the patient or unsure what to do to relieve distress may see the hospital as the safest option.” The result is that the patient dies in the hospital rather than at home. Without sufficient education, it is impossible for caregivers to distinguish between the end of life and a passing crisis that requires intervention to provide comfort to the patient. This distinction is often difficult even for professional caregivers, so it is understandable that family members would have trouble making a decision while watching a loved one in obvious discomfort or even agony.

The role of the caregiver complicates matters of patient autonomy in all of these cases. Even a caregiver who wishes to honor the choice of the patient may weaken toward the end. The physical and psychological demands may be much greater than anticipated. Medical crises may be much more traumatic than anyone could imagine. Many dream of quietly holding a loved one and talking softly as the person gently slips into the comfort of death, but death is rarely so accommodating. Even before such surprises, fewer caregivers than patients have a preference for the death to occur at home. They may agree to have the death in the home only out of respect for a loved one. These caregivers may have enormous anxiety about watching someone die to begin with. Any sign of “emergency” that gives them reason to call an ambulance can relieve them of an accepted but unwanted burden.

The authors of the Canadian study cited above frame this tension as one of competing autonomous choices: “A central ethic in palliative care is the view that how people die should be grounded in self-control and choice. However, the emphasis on individual autonomy overlooks the communal nature of death.”

In her book, Healing the Dying, Melodie Olson gives advice to caregivers. She advises caregivers to recognize their physical limitations and get help when needed. This is good advice, but it assumes that help is available. Most caregivers probably do not endure the strain and rigors of care alone by choice. It is difficult to imagine caregivers turning away genuine offers of support. Olson also advises caregivers to sleep as much as needed. Again, she assumes that patients will be cooperative and put their needs on hold long enough for caregivers to get adequate rest. She also advises, among other things, taking advantage of respite care (182). Rather than viewing her list as good advice for caregivers, it would make more sense to view it as a list of caregiver needs. Both patients and caregivers sometimes opt for hospice or hospital care as the time of death approaches. One might guess this is caused by the unexpected hardship of a home death. Patients will be much more likely to die at home if caregivers are provided respite care, opportunities to sleep and exercise, and help with strenuous physical tasks. Home death is more likely to occur when home caregivers are given support, information, and alternative access to services. A home visit from a physician may not even be necessary; visits from social workers, nurses, or other professionals may prove extremely useful in improving the success of home deaths in the U.S.

Death with Dignity

Jack Coulehan describes two distinct movements for death with dignity. One is rooted in a philosophical tradition defending self-determination and individual rights, specifically the right to euthanasia and assisted suicide. This movement promotes social changes that would allow patients to avoid being kept alive beyond their desired lifespan, and it promotes aid in dying. The other movement to promote death with dignity focuses on relationships between the dying person and others. This movement recognizes the communal nature of death and seeks to shine a light on the “invisible death” described by Aries. The relational concept of death with dignity described by Coulehan advocates a more humane approach to the “art of dying,” even if it must be practiced in a hospital.

This relational movement for death with dignity does not focus solely on the dignity of the patient. We are concerned now for the dignity of caregivers, family, friends, and society. Instead of being invisible, death will be publicly recognized and mourning will no longer be considered impolite or embarrassing. Society will have tolerance for the concerns of the dying patient and the grief and passion of the caregivers. We will recognize that leading an authentic life will require an acceptance and recognition of death.

A return to communal death and mourning will not come easily, but small steps can bring continual improvement. Already, hospice care is becoming more available and bringing families into contact with dying relatives. This is not a complete answer, but it is an improvement for a society so accustomed to hiding death. In addition to hospice care, the availability of medical equipment in the home can help facilitate a dignified death without undue strain on caregivers. Respite care, provided in the home or in a medical facility, can make it possible for caregivers to keep loved ones at home when it might otherwise be necessary to put them in a hospital. Proper death education can help caregivers recognize the signs of impending death without panicking and calling an ambulance to relieve the agony of the dying person. As we learn to better prepare for death as a society, as families, and as individuals, we will live fuller and deeper lives with greater appreciation for the sensations of earthly existence.

Language and the Content of Belief

If language is a core feature of consciousness, our conscious thoughts, expressed in language, should accurately reflect our belief states, and we should be able to accurately determine the contents of at least our own beliefs. Further, we should be able to freely affect our belief states through rational analysis. It is this ability that creates in us a sense of moral agency and responsibility. Through rational analysis and argument, we can form beliefs that are appropriate and honorable. If we assume other humans are more or less like us, we may also be able to extend this ability to other humans through inference and analogy. Ascribing content to the beliefs of non-human animals would be a riskier business, unless we found animals that could use our language. If language is a core feature of consciousness, then a machine that could use human language as a human might use language would have achieved human consciousness. On the other hand, if language is a more distal feature of consciousness, ascribing content to our own beliefs might be as risky as ascribing content to the beliefs of other humans, animals, and machines. Our moral decisions may be determined by something other than rational analysis. Our moral views may be the product of evolution, not reason. I will argue that many of our beliefs and thoughts are unconscious, and that we ascribe content to our own beliefs by the same inferences we use to ascribe content to the beliefs of others. To say we know our own minds is only to say that we are aware of our minds, not to claim that we know the specific content of our beliefs.

Human language brings clarity and understanding to human thoughts and beliefs. In fact, many have argued that without language, humans have no capacity for thought or belief. Descartes expresses a firm conviction that language is necessary for any thought:

There has never been an animal so perfect as to use a sign to make other animals understand something which bore no relation to its passions; and there is no human being so imperfect as not to do so. . . . The reason animals do not speak as we do is not that they lack the organs but that they have no thoughts. It cannot be said that they speak to each other but we cannot understand them; for since dogs and some other animals express their passions to us, they would express their thoughts also if they had them. (CSMK 575)

While the idea that language is necessary for the emergence of belief has been accepted for centuries, philosophers and others have begun to use the term “belief” more permissively, making the assertion much less obvious. While saying a cow had beliefs may once have implied that the cow subscribed to some creed or doctrine, the claim has a much more mundane connotation in contemporary philosophy. For example, using the language of belief/desire psychology, we might say that a group of cows and humans gathering under a cover after hearing a thunderclap share a common belief that it is about to rain. We will also say they desire to stay out of the storm. Cows do not need the ability to express their beliefs to want to avoid a storm that appears to be imminent. In this case, it is easy to describe the cows’ behavior using the language of belief/desire psychology, but it is also easy to imagine that the humans under the cover are in a far different position than the cows; they understand their position, have plans and fears for the future, and have a sense of what it is right and wrong to do. We want to say the humans are conscious and the cows are not. We know the humans are conscious because we assume them to be more or less like us, and we are conscious. Language expresses our thoughts and beliefs, and we assume that other humans use language and experience consciousness as we do.

Language does more than provide evidence of consciousness, though; it is the structure of consciousness. A sophisticated study of human language and behavior should produce a powerful and accurate psychological theory. If language sets humans apart from machines and animals, then language is quite likely the feature of human consciousness that produces moral agency and responsibility. If animals and machines are not capable of beliefs and thoughts, then humans are the only known creatures to have any concept of moral responsibility. However, if consciousness is not unique to humans, or if language is not the stuff that makes consciousness, then we may not be able to construct an adequate description of beliefs and desires, much less moral agency.

Language of Machines

Daniel Dennett argues that we can use language, through the “intentional stance,” to describe the beliefs of people, animals, or artifacts, including a thermostat, a podium, or a tree (Brainchildren 327). It is easy to construct sentences to describe the beliefs of these objects (“The thermostat believes it is 70 degrees in this room”). If the thermostat is working properly and conditions are more or less normal, we should be able to predict the temperature based on the actions of the thermostat, or we should be able to predict the actions of the thermostat by knowing the temperature in the room. We recognize the possibility of error, however. Because the thermostat may be broken, we are likely to say, “According to the thermostat, . . .” If the room does not feel warmer or cooler than the thermostat indicates, then we assume all is well. If we want to know the true nature of belief, though, the ability to describe the beliefs of a thermostat is outrageously unsatisfying. Unless the thermostat is able to describe its own beliefs using language, we are loath to even suggest it has beliefs.

But given the capacity for human language, machines might appear to have beliefs and desires similar to human beliefs and desires. In fact, if a machine could use human language in a manner indistinguishable from human use, it is difficult to see how the consciousness of the machine could be denied with any certainty. Of course, the claim that such a machine is impossible goes back at least to Descartes, who wrote, “It is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do” (CSM II 140). Surely Descartes did not imagine 21st century computer programs when he provided this early version of the Turing Test (in which a computer is held to be conscious if it can master human conversation), but so far his challenge has not been met.

John Searle’s Chinese room argument challenges us to accept that even a computer that could pass the Turing Test would not thereby be proven conscious. Although Searle does not deny that machines could someday be conscious, he argues that a language program would not be proof of it (Searle 753-64). Our best reason for believing the machine is not conscious is that it is not similar enough to a human to be considered conscious by analogy. Even if we can’t deny beliefs and desires to a machine with certainty, we are equally ill-equipped to accurately ascribe beliefs and desires to machines, or trees, or stones.

Beliefs of Non-Human Animals

We are more likely to feel confident ascribing beliefs to non-human animals for several reasons: they share at least part of an evolutionary history with humans, they share genetic material with humans, they exhibit behaviors similar to those of humans, and they have a physiological structure similar to that of humans. As a result, many humans feel comfortable making inferences about non-human animal experience and consciousness based on analogy with humans.

David Hume claims that we can make many inferences about animals based on the assumption that animals are analogous to humans in many respects. Similarly, we can make inferences about humans based on the observation of animals. For Hume, this is compelling evidence that humans are not as rational as we like to think. Animals make many of the same inferences as humans without the benefit of scientific or philosophical reasoning. Our philosophical arguments are used only to support beliefs we share with less rational animals. While we may think we are using reason, we are only providing explanations for beliefs built by habit or biology. He says,

It is custom alone, which engages animals, from every object, that strikes their senses, to infer its usual attendant, and carries their imagination, from the appearance of the one, to conceive the other, in that particular manner, which we denominate belief. No other explication can be given of this operation, in all the higher, as well as lower classes of sensitive beings, which fall under our notice and observation.

Hume clearly feels we can ascribe beliefs to non-human animals. In particular, we can assume that animals believe in cause and effect. In contemporary terms, our beliefs may be formed by evolution or experience, but our own understanding of those beliefs is expressed through rational explanation. Hume’s assumption that it is possible to infer anything at all about humans based on an analogy with animals is, of course, unproven. However, his description brilliantly illustrates the possibility that beliefs we hold to be founded in reason are merely the result of habit, while reason is only our way of expressing those beliefs. This is enough to warn us of the perils of ascribing content to beliefs based on our descriptions of our own beliefs. It is at least possible that there is a great divide between what we believe and what we think we believe.

In his paper “Do Animals Have Beliefs?,” Stephen Stich examines the difficulty of ascribing content to animal beliefs. For Stich, the problem of ascribing content to animal beliefs is serious enough that we may fear ascribing content to any beliefs at all. Stich offers two possible accounts of animal belief and belief ascription, ultimately rejecting both (Animals 15-28).

The first possibility is that animals do have beliefs, and we can ascribe content to those beliefs by observing animal behavior (in the manner of Hume). Stich contends, “When we attribute a belief to an animal we are presupposing that our commonsense psychological theory provides . . . a correct explanation of the animal’s behavior” (Animals 15). Indeed, desires and beliefs can provide a foundation for describing the causes of animal behavior. If animals are analogous to humans, their beliefs are formed by perception and inference. Seeing, hearing, and smelling food in a dish, the dog comes to believe there is food in the dish, just as there is every morning. This belief gives rise to a desire to gain access to the dish. Once an animal has formed beliefs, these beliefs can generate other beliefs.

For example, some dogs have a desire to chase squirrels. Upon seeing a squirrel in the back yard, such a dog will bark at the door, because this particular dog believes barking at the door will cause a human to come and open the door. (We could describe an infinite array of beliefs. For example, dogs believe squirrels should be chased. Dogs believe humans should open doors for dogs. Dogs believe barking at doors is more effective than scratching them.)

According to Stich, the appeal of a view based on beliefs and desires is that it is the most intuitive explanation for human behavior. Further, it is hard to imagine that we could explain human behavior through belief/desire psychology without being able to explain animal behavior in the same way. If folk psychology fails in one case, it appears to fail in the other.

The second possibility is that animals do not have beliefs. On this view, it is impossible to ascribe content to animal beliefs; therefore, it is meaningless to talk about animals having beliefs. If a dog has no concept of what a bone is, then it is impossible to say that the dog has any beliefs at all about bones. Without language, it is impossible to ascribe belief to animals. This raises the question of whether language actually enables us to ascribe content to beliefs accurately. Still, if we can’t ascribe content to the beliefs of animals, then we may run into trouble ascribing content to the beliefs of humans.

Stich considers the solution offered by David Armstrong. According to Armstrong, although animals lack the concepts we have, we can ascribe content to animal beliefs in a “referentially transparent” (de re) manner. A dog may respond to a bone in the same manner we would expect it to respond if it had our concept |bone|. Armstrong acknowledges that we cannot talk about animal beliefs in a way that is “referentially opaque” (de dicto). To do this, we would have to know that the dog had a concept analogous to our concept of |bone|, which is impossible. Armstrong claims, however, that the dog does have some concept of its own corresponding to |bone|, and enough research in animal psychology might eventually give us insight into animal concepts. For Armstrong, our de re discussions of animal concepts presuppose that there are correct de dicto beliefs on board the animal that correspond to our de re descriptions. If no correct de dicto concepts exist, then our efforts are only a way of describing animal behavior, not a way of understanding animal belief (19-21).

On Armstrong’s view, we will eventually gain enough knowledge of animals to accurately ascribe content (de dicto) to animal beliefs. Stich’s most serious objection to Armstrong’s argument is that we can only ascribe contents of beliefs to subjects that “have a broad network of related beliefs that are largely isomorphic to our own” (27). We cannot ascribe content to the beliefs of any being that does not share our concepts, and we have no way of knowing what concepts animals share. For example, even if we understand all the conditions necessary for a dog to react to a bone in front of him, it will make no sense to say, “Fido believes a bone is in front of him,” unless we assume Fido has a concept for “in front of,” among others. Following Armstrong’s suggestion, it may be possible to determine exactly how a dog would react to a bone or bone-like object in every conceivable situation, so that we can predict the dog’s behavior with 100 percent accuracy. We may identify all the properties of the human concept |bone| and all the properties of the dog concept |bone’|. We are not out of the woods, though, as the concept |bone’| is not the dog’s concept but our concept of the properties of the dog’s concept. We still don’t know what concept the dog has on board.

For Stich, a larger problem may be that we do not know what concepts other humans share. If we follow the reasoning that we can only claim beings have beliefs if those beliefs have specifiable content, and that content is only specifiable for beings with concepts isomorphic to our own, we are forced to imply that humans with concepts radically different from our own have no beliefs at all. Examples of such humans would include people from different times or cultures. Indeed, anyone from a different language community would be in danger of being declared wholly without beliefs.

Stich concludes that it is impossible to decide whether a belief without specifiable content is a belief at all, and it is impossible to verify content for either human or non-human animals. He claims, “If we are to renovate rationally our pre-theoretic concept of belief to better serve the purposes of scientific psychology, then one of the first properties to be stripped away will be content” (27). Folk psychology, based on the attribution of content to beliefs and desires, is inadequate for a scientific account of belief.

Belief and Other Minds

If there is any possibility of accurately attributing belief to any other minds, it would seem that human minds, with a capacity for human language, are the best hope. We recognize that a human can have a mind full of desires, beliefs, and rational arguments without ever expressing them. In Kinds of Minds, Daniel Dennett points out that we know this from our own case: we sometimes have beliefs and desires that go unexpressed, and we can imagine never expressing any of them, or at least misleading people as to what they are. Actually ascribing content to the beliefs of humans is risky business, then, but at least we feel confident that humans are generally able to communicate beliefs and desires roughly isomorphic to our own. We believe humans have minds, and their use of language is the best evidence of it (Kinds 12).

Because humans use language, we show them greater moral concern than we show other animals. The closer their language is to our own, the more concern we show them. Wittgenstein famously said that if a lion could talk, we couldn’t understand it. Dennett suggests that this lion would be the first in history to have a mind and would be a unique being in the universe. We assume that any animal that can use language in the manner of humans has a mind (Kinds 18).

The problem with this assumption is that we might be easily fooled. Another human may use language exactly as I do, express all the beliefs I have, exhibit all the behavior I exhibit, and yet be acting deceptively or robotically. When serial killers and pedophiles are arrested, interviews with friends, family members, and coworkers generally reveal that they had made grossly mistaken ascriptions of beliefs and desires to the criminals. It is the trust we place in members of our language community that enables us to be duped in such horrendous ways. We should perhaps be less confident that members of our language community have beliefs and desires isomorphic to our own.

But even if some members of the language community are deceptive, surely they at least have minds—at least have some beliefs and desires, even if we can’t know the content. If we encountered a robot with a human appearance and the ability to use human language effectively (something like the fictional Stepford Wives), would we assume the robot to have a mind? Such robots are being developed, but none yet exists (see Dennett’s discussion of Cog[1] in Kinds of Minds, page 16), so the question can’t be answered empirically. In developing such a robot, we may come to understand exactly how a mind develops and comes into being. On the other hand, it is possible to imagine such a robot existing with no mind and no human feeling at all. If we can imagine a robot as an automaton, why not imagine that at least some humans are automata? Perhaps their use of language is as unconscious as our basic reflexes. Their bodies simply produce language naturally, with no self-awareness and no beliefs or desires. While we assume this is not the case, it is impossible to determine with any certainty.

What We Know of Our Own Minds

If nothing else is certain, we must know the contents of our own minds. Descartes was unable to doubt the existence of his mind, and it seems quite impossible for me to doubt the thoughts I am thinking right now. As I produce thoughts, I am aware of them, and it is impossible for me to escape them. My thoughts, formed by language, express the contents of my beliefs and desires precisely, because that is how I have intended to express them to myself. I can’t imagine I am deceiving myself or that I am an automaton. I am a thinking being immersed in my conscious life. If the language I use in thinking expresses my beliefs accurately and rationally, then this is what enables me to develop moral principles and behave in a morally responsible manner.

But what of our “unconscious” thoughts? Hume demonstrated that our belief in cause and effect seems to exist in a precognitive state. We don’t use language and reason to develop a belief in cause and effect—in at least some cases, language merely expresses what is built into us. Our moral reasoning, though, is supposedly based on careful consideration and painstakingly crafted arguments. Surely our language is not expressing a precognitive instinct or intuition. In Kinds of Minds, Dennett quotes Elizabeth Marshall Thomas saying, “For reasons known to dogs but not to us, many dog mothers won’t mate with their sons” (10). Dennett rightly questions why we should assume that dogs understand this behavior any better than humans understand it. It may just be an instinct, produced by evolution. If the dog had language, it might come up with an eloquent argument about why incest is wrong, but the argument would seem superfluous—just following the instinct works well enough.

By the same token, human moral arguments may do nothing more than express or at best buttress deeply held moral convictions instilled by evolution or experience. In a Discover magazine article titled “Whose Life Would You Save?” Carl Zimmer describes the work of Princeton postdoctoral researcher Joshua Greene. Greene uses MRI brain scans to study which parts of the brain are active when people ponder moral dilemmas. He poses various dilemmas familiar to undergraduate students of utilitarianism, the categorical imperative, and other popular moral theories.

He found that different dilemmas trigger different types of brain activity. He presented people with a number of dilemmas, but two of them illustrate his findings well enough. He used a thought experiment developed by Judith Jarvis Thomson and Philippa Foot. Test subjects were asked to imagine themselves at the wheel of a trolley that will kill five people if left on course. If it is switched to another track, it will kill one person. Most people respond that they would switch to the other track, sacrificing one life to save five, apparently invoking utilitarian principles. In the next scenario, they are asked to imagine they can save five people only if they push one person onto the tracks to certain death. Far fewer people are willing to say they would push anyone onto the tracks, apparently invoking a categorical rule against killing innocent people. From a purely logical standpoint, the two questions should have consistent answers.

Greene found that some dilemmas seem to evoke snap judgments, which may be the product of thousands of years of evolution. He notes that in experiments by Sarah Brosnan and Frans de Waal, capuchin monkeys that were given a cucumber as a treat while other monkeys were given grapes would refuse to take the cucumbers and would sometimes throw them at the researchers. Brosnan and de Waal concluded that the monkeys had a sense of fairness and the ability to make moral decisions without human reasoning. Humans may also make moral decisions without the benefit of reasoning. It appears evolution has created in us (at least in those who are morally developed) a strong aversion to deliberately killing innocent people. Evolution has not prepared us for other dilemmas, such as whether to switch trolley tracks to reduce the total number of people killed in an accident. These dilemmas instead call for logical analysis and problem solving. Zimmer writes, “Impersonal moral decisions . . . triggered many of the same parts of the brain as nonmoral questions do (such as whether you should take the train or the bus to work)” (63). Moral dilemmas that require one to consider actions such as killing a baby trigger parts of the brain that Greene believes may produce the emotional instincts behind our moral judgments. This explains why most people appear to have inconsistent moral beliefs, behaving as a utilitarian in one instance and as a Kantian the next.

It may turn out that Hume was correct when he claimed, “Morality is determined by sentiment. It defines virtue to be whatever mental action or quality gives to a spectator the pleasing sentiment of approbation” (Rachels 63). His claim is that we evaluate actions based on how they make us feel, and then we construct a theory to explain our choices. If the theory does not match our sentiment, however, we modify the theory—our emotional response seems to be part of our overall architecture. The work of philosophers, then, has been to construct moral theories consistent with our emotions rather than to provide guidance for our actions.

Language gives us access to our conscious thought. Language permits us to be aware of our own existence and to feel relatively assured that other minds exist as well. It is through language that we make sense of ourselves and the world. We may be deceived, though, into thinking that thought is equivalent to conscious thought. Much of what goes on in our mind is unconscious. Without our awareness, our mind attends to dangers, weighs risks, compensates for expected events, and even makes moral judgments. Evolution has provided us with a body that works largely on an unconscious level. However, humans, and perhaps some nonhuman animals, have become aware of their own thoughts, and this awareness has led to an assumption of moral responsibility. This awareness should not be taken to prove that we are aware of the biological facts that guide our moral decisions.

Stephen Stich explores the development of moral theory in his 1993 paper “Moral Philosophy and Mental Representation.” In the essay, Stich claims that while most moral theories are based on establishing necessary and sufficient conditions for right and wrong actions, humans do not mentally represent concepts in terms of necessary and sufficient conditions. He says, “For if the mental representation of moral concepts is similar to the mental representation of other concepts that have been studied, then the tacitly known necessary and sufficient conditions that moral philosophers are seeking do not exist” (Moral 8). As an alternative, he suggests that moral philosophers should focus on developing theories that account for how moral principles are mentally represented. He writes:

These principles along with our beliefs about the circumstances of specific cases, should entail the intuitive judgments we would be inclined to make about the cases, at least in those instances where our judgments are clear, and there are no extraneous factors likely to be influencing them. There is, of course, no reason to suppose that the principles guiding our moral judgments are fully (or even partially) available to conscious introspection. To uncover them we must collect a wide range of intuitions about specific cases (real or hypothetical) and attempt to construct a system of principles that will entail them. (8)

On this view, moral theories represent beliefs that are not only unconscious but unavailable to the conscious mind. To determine the content of our own moral beliefs, then, we must examine our own moral decisions and infer the content of our beliefs. On this approach, humans end up deciphering their own beliefs in much the same manner that Brosnan and de Waal determined the moral beliefs of capuchin monkeys. Not only does language fail to give a full accounting of our belief states, but our conscious thoughts may be an impediment to determining our actual beliefs, so that we must consider prelinguistic or nonlinguistic cues to discover what we actually believe.

Conclusion

When we ascribe content to the beliefs of other beings, including human beings, we assume those beings have mental experiences roughly isomorphic to our own. Based on our own experiences and beliefs, we make inferences about the beliefs of other beings. The more a being resembles us, the more confident we are in making such inferences. As a result, we are most comfortable ascribing content to the beliefs of humans who speak the same language we speak. We are even more comfortable if the person is of the same gender and social class. Even in these cases, though, we may be too optimistic. Our own beliefs may be as inaccessible to us as the beliefs of our distant neighbors or monkeys or lobsters. Ascribing content to beliefs may be futile. On the other hand, we seem to survive quite well assuming that we know our own beliefs and that others have beliefs that are more or less transparent to us. We may be able to use the language of belief/desire psychology as a heuristic to help us understand, manipulate, and cope with our behavior and the behavior of others. Although language is a distal feature of consciousness and may not accurately reveal the content of our beliefs, it may enable us to join a community of thinkers and form successful relationships with other beings.


Works Cited

Dennett, Daniel C. Brainchildren. Cambridge: MIT P, 1998.

—. Kinds of Minds. New York: Basic Books, 1996.

Descartes, René. The Philosophical Writings of Descartes: Volume II. Trans. John Cottingham, Robert Stoothoff, and Dugald Murdoch. Cambridge: Cambridge UP, 1985.

—. The Philosophical Writings of Descartes: Volume III. Trans. John Cottingham, Robert Stoothoff, Dugald Murdoch, and Anthony Kenny. Cambridge: Cambridge UP, 1991.

Hume, David. An Enquiry Concerning Human Understanding. Vol. XXXVII, Part 3. The Harvard Classics. New York: P.F. Collier, 1909–14; Bartleby.com, 2001. www.bartleby.com/37/3/. [May 11, 2004].

—. “Morality as Based on Sentiment.” The Right Thing to Do: Basic Readings in Moral Philosophy. Ed. James Rachels. Boston: McGraw Hill, 2003.

Searle, John. “Is the Brain’s Mind a Computer Program?” Reason at Work. Eds. Steven Cahn, Patricia Kitcher, George Sher, and Peter Markie. Wadsworth, 1984.

Stich, Stephen P. “Do Animals Have Beliefs?” Australasian Journal of Philosophy 57.1 (1979): 15-28.

—. “Moral Philosophy and Mental Representation.” The Origin of Values. Ed. Michael Hechter, Lynn Nadel, and Richard E. Michod. New York: Aldine de Gruyter, 1993. 215-28. http://ruccs.rutgers.edu/ArchiveFolder/Research%20Group/Publications/MPMR/MPAMR.html. [May 11, 2004].


[1] Dennett is working with Rodney Brooks, Lynn Andrea Stein, and a team of roboticists at MIT to develop a humanoid robot named Cog. Dennett says, “Cog is made of metal and silicon and glass, like other robots, but the design is so different, so much more like the design of a human being, that Cog may someday become the world’s first conscious robot” (Kinds 16).