Thursday, August 15, 2013

From a History of Science Textbook

"Modern astronomers regard the planet Earth as being an undistinguished planet, orbiting a fairly unremarkable star on the outer fringes of an unexceptional galaxy...."

They are implying that the Earth's physical size and location mean that it could not be particularly important.

How would they react if we used the same sort of reasoning about Newton's Principia?

"Newton's Principia is an average-size book written in a country that is on the outer fringes of a small continent."

Would the book be more important if it were a very long book written in the center of the largest continent - say, in Kazakhstan rather than in England?

The quotation is from Bowler and Morus, Making Modern Science, p. 277. 

Friday, October 19, 2007

New Scientist On Ethics and Religion

For a perfect example of how flimsy scientists' reasoning tends to be when they deal with ethics and religion, look at the article in New Scientist featured on the magazine's cover under the title of "If morality is hard-wired in the brain, what's the point of religion?"

This cover-title pretty much sums up its argument, and its summary in the magazine's table of contents repeats the same idea: "What good is God? You don't need religion to be a moral person, so why is it part of almost all human cultures?"

The article describes the work of cognitive psychologist Marc Hauser, whose book Moral Minds says we have an innate sense of morality that is similar to our innate sense of language. Just as Noam Chomsky has argued that the many different human languages all reflect certain deep structures that are hard-wired in our brains, despite the differences in their surface grammars, Hauser argues that the many different moral codes of different human societies all reflect common deep structures that are hard-wired in our brains, despite the differences in their explicit moral codes that make some societies accept and others reject slavery, revenge killing, aggressive war, the inferiority of women, and so on.

Then the article asks: since these deep moral structures are wired in our brains, and since this means that atheists can be good people, then why do we need religion as a support of morality?

It gives the answer developed by Jesse Bering, head of the Institute of Cognition and Culture at Queen's University Belfast: morality and religion evolved in parallel in response to the same evolutionary pressures. Once early humans acquired language and a theory of mind, they could spread news about people's behavior, and anyone who behaved in a pro-social manner would be at an advantage because of his good reputation, while anyone who behaved in an anti-social manner would be at a disadvantage. At the same time, religious belief evolved, because early humans developed a sense that they were being watched and judged, which they attributed to supernatural beings rather than to the group, since theory of mind attempts to attribute intentions and meaning even when there are none.

This argument is superficially plausible, but it has some very obvious flaws.

First, suppose there are hard-wired deep moral structures underlying the moral systems of different societies, but these deep structures allow the moral codes of those societies to differ so much that some accept slavery, revenge killing, and the inferiority of women as part of their moral code, while others condemn these same things as immoral. Then these hard-wired deep structures obviously are not in themselves enough to impel us to live moral lives, or even to tell us what it is to live a moral life. Marc Hauser's theory of hard-wired deep structures underlying morality is interesting, but it obviously does not eliminate the need for moral reasoning or for sanctions to support moral behavior, as the author of this article seems to think.

Second, Jesse Bering's theory that religion exists to support pro-social behavior and suppress anti-social behavior shows a profound ignorance of the history of religion. It might be a good explanation of the function of modern ethical religions, such as Judaism, Christianity, and Islam, but early religions did not have this sort of ethical function at all.

Anyone who knows the most prominent Greek myths, such as the myth of Agamemnon sacrificing his daughter Iphigenia to please the gods, can see that this early religion had nothing to do with ethics: it was a matter of pleasing capricious, temperamental gods in the same way that you would please capricious, temperamental people, by doing favors for them. Go back further to early horticultural societies, and you can see that religions are largely a matter of imitative magic, of convincing nature to be more fertile by performing fertility rites.

Jesse Bering's theory of the evolution of religion is what we would expect from a cognitive scientist who knows all about cognitive science and who knows nothing about the history of ethics and the history of religion. When he talks about the evolution of religion, he assumes that the earliest religions had the same functions as religion had in the nineteenth century, because he is ignorant of what literature and anthropology tell us about the history of religion.

See Helen Phillips, "What Good is God? You don't need religion to be a moral person, so why is it found in almost every human culture and what is it for?" New Scientist, September 1-7, 2007, pp. 32-36.

Sunday, April 15, 2007

Physicalists And Robots

Physicalists deny much of what makes us human by saying that we are essentially computers.

There is a striking example of this in an interview with Kevin O'Regan, who directs the Laboratory of Experimental Psychology in Paris and is considered an important thinker by physicalist philosophers.

The interviewer asks him whether his ideas about the nature of consciousness have changed his view of himself, and he answers:

"It hasn't changed at all, because I knew I was a robot, and I was just trying to prove it to people. And finally I've managed to get it across to them. ... Ever since I've been a child I've wanted to be a robot. I think one of the great difficulties of human life is that one is inhabited by uncontrollable desires and that if one could only be a master of these and become more like a robot one would be much better off. "

When the interviewer asks if he feels estranged from other people, who think they are more than robots, he replies:

"I knew that they were all robots, and that they were just labouring under the delusion that they weren't. ... People are listening a bit more, but they're still very uspet, because ... they really do feel that they are persons and not robots."

From Susan Blackmore, Conversations on Consciousness (New York: Oxford University Press, 2006), pp. 171-172.

Thursday, March 22, 2007

Bernard Baars' Global Workspace Theory

Baars' global workspace theory is a good example of how computer modeling can help us to understand how consciousness works without helping us to understand what consciousness is.

Brain scans during studies of binocular rivalry (where you show different images to the two eyes and the subject is conscious of only one at a time) have shown that visual consciousness in the brain involves a series of steps. First, the brain identifies a visual field of pixels, then lines and edges, then motion, then colors, and so on. Only in the last step, when there is object recognition in the inferior temporal cortex, does the subject become conscious of what he is seeing.

Baars theorizes that consciousness acts as a global workspace that accumulates information and then feeds it back to other parts of the brain. He has developed computer models of consciousness on this basis.
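As a toy illustration (my own sketch, not one of Baars' actual models), the core of a global-workspace architecture can be reduced to a few lines: specialist processors compete to post content to a shared workspace, and the winning content is broadcast back to all of them. The processor names and salience numbers here are invented for the example.

```python
# A toy global-workspace sketch: specialist processors bid to post a
# message to the shared workspace; the most salient bid wins and its
# content is broadcast back to every processor.

from dataclasses import dataclass, field

@dataclass
class Processor:
    name: str
    received: list = field(default_factory=list)

    def receive(self, message):
        # Each processor records whatever the workspace broadcasts.
        self.received.append(message)

class GlobalWorkspace:
    def __init__(self, processors):
        self.processors = processors

    def cycle(self, bids):
        # bids maps a processor name to a (salience, message) pair;
        # the highest-salience bid wins the workspace.
        winner, (salience, message) = max(bids.items(), key=lambda kv: kv[1][0])
        # Broadcast the winning content globally, back to all processors.
        for p in self.processors:
            p.receive((winner, message))
        return winner, message

procs = [Processor("vision"), Processor("audition"), Processor("motor")]
ws = GlobalWorkspace(procs)
winner, msg = ws.cycle({"vision": (0.9, "edge detected"), "audition": (0.4, "hum")})
```

After one cycle, every processor, including the motor system that made no bid, has received the winning visual content, which is the "accumulate and feed back" pattern described above.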

This does help to explain the function of consciousness and why it evolved. I believe you can see it in the waggle dance of honey bees: they seem to be accumulating information as they watch the dance, and once they have accumulated enough information, they go to the location that the dance pointed at.

It is hard to avoid thinking that the bees are conscious when you look at the waggle dance, just as it is hard to avoid thinking that a dog is conscious when he sniffs around before deciding which way to go. Consciousness involves a delay that lets you collect more information before acting than you can collect when you act on reflex.
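The idea that consciousness buys a delay for gathering evidence can be caricatured in a few lines of code (a deliberately crude sketch with an invented threshold, not a model of bee or dog cognition): the agent keeps accumulating observations and commits to action only once their total crosses a threshold, rather than reacting to the first stimulus reflexively.

```python
# Toy evidence-accumulation: act only after enough information has
# been gathered, instead of reacting to the first observation.

def accumulate_then_act(observations, threshold):
    """Return the index at which accumulated evidence first reaches
    the threshold, or None if it never does."""
    total = 0.0
    for i, obs in enumerate(observations):
        total += obs
        if total >= threshold:
            return i
    return None

accumulate_then_act([0.2, 0.3, 0.6], 1.0)  # acts after the third observation (index 2)
```

A reflex corresponds to a threshold so low that the first observation triggers action; the delay of consciousness corresponds to a higher threshold.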

But this does not explain the fact that we evolved consciousness rather than evolving the sort of mechanical memory workspace that a computer uses to collect information. Presumably, it was more efficient for the brain to evolve in a way that uses consciousness to hold this information than to evolve in a way that uses computer-like memory to hold this information. But what is this consciousness that the brain evolved to use? That is still a mystery.

Saturday, February 03, 2007

Phenomenologists And The Itch

Here is an example of how difficult it is to have precise knowledge about inner experience.

The phenomenologists, who specialize in observing their own inner experience, claim that a key feature of consciousness is intentionality: Consciousness is always conscious of something. I think this is untrue.

The five senses that we use to perceive the world have intentionality, and so the verbs we use to describe them can take objects: I see something, hear something, smell something, feel something, or taste something.

But there are other things in our consciousness that do not seem to have intentionality, such as a pain or an itch. An itch feels the same whether it is caused by a flea biting you or by a rash on your skin. You cannot say: "I itch a flea" or "I itch a rash." You simply say "I itch," without the itch having intentionality and referring to something beyond itself.

This seems to be a more primitive form of consciousness. We would be better off if the itch were about its cause. If you "itched a flea," that would be an unpleasant itch that you would want to scratch, getting rid of the flea. If you "itched a rash," that would be an itch that you would not want to scratch, since it is not healthy to scratch a rash. But apparently we never evolved to make this distinction between different itches, as we make the distinction between different sights or sounds.

Among humans, you might say that an itch still has intentionality in the sense that it is about an irritation in some location on our body. But we locate itches in this way only because we are conscious of ourselves and of our bodies. Animals without self-consciousness might just react to an itch as an irritation without being conscious of its location.

Conceivably, there could be animals with consciousness but without any intentionality, without any awareness that the consciousness refers to something.

For example, when worms are exposed to light, they squirm around to escape the light. Worms have light-sensitive patches of skin but do not have eyes that focus the light and let them see objects. Conceivably, they experience the light in the same way that we experience an itch or a pain.

If I have a pain in my lower back, I try to move in a way that reduces the pain - without any reference to an object in the world that is a source of the pain. Likewise, the worm may wriggle in a way that reduces the discomfort of being exposed to light, without any reference to something out in the world that is a source of the light.

We do not know whether a worm's consciousness actually works in this way - or even whether a worm has consciousness. But it is conceivable that a worm could have this sort of consciousness without intentionality. The fact that we would still call it consciousness even though it does not have intentionality shows that intentionality is not a necessary feature of consciousness, as phenomenologists claim.

Despite all the time that they have spent thinking about their own inner experience, phenomenologists seem to have overlooked the itch.

Saturday, January 13, 2007

Nick Bostrom Says We Are A Computer Simulation

The most foolish article I have ever read in a serious publication was by Nick Bostrom, claiming that we are probably a computer simulation.

More precisely, he argues that either 1) most civilizations destroy themselves before they become technologically mature or 2) only a tiny fraction of technologically mature civilizations are interested in creating computer simulations of their ancestor civilization or 3) we probably are a computer simulation.

If 1 and 2 are not true, then future civilizations will want to devote some of their immense computing power to simulating their ancestor civilizations. Calculating the probabilities, Bostrom finds that there will be far, far more computer-simulated minds in simulated ancestor civilizations than natural non-simulated minds in non-simulated civilizations. Therefore, it is much more probable that we are simulated minds living in a computer simulation than it is that we are non-simulated minds.
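The arithmetic behind this step is simple enough to sketch (the code below is my own illustration with made-up numbers, not Bostrom's actual calculation): if each non-simulated civilization runs N ancestor simulations, each containing roughly as many minds as the original, then simulated minds outnumber non-simulated ones by N to 1.

```python
# Illustrative only: assume each real civilization runs N ancestor
# simulations, each with as many minds as the original civilization.
# Then the fraction of all minds that are simulated is N / (N + 1).

def fraction_simulated(simulations_per_civilization: int) -> float:
    """Fraction of minds that are simulated under the assumption above."""
    n = simulations_per_civilization
    return n / (n + 1)

for n in (1, 10, 1000):
    print(n, fraction_simulated(n))
```

Even one simulation per civilization makes a simulated mind as likely as a real one, and the fraction approaches 1 as N grows - which is why the argument's force depends entirely on the assumption examined below, that simulated minds count as minds at all.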

What is wrong with this argument? Obviously, the problem is Bostrom's assumption that computer simulations of people will have consciousness, as we do.

We may develop so much computing power that we can do precise simulations of the weather, even calculating the effect of each butterfly flapping its wings. But no matter how powerful our computers are, standing near a computer simulation of a rain storm will not get you wet.

Likewise, it is possible that, no matter how powerful our computers are, no matter how precisely we can simulate people's behavior, a computer simulation of a person will not be conscious - any more than the figures in video games are conscious.

This is a major issue in contemporary philosophy of consciousness. I think that philosophers such as John Searle have shown that computers can never develop consciousness. Cognitive scientists sometimes ignore the question, sometimes deny that the question is meaningful, and sometimes claim that computers can be conscious.

How can Bostrom do detailed calculations of probability based on the number of natural humans and the number of computer-simulated humans that we can expect to exist, while completely ignoring the improbability that computer simulations of people could ever be conscious?

This combination of mathematical astuteness and philosophical blindness has earned Bostrom the position of director of the Future of Humanity Institute at Oxford University, where he spends his time arguing that we should use genetic engineering to change what it means to be human. No doubt, the Nick Bostroms of this world have the wisdom needed to decide exactly how humanity should be reengineered.