Yes. (At least for many of the most useful definitions of machines/beasts.)
But I guess there’s a lot more to it — or so a lot of bloviators would like us to believe. There are a lot of opinion pieces out there that decry the materialism/reductionism/scientism involved in thinking of people as machines (or animals). It’s not often that these are given space in the New York Times. This article by Richard Polt is therefore special enough to warrant some attention. Here are some choice quotes.
Wherever I turn, the popular media, scientists and even fellow philosophers are telling me that I’m a machine or a beast. My ethics can be illuminated by the behavior of termites. My brain is a sloppy computer with a flicker of consciousness and the illusion of free will. I’m anything but human.
Already we’re off to a terrible start. Apparently if Polt’s behaviour can be illuminated by termites and if his brain is a computer, then he’s not human. That’s like saying: “Everywhere I turn, these arrogant scientists are telling me that the thing that comes out of my tap can be illuminated by the formula H2O. In other words it’s anything but water.” Well sure, if he deliberately defines “human” in the most anti-scientific way possible (a creature that’s not an animal, that has free will, and whose consciousness has nothing to do with computers), I don’t see how it helps to blame science for being corrosive to his definition.
I have no beef with entomology or evolution, but I refuse to admit that they teach me much about ethics. Consider the fact that human action ranges to the extremes. People can perform extraordinary acts of altruism, including kindness toward other species – or they can utterly fail to be altruistic, even toward their own children. So whatever tendencies we may have inherited leave ample room for variation; our choices will determine which end of the spectrum we approach. This is where ethical discourse comes in – not in explaining how we’re “built,” but in deliberating on our own future acts.
I’m not sure who’s supposed to be suggesting that an evolutionary account of ethics should replace ethical deliberation; I don’t think I’ve seen such nonsense. But the idea that entomology and evolution have nothing to teach us is also a bit rich. His example of a dilemma is whether he should cheat on a test. While knowing the science doesn’t automatically tell you the answer, I’d imagine that being aware of human biases like rationalisation, and how they evolved, might have something to say to someone who’s concocted a reason why cheating on the test is perfectly fine. As for biology not having enough explanatory power, he might as well say computer science has nothing to teach us about a computer’s chess-playing. Because you see, a computer’s chess-playing ranges to the extremes, so the algorithm must leave ample room for variation. This is where the computer’s choices (and what pieces the opponent plays) come in…
Any understanding of human good and evil has to deal with phenomena that biology ignores or tries to explain away – such as decency, self-respect, integrity, honor, loyalty or justice.
Once again, he doesn’t simply state that we currently lack an adequate explanation of (say) self-respect that ties it to our biological nature. Instead, he defines his position into being right: self-respect is defined as something NOT explainable by biology, so any biological explanation is “explaining it away” by definition, since it tries to turn self-respect into something that’s not self-respect.
Next they tell me that my brain and the ant’s brain are just wet computers… So are you and I essentially no different from the machines on which I’m writing this essay and you may be reading it?… Siri may find the nearest bar for you, but “she” neither approves nor disapproves of drinking. The word “bar” doesn’t actually mean anything to a computer: it’s a set of electrical impulses that represent nothing except to some human being who may interpret them.
The word “bar” doesn’t actually mean anything to a human: it’s just a set of electrical impulses in the brain. This is the old Chinese Room argument restated. Since the Chinese Room has generated an entire industry of debate, I’ll just point the way to a resource. What’s particularly galling is that a professor of philosophy would use such a cheap rhetorical trick. It’s like he’s saying: “they think that we’re similar to rectangular plastic contraptions that run on electricity — how wackadoo and against common sense is that!” There’s no argument there, just an unashamed appeal to superficial differences and people’s prejudices.
None of these devices can think, because none of them can care; as far as we know there is no program, no matter how complicated, that can make the world matter to a machine. So computers are anything but human – in fact, they’re well below the level of an ant. Show me the computer that can feel the slightest twinge of pain or burst of pleasure; only then will I believe that our machines have started down the long road to thought.
I now appreciate the genius of Alan Turing even more. In his seminal paper that laid the foundations for AI, he quickly sets aside the attempt to define thought (considering the question meaningless) and turns instead to the engineering problem of the imitation game (the Turing test). Polt’s definition of thought here is completely non-standard and self-serving. Perhaps we can be charitable and interpret him as relaying the commonly-held idea that without personal experience, machines can’t be conscious. But he gives no argument at all for this. He can say that in the end it’s all ones and zeros, so of course no device can ever feel pain; but then I can reply that in the end all human thought is just molecules, so of course no human ever feels pain.
Without a brain or DNA, I couldn’t write an essay, drive my daughter to school or go to the movies with my wife. But that doesn’t mean that my genes and brain structure can explain why I choose to do these things – why I affirm them as meaningful and valuable.
I understand the bit about genes, since they don’t determine specific behaviour (as if anyone with appropriate scientific credentials suggested they do?). But if he’s really saying that his brain structure cannot explain what he does, he’s even more out there than the rest of his essay suggests. Just because we can’t (yet?) read motivations off a brain scan and make sense of them that way doesn’t mean they’re not all coming from the brain structure. Otherwise he’d have to say that an identical copy of him would have different motivations or behaviour, which is about as anti-science as you can get.
So why have we been tempted for millenniums to explain humanity away? The culprit, I suggest, is our tendency to forget what Edmund Husserl called the “lifeworld” – the pre-scientific world of normal human experience, where science has its roots. In the lifeworld we are surrounded by valuable opportunities, good and bad choices, meaningful goals, and possibilities that we care about. Here, concepts such as virtue and vice make sense.
In other words, we should pay more attention to folk-psychology, folk-physics, the “wisdom of ages” and common sense. Apparently all our problems stem from putting too much stock in science and allowing it to challenge our pre-scientific conceptions. Yep, we’ve really taken a fall, haven’t we?
But concepts from information theory, in this restricted sense, have come to influence our notions of “information” in the broader sense, where the word suggests significance and learning. This may be deeply misleading. Why should we assume that thinking and perceiving are essentially information processing?
This is actually an interesting question, and one that reflects genuine controversy among people studying consciousness, AI and cognitive science. We don’t currently know whether there’s something over and above information processing, because we don’t know how things fit together for perception, thinking or consciousness at a detailed-enough level. But Polt isn’t asking a serious question; he’s using it as a rhetorical club. He already thinks that biology has nothing to tell us about what makes him tick, so the idea that thinking has nothing to do with information seems about right to him. A great answer comes from one of the comments:
“Because form follows function, and the form of the brain strongly implies that its function is information processing. With its long wires, vast networks linking disparate regions together, high-speed electrical signaling, molecular scaffolding, and its circuit-like input-output dynamics, it seems clear, not only to me, but to neuroscientists in general, that the function of the brain is to process information.”
We need to recognize that nature, including human nature, is far richer than what so-called naturalism chooses to admit as natural… The same scientist who claims that behavior is a function of genes can’t give a genetic explanation of why she chose to become a scientist in the first place. The same philosopher who denies freedom freely chooses to present conference papers defending this view.
The same scientist who claims that our behaviour consists of atoms moving around can’t predict their own behaviour by looking at the trajectories of those atoms; they can’t write an equation showing how their own atoms cause their behaviour. Therefore nature is far richer than atoms? Of course these examples show that the most useful level of explanation is usually higher than that of atoms (or genes, etc.) — but Polt isn’t interested in that distinction. Instead, his reasoning is this:
- Low-level science (physics, genetics) can’t account for X → It will never account for X → There’s more to X than materialism.
- Low-level science’s account of X is not the most useful level of explanation → Science is trying to explain X away.
Richard Polt is a professor of philosophy at Xavier University in Cincinnati. His books include “Heidegger: An Introduction.”
As usual, his credentials (when compared to his essay) show that philosophy’s in real trouble as a discipline.