Over the past two decades, the philosopher David Chalmers has established himself as a leading thinker on consciousness. He began his academic career in mathematics but slowly migrated toward cognitive science and philosophy of mind. He eventually landed at Indiana University working under the guidance of Douglas Hofstadter, whose influential book “Gödel, Escher, Bach: An Eternal Golden Braid” had earned him a Pulitzer Prize. Chalmers’s dissertation, “Toward a Theory of Consciousness,” grew into his first book, “The Conscious Mind” (1996), which helped revive the philosophical conversation on consciousness.
Perhaps his best-known contribution to philosophy is “the hard problem of consciousness” — the problem of explaining subjective experience, the inner movie playing in every human mind, which in Chalmers’s words will “persist even when the performance of all the relevant functions is explained.”
Chalmers is now writing a book on the problems of a technological future we are fast approaching: virtual reality, digitally uploaded consciousness, artificial intelligence and more. I met with David Chalmers in his office at New York University to discuss this future and how we might relate to it.
Prashanth Ramakrishna: Artificial general intelligence, A.G.I., is a system capable, like us humans, of performing open-ended tasks independent of specific problems or contexts — conversation, common-sense reasoning, experiential learning and so on. The popular science fiction example is HAL 9000 from the film “2001: A Space Odyssey.” Is A.G.I. achievable? And if it is, does our civilizational conversation seem sufficiently robust on this topic?
David Chalmers: I think artificial general intelligence is possible. Some people are really hyping up A.I., saying that artificial general intelligence is just around the corner in maybe 10 or 20 years. I would be surprised if they turn out to be right. There has been a lot of exciting progress recently with deep learning, which focuses on methods of pattern-finding in raw data.
Deep learning is great for things we do perceptually as human beings — image recognition, speech recognition and so on. But when it comes to anything requiring autonomy, reasoning, decisions, creativity and so on, A.I. is only good in limited domains. It’s pretty good at playing games like Go. The moment you get to the real world, though, things get complicated. There are a lot of mountains we need to climb before we get to human-level A.G.I. That said, I think it’s going to be possible eventually, say in the 40-to-100-year time frame.
Once we have a human-level artificial intelligence, there’s just no doubt that it will change the world. A.G.I.s are going to be beings with powers initially equivalent to our own and before long much greater than our own. To that extent, I’m on board with people who say that we need to think hard about how we design superintelligence in order to maximize good consequences. How robust is the current conversation? I find there’s more and more attention among A.I. researchers to making A.I. beneficial in the short term and consistent with a good future for humanity in the long term.
P.R.: I caught you using the word “beings.” Are you equating general intelligence with consciousness?
D.C.: I like to distinguish between intelligence and consciousness. Intelligence is a matter of the behavioral capacities of these systems: what they can do, what outputs they can produce given their inputs. When it comes to intelligence, the central question is, given some problems and goals, can you come up with the right means to your ends? If you can, that is the hallmark of intelligence. Consciousness is more a matter of subjective experience. You and I have intelligence, but we also have subjectivity; it feels like something on the inside when we have experiences. That subjectivity — consciousness — is what makes our lives meaningful. It’s also what gives us moral standing as human beings.
P.R.: Even if consciousness is a reproducible epiphenomenon of the right information-processing system endowed with the right representational structures, there will always be an opaque veil separating behavior seemingly inspired by subjective experience from behavior actually inspired by subjective experience. If our ethical obligations to objects matter only insofar as those objects are conscious, then how should we deal with our inherently ambiguous ethical obligations toward A.I.?
D.C.: In philosophy this is the ancient problem of other minds. How do you know whether another person or system in general has a mind? I know that I have a mind. Descartes says, “I think. That’s the one thing I’m certain of. Therefore, I am.” But, when it comes to other people and to computers, you’re not going to have that degree of certainty.
What should the criteria be? Is just doing sophisticated things enough to convince you that a system is conscious? Winning a game of Go certainly is not. Being able to carry on an intelligent conversation would be a start. Maybe an A.I. system that could describe its own conscious states to me, saying, “I’m feeling pain right now. I’m having this experience of hurt or happiness or sadness” would count for more. Maybe what would count for the most is feeling some puzzlement at its mental state: “I know objectively that I’m just a collection of silicon circuits, but from the inside I feel like so much more.”
P.R.: Some people might argue that if I can be sure that I’m conscious and if there are varying degrees of similarity between me and other beings of potential consciousness, then I can make probabilistic judgments about the consciousness of those beings.
D.C.: For me, the way to get some purchase here is to think about gradually transforming yourself into an A.I. You be the A.I.: gradual uploading. Gradually replace your neurons, one at a time, with computer parts or upload them to a computer. You start as a fully biological system, and then you’re three-quarters biological and one-quarter silicon, and then half biological and half silicon, then one-quarter biological and three-quarters silicon, and finally you’re a fully silicon system. If you make it a functionally perfect simulation throughout, then you’re going to be there till the end still saying, “Yup, I’m still home!” If it’s a proof, it’ll only be a proof for you. Someone else can still say, “I think you turned into a zombie.”
P.R.: One can imagine not simply becoming the A.I. but nondestructively merging with it. In a future where everyone is augmented by A.I., where we all have perfect computing power, perfect memory, perfect ability to synthesize and deploy knowledge, would collaboration become obligatory? There would, after all, be no obstacle, other than collaboration itself, to solving almost every problem regarding human welfare.
D.C.: Hopefully we’ll find a good solution to climate change just like that. And 30 seconds later, dissolve the Israeli-Palestinian conflict? Maybe that’s a harder one. A lot of the irrationalities we have are collective. Some of our irrationalities are tied to our goals, to me rationally wanting my goal and you rationally wanting your goal. Often a solution is that we both get our second most desired outcome, or our third, and so on. People aren’t good at settling for this type of solution, though. Maybe you’d need a whole new module for compromise, for finding goals that we can universalize. But that goes beyond simple means-end instrumental intelligence and more into something reflective like figuring out what our goals should be.
Immanuel Kant thought that morality is part of rationality. There’s the thought that a superintelligent A.I. will turn into a super-moral one, that it will turn into a sort of Kantian being that will only take on goals it can universalize for everyone. That’s a very speculative view of how A.I. will be.
P.R.: Where between the Kantian being and us inventing our way into our own demise do you locate your own conception of the future?
D.C.: I value human history and selfishly would like it to be continuous with the future. How much does it matter that our future is biological? At some point I think we must face the fact that there are going to be many faster substrates for running intelligence than our own. If we want to stick to our biological brains, then we are in danger of being left behind in a world with superfast, superintelligent computers. Ultimately, we’d have to upgrade.
The other way it could go is that new artificial intelligences take over the world and there’s no place for humanity. Maybe we’re relegated to some virtual world or some designated part of the physical world. But you’re right, it would be a second-class existence. At the very least maybe they keep us around as pets or for entertainment or for history’s sake. That would be a depressing outcome. Maybe they’d put us in virtual worlds, we’d never know, and we’d forget all this stuff. Maybe it’s already happened and we’re living in one of those virtual worlds now. Hey, it’s not so bad.
P.R.: This is probably a good time to define what virtual reality is, just because there are multiple ways that we use the word “virtual.”
D.C.: The word “virtual” originally meant a “fake” or an “as if.” A virtual tie is “as if” it were a tie. Over the years, though, the word “virtual” has evolved. Now it means something like “computer-generated.”
P.R.: What, more precisely, is the functional characterization of reality that finds its analog in virtual reality?
D.C.: I take your question to be, if I can rephrase it: In what sense is normal reality real, and can virtual reality be real in that way? It’s a great philosophical question. George Berkeley, the great Irish philosopher, said, “To be is to be perceived.” If something looks like a duck, sounds like a duck and so on, it’s a duck. That’s idealism: The world is all in your mind.
The dominant view, though, is that reality is outside your mind. To be real, you need something more than just appearances; you need some underlying powers or potentiality. The great Australian philosopher Samuel Alexander said, “To be real is to have causal powers” — to be something that actually makes a difference. Philip K. Dick once said, “A real thing is something that doesn’t go away when you stop believing in it.” If you’ve got something that is independent of your mind, which has causal powers, which you can perceive in all these ways, to me you’re a long way toward being real.
Things in virtual realities, at least in principle, have all those properties. Say you’re in a virtual world. There are objects there that you can perceive around you. In a virtual world a virtual tree can fall even if I’m not around. A virtual tree has causal powers. A virtual tree falling can cause people to have experiences. It can break something that it falls on in the virtual world, and it can be experienced. Virtual reality is just a different form of reality. But it’s still perfectly real.
P.R.: Why do you think the original intuition on this topic was precisely the opposite, that virtual reality is nothing but instantiated fantasy?
D.C.: This goes back a long way in the history of philosophy. René Descartes said, “How do you know you’re not being fooled by an evil demon right now into thinking this is real when none of it’s real?” Descartes’ evil-demon question is kind of like the question of a virtual reality. The modern version of it is, “How do you know you’re not in the matrix? How do you know you’re not in a computer simulation where all this seems real but none of it is real?” It’s easy for even a movie like “The Matrix” to pump the intuition in you that “this is evil. This isn’t real. No, this is all fake.”
The view that virtual reality isn’t real stems from an outmoded view of reality. In the Garden of Eden, we thought that there was a primitively red apple embedded in a primitive space […]