Who's in the Video
Anil Seth is a neuroscientist, author, and public speaker who has pioneered research into the brain basis of consciousness for more than 20 years. He is the author of Being[…]
Jonny Thomson taught philosophy in Oxford for more than a decade before turning to writing full-time. He’s a columnist at Big Think and is the award-winning, bestselling author of three[…]

What does it mean to be conscious, and why does it feel like something to be you? Neuroscientist Anil Seth argues that consciousness isn’t a mysterious spark but a deeply biological process, one that depends on prediction, perception, and the body’s constant negotiation with the world. 

In this conversation with philosopher Jonny Thomson, he explores how our brains don’t passively observe reality, but rather actively construct it.

You feel as though consciousness has, and will one day have, explanations that can be reduced to the physical. And so if we were to create an artificial intelligence which did have the desire or the will to survive, is that when things become dangerous?

- I think that no one should really be actively trying to create a conscious AI. It's like, why would you do that apart from the desire to play God? The difference between conscious systems and non-conscious systems goes down to the level of the fact that we are made of cells that regenerate their own components, that transfer energy into matter and back again. We project consciousness into things that seem human-like in ways which might actually not matter at all. I feel that artificial consciousness, real artificial consciousness, if that's not an oxymoron, might require real artificial life. It's very tempting to think of the brain as a computer. People have been doing it for ages, and it's a powerful metaphor. It's a powerful map with which to explore some of the things that brains do. But we've always used a technological metaphor to understand the brain. And every time we have a metaphor, if we really think that is the thing, then we stop looking for what else might be there. And I think we need to start looking for what else might be there that marks a difference between brains and even the most powerful computers.

- [Jonny] Hello, everybody, and hello, Anil.

- [Anil] Hello, Jonny.

- So we're here to talk about consciousness today, and we're going to divide the talk into three acts, really. The first act, we're gonna talk about the current state of consciousness science and what we know at the moment about consciousness. The second act will look at a more speculative view of what we can know about consciousness in the future. And of course, AI. And then the third act will open for questions from the audience, from you guys. So think of some good questions, and at the end we'll take some hands to ask for Anil. So I thought that if we're gonna talk about it for an hour, we should probably define our terms first and ask you what consciousness is. 'cause presumably everybody in this room has consciousness or is conscious right now, we hope. So how do we define it from a scientific point of view?

- It is a good place to start, but it's also a dangerous place to start because I think we can get caught up in definition a bit too much. I think it's important to define it, at least to the extent we are not talking about different things without realizing it. I mean, consciousness on the one hand, I think it's still one of these great mysteries. You know, we have experiences of the world, blah, blah, blah, how does it happen? But it's also the most familiar, we all know what consciousness is, as you said, Jonny, it's what goes away when you go into a dreamless sleep or even more profoundly under general anesthesia. And it's what comes back on the other side. It's hard to define without it being a little bit circular. It's like any kind of experience of the world. The redness of red, the pain of pain. And I always like to use the definition from the philosopher Thomas Nagel, who 51 years ago now put it like this. He said that, "For a conscious organism there is something it is like to be that organism." That it feels like something to be a bat, as he put it in his paper. Whereas it doesn't feel like anything to be a table or a glass of water by almost any... Very few people would claim that. And when you're out under general anesthesia, it doesn't feel like anything to be you then either. I think that's a good definition, because it's very general. It doesn't really make any presuppositions about what it is like to be, just that there's something it is like to be a conscious thing.

- So is it fair to say that we can talk about a human species consciousness, or would we have to look at every individual human consciousness in this room?

- Oh, in this room, I was gonna say, every human would take a long time. Every human in this room... Well, it's likely that we overestimate the similarity of conscious experiences even within a species. So Nagel in his paper talked about a bat. I mean, he was making a larger point about whether consciousness, this fact that some things experience and other things don't, whether that was addressable by science at all. But he kind of became more known for the idea that bats, assuming they are conscious, are gonna have a very different kind of conscious experience. They have echolocation, so their experienced world, we humans can maybe approximate, but we can never really experience what it's like to be a bat without being a bat. And I think the same thing applies in a smaller way within a species too. It's easy to assume that other people experience things in the same way we do because our conscious experience has the character that, especially when it's experiences of the world rather than thoughts, it seems as though it's just there, right? Just seems like you're there, the wall is behind you. That doesn't depend on my brain, you're just there. But that's probably not what's going on. Everybody is creating their own unique world. Now, they won't be completely different I think. Human experience is likely to have a lot of commonalities, but there are going to be interesting differences as well.

- Great, thank you, yeah. And so in your book "Being You," you describe yourself as a physicalist, which is where, and correct me if I'm wrong, you feel as though consciousness has and will one day have explanations which can be reduced to the physical. So we can explain consciousness in physical terms. And I thought we could explore a little bit what's going on in the brain when we're talking about consciousness, but before we do that, I was wondering if you could give us a quick primer on the ways in which we can currently measure what is going on in the brain, the fMRI, sorry, and EEG, and PCI you talk about in the book, if you could quickly talk us through those?

- Yeah, lots of things. Those tend to be compressed into acronyms for some reason. But firstly though, yeah, you have all these "isms" in the philosophy surrounding consciousness research. And we'll probably get back to that a bit later maybe. But I always think they should be worn a little bit lightly. So I sort of describe myself as a pragmatic physicalist. Like, I don't believe in it as an ideology, as something that I know to be true, and everything depends on it being true. I just think holding that attitude can deliver the most progress and understanding in practice, even if ultimately it turns out not to be the case, if somehow we were able to discern that. So I think it's a pragmatic perspective rather than a fundamentally held belief about how things are because of course you say, "Reduced to the physical," I mean the physical is pretty mysterious, it's not just neurons or, I mean who knows what matter is really, matter is pretty mysterious and implacable when you get close to it. So being a physicalist is not to sort of be too reductive and say, "Oh yeah, it's just neurons exchanging signals." It could be many, many other things. Brain imaging, so I mean, for me, this is already a consequence of thinking at least from the perspective of consciousness being a property of a physical system. We just have a ton of evidence that conscious experiences in human beings are intimately related to the human brain. And so it makes sense to start by trying to track the footprints of consciousness in the brain, look at brain activity and see how it varies as consciousness changes in human beings. And it's very, for me it's been transformational. It's incredibly exciting that we can do this. You think about some of the other grand mysteries in science, like what happened at the beginning of time. What's the world really made of? You have challenges of looking far away and long ago, or looking at the very small.
And these require massive instrumentation, it's really hard. Human brains are pretty accessible. They're human size, there are lots of them to study and they're kind of, that's a great advantage. And the fact that we have these brain imaging methods now since about the 1990s, and they've been widespread. Historically, that's what gave consciousness research, I think, its modern momentum, but there's still a problem because the challenge with imaging the human brain is not that it's far away, or long ago, or tiny, but that it's complex. So we have about 86 billion neurons. Each of those is a very complex little biological machine wired up in very intricate ways. And we don't have a sort of James Webb telescope for complexity, we can look at the brain in reasonably high spatial detail, but then we lose time. We can look at it in high time detail, but then we lose space, and in neither case can we look at all 86 billion neurons at once. So we are always trading off things in different ways. So when most people encounter brain imaging, they're usually seeing these fMRI scans. This is functional magnetic resonance imaging. Wonderful technology, you see these colored blobs and pictures of the brain. It's not actually measuring brain activity, it's measuring something about the metabolic consequence of brain activity. Neurons fire, they consume energy, blood oxygen levels change, magnetic properties change. That's what it's measuring, it's very indirect, and it's over a timescale of seconds. Whereas neurons are on the timescale of milliseconds. That's one technique. In my lab we use a lot of EEG, electroencephalography, it's probably the oldest. Neurons generate little electromagnetic fields when they do stuff. And we can measure the collective impact of those electromagnetic changes, and that's very fast. We do this with sensors over the scalp. You don't need so much fancy kit, but it's fairly low resolution.
It's a bit like dangling a microphone above the center of Oxford and trying to figure out what individual people are saying to each other. You can't discern the fine grain chatter between individual people and maybe that's where the detail is. And so then you have the method of sticking wires directly into people's brains, which I do not do in my lab. You have to have good clinical reasons to do that. Then you know exactly where the neurons are and you know exactly what they're doing, but you can only typically measure a few of them at a time. This is growing, people can measure maybe hundreds, perhaps thousands now, but that's a tiny, tiny fraction of the billions that you have. So those are the sorts of techniques that we have to use.
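As an editorial illustration of what EEG analysis actually involves: a standard first step is to estimate how much power the recorded signal carries in different frequency bands, since distinct bands dominate in waking, sleep, and anesthesia. Here is a minimal sketch in Python with NumPy; the "EEG" is a synthetic signal, and the sampling rate and band edges are illustrative choices, not parameters from Anil's lab.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` between `lo` and `hi` Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Synthetic "EEG": a 10 Hz alpha-like rhythm buried in noise, sampled at 256 Hz.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

alpha = band_power(eeg, fs, 8, 12)   # band containing the rhythm
gamma = band_power(eeg, fs, 30, 80)  # band containing only noise
print(alpha > gamma)  # True: the alpha band dominates this toy signal
```

Real EEG pipelines add filtering, artifact rejection, and many electrodes, but the principle, summarizing a fast collective signal by its oscillatory content, is the same.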

- So if we have these trade-offs all of the time when we're measuring consciousness, and even then, you say yourself that it's slightly inadequate at the moment. Do you think the problem of measuring consciousness is one of technology, that we haven't got the right scanning equipment available to us? Or is it one of science, that we will require a completely different scientific paradigm to accommodate consciousness?

- You're giving me two options, neither of which are very appealing. I mean I think it's both of those and more but slightly different. So the problem of measuring the brain is kind of a technological problem that's gonna depend on scientific innovations, like developing new ways to measure signals that measure really what's happening in the physiology. There are some new ideas, but we're reasonably stuck with what we have. One new idea, for instance, which you can't do in humans, which is really changing things, is called optogenetics. So this is where you modify the genomes of animals, such that neurons actually express, they give out light or they reflect light in different ways. So you can find out a lot that way, but you can't do this in human beings 'cause you've got to genetically modify them. Anyway, that's one problem, that's the scientific, technological problem. That's not measuring consciousness, that's measuring the brain. So then you have to ask how do you connect changes in consciousness to the changes that you see in the brain? And you can do this in a very exploratory way. You know, just put somebody out under anesthesia, see what happens, or watch what happens when they go to sleep or many other experimental paradigms. But you'll see a lot of things change in the brain. How do you know which of those things are specifically to do with consciousness? So then you have to have theories that try and bridge the two. And I think that's kind of where we are now. We have a bunch of theories that make different predictions about what happens in the brain as consciousness changes. But you are right that it's very hard to tell them apart. And part of the problem is that we lack the precision in our measurements, but also the precision in our theories. They don't make very precise predictions just at the moment.

- So hypothetically, do you think there could be a technology which can measure Thomas Nagel's degree of first personhood, qualia? Or are you saying that we will always deal in correlations, that we'll always have to say, "This is the physical thing happening and therefore we have to correlate it to some kind of consciousness as a kind of inference?" Or do you think we will have a technology which can identify first personhood?

- I think we start with correlations, right? When we're trying to understand a phenomenon empirically that is resistant to our understanding. A good way to start is just by seeing, looking for correlations, what goes with what. But that's just the starting point. And you want to go from correlation towards explanation. So that you have a sense of why X goes with Y. I mean, you see there's all these crazy examples of the price of cheese in Wisconsin has correlated over time with the divorce rate in France, right? That means nothing I think, I dunno, maybe it does, but it probably means nothing, it's just a correlation. There's no explanatory insight gained. And if you say, "Oh look, we see activity in whatever area change when you fall asleep" and just leave it at that, that also doesn't really give you much explanatory insight. So you need to get to the stage of like, "Ah, this process happening in the brain, that tells me something about the conscious experience that's going on, or the fact that there is one versus there not being one." And that's partly a technological challenge. I think the more we can do with improving these methods, the better. But I don't think that's the... It would be helpful, but there's an awful lot we can do without that. Part of the challenge is more of a mathematical one. We lack the tools to deal with the complexity of the data that we have, even with the relatively low resolution imaging methods that already exist. And as we get more towards explanation, I think we are making progress in that way. Then will we get as far as answering the hard problem that was mentioned: "Why is consciousness there at all? What's it got to do with the physical world?" Maybe we will, maybe we won't, but we'll certainly get further towards that as a sort of an endpoint, as a goal.
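The cheese-and-divorce point is easy to demonstrate: any two quantities that merely trend over time will correlate almost perfectly while sharing no causal or explanatory link. A toy example with entirely made-up numbers:

```python
import numpy as np

years = np.arange(20)

# Two hypothetical series that trend steadily for completely unrelated reasons.
cheese_price = 2.00 + 0.10 * years   # made-up price, rising each year
divorce_rate = 5.00 - 0.15 * years   # made-up rate, falling each year

# Pearson correlation coefficient between the two series.
r = np.corrcoef(cheese_price, divorce_rate)[0, 1]
print(round(r, 2))  # -1.0: a perfect (anti-)correlation, zero explanation
```

A correlation coefficient of -1.0 here carries no explanatory insight at all, which is exactly why brain-activity correlations are only the starting point.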

- Great, thank you. So if we're talking about correlations and neural correlates of consciousness, you talk about NCCs in the book, another of those acronyms. With the current state of consciousness science, of neuroscience at the moment, if we were to give everybody in this room here an EEG or some kind of brain scanner, with what degree of certainty could you infer what is going on in people's minds? Because I know in the book you talk about consciousness level, consciousness content, and consciousness selfhood. With what degree of certainty could you predict what's going on in seat J4 right now?

- Who's sitting in J4? I don't know. So yeah, I think you're right, well I like to think of these three different aspects to consciousness. Level, how conscious you are. Content, what you are conscious of. And self, the experience of being who you are, the experience of being the subject of experience, which is really a subset of content. It's a subset of your overall conscious experience. But I think it's worth its own category 'cause it's rather special. If everyone was wearing an EEG and the data was good, it's kind of not too difficult to predict whether you are conscious or not, that can be done. Human beings in conscious states, their brain activity is relatively predictable in some ways. You have lots of sort of fast oscillations going on in various frequency bands and there are more sophisticated ways of measuring this. And when you fall asleep and you're not dreaming, you see very distinct patterns, same under anesthesia. So that, I think that would be fine for most people. But even then there are weird edge cases, there are people who have brain injuries for instance, and then it might be harder. If someone has something called the vegetative state, or the wakeful unawareness state, they might look awake, but they also look like there's nobody at home, and then it becomes a much more tricky problem to tell whether there's any consciousness going on inside. When it comes to predicting what you are conscious of, it's much, much harder. But there is this growing research field of brain reading now. I mean, people used to call it mind reading, but now there's brain reading, and brain reading is precisely this: taking somebody's brain signals, measuring them somehow, and then trying to predict what they are thinking, what they are experiencing. This is now one of those things that's moving from science fiction into science reality.
And there are some already, really to me, impressive examples of this that are usually used in clinical cases. So people, for instance, who are paralyzed. You can now begin to read out their movement intentions, so you can help them control robots, or maybe even guide their own arms by stimulating muscles, decode what they want to say, so they can regain the ability to communicate in some cases. So there's a strong impetus for this to be done. But also there's the issue that this is actually a very dangerous technology in many ways. We think we have freedom of speech, which is arguable in some cases, but very few places have freedom of thought. And once you have brain imaging things that are predicting what you're thinking, then you've lost that last bastion of privacy because once you get inside the skull, there's nowhere else left. And also these algorithms that do these predictions are not perfect. So your brain reading might be inaccurate, and also you might be forced to think in a particular way for the thing to work. But technology is moving in this direction, this whole field of Brain Computer Interfaces is really all about this. And actually, it's very exciting, but it's to me also ethically really rather worrying. But right as we stand now, I don't think we'd be able to predict with great certainty what you were all thinking right here and right now, but I think that that might change. I'm advising a couple of companies that are trying to do brain reading with consumer EEG, and they're getting quite far. It's quite astonishing.

- So in your book, you talk about lots of isms around consciousness and about philosophy generally, really, to be honest, but you identify as a physicalist.

- Pragmatic physicalist.

- A pragmatic physicalist, but agnostic about functionalism, is that right?

- Yeah, I quite like it, yeah.

- So you're leaning more towards functionalism?

- Well, I think you've gotta be flexible, like yeah, I'd be disappointed if I held exactly the same view.

- Could you possibly explain functionalism first, actually?

- Yeah. So, well it's kind of a sub... So physicalism is the position that consciousness is a property of the physical world, of material stuff generally. It's kind of one of the bedrock assumptions of science as it's been done for the last 100 years, that things are properties of physical stuff, whether it's electrons or some higher level of description. That's physicalism. Originally it might have been opposed to something like dualism, which is that yes, you've got physical stuff, neurons and whatever neurons are made of, but then the world of the mind, the world of the mental is completely different. And Rene Descartes was sort of famous for articulating this position. And then right on the other side, you've got idealism, which is the view that actually the physical world doesn't really exist, the only thing that really exists is mental stuff, mind stuff, and what seems to be physical is just a manifestation of the mental. Within physicalism then, within this idea that consciousness is somehow a property of the physical world, one of the most popular flavors of that is functionalism. And that's saying that consciousness is a property of the physical world, it's a property of the brain, perhaps of the brain in its body, and the body in the world. And what matters for consciousness to be a property of the brain is the functional organization of the brain. Not intrinsically what it is, but what it does in some sense, how it's organized. And this can be on the inside too, how it's wired up, what its causal architecture is. That's all part of functionalism. And that's probably the prevailing idea among most people working in consciousness science from a neuroscience perspective. So it's pretty liberal. It needs to be distinguished from much more specific claims, which we'll get onto, like the brain is a computer, and consciousness is this algorithm. I mean that's a very specific kind of functionalism.

- Yeah, right, we'll come back to that when we talk about AI a bit later on. But before that, on to another ism, one which is very popular in philosophy at the moment, and some people in this room even are panpsychists: panpsychism. So I wonder if you could just quickly tell us what panpsychism is and why you are possibly less excited about it than some people.

- Well, how many people are panpsychists? Let's see. I know, well, half a hand? Half a hand. I knew there were gonna be not many. Not many, well maybe we could explain, maybe more people will become panpsychists after I've tried to pour cold water on it and then...

- [Audience Member] What was the question?

- The question is how many people are panpsychists? I'm gonna, all right, so we'll start with that. We'll start with that. So we've had the idea that consciousness is part of the physical world, or it's a property of physical stuff, or that everything that exists is mind. Then panpsychism is the idea that, okay, there is a physical world, but consciousness is also fundamental. It's a fundamental property of the world that we inhabit, a fundamental property of the universe. It has a little of the same status as something like mass or electrical charge, that's it, that's where things bottom out. Most things you can say, the table is the way it is because it's made of wood and it has legs, and you can keep going down, eventually you get to electrons or quarks, or whatever, somewhere you bottom out. Panpsychism is the idea that you bottom out at consciousness as well. That it's there from the beginning. There's no point asking how it's generated by something else or arises or emerges from something else. It's just there from the very beginning. So now that I've explained what panpsychism is, I think, hopefully, that's reasonably not inaccurate. How many people feel attracted by that idea? Okay, a few more now, right. It's still a minority. You are right though, it's having its kind of day in the sun, I think. And yeah, I'm not remotely persuaded by it. And there are a few reasons for that. Now, it might be right. All of these positions we've been talking about, none of them can actually be tested. We can't actually decide whether consciousness is a property of physics or whether the world is all mental; they're all ways of thinking about things. But panpsychism strikes me as not a very useful or productive way to think about things, even if it ultimately might be right. And I think there are some very quick reasons for this. One is that it mainly seems motivated by a suspicion that a physicalist explanation isn't enough.
That people think, "Well, you'll never be able to explain consciousness in terms of neurons, or brains, or carbon, or computation, or whatever it might be. You'll never be able to do that. You'll always be missing the point somewhere. You'll always be explaining some other thing that's not really consciousness." And so if you're always missing the point and you're always going to miss the point, then it has to be something else. And one of the other ways to think about it is, "Well, if it's just there from the beginning, then great, you don't have to explain how it's produced by or is identical to something else." But I think that really underestimates how far we can get with materialism. Just because something might seem a bit mysterious now doesn't mean it will always seem mysterious. So I'm suspicious of the motivation, and then I don't think it gets, even if you did buy into it, I don't think it gets you anywhere. So panpsychism is often caricatured a little bit as saying that things like tables or spoons are conscious, like everything is conscious. It's not saying that, it's saying that it's fundamental, like an electron might have a little bit of consciousness, but it's going to be nothing like the consciousness of you, or me, or a bat, or a bee. If there's nothing really that you can grab onto there, then it doesn't help you explain anything. The fact that the consciousness of an electron is nothing like the consciousness of me, well, what explanatory light does that shed? How does it tell me anything about what it's like to drink a glass of water, or fall asleep, or dream? It doesn't, it just doesn't. And then the worst thing, oh no, the second worst thing is that not only is it not testable, but it doesn't lead to testable predictions. If you think in this sort of panpsychist way, another way it's often motivated is that science tends to tell us what things do but not what they are.
And maybe we're interested in what things are, and of course we are, but what things are is kind of what they do at another level. Some panpsychist arguments say that consciousness is the intrinsic nature of everything. It's what things are, completely separable from what they do, at the most fundamental level. So even if you have other forces in physics like gravity or the strong nuclear force, they're still about what things do, but consciousness may be actually what things are. But if that's the case, then you are basically... You're admitting that it can never make a difference to anything. If it's fundamentally about what things are, and nothing to do with what things do or their disposition to do things, it can never make a difference to anything. So that's fairly useless.

- So are you against emergence as a property?

- Absolutely not.

- So emergence, correct me if I'm wrong, is the idea that consciousness is something which emerges from these physical neural correlates we talked about earlier, but is itself not reducible to the parts that make it up?

- I like the way of thinking that emergence is all about. And again, it's a very slippery concept that's often used quite lazily. It's often used as a substitute for an actual explanation. Like you have a load of neurons in our brain and you say, "Oh yeah, consciousness emerges from the collective activity of our billions of neurons." But of course, if you just say that, you're not really saying anything at all. Having said that, I think it's a starting point from which you can develop useful explanations. Emergence is a sensible way of thinking about systems. I think often about, in Brighton where I live, in the winter, in the evenings around sunset, these murmurations of starlings start flocking and then they settle to roost on the ruins of the old west pier, it's beautiful. And when they start to flock, the flock seems to have a kind of autonomy, a mind of its own, an existence of its own. It seems to be more than the sum of its parts. Yet of course we know they're just birds flying around. You know, there's nothing spooky or mysterious going on there. They're just birds flying around. But sometimes when they do this, they flock and sometimes they don't. And we can figure out how we might measure that. And that's part of the work we do in my lab, we try and develop mathematical ways of measuring the "flockiness" of starlings. And the thing is, if you can do that with starlings, then you can generalize, and you can say, "Okay, maybe neurons flock, but not in three dimensional space, but in some bigger high dimensional space of however many neurons." Does that explain consciousness? No, I don't think it does, but it can give us something that's a little bit more than a brute correlation. Because when we do have a conscious experience, one of the things about it is that it's unified. We have a single experience at a time, and it's composed of many different parts.
And in our brain we have many different parts whose activity seems to be more coordinated in particular ways when we're conscious. So maybe the mathematics of emergence give us a handle on how the brain is related to consciousness, which is a very different perspective from just saying, "You have neurons and then consciousness sort of emerges from their activity," like, I don't know, whatever Aladdin rubbing his lamp and what comes out? I've forgotten now.

- Big genie.

- Genie comes out of the lamp, yeah. So that's the wrong way to think of it. I think there are some much more productive ways to think about it, which are also perfectly compatible with the idea that reductionism holds. Like the birds again, the flock of birds, nothing new physically comes into the world, right? But the flock is kind of real. The flock being there changes how the birds behave. This table too is in some sense emergent from the things that make it up, but the table is also real, and nothing spooky or magical is happening just because there's a table there, rather than a whole bunch of electrons.
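Anil mentions developing mathematical measures of the "flockiness" of starlings. His lab's actual measures are more sophisticated, but one classic order parameter from the flocking literature, polarization, gives the flavor: it is near 1 when every bird flies in the same direction and near 0 when headings are random. A sketch with simulated velocity vectors (all numbers illustrative):

```python
import numpy as np

def polarization(velocities):
    """Length of the mean heading: ~1 for a coherent flock, ~0 for scattered birds.

    `velocities` is an (N, 2) array of per-bird velocity vectors.
    """
    headings = velocities / np.linalg.norm(velocities, axis=1, keepdims=True)
    return np.linalg.norm(headings.mean(axis=0))

rng = np.random.default_rng(1)
n = 500

# A coherent flock: every bird flies roughly north-east, with a little jitter.
flock = np.tile([1.0, 1.0], (n, 1)) + 0.1 * rng.standard_normal((n, 2))

# Scattered birds: headings drawn uniformly at random.
angles = rng.uniform(0.0, 2.0 * np.pi, n)
scattered = np.column_stack([np.cos(angles), np.sin(angles)])

print(polarization(flock) > 0.9)     # True: strongly ordered
print(polarization(scattered) < 0.2) # True: disordered
```

Nothing spooky enters the system, yet a single number captures whether the collective is "more than the sum of its parts," and the same idea generalizes from birds in 3D space to neurons in a high-dimensional activity space.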

- Let's talk about AI.

- Oh, okay.

- And we mentioned earlier how you're a functionalist and how you feel consciousness can be explained by the function of its physical composition.

- Well, I think that's like a gambit, right? I think that that's the strategy. I don't know if it's true.

- Somewhat better than agnostic now, you think, maybe. And so we're talking about AI, so when you were writing "Being You," I was reading it a couple of days ago, you were actually writing at the time of GPT-3. So this was four years ago. I actually didn't realize how far away that was. Before we talk about AI, could you give us a quick primer on how existing large language models and artificial intelligence are made? Because we often talk about neural networks and neuro-symbolic networks, and how they are different, if they are different, from human neural networks.

- Yeah, I mean I think there's too much to say about that, which we won't. But I think perhaps the thing to draw out of it is that, firstly, these things are amazing. You're right, I look back at that chapter now; the last chapter in the book was about AI, and it was written just when language models were breaking through into the public sphere, and they have been transformational. And to me it's amazing how quickly we get used to them. It's like, "Ah, they exist now, we're used to them." How we adapt so quickly, it's really, really strange. And there are many technical details of how they do what they do, but I think the thing to dwell on here is that all the AI systems that are populating the world today, whether they're language models, or AlphaFold, which predicts protein structures, and things like that, are based, at least in part, if not almost in whole, on artificial neural networks. And these are an abstraction. They're an abstraction of brains in general. Basically, they're little units that are wired together. I mean, in an actual computer, they're just simulations of these little units; there aren't actually little units. They're wired together, they exchange signals. They look a little bit like a brain, you know, neurons wired up together and so on. And signals flow, and you can apply different training algorithms and enormous amounts of data, and wonderful things happen, like language models. The problem, or at least the problem when it comes to thinking about whether AI built this way will ever have all the properties that human brains have, is that human brains are much more than that. These artificial neural networks are a pale abstraction of the real biological complexity of what's inside each of our skulls. Does that matter? Does it matter that there's more going on? It might not, but it really might matter that there is more going on.
I think actually, when we see some of the limits of what AI can do, they're not necessarily things that are going to be solved just by increasing the amount of computing power or the amount of data. It might be because we've actually accidentally reified a metaphor. We've confused the map with the territory. It's very tempting to think of the brain as a computer. People have been doing it for ages. And it's a powerful metaphor. It's a powerful map with which to explore some of the things that brains do. But we've always used a technological metaphor to understand the brain, and it's always been limited. And I don't think there's a reason why this one is different. It's just a metaphor that's now reaching the end of its utility. And every time we have a metaphor, if we really think that it is the thing, then we stop looking for what else might be there. And I think we need to start looking for what else might be there that marks a difference between brains and even the most powerful computers.
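To make the "abstraction" point concrete, here is a minimal sketch of the kind of artificial unit being described. The weights and sizes are made up for illustration: each unit just sums weighted inputs and applies a simple firing rule, a pale shadow of a biological neuron's metabolism and self-maintenance.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    # The "firing" rule: a unit passes its summed input along only if positive.
    return np.maximum(0.0, x)

# Units "wired together": each weight is one connection strength.
W1 = rng.normal(size=(4, 3))  # 3 input signals feeding 4 hidden units
W2 = rng.normal(size=(1, 4))  # 4 hidden units feeding 1 output unit

def forward(x):
    hidden = relu(W1 @ x)     # signals flow in; each unit sums and "fires"
    return W2 @ hidden        # output unit aggregates the hidden activity

output = forward(np.array([0.5, -1.0, 2.0]))
```

Training simply adjusts the numbers in `W1` and `W2` against data; everything else about a real neuron, its cells, chemistry, and self-repair, is abstracted away, which is exactly the gap the conversation is pointing at.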

- You mentioned Nagel at the start, and of course Nagel talks about bats, and in your book as well, you're quite happy to say that certain animals could potentially have consciousness, a degree of consciousness, even if it's not the same, as you mentioned earlier, as human consciousness. So why can we not ascribe that also to artificial intelligence? Or does artificial intelligence lack a necessary component that all of the animals you're quite happy to say are conscious do have? And what is that necessary component?

- Yeah, this is literally the billion-dollar question in some ways, isn't it? Given the amount of money that's flowing around in AI these days. I did my PhD in AI, but it seems to have been about 20 years too early to be monetarily useful. So the contrast between non-human animals, like, I don't know, a hamster or a fish, and an AI system, I think, is really, really instructive. I think we can be a lot more confident about consciousness being a property of many non-human animals. There are good questions to ask about whether all non-human animals have some kind of consciousness, or whether it does kind of gray out into nothingness somewhere. For me, I start to feel uncomfortable around insects. I don't really know what to say about insects, but we're all made of the same stuff. Does that matter? Well, we'll get to that. We're all made of the same stuff, and we all in some sense have similar problems to solve: to keep a body alive, to navigate an environment, and to integrate lots of different kinds of information together. We can never be certain about anything. I don't even know whether you are actually conscious, though my prior is fairly high that you are, but we always have to infer. Some philosophers will tell you that you don't even know if you are conscious, which is, who knows. And the further we get from the benchmark human case, the more unsafe our inferences are. And that's part of the work that needs to be done now: we have to generalize very slowly and carefully into the zone of uncertainty, and try and push that further and further out as we understand more about what is really underpinning consciousness in us and what's just contingent on the fact that we are humans.
Like, for instance, if you think consciousness is associated with something really distinctively human, like sophisticated language (other species have language too, but the ability to write poems or something like that), then you're not gonna attribute it to many other species at all. But that's a view that's rooted in our unfortunate human tendency to put humans at the center of everything, rather than being an insight into what's actually going on in the world. And I think that's also what misleads us when it comes to AI. We project consciousness into things that seem human-like, in ways which might actually not matter at all. Up until a few years ago, if anything spoke to you with the fluency of ChatGPT, you'd be very justified in saying it was conscious. If someone with a brain injury suddenly started conversing with you in depth, you could be pretty sure they'd regained consciousness. But now we have an example of a system where that inference might not be safe, because there are just other ways of doing it, it turns out. They're not perfect, as anyone who's chatted with them for long enough knows, but they're pretty, pretty good. And I think we project these psychological biases that we all have, where we see the world through the lens of being human, and so we overestimate the plausibility of machine consciousness. Nobody thinks that AlphaFold, this AI system that predicts protein structures, is conscious, because it doesn't speak to us, but it's basically doing the same kind of thing under the hood. And that's telling, I think. It tells us that our tendency to attribute consciousness to AI is largely a property of what we think rather than what's there. Then there are deeper questions about whether consciousness could ever be a property of computation.
- And here a lot of people disagree with each other, and with me, about this, but I think that brains do a lot more than run algorithms, and the things that they do that are not algorithmic actually do matter for consciousness. So I think AI systems might persuasively appear to be conscious, and that's problematic in its own ways, but I think the chances of AI actually being conscious with the current systems that we have, even if you just let them run and see where we are at, like, ChatGPT-10 or something like that, I don't think we'll get any closer at all. We might get stuff that's much more persuasive than it is now, but I don't think we'll get stuff that actually moves the needle on being conscious.

- So what would it take for you to ascribe consciousness to an AI, or do you think even the phrase "artificial consciousness" is an impossibility? Do you think it has to be embodied? Do you think it has to be organic and natural?

- I mean, I don't know for sure. I'm always wary of overconfidence in both directions here. I think that's part of the problem. I think people who are very, very confident that conscious AI is coming, or that it's already here, are unjustified in that level of confidence. I also think people who say it's for sure impossible are overconfident in completely ruling it out. My bias is that it's very unlikely, and the reason I think it's unlikely is because the kind of explanation of consciousness that's starting to make sense to me is that it's a property of living systems. It's closely tied to the fact that we are creatures with metabolism, with physiology, with all this other stuff that's not computational going on, that actually gives us insight into why consciousness is the way it is, why it happens at all. And one thing that sort of leaps out when you look at the brain for what it is, and not through the lens of an algorithm, is that in the brain there's no sharp distinction between the mindware, what our brain or our mind does, and the wetware, the physiology of our brain. There's no sharp dividing line. It's kind of vertically integrated all the way from brain regions down to an individual cell. In a computer, there's a sharp division between software and hardware, which is by design; it's what makes computers useful. Real brains aren't like that. And so there's a kind of through line, at least to me, between how we understand what the brain is doing at a larger scale, which might determine whether I'm conscious or how my visual experience is going, and what it's doing at a much smaller scale, in the business of actually staying alive, using metabolism to regenerate itself. There's a through line, at least I think there's a through line. So I feel that artificial consciousness, real artificial consciousness, if that's not an oxymoron, might require real artificial life.

- So you mentioned in your book the beast machine theory, which is kinda what you're saying there really: that a consciousness needs to basically look after itself, want itself to live and to survive. And so if we were to create an artificial intelligence which did have the desire or the will to survive, is that when things become dangerous? You mentioned in the book, for example, I can't remember the name of the philosopher, is it Metzinger, who asked for a 30-year freeze on consciousness research, because programming an artificial intelligence with the desire to keep itself alive, which is what you're saying there, which might be necessary for consciousness, suddenly becomes quite a dangerous prospect.

- Well, not really any of those things. So, Metzinger, firstly, fortunately he didn't call for a freeze on consciousness research; he called for a moratorium on research into developing machine consciousness. Now honestly, I kind of agree with him, in that I think that no one should really be actively trying to create a conscious AI. It's like, why would you do that, apart from the desire to play God? It's a bit of a strange goal to have. It's ethically very, very dubious indeed. Fortunately, I think it's very unlikely to happen. There are many other theories that do suggest that consciousness is just a matter of computation. And if they are right, and they may be right, then it's much more likely than I think it is, which would, I think, be a bad thing. But if I'm on the right track, who knows, that something to do with life is important, that is not the same as just programming a robot with a sort of motivation to charge its batteries every now and again, because described that way, you are still kind of importing all the assumptions and the legacy of this computational view, that it's programmed to do something. We are not programmed to do that; we're not programmed to have metabolism, we have metabolism. We often talk too loosely about this with our genes as well. Our genes are programmed, or they program us to do this or that. No, they're just part of the deeply-complex, integrated physiological system that we are. So I don't think it would be enough to have, let's say, a fancy humanoid robot whose digital brain you then program to go and make sure it plugs itself in or repairs itself if it gets damaged. My suspicion is that the difference between conscious systems and non-conscious systems goes deeper. It goes down to the level of the fact that we're made of cells that regenerate their own components, that transfer energy into matter and back again. I think that's critical.
Of course, I have to say I don't know that that's the case, but I think it gives a good explanatory grip on why consciousness is the way it is. More worrying for me than AI in this sense is that people are now building these so-called brain organoids. These are collections of brain cells grown in dishes in the lab, again for very good reasons, for medical research and so on. And they don't do anything particularly interesting, so people aren't yet very worried about the fact they might be conscious, at least not in the public domain, but they're made outta the same stuff. So a whole kind of area of uncertainty goes away. I mean, who knows whether I'm on the right track that being alive matters in some way. But once you start creating living systems, then I might be right, I might be wrong, it doesn't matter, 'cause that's already being done. You've already put that condition in, and as these systems get more complex, they may start to have the capacity to experience in a way that we may never even be tempted to notice, because they're not gonna do anything.

- One more question before we pass to the audience for questions. As you were talking now, I was thinking, and this might be slightly outside of your area of research, but there's a lot of research at the moment about how consciousness might be distributed across different parts of our body, particularly our gut microbiome. And that's one question: whether we can look for clues outside of the brain, if we're talking about it as a kind of embodied experience?

- Yeah, this is such an interesting question. I mean, I think another unfortunate habit of neuroscientists, perhaps unsurprisingly given the name, is to sort of focus on the brain to the neglect of the rest of the body. I think this has been a problem, because brains didn't evolve in isolation from the body. And the body is in continual dialogue with the brain. And indeed, neurons aren't just in the brain; we have a bunch of neurons in our gut, we've got some around our heart. There are probably neurons elsewhere too. The neurons in our gut generate most of our serotonin, this chemical that's often influencing brain function. A lot of that's coming from the gut. The neurons in the gut also generate brain rhythms. They oscillate together, and they can be synchronized with rhythms in the main brain. So I have no doubt, well, I think it's very, very likely and there's good evidence. I should never say I have no doubt; I've always got some amount of doubt. I think there's very good evidence and reason to believe that what brains do is very, very intimately related to what's going on in the body, and that involves neurons outside of the brain as well. It's a very different question to say, "Is your gut conscious?" I think your gut can affect consciousness. Is it itself conscious? I don't know; I think it's unlikely. The reason I think it's unlikely is that there's no good reason for it to have to be conscious. Consciousness for us seems to be associated with guiding behavior and integrating lots of different information, in ways which the gut by itself doesn't have to do. And even large parts of your brain, my brain, aren't particularly involved in consciousness. It's not something that is generally useful for everything that's biological. So I don't think your gut is conscious.

- Great, questions from the audience really. I think there is a roaming mic. If anyone upstairs wants to speak, I'll just say it louder for you 'cause there's no mic up there. So if you guys want to speak, please do. Yes.

- [Audience Member 2] You said earlier, and I agree with this, that there's no sharp boundary between the brain's hardware and the mind's software. And so as you say, you need a whole body with a brain in it. Do you think that if computers were big enough to be able to simulate all of that, do you think that it might then be possible to have artificial intelligence?

- Thank you, that's a wonderful question. What if we had enough computational oomph to simulate all the intricate detail, right down to whatever level? This is a beautiful question. People talk about this in terms of whole-brain emulation, and some people actually have the hope that if they do this, and they emulate their own whole brain, they can somehow upload themselves into it and live forever in some sort of weird silicon rapture. The first question, which is another way of phrasing your question, is whether there would be anything it would be like to be a whole-brain emulation, right? I don't think so. And I think the key reason why is that you hint at a very important distinction, which is that a simulation of something is in general not the same as a recreation of that thing. I mean, one of the reasons computers are so useful is they can simulate basically anything; that's one of their wonderful properties. In most cases, we are completely unconfused about the fact that the simulation is not recreating the thing being simulated. If we simulate a weather system, we do not expect it to actually get windy or start raining inside the computer. I mean, some people say, "Well, maybe for a simulated person within it, it would be wet," but that's building in the assumption that a simulation of a brain is enough.

- [Audience Member 2] But you always drop detail when you do it.

- Well, I mean, that's the luxury of a thought experiment; let's just assume that you don't drop any detail. This is why you need a computer the size of the universe or something like that. But even if you do that, even if you had a computer the size of the universe simulating a rainstorm, it's still not wet, it's just a very, very, very, very detailed simulation. The only times where a simulation of something actually generates the thing is if the thing is itself a computation. Like if I simulate an algorithm, like something that maps some numbers to another number, then I'm recreating that algorithm, because its nature is computation. So basically it's a gamble: if computation is sufficient for consciousness, then whole-brain emulation would be enough and you're okay. But the weird thing is, if computation is sufficient for consciousness, then the whole motivation to do this kind of whole-brain emulation is really undermined, because the point of doing it is this sort of idea that the detail matters, and if the detail matters, then it's very unlikely that computation will be enough, because computation is motivated by the idea that the detail doesn't matter.
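The point about simulating an algorithm can be made concrete with a toy sketch (the function names here are illustrative, not from the conversation): a "simulation" of a computation, however it is implemented, produces exactly the same input-output mapping, so it just is that computation, unlike a simulated rainstorm.

```python
def double(n):
    # The "real" algorithm: a mapping from one number to another.
    return 2 * n

def simulate_double(n):
    # A "simulation" of that algorithm, done a different way: repeated addition.
    total = 0
    for _ in range(2):
        total += n
    return total

# Because the thing being simulated is itself a computation, the simulation
# doesn't merely resemble it: it realizes the very same mapping.
same = all(double(n) == simulate_double(n) for n in range(-50, 50))
```

Nothing analogous holds for a simulated weather system: the mapping is reproduced, but the wetness is not, which is exactly the wager behind whole-brain emulation.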

- Yeah?

- I've always had a problem with the idea that there's something that it's like to be me, because some of the most significant conscious activities that I engage in almost necessarily involve a loss of a sense of self. I mean, if you think of sport, I used to do a lot of sport. In the moment, you are in an important match, doing stuff. You are engaged in conscious, intelligent activity, but there isn't really anything that it's like to be; you're just doing it. When you're writing, you could either have a kind of pre-reflexive rehearsal of what you're gonna write, or just write or speak. You externalize thought, but you haven't necessarily had an internal pre-rehearsal of the thought that you then express. Like, I'm now... Sorry, I kinda slightly choked up on my voice. I'm now trying to speak and I'm conscious of my voice, but I'm not conscious of what I'm gonna say next. It just comes from nowhere, and it's conscious activity, and the best questions I've ever asked as an interviewer, or somebody in the audience has asked, are like that.

- [Anil] Like this one.

- Well, it might be a terrible one, but it's just coming out. It's not like there's something that it's like to be me all the time. I only get the sense that there's something that it's like to be me when I stop and then reflect, and that's a kind of confabulation at that point.

- So I don't think I explained myself, or Thomas Nagel, properly then. It's not about self-reflective awareness. The idea of consciousness being that there is something it is like to be you doesn't mean that you are thinking, "Oh, it's like this to be me. I am playing sport, or I am thinking about what to say." When you are playing sport, or thinking, or talking, or whatever you are doing, there is some experience going on. You do not have to... I think that's part of the force of this definition: it doesn't assume an ego, it doesn't assume a self, it just assumes that there is some kind of conscious experience happening. There's a lot of discussion these days about minimal phenomenal experience: what is the simplest experience that's possible for a human being to have? And one of the interesting angles on this is states where there's complete dissolution of the ego. This might be in very, very deep states of meditation. It might be after some kinds of psychedelics. It might be in some other situations as well. But there seem to be some states where the ego is not just partially gone, as it might be when you're playing sport or playing an instrument, let's say, but wholly gone. There is no experience of self whatsoever. The point is that consciousness is still happening. You have no reflexivity on it, but it's still happening. Many non-human animals may have this kind of thing. They may have conscious experiences, but without a sense of self there. The whole point is that all that you are asking in this definition is that experiencing is going on, not that there has to be the experience of being the subject of experience; that's something extra.

- Can I just come back very, very quickly? Doesn't that imply there's an experiencer? How can you have an experience without an experiencer then?

- There has to be some locus to the experience, but you're confusing two different things here. You're confusing the fact that there has to be something where the experience is happening, that there's experiencing going on, with the idea that there has to be an experiencer, an ego. For a conscious experience, there just has to be conscious experience. I could lose my sense of ego completely and still be conscious. If these brain organoids get to a sufficient level of complexity, they may have a conscious experience without being at all aware that they are having it, because there is no ego in an organoid. It's odd because we humans, we walk around with our egos all the time. A newborn baby might be a little bit like this. And also, ego comes in degrees; it doesn't have to be the full reflective, like, "Oh yeah, I'm thinking this." The fact that we have a first-person perspective and distinguish ourselves from the other, that's all part of the ego. All of this can go away, and consciousness is still possible, certainly conceptually.

- Does it make sense to talk about consciousness without the ego?

- Yeah.

- Because in your book, for example, you distinguish between intelligence and consciousness, and you said the two are different.

- Yeah.

- So does it make sense to talk about consciousness without the ego or would that not just be intelligence?

- No, you can't subtract an apple from a pear and get a meringue, no. I think they're very different things. So, I mean, the confusion of intelligence and consciousness is, again, one reason I think people falsely think that AI is about to become conscious because we think we're intelligent and we know we're conscious, so we put them together. AI is, I think, a pretty good example of how you can get some forms of intelligence without having to attribute consciousness at all. The experience of the presence of an ego is, I think, part of conscious experience, which is so familiar to most of us for so much of the time that it's very hard to imagine consciousness without ego. But I think it's entirely possible. It may not happen very often, but I think it does happen, I think it can happen and I certainly don't think there's any conceptual problem with thinking about something being conscious without there being an ego. Whether it happens in practice is a different question. Even then, I think the answer is yes, that it does.

- Great.

- Question in the back.

- Yeah, question at the back before I miss.

- Yep.

- Yeah, yeah, the leather jacket, yeah.

- Why are you skeptical about asserting the dependency of consciousness on life? Because I work with computers, and even if we reduce the brain down to how computers work, in an algorithmic mechanism, then in the human body there's a constant influx of data all the time. Can we replicate that with machines? Like, for example, nowadays in AI research there's machine vision, machine speech, but there's no proper perception. How is that to be replicated, and how can we design a system that is so lifelike that it's gonna have that constant influx of data, and is that gonna be AGI? Why are you skeptical on that, and not more assertive that life creates consciousness?

- Well, I mean, I think life is... So I don't mean to be skeptical of life being necessary for consciousness; that's the view I'm kind of behind, in a way, but I just don't want to misrepresent it as being something we can be 100% sure of. I think this intimate connection between life and consciousness is a very strong hypothesis. Now, when it comes to these other developments in AI, yes, we have machine vision, and it's quite good. It's different from human vision. There's no sense in which machine vision has to be conscious. Machine vision is just good at recognizing images and doing visually-guided behavior. We can build in other senses too. Now that robotics is beginning to catch up to AI, we'll be able to build systems where there are things like proprioception, which, by the way, is the sense that we all have of body position and how it's moving. We just know where our body is in space. That's proprioception. Of course you can give a robot that sense too. You can even give it the sense of its own interior, what in human beings and animals we call interoception, the sense of the body from within. I think the point is you can do all of this.

- But do we know what "all of this" is? Because we don't have a blueprint of brains. So what is the fear of machines becoming human-like in the near future? I mean, I agree with you totally.

- Yeah, two different questions. I mean, one is like, maybe there are other ways of building things that are sufficiently human-like that most people don't care, right? That's entirely possible. Already, a lot of people deal with language models in the sense that they feel they're dealing with something sufficiently human-like. People have relationships with chatbots and things like that. So the fact that we can't replicate a human being perfectly in a sense doesn't matter from our perspective, because in many cases we don't even care. But it does matter if we want to know what the system can do and whether, for instance, it has ethically important properties like being conscious. Then it really, really does matter. Not how much we're taken in by it, but what's actually going on with it. So that's why I think these things are important, and just plugging extra sensors onto a robot, I don't think, does the trick. I think it may make it look more persuasive, but I don't think it does the trick.

- Should we try and come to someone else? 'Cause there's loads of hands in the air and I'm conscious of time. Yeah, you had your hand up in the air, go ahead, yeah.

- Forgive me if this is too much like the previous gentleman's one, but do you think if consciousness is so clinical from a physicalist perspective, do you think that undermines free will in some sense? In the sense that Strawson gets at that we're all going towards a certain point? If you can map the human brain in such a way.

- There's always gotta be a question about free will, hasn't there? Always. Did nobody get the "no free will" instructions on the way in? Okay, no, it's a great question. And I have to say, I love thinking and talking about free will. When I was writing the book, I thought, "I can't write a book on consciousness without a chapter on free will." But it was the one I struggled with by far the most, because it just combines so many confusing things together. There's the confusion of whether the universe is deterministic or not, whether there is some fundamental randomness. There's the confusion of whether our conscious experiences of causing things to happen actually cause things to happen. And there are all the ethical and moral confusions about, well, if there's no free will, then how is anybody responsible for their actions? And you put all these things together, and it's no wonder that people get into a horrible mess about it. But I think for me there's a way to think about free will which makes sense, which means that we have the degree of free will that we need. Not necessarily the degree of free will that you might aspire to. There's a sense of free will in which consciousness sort of swoops into the brain and causes things to happen that wouldn't otherwise happen. Takes advantage maybe of some intrinsic randomness in the universe, loads the dice. I don't think we have that kind of free will. I don't think we need or want that kind of free will. That's a spooky kind of free will; it doesn't explain anything. We human beings, we're very complex systems. When we do something, there are a lot of causes which stretch back in time. Some of them come more from within the body than outside the body. Now, if somebody hits you, you might respond very quickly and it won't feel voluntary. But if you decide to get up and go and have a drink after this, it feels voluntary. The causes are still all physical causes.
But when you do something voluntary, the causes lie a little bit more within your body. I think what we experience as free will is the brain's perception of the causes of the actions that it makes. And when the brain infers that the causes of an action come more from the inside, they have this sort of "freedom from immediacy," to use the words of Mike Shadlen, a neuroscientist. When the brain infers that our actions have this freedom from immediacy, we experience them as freely willed. Is this useful? Do we really have free will? Well, yes: it means that the actions we do are more constrained by what we are. And I think that's the kind of free will that we want. It's a freedom from being forced to do anything else, as much as it's a freedom to do what we want. Because what do we want? Well, that depends on what we are. So if we do what we are, we have all the free will that we need. And experiencing it has a utility too, because maybe the next time we might do something different. I think the experience of free will marks out those actions that come largely from within, so that as an organism we might learn from them in a particular way and do something different the next time. So that is all the free will I think that any of us actually need to retain our dignity as human beings, which is fine, if you want to.

- So do you think the causal processes in the brain, they don't undermine free will?

- No, so philosophers sometimes talk about this as compatibilism, that there's a version of free will which is compatible with the universe being deterministic and causal processes in the brain and the body underlying everything. If you give that up, then what you're looking for is either randomness, which I don't think is the kind of free will that we want, just like behaving randomly, or something that's a bit spooky, parachuting in and loading the dice in a more strategic way. So I think free will has to be compatible with the brain being this complex nexus and network of causes, and I think that's fine.

- [Speaker] I think we've got time for two more questions.

- At the front, Alex O'Connor, the CosmicSkeptic.

- Oh thanks, nice of you to shout out. Thank you. Most people think that consciousness requires complexity, but I've heard you say two things: the first about how we often project ideas of consciousness, based on our human experience of consciousness, onto other animals, when that might not be what it's like for a bat; but also the way that we were just talking about consciousness without the ego. It seems like a really weird thing to imagine, but people who've done a lot of psychedelics, for example, have experienced that, even if they can't quite put it into words. So I can imagine someone, or something, which is conscious but has no ego. I can also imagine something which can't see but is still conscious, can't hear but is still conscious. I can imagine an organism that has no such thing as memory whatsoever, but is still conscious. And when I start taking those things away and try to think about what I'm actually left with — consciousness itself, not the things consciousness does in the human organism, like memory, self-perception, persistence through time, but just that what-it's-like to be a thing — it feels like I'm left with something that's not complex at all, but quite rudimentary and simplistic. And of course that opens the door to its position as a fundamental aspect of the universe, because it doesn't seem so insane anymore to say that it's something akin to mass or charge. It's incredibly simple. But even outside of that, do you think that consciousness — not human consciousness or the things it does, but consciousness itself — necessarily requires complexity?

- Thanks, Alex, I like this a lot, and I can kind of go on the bus with you up to a particular stop, but then I get off at a particular stop too. We'll see where that is. I think it's a really important observation that phenomenologically, which is to say in the experience of being conscious, consciousness doesn't have to be particularly complex. Psychedelics can strip away the ego for human beings. There's a beautiful thought experiment by the Persian philosopher Avicenna, the Floating Man, which imagines a man floating in the air while every sensory modality is progressively stripped away. I think the original only talks about the classical senses — sight, sound, smell, and so on — but of course we have all the internal ones too, and you can play the same game with them. And always the question is: is there still something it is like to be that system as you strip away all these things, and what are you left with if there is? A sort of pure essence of consciousness. I think this is conceptually possible for sure; it may happen, and we can certainly approximate it. Does that mean complexity isn't necessary for consciousness? Well, here's where I get off the bus: the fact that the experience might be maximally simple doesn't mean that what's underlying it can also be that simple, right? It still might have to have some level of complexity under the hood, if you like, to support even the simplest conscious experience. And the fact that experience can be maximally simple, I don't think, really opens a door to panpsychism, any more than having an out-of-body experience opens a door to the idea that your soul can leave your head and go and sit on the balcony. I think it's a mistake to draw conclusions from the content, or lack of content, of an experience to change your beliefs about what metaphysical position might be right. You have to justify that in other ways.

- Thank you, everybody. That's all we've got time for, I think. Give Anil a round of applause. Anil, thank you.

- Thank you, Jonny.

- Thank you.
