This post contains an edited transcription of an awesome conversation that I recently had with David Pearce and Kristian Rönn at the Swedish QRI Strategic Summit in July 2024. Like all of the best conversations, it started with a walk outside, and the topics we covered fell broadly under the umbrella of philosophy of science, philosophy of language, and the metaphysics of consciousness. I’m excited to share this as a continuation of my last post experimenting with the use of dialogue as a mode of expressing philosophy. But I’m equally excited about the quality and depth of this post’s content on a topic so dear to my heart! Thank you also to David for helping me with editing the text 😊.
David is one of the founding figures of transhumanism, a philosophical movement advocating for technology-assisted enhancement of longevity, intelligence, and well-being. He is also a pioneer of suffering abolitionism and has devoted much of his life to advocating for the interests of sentient beings. Kristian is the CEO and co-founder of Normative.io, a carbon accounting platform working toward a future with net-zero greenhouse gas emissions. He is also the author of The Darwinian Trap and possesses a deep philosophical understanding of ethical issues resulting from natural evolutionary processes.
(Talking about the hard problem of consciousness)
David: The intrinsic nature argument essentially says that physics is silent on the intrinsic nature of the physical, but gives an exhaustive account of the structural-relational properties of matter and energy. The essence of the physical – what actually “breathes fire into the equations”, to use Stephen Hawking’s famous metaphor – is what makes the world exist for us to describe. As a hardcore materialist, Hawking assumes that this mysterious “fire” in the equations of QFT must be non-experiential. Perhaps Hawking is right. But if we drop that metaphysical assumption and assume that we do know one small part of this “fire” – namely, the part that is instantiated by your mind-brain and the phenomenal world-simulation it runs – then we have a very different conjecture about the intrinsic nature of the physical. This idea [i.e. that physics is silent on the essence of the physical that the mathematical formalism describes] goes back to philosopher Bertrand Russell, and has recently seen a revival (cf. Galen Strawson, Phil Goff). If it’s true, then the hard problem is really just an artefact of bad metaphysics.
You may or may not take the argument seriously, but if you hear the words ‘the intrinsic nature argument’ or ‘non-materialist physicalism’, that’s what I’m alluding to. Non-materialist physicalism has obvious affinities with so-called “constitutive panpsychism”. But “panpsychism” can evoke property-dualism, whereas non-materialist physicalism is a conjecture about the intrinsic nature of the physical. Only the physical is real. The intrinsic nature of the world’s fundamental quantum fields doesn’t differ inside and outside your head. What makes humans and nonhuman animals special is adaptive phenomenal binding, not consciousness per se.
Arataki: Can I add something that might help clarify this discussion? Broadly, there are two modes of scientific explanation. First, to ‘ground’ the existence of some entity e within a model of the world, you can say that e emerges from the mechanistic interactions between discrete parts of a system. So, if you have an atomistic model of physics, you could argue that certain complex causal chains lead to higher-order effects, such that e exists at this higher level because of how parts of a system are arranged. I think the vast majority of concepts in our public language fit this kind of explanation. For example, take that garbage bin over there. Does it really exist? No, it’s just a useful abstraction from more basic features of physical reality present in that region of spacetime, such as the arrangement of molecules or behaviour of fields. In other words, we can explain it away.
Kristian: Similarly, does wetness really exist? No, it’s just underlying water molecules interacting with our sensory organs.
David: Well, wetness does exist, but it’s mind-dependent. Minds are physical, and they are spatiotemporally located, so wetness is physically real, just like phenomenal colour is physically real.
Arataki: Yes, if we define wetness as a phenomenal property, then it’s real.
Kristian: Yes, yes.
Arataki: So, that’s one mode of scientific explanation. But the second mode – which is much less common – is when you take two entities that were previously thought to be distinct and prove that they are fundamentally the same, a process called “unification” or “subsumption.” And this is a much stronger method of grounding, because when you say e emerges from something else that is more fundamental, you’re acknowledging that e only exists as a useful abstraction in certain contexts. For example, if I want to dispose of my rubbish, it is very useful to model the world as containing ‘garbage bin’ entities, but they have no privileged existence outside of my model (or those internalised by other humans). Now consider entities like electromagnetism, where electric and magnetic fields were once thought to be separate forces…
Kristian: And Maxwell unified them.
Arataki: Exactly. By collapsing these two entities into one, Maxwell provided us with a more elegant explanation for the underlying phenomena. Another example is Newton’s universal law of gravitation unifying terrestrial and celestial motion. Before that, people used different models to explain the movement of regular objects and the stars, but by introducing gravity Newton was able to explain both phenomena using the same basic principles. This is the type of unification David is pointing to when he talks about physicalism – he’s suggesting we can unify phenomenal consciousness with some fundamental physical entity rather than ‘explaining it away’ using whatever model of reality we happen to have internalised.
David: Essentially, one is transposing the entire mathematical apparatus of existing physics as it is commonly understood onto what is effectively an idealist ontology – though it’s probably best not to use the term ‘idealist,’ as “idealism” carries too much metaphysical baggage. In that sense, non-materialist physicalism is a formally conservative approach. And it’s still just a conjecture. If this conjecture is correct, the hard problem of consciousness doesn’t arise. However, we’re still left with the phenomenal binding problem. If textbook neuroscience is correct and you are essentially a pack of decohered, membrane-bound classical neurons, then you should be a “micro-experiential zombie.” What I meant earlier by saying that brains, as commonly understood, are mind-dependent – that they’re a perceptual artefact – is tied to this idea of quantum mind. Classicality can’t just be assumed, it has to be derived [via the decoherence program].
A lot of physicists and neuroscientists have, at least briefly, considered the possibility that two types of holism that seem classically impossible might be related: the holism of quantum states and the holism of our phenomenally-bound conscious minds and their perceptual world-simulations – the binding problem. But theorists have already done the maths. For instance, they’ve calculated the effective lifetime of superpositions of neuronal processes – neuronal motion detectors, colour detectors, edge detectors – when you see a specific object, like a car. If the effective lifetime of these neuronal superpositions were in the range of milliseconds, then they would be an obvious candidate for a perfect structural match. But in reality, the theoretical lifetime of individual neuronal superpositions must be on the order of femtoseconds or less, which is, intuitively, the reductio ad absurdum of quantum mind.1
I see what is naively considered the reductio ad absurdum of quantum mind as an experimentally testable prediction. For example, I believe that in the interference signature yielded by molecular matter-wave interferometry we’ll detect a perfect structural match. There are also indirect ways to test a “Schrodinger’s neurons” conjecture. For example, replace your V4 cortical neurons (destruction causes achromatopsia) with what would naively be called their silicon functional surrogates. On a classical, coarse-grained functionalist story, replacement by silicon surrogates allows perceptual objects to continue to seem colourful as before. If so, then my account is falsified – end of story! I predict instead total achromatopsia.
I’m much more confident that phenomenal binding is non-classical than I am about any specific quantum mind hypothesis. But fundamentally, there’s no agreed-upon solution to the binding problem. If neuroscience is correct and you’re just a collection of effectively decohered classical neurons, then what you’re experiencing right now shouldn’t even be possible – even if some form of consciousness fundamentalism is true. Why aren’t you just an aggregate of “mind dust”?
Kristian: I still don’t feel like I fully understand. Let’s say we do prove – or at least strongly suggest – that the brain is using quantum phenomena. Why does that lead us to assume that consciousness resides there?
David: Well, the assumption is – against the backdrop of non-materialist physicalism – that consciousness is the intrinsic nature of the world’s fundamental quantum fields. And what we’re trying to explain is how binding is possible. The raw material here is consciousness – the quantum fields themselves. What makes animal minds special, over the past half-billion years or so, is the way consciousness gets bound into these egocentric world-simulations, like the one each of us is experiencing right now.
Arataki: I’d like to quickly add that you don’t necessarily have to take a position on which particular physical theory of consciousness is correct or best accounts for all of these problems – at least, I’m agnostic on this. It’s more important to simply recognise that consciousness has an inherent unity, and explaining this unity is intractable within the classical model (where you only have individual particles that are spatiotemporally distinct, with no absolute reference frame). So, instead of trying to find a common thread that links these ontologically discrete units, you can reframe the question and ask “How do we draw non-trivial boundaries around consciousness to demarcate the phenomenal space of different sentient beings?” Topological segmentation is one solution to this problem. I know Andrés has recently updated his views somewhat, and he now leans more toward quantum field-based explanations. But I still think electromagnetic fields are a good example of a non-materialist physical ontology that explicitly accounts for these boundaries.
Kristian: Yes, I guess the question is, if we have these sorts of ‘bindings’, how do we know if it’s quantum? How do we know if it’s electromagnetic? How do we know if it’s gravitational? Or how do we know it’s not just some kind of chemical reaction?
Arataki: But how could a chemical-based explanation satisfy the constraints of the phenomenal binding problem? Clusters of chemicals don’t have natural boundaries around which we can carve out ontologically unitary entities such as conscious experiences. There’s an intuitive model that says certain endogenous hormones are irreducibly linked to certain phenomenal properties – like oxytocin with the feeling of closeness, or serotonin with the feeling of love.
Kristian: Yeah, why wouldn’t that work?
Arataki: Well, as David mentioned earlier, how do you draw boundaries around the brain as a unit or its modules? You could point to an area and say, “This is where the chemicals are, this is where consciousness is.” But claiming that these boundaries are objective – meaning not conditional upon our preconceptions – seems theoretically impossible from my perspective. Like, I think you actually just need a field-based ontology to non-arbitrarily draw these boundaries, because otherwise you’d have to say, “Okay, this is where the brain ends and the spinal cord begins”, and so on. It strikes me that this is an example of wishful thinking: projecting our model of reality onto reality itself. You know Scott Alexander’s ‘The Categories Were Made For Man, Not Man For The Categories’? Humans evolved to model reality by pointing at and drawing boundaries between phenomena for practical, computational reasons. But when we do this, we have to ask, “To what extent do these categories exist independently of how we naively perceive them?” I want to say that consciousness does exist independently. And if you take this assumption seriously, it really narrows the range of possible explanations for core properties of consciousness, like phenomenal binding.
Kristian: But why does our explanation have to be unambiguous? Is it related to us feeling like we are a coherent, unambiguous entity when we observe things?
Arataki: That connects to what we were discussing earlier, about Brian Tomasik’s writings. If you’re an eliminativist about consciousness – which Brian explicitly is – then it doesn’t really matter how ambiguous your explanation is: you treat consciousness like you treat all other concepts (e.g., ‘garbage bins’). Which is to say “there’s nothing fundamentally there, but this is perhaps a useful abstraction in certain contexts.” Of course, eliminativists still work hard to reduce suffering, but from my perspective, and I think David’s as well, this attitude is difficult to reconcile with their stated philosophical beliefs.
Kristian: Yeah, I agree, something feels off about that.
David: There’s an implicit perceptual direct realism in this eliminativist view. Most people don’t conceive of reality through the lens of having “phenomenal world-simulations”; they just suppose they see their local extracranial environment directly.
Arataki: Yeah, and that’s also because we’ve evolved as social animals to create shared semantic reference frames so that we perceive ourselves as inhabiting the same world. On a semantic level, I can point to a tire and say, “Hey, look at that tire!” and you can say, “Yes, I see that tire.” But phenomenologically, we’re experiencing fundamentally different percepts. Not only because of our different physical frames – you’re standing there, I’m standing here, so we each have different perspectives – but also conceptually: we grew up in different cultural contexts (Sweden, United Kingdom, New Zealand) playing different language games. On top of that, while we are phylogenetically related, our nervous systems have evolved different parameters that determine how we map sensory input onto phenomenal experiences. For example, I see a different colour out of my right eye compared to my left.2 So, while it might seem like we’re inhabiting the same reality at the level of semantic description, things are very different at the level of describing the structure of our experience.
Kristian: But digital computers are made out of fields as well, so why aren’t they considered conscious?
David: Indeed, but digital computers can function only because of decoherence – essentially, the suppression of distinctively quantum effects. A superposition of two bits (e.g., a 1 and 0) would be an error, not a functional, adaptive phenomenal state. How is classical computing possible in a fundamentally quantum world? The answer is decoherence. In a fundamentally quantum world, decoherence makes digital computing physically feasible AND simultaneously prevents classical computers supporting minds – phenomenally-bound subjects of experience. The entire empirical realm is computationally inaccessible to digital zombies…
Kristian: Right, right, the answer is decoherence – that’s correct.
Arataki: This isn’t something I can comment on extensively, but in A Paradigm for AI Consciousness which I mentioned earlier, Michael Johnson offers a fascinating take on the question of “what kinds of experiences might digital computers have?”, building upon Stephen Wolfram’s conception of physical objects having a branchial space. My understanding is that he’s not concerned about conscious computers in the ordinary sense where we have to worry about their capacity to suffer (like sentient beings). But he does make an interesting conjecture about the kinds of physical structures – made of awareness – that might result from digital computation.
Kristian: I find it really tricky because when you’re building a philosophical system about consciousness, you need to base it on some fundamental axioms – and people can disagree on those axioms. And the potential ethical implications of those disagreements are enormous. I worry that we could end up simulating a lot of consciousness that isn’t necessarily quantum in nature, but might still suffer in some way.
Arataki: My working assumption is that there exists a ‘meso-level’ where we can analyse the phenomenology of conscious systems (e.g., states of suffering), such as the entirety of my experience in this moment. Within my experience, there are soft boundaries between different varieties of qualia that represent fitness-relevant information, whether about my external environment or internal state. But the boundaries between my experience and your experience are hard – I can’t directly access the qualities of your conscious experience, only through abstract inference (at least, until we build something like a reversible thalamic bridge). But because of Tomasik’s philosophical axioms, he would disagree that this meso-level exists in a privileged sense, any more than, say, a higher-level analysis of the networks we’re embedded in, or a lower-level analysis of the molecular patterns responsible for our physiological functioning. For him, states of consciousness like pleasure and suffering are essentially leaky abstractions arising from conglomerations of physical processes, each with its own utility function that can be satisfied.
Kristian: Yup, so he would probably say that when you train an AI model using a cost function, that cost function feels a little bit painful.
Arataki: Exactly. The key difference here is that Tomasik’s ontological primitives are computational entities, like processes, whereas the only entities I’m confident have inherent existence are experienced qualities (i.e., ‘qualia’), which have boundaries demarcating their phenomenal space. This is a core assumption in my own work, like in my doctoral thesis, I say:
“An entity is sentient if (and only if) it has, or can have, phenomenally unified conscious states.”
For me, this is the fundamental axiom – it’s a hill I’ll stand on until my final argument. Like, I can barely conceive anymore of a coherent worldview in which this axiom isn’t respected.
Kristian: Right, right, right. That was an excellent summary – I feel like I understand your perspectives a lot better now. But, yeah, I still feel unsatisfied because I can’t say that I’m fully convinced by either position. To me, both positions seem fairly reasonable, and that’s super frustrating given the ethical implications. So, I feel like I need to ‘deep dive’ more into the literature and hopefully shift my probability distribution a bit more to one side, because otherwise I don’t know how to act in the future.
David: The nice thing is though, if someone claims that phenomenal binding is non-classical, this isn’t just a philosophical opinion – it can be experimentally refuted.
Kristian: Yep.
David: Now, who’s going to think it’s worth refuting? Only someone who gives at least some modest credence to the conjecture. But if you talk to the average scientist and ask, “Are we quantum minds simulating classical worlds?” you’ll likely get an incredulous stare. And I’m not here to evangelise – if people are interested, I’m happy to chat about it, but you end up spending a lot of ‘weirdness points’ in the process. Indeed, even at our QRI Summit, while it’s been framed diplomatically as solving “the hard problem of consciousness, QRI, and the abolitionist project,” it’s not entirely clear to me how tightly these causes are interconnected. As I mentioned, a wide range of theories of consciousness are consistent with phasing out the biology of suffering.
Arataki: At the very least, I think that physicalist theories of consciousness should be prioritised more because they’re so neglected in mainstream discussions about suffering reduction, and their implications are so radically different from computational theories. Something I always bring up in these discussions is that you can always explain consciousness using whatever model of reality you happen to have internalised – whether it’s computational, spiritual, or whatever. Or you can have faith that something real lies beyond your model and just experiment as much as possible until you find a signal, developing new and better models from that signal. The first theory that sufficiently predicts how to build a causally efficacious technology wins the debate about what kind of thing consciousness is; it’s hard to deny the laws of physics when you’re staring down the barrel of a gun. Say you assume an EM-field theory of consciousness and manage to build a device from first principles that can reliably modulate the phenomenology of any sentient being – nobody is going to question your theory anymore. This is why consciousness science is so confused right now: we have no formal methods to validate our models other than theoretical plausibility. We can of course validate our models on ourselves, but this doesn’t yield the public conditions that empirical science requires.
Kristian: But even if we had that EM-field modulator, someone could still argue from a functionalist perspective: “Okay, this works for this particular substrate, but what about other substrates?” There could be other equivalents, so you haven’t really proven your position beyond reasonable doubt.
Arataki: True, but at the very least, you’ve created a research environment where people are actively testing their theories.
Kristian: Oh yeah, 100%, I agree.
Arataki: Maybe I’ll go out on a limb saying this, but I’m pretty sure everyone alive right now is wrong about consciousness. Like, some people are more wrong than others, but if anyone had an approximately correct model, we wouldn’t even be having this discussion because we’d already have such phenomenology-interfacing technologies.
Kristian: Are there any articles I should read that summarise what you’ve already said? Either ones you’ve written or that you’d recommend?
David: I can give you a couple of Quora answers – they’re really concise. One is on non-materialist physicalism, the other on the binding problem and possible solutions. The possible solution to the binding problem that I explore – that quantum superposition doesn’t break down in the CNS or anywhere else – presupposes non-materialist physicalism, because without non-materialist physicalism there would be nothing to bind. If I had to assign a credence to non-materialist physicalism, it would be a complete toss-up. But I think a lot of people assume professional physicists understand the intrinsic nature of a quantum field, and laymen would too if they were smarter. No! Ed Witten is smarter than you or I because he’s a brilliant mathematician, not because he has deeper insights into the intrinsic nature of the physical. The true answer could be something literally inconceivable – or we might just find it absurd. But if the hard problem is solvable by the human mind, non-materialist physicalism strikes me as the best bet.
Arataki: As for what I’d recommend for further reading, Andrés has a great post from about a year ago introducing the binding/boundary problem titled The View From My Topological Pocket. If you like that, I’d highly recommend following it up with his EM-field topology paper Don’t forget the boundary problem!, co-authored with Chris Percy. There’s also the Slicing Problem paper, which serves as a good intuition pump against computational theories of consciousness – depending on how unwilling you are to bite the bullet on consciousness-multiplying exploits. And honestly, reading Michael Johnson’s work had the biggest impact on my own philosophical approach to consciousness, especially Principia Qualia.
Kristian: I’m a little bit afraid of the length of Principia Qualia. It’s fairly long, right?
Arataki: I’d suggest skipping the first section, which is mostly a critique of the integrated information theory of consciousness. The second section is much more interesting with regard to how it frames the problem of consciousness and its various sub-problems. The most significant contribution is probably the symmetry theory of valence, on which Michael recently published a white paper titled Qualia Formalism and a Symmetry Theory of Valence, where he introduces the theory more formally and with updates since PQ’s initial publication. But related to our earlier discussion on computationalism, there’s actually only a short section in the appendix (page 61) that I’d really recommend. There he describes the concept of frame invariance in the context of mapping the relationship between physical and computational entities. He also touches on this in a shorter blog post called Against Functionalism, and there’s a fantastic reading list at the end of Section III of his Paradigm paper.
By the way, do you guys want to take a selfie?
Kristian: Oh yeah, that would be super nice!
(gets into position)
David: Whiskey and vegan cheese, whiskey and vegan cheese, whiskey and vegan cheese.
(group laughter)
Arataki: This is a great conversation, by the way.
Kristian: Well, thank you for the private tutoring! You must be tired of it by now.
Arataki: No, no, you’re probably one of the most receptive people I’ve talked to about this – and I’ve had many such chats.
Kristian: I always try to enter conversations with a mindset of really trying to understand.
Arataki: Yeah, absolutely. Also, I recognize that I might not be the best person to argue for these positions because the way I reason is very different from most people. Like, a core axiom of my philosophy is that abstractions (e.g., computations, mathematical entities, semantic concepts) lack inherent existence, which to many people seems like a contradiction (given that I’m using words to convey this insight). But I also believe that we cannot ever speak directly about what does inherently exist.3
This is a convergent insight I’ve seen across various areas of serious philosophical inquiry – mostly under the umbrella of metaphilosophy, which asks questions like: “What does it mean to try to describe reality or explain its features?” and “What is the status of an explanation in relation to the phenomenon itself?”. For example, I see this expressed in the writings and Dharma lectures of Rob Burbea about emptiness. But I also see it in the later philosophy of Ludwig Wittgenstein, whose methods for cultivating insight (steeped in the traditions of analytic philosophy) are perhaps the most orthogonal to Buddhism. Still, if you want to have causal power in the world – to influence beliefs and help sentient beings – you can’t become too disassociated from common modes of modelling reality. Even just to have this conversation, I have to assume a bunch of conditions that imbue meaning into the symbols I’m uttering. Then, I have to hope that your internal semantic reference frames are sufficiently synchronised with mine for that meaning to be conveyed. Which is…
Kristian: Frustrating?
Arataki: Frustrating, yeah. Because ultimately, there is no meaning to be discovered in the symbols themselves, or anywhere else in the external world. Like, there’s meaning to the extent that you assume there’s meaning, but you can always drop that assumption, and once you learn this conceptual move there’s no going back.
Kristian: I have some more questions for you experts here. Let’s assume consciousness is identical to some sort of quantum field. How do different types of qualia emerge from that?
David: If you assume the mathematical framework of quantum field theory, the solution to the equations encodes the textures (“what it feels like”) of consciousness. What we lack is some kind of “Rosetta stone” that would let us ‘read off’ the textures from the solutions to the equations. It’s not that the formalism is missing anything relevant – we’re not arguing for “hidden variables” in QM or anything like that. It’s just that we don’t have a Rosetta stone. And while Michael Johnson and others are more confident in finding better explanations for why qualia have the textures they do, I’m very much agnostic on the plausibility of theories like the symmetry theory of valence.
Arataki: This is interesting! Because earlier, I said that I was agnostic about which physical theory of consciousness is correct, but I’m very convinced by the symmetry theory of valence. Indeed, my explanation for valence – and all other properties of consciousness – has everything to do with symmetries in the field being broken, which allows for more information to be represented within each moment of experience, like through different sensory modalities (e.g., sight, sound, touch, smell, etc.). With enough sensory clarity you can observe how experience naturally creates dualities between phenomena – like ‘green or red,’ ‘left or right,’ ‘occupying space or empty of space.’ This becomes more apparent within exotic states of consciousness. For example, if you take a sufficiently high dose of 5-MeO-DMT, you can watch these mechanisms break down in real-time, leaving the experiential subject in a state of ‘pure awareness’ or ‘oneness.’ By repeatedly entering and exiting this state, you begin to notice that our natural state of existence is one of fragmented awareness, scattered into many (seemingly) different parts.
Kristian: I think I understood like 20% of that. But okay, take the qualia of redness – where does that reside in the fields?
Arataki: So, I’d say that redness – or any quality of consciousness – is only what it is because of what it is not. In other words, you can only experience red qualia because it occurs within a space that also contains the possibility for non-red qualia. Evolutionarily, the function of redness (or colour more broadly) is to help us notice fitness-relevant differences between perceived objects in our environment, so seeing ‘red’ is deeply ingrained in how we model reality. But I think this also depends on the way you look at the phenomena. For example, if you’re an experienced meditator, you can learn to relax the mental ‘muscle’ that creates dualities between phenomena, such as differences in perceived colour, and then you can occupy states of consciousness where colour doesn’t exist. Or at least, you would have to apply quite a radical transformation to an experience in order for it to be capable of internally representing colour. But I can’t comment on the exact mechanism responsible for this – it’s more of a phenomenological observation.
(We return to the house and the conversation ends)
1. Yet if the “structural mismatch” between our minds and the CNS is real, this would suggest a kind of dualism. ↩︎
2. This trait isn’t actually related to my phenotype; as a child I stared at the sun with one eye held open to see what would happen, and the minor damage I received was permanent. But the point remains! ↩︎
3. “The limits of my language mean the limits of my world.” Ludwig Wittgenstein, Tractatus Logico-Philosophicus (1922), 5.6. ↩︎