>Saying things like "Consciousness is emergent" or "Consciousness is just a side effect of information processing" seems to miss the point.
On the contrary, I feel like those who think we need some fundamental consciousness property are missing the point. We expect to find some objective, third-person property that is the substrate of consciousness, because a phenomenon that seems so unlike every other property ought to have a fundamental basis. But the mistake is expecting to analyze consciousness the same way we analyze other objective properties of physics.
The only things we know to be conscious are complex macro-scale objects with highly complex and rare internal organizations. In fact, the only things we truly know to be conscious are ourselves. Consciousness seems to be fundamentally subjective; it is not something to be directly witnessed as a third-person observer. So our investigation should start there. What we want is a theory for deriving organizational subjectivity from a non-subjective substrate. What might this look like? A system with subjectivity will need to distinguish the external from the internal. It needs an egocentric mode of representation, with the ability to represent itself and its dispositions and intentional stances, as well as states that represent the external world. My intuition tells me such a system has a non-zero "inner life", i.e. there is something it is like to be it. But I see no reason to think "fundamental" subjectivity, whatever that is, could do any explanatory work here. The causal and representative power is in the organization.
It’s even worse: How can you be sure that you can separate the internal from the external? Both seem to have to be present in order to make sense of anything.
> What we want is a theory for deriving organizational subjectivity from a non-subjective substrate. What might this look like?
I think the Reinforcement Learning framework is useful here. An agent exists inside an environment; it has some rewards, it is capable of sensing and acting, and its goal is to maximise its rewards over time. This implies learning, exploration, planning, and the ability to model itself and the world. Of course, it is possible for many agents to share the same environment while having different bodies, rewards and needs.
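To make the loop concrete, here is a minimal sketch of that framework: an agent sensing rewards, acting, and updating a crude internal model of its world. The two-armed bandit environment and all names here are my own illustrative inventions, not any standard API, and the "model" is nothing more than a pair of running value estimates.

```python
import random

class Environment:
    """A toy two-armed bandit: the 'world' the agent lives in."""
    def __init__(self):
        self.payoffs = [0.3, 0.8]  # hidden reward probabilities, unknown to the agent

    def step(self, action):
        """Sensing: the environment returns a reward for the chosen action."""
        return 1.0 if random.random() < self.payoffs[action] else 0.0

class Agent:
    """Acts, senses rewards, and learns a simple internal model (value estimates)."""
    def __init__(self, n_actions, epsilon=0.1):
        self.values = [0.0] * n_actions  # the agent's model of its world
        self.counts = [0] * n_actions
        self.epsilon = epsilon

    def act(self):
        if random.random() < self.epsilon:  # exploration
            return random.randrange(len(self.values))
        # exploitation: pick the action the model currently rates highest
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def learn(self, action, reward):
        """Incremental average: nudge the estimate toward the observed reward."""
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

random.seed(0)
env, agent = Environment(), Agent(n_actions=2)
total = 0.0
for _ in range(2000):
    a = agent.act()
    r = env.step(a)
    agent.learn(a, r)
    total += r
```

After enough steps the agent's value estimates track the hidden payoffs and it mostly picks the better arm, which is the whole of "maximising rewards over time" in this toy setting. Everything the comment lists beyond that (planning, modelling the world and itself) would require a richer agent than this sketch.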
I think sensing, learning and evolution are good filters for judging if something could be conscious.
Indeed. I think IIT is a good theory... of something, just not exactly consciousness. Maybe a precondition to consciousness, or something like that. But the thing that we think of as our consciousness is, to me, best explained by the "global workspace" theory, which says consciousness is the process by which the various specialized parts of the mind, constantly working separately and in parallel, communicate their state to each other. It's like a boardroom for the society of mind, where at any point one subsystem has the podium (although there is lots of chatter and crosstalk as well). For most of us, a part of the language subsystem (Gazzaniga's "interpreter") is also giving a running commentary (the internal monologue) on the information it's receiving from the other parts (with a lot of its own interpretation thrown in)... but this is not an essential feature of consciousness! We have a tendency to identify our consciousness with this commentary, but that is obviously incorrect. I think that the communication in this global workspace occurs in its own "language", a language internal to organic brains, capable of abstracting and reducing to its barest essence information from any of its components.
This view of consciousness is phenomenologically best aligned with most of the (admittedly limited) objective information we have about human conscious experience, and is consistent with the experience of various altered states of consciousness such as meditation or the use of entheogenic substances. It explains how consciousness is only a small part of what happens in our mind and why the nature of the subconscious (which is most of what actually happens in the brain) seems to be so hard to nail down. It also means that any being with a "mind" that has numerous independent and parallel processes that need to be coordinated has some measure of conscious experience, even invertebrates, and probably even living things whose information processing uses an entirely different infrastructure, such as plants. However, I can't see any way that this definition of consciousness could apply to an electron.
Edit: I think that the global workspace theory of consciousness can probably be mathematically described by IIT, but not just any integration of information results in something that deserves to be called consciousness. The information that's being integrated should be a combination of perceptions (feedback from the environment) with some kind of memory of previous states, resulting in new memories and predictions, and the integration should happen through relatively independent subsystems pre-processing this information on their own. This is still general enough to apply to nearly everything living, but I think it puts conscious experience at a higher level than merely integrating information.
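A toy sketch of that picture (the names, subsystems and arithmetic are purely illustrative, not a real GWT or IIT model): independent subsystems each combine the current perception with their own memory of previous states, and a workspace integrates their reports into a prediction.

```python
class Subsystem:
    """An independent processor: combines the current perception with its own memory."""
    def __init__(self, name, transform):
        self.name = name
        self.transform = transform
        self.memory = 0.0  # a trace of this subsystem's previous state

    def process(self, perception):
        out = self.transform(perception, self.memory)
        self.memory = out  # the new state becomes the next memory
        return out

class Workspace:
    """Integrates the independently pre-processed reports into one prediction."""
    def __init__(self, subsystems):
        self.subsystems = subsystems

    def step(self, perception):
        reports = {s.name: s.process(perception) for s in self.subsystems}
        prediction = sum(reports.values()) / len(reports)  # crude stand-in for 'integration'
        return reports, prediction

ws = Workspace([
    Subsystem("novelty", lambda p, m: p - m),            # how different from what it remembers
    Subsystem("smoothing", lambda p, m: 0.5 * (p + m)),  # a running blend of past and present
])
for p in [1.0, 1.0, 3.0]:
    reports, prediction = ws.step(p)
```

The point of the sketch is only the shape of the claim above: integration alone is trivial (here, an average), and whatever makes the result interesting lives in the independent, memory-bearing pre-processing feeding into it.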
Yeah, I definitely like IIT and think it's onto something important. But it doesn't strike me as a sufficient condition for consciousness. I have a lot of sympathy for GWT. One of its theoretical virtues is that it coheres with properties of consciousness that have independent justification, like integrated information, recurrence, self-modelling, etc. But it still lacks any direct theory of phenomenology, i.e. qualia. Although I can see why scientists would avoid attempting such arguments if at all possible. This would be a good place for philosophers to bridge the gap, but I guess it is easier to make a career out of promoting panpsychism these days than to come up with something insightful to say about mechanistic consciousness.
But to move the discussion forward, I think one obvious property of a quale is that it is representational. That is to say, it is structurally related to the thing being indicated such that it can inform about the thing. For example, the red quale tells you something about red substances in the context of the space of possible colors, the external world full of beneficial and harmful substances, and the bearer of the quale with drives, dispositions, preferred states, etc. This complex milieu of properties, states, dispositions, etc., all serves to inform the properties of a quale. Its representational power is one that gives the bearer certain competencies in the actual world, e.g. pain gives one the competency to avoid damaging states. But this representational power must be intrinsic to the structure that constitutes a quale. If this were not the case, then its power to confer competency would be contextual. Pain would only confer competency in the right environment (like a reflex that has meaning only in the right environment, e.g. the grasping reflex of an infant). But this isn't the case with qualia; the experience of pain is intrinsically representative and provides its bearer with competence universally. The same can be said for emotions and our senses. This suggests to me that some kind of recurrent structure is a necessary condition for a quale: to simultaneously be the producer and the consumer of a representative state, and to consume it in such a way that necessarily confers competent behavior. But this sounds like a description, at a different level, of coordination between different subsystems. Information from different subsystems bears on this central coordinator, and this information confers competent behavior on downstream subsystems, i.e. contextually relevant causal powers. I see the beginnings of the details required for mechanistic qualia in theories like GWT and others based on principled analysis of brain networks.