Computational functionalism doesn't necessarily imply consciousness. Chalmers's zombie argument shows that, under this view, a system could be behaviorally indistinguishable from a conscious entity while having none of the subjective experience. I've never understood why the empirical evidence isn't taken more seriously: there is no system we know of that we can safely assume has conscious experience akin to ours and that isn't made of stuff similar to us (i.e., other living organisms). I think the only kind of consciousness that matters is sentience, or what it feels like to be something.
I always love to bring this argument up when I talk to people (some variant of Searle's Chinese room argument, I must admit). Suppose you had an abacus and an infinitely long tape. Such a system is, in principle, Turing complete, which means any computation could be carried out on it, including running an LLM. I don't think anyone would argue that the relevant parts of the system have anything like consciousness. Now speed up the timescale of operations by orders of magnitude, so that you get X tokens/second out of the system. It's now producing full-fledged language and is behaviorally equivalent to humans in many respects. Same system. Still not conscious.
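To make that concrete, here's a rough Python sketch (toy, made-up numbers, nothing like a real model) of what the core step of a transformer layer, a matrix-vector product, reduces to: nothing but scalar multiplications and additions, each of which the abacus-and-tape operator could in principle carry out by hand.

    # Toy sketch (hypothetical sizes, not a real model): a matrix-vector
    # product, the basic building block of a transformer layer, decomposed
    # into nothing but scalar multiplies and adds -- the kind of steps an
    # abacus operator with a long enough tape could grind through one at a time.

    def matvec_as_primitive_ops(matrix, vector):
        """Compute matrix @ vector using only scalar adds and multiplies,
        counting every primitive step along the way."""
        ops = 0
        result = []
        for row in matrix:
            acc = 0.0
            for weight, activation in zip(row, vector):
                acc += weight * activation  # one multiply + one add on the "abacus"
                ops += 2
            result.append(acc)
        return result, ops

    # Made-up numbers purely for illustration.
    hidden = [0.1, -0.2, 0.3, 0.4]
    weights = [
        [0.5, -0.1, 0.2, 0.0],
        [0.3, 0.3, -0.4, 0.1],
    ]

    out, ops = matvec_as_primitive_ops(weights, hidden)
    print(out)  # the layer's output: just numbers on the tape
    print(ops)  # how many bead-level operations it took (16 here)

Scale the counts up by many orders of magnitude and repeat across layers and you have an LLM forward pass; nothing about the decomposition changes, only the number of bead movements.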
Again, I think sentience is the only kind of consciousness that matters: what it feels like to be something. It's not clear why we should think it feels like anything to be a really fast abacus, any more than it feels like anything to be a rock or a piece of furniture.
Chalmers's zombie is a thought experiment, not an empirical observation. Yes, the empirical observation is that, so far, only biological beings have consciousness and subjective experience, but that observation doesn't preclude artificial systems from having it. The empirical fact used to be that only biological things could fly, until we built flying machines.
The empirical evidence is that human biological beings *say* they *believe* they "have" something they call "consciousness". But that is equally true of certain AI "bots", which will *say* the same thing.
The problem is, humans have "reified" some phenomenon and given it a label. It doesn't mean "consciousness" actually has a well-defined, thing-like mode of existence.
According to this argument, consciousness is a conventional label for something of indeterminate nature, arguably an illusion. Perhaps the right move is to dismiss the entire question by denying that consciousness has the kind of existence we assume it has.
Regarding subjective experience: it "feels" (and this is what we go by in the usual what-it-is-like argument) like something entirely personal and embodied (vibratory, juicy, a gut feel, etc.). Phenomenological interviews bear this out if one pursues descriptive depth and completeness. How, then, could that be "substrate-independent" if it is really "just" a mode of action of the substrate itself?
Why are you so sure that consciousness is substrate independent? It seems like all evidence points to the contrary.
Which evidence? I'm not aware of any concrete evidence against computational functionalism.