Computational functionalism doesn't necessarily imply consciousness. Chalmers's zombie argument shows that, on this view, you could have behavior indistinguishable from that of a conscious entity without any of the subjective experience. I've never understood why the empirical evidence isn't taken more seriously: there is no system we know of that we can safely assume to have conscious experience akin to ours that isn't made of stuff similar to us (i.e., other living organisms). I think the only kind of consciousness that matters is sentience, or what it feels like to be something.
I always love to bring this argument up when I talk to people (some variant of Searle's Chinese room argument, I must admit). Suppose you had an abacus and an infinitely long tape. This system is conceivably Turing complete, which means any computation could be carried out on it, including running an LLM. I don't think anyone would argue that the relevant parts of the system have anything like consciousness. Now speed up the timescale of operations by orders of magnitude, such that you get X tokens/second out of the system. It's now producing full-fledged language, and it's behaviorally equivalent to humans in many respects. Same system. Still not conscious.
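For what it's worth, here's a minimal sketch of the kind of machine I have in mind, a Lambek-style abacus machine: registers are just bead counts, and the only primitive operations are "add a bead" and "remove a bead, or jump if the register is empty". Machines of this kind are known to be Turing equivalent, which is the sense in which an abacus plus an unbounded tape could in principle run anything, however slowly. (The code and names like run and add_program are mine, purely illustrative.)

    # Lambek-style abacus machine: registers are bead counts, and the only
    # primitives are "add a bead" and "remove a bead (or jump if empty)".
    def run(program, registers, pc=0):
        while 0 <= pc < len(program):
            op = program[pc]
            if op[0] == "inc":             # ("inc", r, next)
                _, r, nxt = op
                registers[r] += 1
                pc = nxt
            else:                          # ("dec", r, next, next_if_empty)
                _, r, nxt, on_zero = op
                if registers[r] > 0:
                    registers[r] -= 1
                    pc = nxt
                else:
                    pc = on_zero
        return registers                   # pc outside the program = halt

    # Example: compute r0 := r0 + r1 by shuffling beads one at a time.
    add_program = [
        ("dec", 1, 1, 2),  # take a bead from r1; if r1 is empty, halt
        ("inc", 0, 0),     # put a bead on r0, loop back
    ]
    print(run(add_program, {0: 3, 1: 4}))  # -> {0: 7, 1: 0}

The point of the sketch is just how trivial each individual step is; making those bead moves faster doesn't change what they are.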
I think that sentience is the only kind of consciousness that matters: what it feels like to be something. It's not clear why we would think it feels like anything to be a really fast abacus, any more than we would think that about a rock or a piece of furniture.
Chalmers's zombie is a thought experiment, not an empirical observation. Yes, the empirical observation is that, so far, only biological beings are known to have consciousness and subjective experience, but that observation doesn't preclude artificial systems from having it. The empirical fact used to be that only biological things could fly, until we built flying machines.
Why are you so sure that consciousness is substrate independent? It seems like all evidence points to the contrary.
Which evidence? I'm not aware of any concrete evidence against computational functionalism.