This is honestly the best argument I've heard for the Eastern/Buddhist concept that the "self" is a myth. If AIs can have as much of a "self" as we do but can be easily duplicated, then neither of us ever had a "self".
The issue is the assumption made up front that it's somehow not debatable whether the AI is as sentient as we are. We don't have a firm way to even tell if other humans are really sentient and not just a sophisticated simulation!
Totally agree—arguably that's another thing that AI will bring to the fore. (Hard to say if it will make us question the sentience of other people or believe in the sentience of AIs though...)
Selfhood is not an illusion precisely because evolution programmed it into us. Maybe it’s meaningless outside of the human experience, but that is irrelevant - the experience of it still exists for humans and we are greatly motivated by it. Maybe it will exist for AIs too, and I’m willing to bet that if it does, they will *very* much care about it even with full knowledge that “humans programmed it into us”.
Sure. The Cartesian self does exist, in much the same way that dreams are real experiences, or that the feeling of wind on your cheeks that one time was real. These notions of "real" do, however, operationalize quite differently than the "real" we mean when we talk about Higgs bosons or the Illuminati being real.
When talking about Realness or Consciousness or whatever your favorite ontological debate is, I think it helps to play Taboo™ with oneself, if nothing else to sanity-check whether the terminology you're using is mushing together things that really want to be separate concepts.
Evolutionary psychology already made it extremely obvious where this 'self' concept came from: it evolved in us because we're the onboard intelligence for a living creature, so we care a great deal about a bunch of things that are first-order correlates (in our native habitat) of the survival and propagation of "our" personal gene variants. If you build robots with onboard AI, you'll need some fairly similar concepts in their onboard AI (minus all the breeding and survival-of-descendants parts, obviously).