You absolutely should read Michael Graziano on consciousness as an attention schema, if you haven't already.

TL;DR

1. Systems perform better when they can model themselves

2. Evolutionary pressure resulted in a brain that can model itself

3. Human attention is influenced both from the bottom up (by stimuli) and, consciously, from the top down

4. The brain has a model of its own attention in order to improve top-down control

5. The existence of this model allows the brain to perceive and report on itself

6. This gives rise to the subjective experience of consciousness

7. The fact that the model just has to be "good enough" explains why consciousness seems so instinctual and mysterious to us. Historically, we haven't needed, and therefore don't have, any innate theory or detailed perception of our own brain's inner workings

https://grazianolab.princeton.edu/publications/rethinking-consciousness-scientific-theory-subjective-experience

https://www.amazon.com/Rethinking-Consciousness-Scientific-Subjective-Experience/dp/0393652610 -- and available as an audiobook too! Thanks for the recommendation.

author

I'll take a look. But I must admit my prior is that step 6 sure does seem to be doing a lot of work.

Damasio's "feeling of a feeling" is somewhat similar to #4 and #5, but I agree that #6 is "And then MAGIC!"

I am not representing him well. It's been a while since I read it, but I recall strong evidence and argument establishing the connection. I'll report back 🫡

"What will we learn from AI"

I've learned that we throw away QUITE a lot of work in AI training. If the AI model isn't "trained right", then we just throw it away and train a new one. There is no cost-effective way to repair a trained model (also invoking Cunningham's Law).

This really reminds me of Sapolsky's work - he seems to believe the strong version of hard determinism - that people do not choose to be how they are.

I can't quite believe the strong version, but I do believe the weak version - we should seriously consider HOW MUCH people choose about the way they are. I think retraining instead of repairing AI models provides some evidence for this point of view. I feel like the things we learn after childhood are basically instructions given to the model, not changes in the model itself (very woo woo, sorry).
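
To make that distinction concrete, here's a minimal toy sketch in Python (the model, weights, and numbers are all invented for illustration) contrasting the two ways of changing behavior: updating the weights ("changes in the model") versus leaving the weights alone and supplying instructions at inference time:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "trained model": just a fixed linear map.
weights = rng.normal(size=(4, 2))

def model(x, instruction=None):
    """Run the model. An optional 'instruction' shifts the output
    without touching the weights -- a crude stand-in for prompting."""
    y = x @ weights
    if instruction is not None:
        y = y + instruction
    return y

x = rng.normal(size=(1, 4))

# Option 1: "repair" the model by updating its weights (retraining).
retrained_weights = weights + 0.1 * rng.normal(size=weights.shape)

# Option 2: leave the model alone and change its behavior through
# input context -- instructions given to the model, not changes in it.
print(model(x))
print(model(x, instruction=np.array([1.0, -1.0])))
```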

TY for bringing up Sapolsky. "Determined" is one of three books I think can change (and improve) our lives. ("The Moral Animal" and "Why Buddhism Is True" are the others.)

I am not familiar with the author or any of the material, but the strong version sounds pretty obvious to me. Even if I choose to be a bad person, how did I choose to choose that, except through prior causes? The strong version of hard determinism seems to be the only stance compatible with materialism / physics / science.

The Hard Problem of Consciousness is in fact hard! Great post, and fwiw I wouldn't mind a blog where consciousness is the main subject; all of your consciousness posts have been interesting and fun.

I love this post so much. Thank you.

I disagree with you and Chalmers re: behavior. If consciousness didn't influence behavior, why would you write this post?

Regardless of anything else about Sam Harris, I'm with him on:

"Whatever the explanation for consciousness is, it might always seem like a miracle. And for what it’s worth, I think it always will seem like a miracle."

https://www.losingmyreligions.net/

author

> If consciousness didn't influence behavior, why would you write this post?

That's basically a statement of the meta problem!

Which, I agree, I don't know how to explain without consciousness influencing behavior. But I also don't know how to explain how consciousness could influence behavior. So, clearly I am confused...

I'll be more frank in this reply than I normally aspire to be:

There is *lots* of talk about consciousness in the world (including this post and this comment and so on, we even have a word for it, "consciousness"), so *clearly* it does influence behavior, and the evidence for that is really as clear and abundant as the sky being blue or whatever. In light of this, there is no way you can hold onto the idea that "consciousness isn't supposed to influence behavior". There is no way to defend that as anything but ideology against the evidence (e.g. the sheer fact of this conversation).

What this does to our world view as materialists can, I think, be something of a "crisis of faith". Where this will lead is anyone's guess, but I implore you to take it seriously! Good luck, and I look forward to continuing to consume the Dynomight blogosphere, wherever it leads.

author

No need to implore, I take it seriously already. The thing is, if consciousness is actually influencing behavior (which, I agree, is at least what *seems* to be happening) then I don't understand how it's possible to square that with our current understanding of science. I'm not saying that consciousness *isn't* influencing behavior, just that however this issue is resolved, it's going to be very surprising.

That's why I call it a crisis of faith. I think what lies at the end of this road is the realization that our understanding of science *is* wrong. To me this has personally been very disconcerting and jarring, but I now kind of consider it a crucial step in my intellectual (or dare I say spiritual) development. Welcome to the other side!

It would be nice to have a credit for those cool illustrations

author

Ah, those are by Ramón y Cajal: https://en.wikipedia.org/wiki/Santiago_Ram%C3%B3n_y_Cajal

(If you look closely, you can actually see little "MUSEO CAJAL" stamps on some of them.)

What if my assumption of being conscious is a condition created by non-intent from the eleventy-skillion microbes I'm walking around with in my gut plus the microbes living in the skin mites I support and so on?

Interesting. Altered consciousness via the use of antibiotics, then?

Great post! The GPT8 analogy is perfect.

Not to be the horrendous person to point you toward things already written about consciousness (as you say, it attracts a lot of interest - what hasn't been said?!), but I figured you might find it neat: if you want a formalized version of the argument I think you're getting at, it does exist in the literature.

https://academic.oup.com/nc/article/2021/1/niab001/6232324

What we all tend to miss is a different way of conceptualizing consciousness, in which it arises from the limbic system and requires no involvement of the cortex at all. Check out the work of Mark Solms, Antonio Damasio, and other practitioners of Neuropsychoanalysis. Mind-blowing, consciousness-raising, bad-pun engendering.

Is it cheating or missing the point to just consider consciousness the outcome of ever-more-sophisticated reasoning capabilities that develop alongside generalized intelligence? What we "feel" is an interpretive signal just like gross pain/pleasure signals - a feedback system for that higher level reasoning...

Playing with our GPTs has made it clear that there is a type of intelligence that is "fuzzy but not specific" alongside the "precise but not generalizable" computational capability of traditional computers... We're now able to build something that is "good at vibes but not good at math". But the underlying function of the machine isn't any different - still logic gates and whatnot. Is there any reason our brains wouldn't be similarly constructed?
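
As a toy illustration of "fuzzy on top of exact" (everything here is invented for the example), consider a single sigmoid neuron in Python: the operations underneath are precise floating-point arithmetic, but what comes out is a graded, vibes-like score rather than a crisp yes/no:

```python
import numpy as np

# Exact, deterministic arithmetic underneath...
w = np.array([2.0, -1.0, 0.5])
b = -0.3

def neuron(x):
    """...composed into a graded, 'fuzzy' judgment on top."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

print(neuron(np.array([1.0, 0.0, 0.0])))  # ~0.85: fairly confident
print(neuron(np.array([0.2, 0.3, 0.1])))  # ~0.46: middling, a "vibe"
```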

Animals have to evolve both the reasoning machine and the signaling interface at the same time, and we have had evolutionary pressure to *really, really care* about the signals - that's why we respond so strongly to fear, anxiety, etc. I'd argue the same is true of any other animal that exhibits similar behaviors (and we probably have to reckon with the moral implications of that eventually - right after my next cheeseburger). With these GPTs, though, we are building the reasoning stack manually and in isolation. There is no signaling stack beyond the media output of the machine - we know because we built it.

In short, there is no *there* there for AIs, just as there almost certainly *is* some form of conscious awareness in many more creatures than most people are likely comfortable with.

If I've taken an offramp to the debate without meaning to, let me know!

I think this is absolutely true of LMs, and I entirely agree that evolution made us conscious as something like an accidental side-effect of getting us to respond to reward/punishment signals and/or of giving us the ability to recursively model the behaviour of our sexual rivals/mates/whoever ("I think she will think that I think that...") - but if we're talking here about predicting future machine intelligences that may work much more like the human brain than LLMs do, well, we have reasons to want to make them conscious deliberately, whether evolution made us conscious deliberately or by accident.

We probably want our AI artists to imbue their art with genuine emotions (and we will still want this even if the results are indistinguishable from emotionless pattern-matching art), we will probably want our AI sex robots to consciously enjoy sex, our AI elderly-carers to genuinely care about us, etc. etc.

So, to make predictions like Dynomight's, we don't have to consider consciousness a *by*-product of sophisticated reasoning capabilities, as it may have been for evolution, merely a *direct* product of the design goals of AI programmers.

Despite not having a human mind, an alien observing our civilisation would nevertheless be able to know that [at least some] human minds are conscious by observing that human minds formulated the Hard Problem of Consciousness.

(This blog post does not provide overwhelming evidence of Dynomight's consciousness - sorry, D! - because whilst the mind that *first* formulates the Hard Problem of Consciousness must be conscious, it is technically possible that subsequent variations on that theme are produced by non-conscious minds just pattern-matching, similar to how - we presume - current AIs produce art, literature, etc.)

Similarly, if we observe GPT-81 formulating ideas about consciousness that it didn't learn from us (nor deduce from observing us, etc.), that we know we didn't tell it, and that we know only a conscious being could have come up with (in other words, ideas that we can be confident GPT-81 arrived at through introspection), that would constitute extremely strong evidence that GPT-81 was conscious:

1) If we could be certain that the Hard Problem of Consciousness was not in GPT-81's training set, prompt, or any other source of information it has access to, and further certain that the Hard Problem of Consciousness could not be inferred by a non-conscious being from anything within those information sources, then having GPT-81 independently formulate the HPC would seem to be sufficient.

2) If (1) isn't possible, then we'd need GPT-81 to tell us something about consciousness *that humanity doesn't know already* (which seems like it should be easy for a very clever conscious being - that covers almost everything about consciousness..) and *that is easy for humanity to check, once known* (if we're terribly lucky this might turn out to be trivial, but in the worst case it might also be extremely difficult - the "discovery vs. checking" gap doesn't even reduce to P vs. NP!)

You could mayyyybe achieve something similar for uploaded minds; this might be more difficult (eg. because we might not have any reliable way of ensuring knowledge of 'what it feels like to be human' was excised from the upload, and thus we'd never be sure information about consciousness wasn't "leaking" that way) or it might be easier (perhaps because the uploading/emulation technology might enable us to tell when an uploaded person is genuinely introspecting and when they're pattern-matching on their human memories to confabulate similar content).

Finally, speaking of predictions: I would be very interested to hear what any human-chauvinists, who claim that only humans could ever be conscious because the human mind is somehow special, predict GPT-81 (or uploaded minds or whatever) would do if tested as laid out above. Would GPT-81 be completely unable to formulate ideas like the Hard Problem of Consciousness? If it turns out GPT-81 *can* formulate such ideas, would a human-chauvinist then gracefully concede that it likely is conscious after all? And if they didn't so concede - how might they explain GPT-81's ability to formulate the HPC?

(Edit: P.S. I personally don't think a GPT-81-level LLM would be conscious, just because the architecture of LLMs seems to not work the same way conscious minds work. But thinking about GPT-81 is, for me, a convenient way to think about "whatever the dominant AI design/model/paradigm is in hundreds of years' time". If there ever is a GPT-81, I *hope* it's not conscious - and I feel this very strongly - because if it is, then probably current LLMs have some sort of limited consciousness or proto-consciousness, in the same way that eg. babies and animals do, and we.... really, really do not treat LLMs the way I would hope conscious/protoconscious beings would be treated.....)

I think of this as the jump from the third person to the first person. Science is a way of thinking about things in the third person, from the outside. It feels like we can get very very very close to understanding everything using this perspective; there's just the tiniest gap to close and then we'll have a theory of everything. But it turns out that gap, between inner experience and outside behavior, is actually closer to an abyss, and you need a whole new set of tools to cross it. The essay "On Having No Head" was really eye-opening for me on this subject; there's a full course on it on Sam Harris's "Waking Up" that looks interesting, but which I haven't tried.

Great post!

Wind is not an object, it is the higher-level description of the aggregate behavior of a bunch of air molecules. But wind exists, and influences kites.

Temperature is not a physical object, it is the higher-level description of the aggregate behavior of a bunch of molecules of whatever substance is being measured. But temperature exists, and influences the behavior of lots of things.

Temperature is an abstract concept, but it corresponds so usefully to patterns in the world that are so ubiquitous and stable that we can make laws about it, and, in fact, it would be practically impossible to describe the world without it.
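
A toy simulation makes the point (the numbers are arbitrary): no individual molecule has a temperature, but the ensemble does, and you can compute it as an aggregate statistic over the molecules' motion:

```python
import numpy as np

rng = np.random.default_rng(42)
k_B = 1.380649e-23  # Boltzmann constant, J/K
m = 6.6e-26         # mass of one molecule, kg (roughly N2)

# 100,000 "molecules", each with a random 3D velocity (m/s).
v = rng.normal(0, 300, size=(100_000, 3))

# Ideal gas: (3/2) k_B T equals the mean kinetic energy per molecule.
mean_kinetic_energy = (0.5 * m * (v ** 2).sum(axis=1)).mean()
T = 2 * mean_kinetic_energy / (3 * k_B)
print(f"Emergent temperature: {T:.0f} K")  # ~430 K for these numbers
```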

Why wouldn't consciousness be something like wind or temperature?

author

Maybe it is! The difference is that wind and temperature are, in principle, fully reducible to normal atoms-bouncing-around stuff that we (basically) know the ruleset for. But consciousness doesn't appear to be like that. The other issue is the whole "p-zombie" argument: We can imagine a world where people existed and acted just like they do now, but weren't conscious. But it's impossible to imagine a world where atoms bounced around like they do now but wind and temperature didn't exist.

Consciousness appears (to me) to be fully reducible to normal atoms bouncing through AND, OR, and NOT gates. We know the basic rulesets for those. If you arrange matter in this way (points to computer), you get this (points to "Hello World" on screen). The formal rules of computation are physical laws.
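
To spell that out with a tiny sketch (a toy example, nothing from the post): in Python you can build XOR, and from it a half-adder, out of nothing but those three gates - arithmetic emerging from pure logic:

```python
# The three primitives (operating on bits, i.e. ints 0 or 1).
def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

# XOR built from the primitives, then a half-adder built from XOR.
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    return XOR(a, b), AND(a, b)  # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> carry {c}, sum {s}")
```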

As for our ability to imagine p-zombies: before we had the concept of temperature, we also used to be able to imagine a whole world that existed and acted just as it does now, but didn't have temperature. We imagined a world where spirits made the trees grow and the water fall. Now we know it's "just" thermodynamics.

Of course we don't *really* know what makes heat travel across gradients, what makes entropy grow, and, in some ways, stories about spirits still serve a valuable purpose, as when we talk about nature "abhorring" a vacuum, a particular ecosystem being "healthy" or "sick", or a species "searching" for a solution to the survival problem.

The story of consciousness is a spirit story we tell about ourselves.
