16 Comments

I expect that AI will open the door to philosophical conversations considered vital for most of recorded history, but widely considered “meaningless” for the last century or so. Such as: what does “good” mean?

Obviously there’s some relationship to evolution here, because before we learned the scientific method, the idea of “good” was tied to the idea of vitality. People thought there was a right way for an individual to live and a society to operate. They used a predictive model that said that when people or societies deviated from that right way of living, Bad Things happened. This looks to me like a conceptual model of long-term evolutionary fitness.

I’m looking forward to seeing people try to train models with explicit value systems, using those same value systems to do things like selectively paying attention to some data and not others. This is something human beings _can_ do, and I think that’s essential for our performance.
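To make that concrete, here’s a toy sketch of the shape of the idea (purely hypothetical: the value_score heuristic and the threshold are made up for illustration, not anything anyone has actually built):

```python
# Toy sketch: use an explicit "value model" to decide which training
# examples a learner pays attention to. value_score is a stand-in for
# whatever explicit value system you might actually encode.

def value_score(example: str) -> float:
    """Hypothetical scorer: how well does this example align with the
    values we care about? Here, just a crude keyword heuristic."""
    valued_terms = ("cooperation", "honesty", "long-term")
    return sum(term in example.lower() for term in valued_terms) / len(valued_terms)

def select_training_data(corpus: list[str], threshold: float = 0.3) -> list[str]:
    """Keep only examples the value system scores above a threshold,
    so training 'selectively pays attention' to some data and not others."""
    return [ex for ex in corpus if value_score(ex) >= threshold]

corpus = [
    "A story about cooperation and honesty in a small town.",
    "Random clickbait with no particular values in it.",
    "Long-term planning beats short-term greed, argues the author.",
]
print(select_training_data(corpus))  # drops the clickbait example
```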

author

Yeah, I think technology is on the way to making what seemed like idle philosophical questions into real practical things that we face. (The scenarios in Reasons and Persons seem more relevant every day!) What I wonder is how much philosophy will end up shaping our cultural reaction. Will we really "think" about things, or will we all just sort of collectively decide? For example, I suspect that philosophy of consciousness probably won't ultimately have much influence on whether people decide AIs are conscious or not. People will just interact with the AIs and decide what they decide.


Thanks for writing this. It seems obvious to me that evolution accounts for 95+% of human learning and 100% of human intelligence. A better way to think of it is how many bits are needed to store the algorithm for learning and/or intelligence, and I think the answer is that it's probably pretty low. The problem is finding it, which clearly takes enormous optimization pressure and large amounts of data and computation, which is what evolution has been doing for the past billion years.

Another way to phrase the same thing is to consider the size of current models. Clearly most of those parameter values are spent storing very impressive amounts of encyclopedic knowledge that no human comes close to matching. I'm confident the parameters of a future very intelligent model without as much world knowledge can fit on a thumb drive, but actually getting to that specific set of descriptive bits will require a fire hose of data and computation, just as evolution has needed.
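Just to put rough numbers on the thumb-drive point (the parameter counts and precisions below are assumptions I'm picking for illustration, not any real model's specs):

```python
# Back-of-the-envelope: how many gigabytes do N parameters take at a
# given precision, and does that fit on a (hypothetical) 256 GB drive?
THUMB_DRIVE_BYTES = 256 * 10**9

for params, label in [(7e9, "7B-parameter model"), (70e9, "70B-parameter model")]:
    for bits in (16, 8, 4):
        size_bytes = params * bits / 8
        fits = "fits" if size_bytes <= THUMB_DRIVE_BYTES else "does not fit"
        print(f"{label} at {bits}-bit precision: {size_bytes / 1e9:.0f} GB ({fits})")
```

Even generous parameter counts fit comfortably, which is the point: the hard part isn't storing the bits, it's finding them.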

author

Thanks, that's a great summary. I guess I largely agree, although I'm less confident. I certainly agree that evolution has had access to an insane amount of "data" and "compute" that will be very hard to copy. But also... maybe not? Maybe evolution hasn't actually been optimizing for intelligence all that hard and the strategy that brings humans intelligence isn't that complicated? Hard to be sure!


This is a really helpful breakdown of how evolution is a kind of learning — abstract in all the good ways, and without having to rely on words like "ontogenetic"! (My only recommendation for the graph, actually, is to switch where the labels go — I first misread "one short and fleeting lifetime", for example, as being about the y-axis.) D'you have any recommendations for what someone might read who wants to get an even fuller understanding of this?

author

Can you help me understand what you're thinking with the figure? Something like this: https://ibb.co/xmB6dY1 ?

As for more in-depth stuff, well... Hmmm. There are many different aspects. For biological anchors, maybe this? https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/ (as well as Ajeya Cotra's report). For understanding AI scaling, I think this isn't terrible: https://dynomight.net/scaling/


I'll take a look at both of those; thanks! For the figure, I drew up a version that makes more intuitive sense to me (warning: I suck at reading charts, and never succeeded at understanding a single one on that second-to-last page of The Economist): https://tinyurl.com/ykp2rj5u

Some changes:

1. I parallelized the titles of the axes: "amount of data" became "quantity of data"; "quality of learning method" became "quality of learning".

2. I made the colors correspond to the axes, rather than to the areas. (This lets you avoid arrows entirely. If that renders the areas too confusing, you could give each a different monochrome pattern; I can share ideas if that's confusing.)

3. I moved "area = intelligence" to the bottom, where I think it benefits from being a sort of stinging punchline. (It could also be moved to the top.)

If this is useful, use it! I really like your writing.

author

Thanks! I'll ask some other people what they think. I've been staring at this figure so long I have no idea what's clearest anymore.


I wouldn't rule out proprioception being central to intelligence too hastily.

author

You could be right, but there's one thing I think I forgot to explain. In the paper I mentioned with "KS" (https://doi.org/10.1007/s00221-021-06037-4), apparently "KS" has had zero touch and zero proprioception since birth. So if that's right, it's at least possible to be reasonably intelligent without it.

Not that this is totally conclusive. It's only one person, mentioned in one paper.

If other modalities are key, my suspicion is that it's important that we get them with interaction. Proprioception surely helps us learn how to use our muscles. KS seems to have been able to do some approximation of that using vision, but still. There's no evidence that human intelligence is possible without being able to "experiment", and the fact that so many animals have such strong "play" instincts seems important.


"Even though it comes from evolution, humans are still using some learning algorithm. Just by definition, isn’t it possible to build an AI using the same tricks?"

No Sir, not in my humble opinion. The difference will always be that we are etheric (spirit) first, poured into the physical second. Computers of whatever form are physical first; they will never *have* spirit - unless spirit is defined as electromagnetism... I guess it probably could be... but AI doesn't spontaneously form from biological sex via love, so it can't possibly 'evolve' ancestrally.

I do love the idea of Joi in Blade Runner 2049, where she - the AI - stores preferences over time and she's so sweet and cool, but... it's a program. There is no authentic, genuine Will to pass along. No subtle EM sheaths and ancestral memory over millennia which define the human state.

The illusion of Joi is just a stick that breaks if it gets stepped on. But it had a form of wisdom... and her sophisticated AI predecessor Rachel gave birth, which is Hollywood using metaphors to spark complex metaphysical ideas...


This is very insightful, IMO. It is kinda sorta along the lines of my thoughts on consciousness. We ignore evolution at our peril. https://www.mattball.org/2022/09/robots-wont-be-conscious.html


For what it's worth, I think the chart makes a lot of sense. The x-axis is a log scale, and the y-axis probably is too.

author

I guess technically it would be better as linear scales, with both of the differences greatly exaggerated. But it's probably best to think of them as "vibes" scales.


> I’ve been feeling grumpy about algorithms deciding what we read, so I’ve decided to experiment with links to writing that I think deserves more attention.

I, for one, think this experiment is a good idea. :)

FWIW I did not find your cartoon confusing.

I think it's interesting how neural networks train on vastly more data than humans and (probably) use much simpler strategies, but they still look more human-like than other kinds of computer systems. Neural networks, like humans, occasionally get things wrong that they really ought to get right. On tasks like video games, traditional computer programs can win by being way faster than humans, whereas neural networks win by playing like humans but better.


Adding to your point, evolution isn't "survival of the smartest/strongest"; it's survival of the organisms that are most adaptable to a changing environment. The reason we're the dominant organism isn't just that we have big brains; it's that those brains make us extremely adaptable and let us squeeze all the utility we can out of anything. If we watch someone start a fire, we learn how to start a fire.

Also, I would add an intermediate loop: society "evolves" exponentially, even faster than biological evolution, which lets us gain from things we don't directly experience. Compare the way information is retained across species:

An ant colony: ants can go outside and leave temporary chemical trails.

An ape troop: if one ape learns something, it can inform the others, but information will be lost if it's not important enough to remember.

A human: I can read things written centuries ago. (Graffiti written in Pompeii nearly 2000 years ago tells us that Gaius and Aulus were friends.) If I don't understand something I read, I can read something else, and eventually I'll understand it.

In other words, we can retain information over time, and evolution has programmed us to squeeze all the juice we can out of the information we gather. In contrast, LLMs learn very inefficiently. They struggled with math at first because having verbal descriptions of algorithms doesn't mean they can use those algorithms. (They have gotten better, but I don't know whether that's because they've learned to call a separate tool to crunch the numbers or not.)
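(If it is tool use, the general shape is something like the following toy mock-up, with a fake "model" standing in for a real LLM; the CALC marker and everything else here is invented for illustration:)

```python
# Toy illustration of tool use: the "model" emits a CALC(...) marker,
# and a separate routine does the actual arithmetic.
import re

def fake_model(prompt: str) -> str:
    """Stand-in for an LLM: instead of multiplying in its head,
    it asks for a calculator call."""
    return "The answer is CALC(127 * 893)."

def run_with_calculator(prompt: str) -> str:
    draft = fake_model(prompt)
    # Replace each CALC(a * b) marker with the actual product.
    def evaluate(match: re.Match) -> str:
        a, b = int(match.group(1)), int(match.group(2))
        return str(a * b)
    return re.sub(r"CALC\((\d+)\s*\*\s*(\d+)\)", evaluate, draft)

print(run_with_calculator("What is 127 * 893?"))  # -> The answer is 113411.
```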
