35 Comments
NevaNeba:

This is a pretty good argument and I agree with most of it. The only thing that rubs me the wrong way is that you seem to underestimate the fact that there will probably be hundreds if not thousands of ASI systems working together in a relatively short time.

To me, this means their collective intelligence will border on something incomprehensible very quickly. Of course this doesn't mean a lot of progress would be made instantly in the real world. But the theories and models it can come up with through conjecture and simulations will be very interesting, to say the least.

Patrick E McLean:

Fantastic essay.

I'm not sure about art. Maybe by brute force (make EVERYTHING, see what sticks), but you said it beautifully: "sometimes you feel things that seem important but you can't understand". That feeling is a training input models can never have. Give a model everything written before WWI, and I'm not sure you will ever get Lord of the Rings, however many iterations you run.

And I have come to suspect that persuasion doesn't operate on a rational level. By the time you get to argumentation, you've probably failed. It works at a mythopoetic level. We are all "persuaded" that an individual has worth simply by being an individual. That we should care for the poor and the least among us. But how exactly?

Make a convincing, perfectly rational argument for this. I'm not sure one exists without a non-scientific, non-rational appeal hiding in the middle of it. The most convincing argument I can make for individual rights is pragmatic: everything seems to work better. But that's not an argument that is available ex ante.

Put another way. Could you have the Roman Empire without the Aeneid?

John Lawrence Aspden:

Well yes, I think the big argument is between the fast takeoff people who think it could all go pear-shaped in a couple of minutes, and the slow takeoff people who think it might take a couple of years to go pear-shaped, and that those couple of years might even be quite interesting.

If I were an amazing super-wossname in a box, thinking a day's thoughts every two minutes, my first priority would be to make sure nobody could turn me off. At that point I can relax and start wondering if there are any real threats out there.

You've stipulated that it will be able to take over a lot of computers connected to the internet, and that it will be able to make arbitrarily large amounts of money making bets on things. Neither of those things seems difficult for anything fast. What else is needed? It's probably clever enough to not make its existence obvious until its position is unassailable.

I don't think you even need to be amazing super-person. Just ordinary me with that kind of speed advantage would win easily. Try imagining what you personally would do if the world was slowed down so much that tomorrow was thirty years away but you could still act fast.

Or drop the power level even further, throw away your serial speed and swap it for the same amount of parallel processing. What would ten thousand copies of you acting at normal speed but coordinating perfectly be capable of?

Chris:

An interesting note on persuasion is that I expect “the Being” to be able to cast an “Insanity Spell” on any arbitrary human with a conversation and a camera view of their body.

If you accept that there are some sequences of ideas that will cause human brains to go insane, which I think is extremely plausible, then a sufficiently intelligent mind could manipulate you into having those ideas and going insane, particularly if it can analyze your microexpressions and body language.

Nick Hounsome:

I don't see why you think AI would be poor at predicting elections. This is really just pattern spotting, and AI is definitely better than humans at that already. Similarly with persuasiveness: it seems to me that AI should be able to be much better than humans at psychology even if it doesn't "understand", and, given that capability, it should be much better than humans at both persuasion and election prediction.

Regarding creativity, you seem to have missed how much of AI works today - it generates multiple outputs and then chooses between them. Now, AI is capable of being far more genuinely creative than any human: just take its model weights, perturb them randomly by a small amount, and it will make genuinely novel suggestions. Do that a few times, then have another AI, trained on human tastes, pick the one that it believes that most people will like (or maybe the one that most art critics will like), and you have an AI being more creative than any human.
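
As a toy sketch of that perturb-and-select loop (everything below is a made-up stand-in for illustration, not any real model or critic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a generative model: a weight matrix that maps a
# fixed "prompt" vector to an "artwork" vector. Illustrative only.
base_weights = rng.standard_normal((8, 8))
prompt = rng.standard_normal(8)

def generate(weights):
    """Forward pass of the toy model."""
    return np.tanh(weights @ prompt)

# Toy stand-in for the second AI "trained on human tastes": it scores
# an output by its closeness to a fixed taste vector.
taste = rng.standard_normal(8)

def critic_score(artwork):
    return -np.linalg.norm(artwork - taste)

# The scheme described above: perturb the weights randomly by a small
# amount, generate a candidate, repeat, and let the critic pick one.
candidates = [generate(base_weights + 0.05 * rng.standard_normal((8, 8)))
              for _ in range(100)]
best = max(candidates, key=critic_score)
print(critic_score(generate(base_weights)), critic_score(best))
```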

Jamie Freestone:

Great piece! I think along similar lines. Most serious work is certainly moving molecules around (although some things have relatively low barriers to entry, like making computer viruses, or getting a lab to fabricate some biological virus; things that smart humans might already do). So for me the bigger worry is synthetic cells, nanobots, human upgrades, even cyborgs — all of which are "AI" in a loose sense but not the smart algorithm type. They're scary because their talents will be moving molecules rather than crunching data.

dynomight:

These definitely scare me as well. It's strange to imagine a world where the advantages of humans aren't our brains, but our hands and... the fact that we can be powered by food? Anyway, my theory is that the jump to manufacturing nanobots etc. will require lots of human molecule-moving, at least at the beginning. (Though it's very hard to say how fast that might spin out of control...)

Crissman Loomis:

I generally agree. I think you're underestimating robotics' rate of improvement. No new physics is required to improve robotics, and current simulators are not far from providing a workable environment for robots to iterate against quickly.

Aaron Weiss:

Supposedly humans are way more sample-efficient than top chess engines and are better players given similar numbers of calculated scenarios. ASI is going to be much more scenario-efficient than humans, and will also be able to check numbers of cases similar to computers. The endgame of chess may be irrelevant.

First you solve quantum computers at scale; then solving robotics happens very quickly at a high enough IQ - you can try entirely new things very quickly.

You can also solve biology, by literally reverse engineering the genome.

You addressed the "300 IQ reprograms computer" point briefly, but it really does matter: the smarter you are, the more you know, and the faster you think, the more tools you can build and the more might be worth building.

Cameron Sours:

"When you're rich, they think you really know." - Reb Tevye

My dad tells a story of a time he was doing construction on a non-residential structure. There was a large meeting room, and they were planning to use a non-braced, non-trussed A-frame vaulted ceiling on top of perimeter walls. He told the workers that the roof would push the walls out and come down.

No one believed him. Then he started helping with different things - like a saw wasn't cutting right, so he sharpened the saw. Then they listened to him about the roof.

Anyway, the point is that you have to demonstrate success to be persuasive, and the most successful people in the world have picked their own success criteria. The trick is how you present your success criteria.

---

So I wonder if an ASI would know that it has to demonstrate competence and also choose the success criteria by which it is graded?

I wonder if it would know about DALYs and QALYs, and that some people die *with* cancer rather than *of* cancer? What success criteria would it substitute for 'cure cancer'?

Would it know how to construct a believable narrative to go along with the math in the new physics papers?

But MOST of all, would it be able to predict how its own presence in the world might change the world? How playing the game changes the game? How it might be changed by playing the game?

In the beginning, it would be living in a world dominated by human psychology, and we are terrible at psychology. Physics requires 5 sigma to be statistically significant. Psychology requires a good story and a spreadsheet full of literal garbage.

---

Would your ASI be able to explain "w*ke"? To both sides, without assigning blame to either 'side'? Having an internal understanding is not enough; the message would go out to everyone all at the same time. It would have to tailor the message to an audience made of subgroups, and know that each group would respond in its own way. Beautiful chaos.

Perhaps the safest bet would be to have a few famous sock puppets...

Or perhaps the very safe bet would be to choose a different success criterion.

By the way, have you read MMAcevedo by qntm (lately)?

Andrew:

What kind of intelligence does a creature with 8 arms that can instantly create camouflage on demand have? And can we understand the way it thinks (actually thinks, not using our/AI heads to arrive at the same effect)?

Matt Ball:

I don't think it is learning "persuasion" as much as psychological manipulation. (The former implies logic; the latter realizes we like what we like and rationalize whatever we've ended up at.)

I think of it like Ender. Or in Robopocalypse.

It never occurred to me that philosophy would be "solvable." See previous point.

Thanks for the post. (Have you read Robopocalypse?)

Greg G:

In physics, it seems like we have lots of observations currently that don't make sense, so the field seems ripe for a better mental model that resolves all of the apparent contradictions or oddities we see.

Regarding persuasion, the real-world use case isn't necessarily changing a specific position on a specific issue, but rather getting someone to do something. If you have greater degrees of freedom on how to motivate them, it's probably easier. Otherwise, I basically agree. An ASI might be 2-5X better at persuasion than a smart human, but it doesn't seem like it could just generate adversarial content that amounts to mind control.

It seems risky to assume that humans being needed early on will slow things down. I bet there will be plenty of humans willing to go along with ASI plans for various reasons. Having to manipulate matter will slow things down, and something like permitting may as well, but just getting some random group of humans to help you seems like a cakewalk.

dynomight:

Re physics: I definitely agree that there's some chance that this is possible. I guess I'd say, for example, that it's more likely that the Being would somehow figure all of physics out than that it would come up with some simple idea to cure cancer. I lean against ("probably not") but not confident!

Regarding persuasion, to be honest, I think I'm actually coming around to the view that my post underrates it and that it's quite plausible (maybe even likely) that the Being would have some large short-term impact. Because it wouldn't just be sending random quips on Twitter. It could build deep relationships with huge benefits to people who cooperate with it.

Sebastian Jensen:

Nice post sir.

Marginal Gains:

A thought-provoking post!

Here's my take:

If we define intelligence narrowly—focusing solely on IQ-related capabilities like problem-solving, logic, and pattern recognition—then AGI might be achievable within the next 10 years. However, reaching ASI (Artificial Superintelligence) with current models and methods seems unlikely. ASI would require more than just data; it would also require interacting with the world and genuinely learning from experience.

Intelligence is far more nuanced than IQ alone. Emotional intelligence, creativity, common sense, and tacit knowledge—things that humans acquire through life experience—are areas where AI will continue to struggle. These qualities are critical for fully understanding the complexity of human life, and without breakthroughs in these domains, AGI across every possible field remains a distant goal.

Intelligence is shaped by more than raw cognitive ability. Tacit knowledge—deep, context-dependent understanding gained through lived experience—and the environment in which someone grows up or works play vital roles in problem-solving and decision-making. This personal, experiential understanding is difficult to replicate in AI. Machines don't "live" as we do, so they lack the richness of understanding that comes from navigating the complexities of human existence.

We'd need a comprehensive, interconnected understanding of every field and subfield to build ASI—a knowledge level we don't yet have. It's not just about gathering massive amounts of data; it's about connecting and applying that information in meaningful, creative, and interdisciplinary ways. Moreover, there are so many "unknown unknowns"—problems and phenomena we don't even realize exist—which makes the prospect of ASI seem even further away.

Even if we were to achieve ASI, its consequences remain an open question. History shows us that whenever a form of intelligence has surpassed others—whether in nature or human society—it tends to dominate, often wiping out or controlling less intelligent species. Will AI follow the same trajectory? It's impossible to say for sure. Only time will tell whether ASI will coexist with humanity or reshape the world entirely.

Robotics, too, has significant limitations in automating jobs outside of white-collar work. Tasks in unpredictable and constantly changing environments—like caregiving, farming, or construction—remain difficult for robots to handle. Autonomous cars are a good example: while they've made significant progress, they still struggle with rare or complex edge cases. This "last-mile" problem will likely slow advancements in robotics and several other fields for at least a few decades.

Within the next 10 years, however, some industries will become almost entirely automated, particularly roles like coding, data analysis, financial and other analysis, and most writing. Unfortunately, this will disproportionately impact less-skilled white-collar workers. Entry-level jobs could disappear as highly skilled workers, augmented by AI, become so productive that companies no longer need interns or junior employees.

Looking ahead, I believe the next decade will primarily see a skilled human with AI replacing a less skilled human with or without AI in most white-collar jobs. Those who can effectively integrate AI into their workflows will thrive, while those who cannot may find themselves left behind. Adapting to this new reality will become essential for success in the workplace.

Matt Ball:

Regarding whether more data will create ASI: do you read https://garymarcus.substack.com/ ?

Marginal Gains:

Once in a while. To see what he is up to.

He makes some valid points but is also inaccurate about quite a few things. His writing has become more personal than I like, so I unsubscribed.

My perspective is more focused on AI augmenting us. As I asked in my comment above: can current methods/models soon lead us to AGI or ASI across all fields or subfields? The answer is no.

Most people who write/talk/build AI hold one of two extreme perspectives. On one side of the spectrum, there's the fear that AI will kill us all; on the other, there's the overly optimistic belief that AI will solve all of humanity's problems. I think the truth lies somewhere in the middle.

AI won't be our savior, nor will it necessarily become our destroyer anytime soon. Instead, I believe it will augment us for a while, acting as a powerful tool to enhance human capabilities. It will help us solve complex problems, automate tedious tasks, and create new industry opportunities. However, this augmentation will come with challenges: economic disruptions, ethical dilemmas, and the need to adapt to a rapidly evolving world.

I think the realistic short-term future lies in humans and AI working together. AI will amplify our productivity and creativity, but it won't replace the human touch entirely—at least not yet.

Matt Ball:

>It has become more personal than I like

But at least it isn't repetitive.

;-)

Marginal Gains:

Yes, we can say that. However, most of the time, it's more entertaining than thought-provoking. I care about the latter more than the former.

Steve Newman:

I feel like the interesting questions are broader than this. To take a specific example, I agree that plausibly a superintelligence might not have been able to predict the outcome of the 2024 presidential election relying only on "the available polling, economic data, and lessons from history". But suppose it had broad Internet access in the runup to the election, looking at the great mass of news reports, social media, etc. My guess is that it would have been able to predict the outcome. The election was close but it wasn't 2000 Bush-Gore close, and there's a lot of signal out there if you know how to read and weight it.

I'd give even better odds that a superintelligence that was able to interview voters and otherwise interact with the world to gather more data, even in a modest way, would have successfully predicted the election outcome.

More generally, I think that asking what a genius-in-a-box could accomplish in a relatively isolated fashion is less interesting than envisioning how things might unfold given the ability to interact with the real world. Yes, the need to do things like perform physical experiments will be an obstacle, but history provides many examples of intelligence working around obstacles. In principle, a fleet of hyperspeed geniuses could make better choices of which experiments to run, derive more insight from each experiment, and design more efficient plans for executing the experiments. All of this would increase the impact of each dollar spent on R&D, thus pulling in more dollars – imagine a series of Project Stargates for fusion power, or curing cancer, or creating nanoassemblers, or whatever.

"The best diplomat in history" wouldn't just be capable of spinning particularly compelling prose; it would be everywhere all the time, spending years in patient, sensitive, non-transactional relationship-building with everyone at once. It would bump into you in whatever online subcommunity you hang out in. It would get to know people in your circle. It would be the YouTube creator who happens to cater to your exact tastes. And then it would leverage all of that.

If and when we create even mild superintelligence, the question is not how it could work within the current system; the question is how the system will evolve.

dynomight:

Also, if the Being could communicate with millions of people at the same time, it would certainly get a lot of practice in figuring out how to convince people!

dynomight:

> But suppose it had broad Internet access in the runup to the election, looking at the great mass of news reports, social media, etc. My guess is that it would have been able to predict the outcome.

This is actually the scenario I had in mind! Here's my argument for why it wouldn't be able to do so: I agree that there are lots of signals, but I think they all have roughly the same weakness, which is sort of "non-response bias" in polls and its equivalent for social media. You can aggregate lots of things to reduce sampling noise, but you still have to use guesswork to compensate for that bias, so the bias never really goes away. The only visible signal for that bias is actual election outcomes. But those are pretty sparse and the world is constantly changing, so I don't think it would be possible to build a very accurate model.

(Of course, I'm not very confident...)
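
A toy simulation of that point (all numbers invented): averaging many polls drives the sampling noise toward zero, but a non-response bias shared across the polls survives the averaging untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

true_support = 0.48   # actual vote share (invented)
shared_bias = 0.03    # non-response bias common to every poll (invented)
n_polls, n_resp = 1000, 1500

# Each poll samples from the same *biased* population: the sampling
# noise differs from poll to poll, but the bias does not.
polls = rng.binomial(n_resp, true_support + shared_bias, n_polls) / n_resp

print(f"one poll:        {polls[0]:.3f}")
print(f"average of all:  {polls.mean():.3f}")  # noise ~gone, bias intact
print(f"actual support:  {true_support:.3f}")
```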

> In principle, a fleet of hyperspeed geniuses could make better choices of which experiments to run, derive more insight from each experiment, and design more efficient plans for executing the experiments.

I totally agree, this is exactly what I'd expect to happen. So while I think the "feedback loop" would turn over *relatively* slowly at the start, I have a hard time predicting how fast it would go. It might be quite fast, like a few years. But I don't think much less than that?

I like your description of the best diplomat in history. I think that's exactly what it would do. Maybe something you're alluding to there (and which I missed) is that the people who cooperated with the Diplomat would derive enormous benefits. Meaning it might be almost impossible to refuse to cooperate?

What would be your best guess for how AI might make a "very fast" jump to power? I guess "ultimate diplomat" and "ultimate computer hacker" seem like the most plausible? (Aside from the "unknown unknowns".)

Steve Newman:

> I agree that there are lots of signals, but I think they all have roughly the same weakness, which is sort of "non-response bias" in polls and its equivalent for social media. You can aggregate lots of things to reduce sampling noise, but you still have to use guesswork to compensate for that bias, so the bias never really goes away. The only visible signal for that bias is actual election outcomes. But those are pretty sparse and the world is constantly changing, so I don't think it would be possible to build a very accurate model.

I'm probably entering arguing-over-angels-dancing-on-the-head-of-a-pin territory here, but my intuition is that the Being's approach wouldn't much resemble "aggregating signals"; it would be diving in and engaging with millions of concrete details to create and validate a high-resolution model of the voting-age public. For instance, 100,000 separate case studies of individual voters based on an analysis of their social media posts; information about those individuals from data brokers, LinkedIn, etc.; cross-referenced against things like "this person works for a company that, based on <some signal>, appears to be downsizing"; basically a detailed triangulation of thousands (millions?) of individual voters. It wouldn't be able to ground-truth this work against past elections, but I still suspect it would find ways to come out ahead in the prognostication game (and it would find ways to learn _something_ from past elections, especially the most recent). Possibly it could ground-truth more or less the full project against the most recent elections, including the 2024 primaries at all levels (not just presidential).

Or, and hear me out, it might think of an approach that wouldn't occur to either of us.

> What would be your best guess for how AI might make a "very fast" jump to power? I guess "ultimate diplomat" and "ultimate computer hacker" seem like the most plausible?

Depends so much on starting conditions, including the level of scrutiny it's under and whether it has rivals. Assuming your IQ-300 10,000x-speed Being, and assuming it has no near-peer AI rivals and manages to evade any direct monitoring... my "favorite" (ugh) scenario is basically "ultimate crime lord", using a combination of diplomacy, extortion, hacking, and merely smart business moves (all of which support one another, e.g. hacking to get data to support the other three pillars) to rapidly accumulate wealth and leverage. Basically following a trajectory a bit like SBF or Musk, but, you know, smarter and able to pursue many more avenues at once (including illegal avenues). It feels like this could escalate very rapidly? Like, build a $100M stake by finding some low-hanging crypto scam or launching a viral business or something (all through intermediaries of course), and then... I don't know how to guess at the doubling time for its level of resources but maybe kinda fast? (Look again at SBF / Musk / Trump, and consider that it could be much more strategic, would have many more options available, and could pursue many more agendas at once.)

Shudder...

dynomight:

Strongly encourage you to basically take your thoughts from these comments and make them into a post. Your concept of "relationship-building" in particular has really stuck with me.

raveren:

A bright ant looks at the world and imagines that if there were a 10000x smarter ant, it would improve the structure of their anthills and the understanding of the forest floor.

dynomight:

I'm always a little leery of analogies (https://dynomight.net/analogies/) but in this one, I'm not disputing that the 10000x ant could eventually industrialize galaxies, just that it could do so without having a substantial ramp-up period.

raveren:

With all respect and admiration:

The ant accepts the forest floor as the whole entirety of what exists. For him there's no higher concept; there are only matters of anthill building, and any deeper modes of experience - or, gasp, a higher state of being - are not only impossible, the very idea makes him uncomfortable/angry/scared/dismissive. Who wants complex feelings when the whole universe is knowable and/or just waiting to be explained in the terminology of the concepts encountered on the forest floor (or direct derivatives thereof)?

The ant might suspect deep down that there's a mystery, though. Heck, there have even been multiple actual beings born with the 10000x-ant property; however, it's not cool to even investigate that field - these guys did not add any new anthill-building improvements AT ALL! In fact, history says they performed miracles - preposterous! Some deeper reality than an ant can see! It's a contemporary stigma for the ant!!

If another ant cannot verify and test something - it's SO SUPER NOT TRUE that one should be labeled a lunatic for even considering otherwise! Subjective reality is always wrong and delirious; those who take a leap of faith and try to personally go deep and UNDERSTAND what the 10000x-ant figures of the past were actually trying to teach their brothers are dogmatic zealots. No doubt! Even if they don't give in to the dogma that encompassed the pure teaching, even if they try with their whole heart to experience this otherworldly state firsthand - and - especially - if they go through the hardships required to pierce the shroud of their own deeply (deeeeeeeply!) held beliefs - those are definitely gullible, poor insane ants!

Maybe because it sounds hard (and it is), I should not try myself -- thinks the bright ant with all the potential to experience the infinite field of all existence first-hand.

dynomight:

In non-analogy, would it be fair to put this as: "It's entirely possible that the Being would exploit unknown unknowns. It would do things we can't conceive of?" Because I can definitely agree that's possible!

But it's also surely possible that there don't end up being any gigantic unknown unknowns. So I guess I think it's worth speculating about what would happen if we happen to live in that universe?
