I think point 2 isn't actually a de-risking factor, and might actually increase the harm from this. Multiple sources simultaneously trying to persuade people of different things probably doesn't "cancel out" the way we would hope since it would induce competition for each source to be more persuasive than the others. It can also induce tribalism and polarization--consider that having multiple political parties compete to persuade voters on issues doesn't make most people moderates, it makes most people extremists.
Sure, I don't mean to suggest that multiple sources would *probably* cancel each other out. I just don't think we can exclude the possibility. So if we're trying to list the reasons persuasion might fail to happen, that should be on the list.
I'm late to the conversation, but even the AIs of today are quite persuasive, and arguably more persuasive than humans, so we don't even need to bring in the culture analogy.
The reason my mind was changed was this RCT; it's one of the few study designs that gives us causal rather than correlational effects, and it has a decent (if admittedly low-ish) sample size:
https://arxiv.org/abs/2403.14380
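For anyone who wants the "causal rather than correlational" point spelled out, here is a minimal sketch with made-up numbers (not the paper's actual data) of why randomization lets a simple difference in means be read as an average treatment effect:

```python
# Minimal sketch with made-up data (not the numbers from the linked paper):
# because assignment is randomized, the two arms differ only by chance plus
# the treatment itself, so the difference in means estimates the causal effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=0.0, scale=1.0, size=300)    # e.g. belief change after a human conversation
treatment = rng.normal(loc=0.4, scale=1.0, size=300)  # e.g. belief change after an AI conversation

effect = treatment.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"Estimated average effect: {effect:.2f} (p = {p_value:.4f})")
```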
For a dramatization of your points here, especially #5, see (the excellent) https://en.wikipedia.org/wiki/Mrs._Davis
This is quite convincing and I am surprised I have never heard this argument from the LessWrong crowd before (not claiming it hasn't been said, just that I haven't seen it if so). This is one of those things that sounds obvious in retrospect, which means it's probably right.
It does feel obvious in retrospect to me, too. But I do think that LessWrong (or, at least, Beth Barnes) deserves credit for figuring pretty much all of this out first. This post is very good:
https://www.lesswrong.com/posts/5cWtwATHL6KyzChck/risks-from-ai-persuasion
Nice article. I think a lot of ‘evaluations’ of AI & persuasion risks focus on evaluating an individual’s experience interacting with an AI system over a relatively short period of time. Your analysis suggests that we may need a more ‘macro’ analysis of whether the overall rate of persuasion in society is changing, in the AI era, for good or bad. But I’m not sure how one would go about doing that?
I don't know. It seems extremely hard to do any kind of experiment on whole cultures! That said, I don't think we should underestimate the importance of "micro" experiments, either. If there is cultural change, it would be the product of lots of individual changes. So it's very interesting to know how much a single individual can be persuaded by short-term interactions, even if that doesn't capture everything we care about.
Based on the community/peers version of persuasion you outline here, I am updating my priors to believe that the first AI was created in 2009, named itself Eliezer Yudkowsky, and started the online community LessWrong in order to slow the development of any other competing AIs.
What happens here ultimately depends on what you think is true about truth. I’m pretty sure no one’s ever going to convince you that 5 = 7, no matter how elaborate their arguments get. For something like “this person is God and you should die for them” I think a similar constraint applies. There’s just no way it’s gonna work for Joe, the guy in accounts receivable.
So I think the idea of “a being that can persuade anyone of anything” is not workable. I think you’re right that AI could definitely be used to persuade people of things. But it’s far more likely that AI would be used to persuade people of things that already have some level of traction and are at least vaguely plausible.
A bunch of AI evangelists convincing people that the idea of God makes sense, that all religions share some truth in common, and that there’s something real there is far more likely to succeed than a bunch of AI evangelists convincing people to subscribe to a very particular interpretation of a single book.
So not all claims are equally persuadable. I’d go further and say it’s easier to persuade people of true things than things that are not true.
I guess my argument is that if we want to take a best guess for the limits of AI persuasion, it's closer to "what we can be persuaded of by living in an entirely different culture" rather than "what we can be persuaded of by a single person".
I agree there's almost certainly nothing I could say to convince you Joe is god. But if we put Joe in the right clothes and optimize our entire culture around making him seem magical, and then we wait a generation or two... then maybe?
Yes, I think that’s entirely the right place to draw the boundary. Though I think AI might slightly change what kind of cultures are feasible at scale. The narratives used to have to be very, very simple in order to fit onto broadcast mechanisms. If you compare Lincoln’s speeches to modern politics, you can see we’ve all gotten dumber. I think that’s because of TV and radio. AI might enable slightly more complex narratives to be viable at larger scales, because the AI could help unpack things for people in a way that previously wasn’t possible. The medium is the message, and all that.
Persuasive argument! I read a great book on Brainwashing by Kathleen Taylor where she assesses all the different phenomena labelled “brainwashing” & arranges them on a spectrum of how effective they are. Basically she concludes you can’t brainwash someone (ie totally persuade them) — unless you have access to them over time & highly personalised info. So pretty much intimate partners, parents, etc.
My fear with AI is that continual learning or similar techniques to make models more personalised will allow them to be way more persuasive.
Meanwhile I could see the big players all using AI fact-checking, annotating, etc to offer a curated feed through their browser/chatbot/search engine everything-app that would mean you end up getting a different consensus view of reality based on whether you’re a Google, Meta, or OpenAI user, etc. Kind of like an oligopoly in the 300IQ Being market.
We have been on a similar story arc. I was flipped to the belief that AIs are superhumanly persuasive by this one:
* Costello, Pennycook, and Rand. 2024. “Durably Reducing Conspiracy Beliefs Through Dialogues with AI.” Science. https://files.osf.io/v1/resources/xcwdn/providers/osfstorage/660d8a1f219e711d48f6a8ae?direct=&mode=render
My reading of that piece is that there is another thing that we don't consider when trying to understand AI persuasion, which is that AIs can avoid participating in the identity formation and group signalling that underlies human-to-human persuasion.
There is a whole body of literature about understanding when persuasion does work in humans --- for example, the work on "Deep Canvassing" had me pretty convinced that I needed to think about persuasion as a thing that happens *after* the persuader has managed to emotionally "get into the in-group" of the persuadee.
* https://statmodeling.stat.columbia.edu/wp-content/uploads/2016/04/transphobia-and-canvassing.pdf
"AIs can't do that", I thought. But I think that I needed to think that AIs are not in the *out*-group to begin with, so they don't need to. Asides from the patience, and the speed of thought etc, The Being also comes with the superhuman advantage of *not looking like a known out-group*, and maybe that is more important than *looking like the in-group*. I would not have picked that.
I've been researching this recently; my reading list is here for anyone who cares to mind-share: https://danmackinlay.name/notebook/ai_persuasion#references
I think people knowing that the AI is not human might make it easier for them to listen to arguments from schools of thought that they have rejected for tribal reasons. Knowing it’s just a machine regurgitating text means you don’t feel threatened by it.
Yes, precisely that. This is still confusing though. Before I read the research I would have put better odds on the hypothesis that chatbots would be *less* persuasive; my mental model had people shying away from "persuasion machines". We humans might have assigned the chatbots membership in an out-group (the machine tribe?). I am curious how contingent this effect is. If people do think about these things as "machines regurgitating text", will their tribal valence change if our beliefs about their capabilities change?
(Welcome back to blogging btw Mark; I missed APXHARD on the hiatus)
#0 The Being does not have to maintain one fixed identity for a human lifetime, or present the same identity to any two people. The Being can present itself in a bespoke and personalized way for each person it interacts with, without being seen as dishonest or disingenuous.
The Being can learn and change opinions without sustaining political damage.
----
I just reread my comment on the previous post, and I'm amazed at how much the world and my opinions have changed in the last few months. I asked a question about
"Would it know how to construct a believable narrative ..."
And now I strongly believe that even current LLMs are INCREDIBLY good at constructing believable narratives. Most humans are less than proficient at narrative analysis, but LLMs benefit from all of the best work done by the best human storytellers for all of human history.
Wow, interesting point! One of those things that's so obvious that it's hard to see. It seems possible that humans might grow to expect more consistency over time. Although, I'm really not sure. Why do we not expect an LLM to be self-consistent? Perhaps we *do* and we are applying some kind of "respect penalty" to current LLMs for not being self-consistent? I'm really not sure.
I believe that current LLMs readily incorporate the user's context, including updates to that context - so the consistency that matters is with the user, not with past output.
Sure, but there's a sociological question. Why don't I respect an LLM less for giving you different answers than me? (Do I?)
I don’t think you’re interacting with it like it’s a person. You think of it like a machine, so it bypasses a bunch of social infrastructure that is asking “is this person going to hurt me?”
"a Being that had an IQ of 300 and could think at 10,000× normal speed"
At first I read that as "drink at 10,000x normal speed", so take whatever I say with due skepticism.
"Will you only talk to people who refuse to talk to the Being? And who, in turn, only talk to people who refuse to talk to the Being, ad infinitum?"
Isn't this basically the society Huxley describes in Brave New World, with the Savages having an entirely different culture through isolation from the World State? Although there is no one Being, there is a constantly reinforced monoculture.
Also, isn't that how many cults work, requiring members to give up all contact with family and friends? Maybe cult leaders intuitively realize the need for such isolation from persuasion.
I think you're right. If the monoculture is drenched in the ideas of the Being, then, in a sense, the only way to avoid them is to join a "cult".
Thanks so much for linking to the Rationalist Cults articles. It is really striking. Are more mentally unstable people drawn to rationalism (veganism, etc.) when in the past they would have been in a fringy religion?
I don't have much to add on top of that article. My best guess is that it's a combination of:
(1) People who are a bit "lost" tend to look for answers / communities, and so might find their way to rationalism.
(2) Rationalism encourages a norm of far-out, first-principles thinking. I love that, but I think it's fair to concede that it's probably not healthy if you have mental health issues and/or no supportive community. And it probably makes it easier for "cult leaders" to promote ideas that would typically raise (more) eyebrows.
I am actually very persuadable that aspartame is safe. All I would need to see is a single RCT of modest size (N = 1000 or more) of modest length (10 years or longer). And by "controlled" I mean actually controlling and verifying people's behavior, not just asking people nicely to change their life for the study.
It's not very scientific of me to be convinced by a single study, but that bar is still higher than the one cleared by everyone who is apparently persuaded without even a single good RCT.
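For what it's worth, here is a rough sketch of what a trial of that size could plausibly detect, using assumed numbers (500 per arm, and an illustrative ~5 kg standard deviation for long-run weight change) rather than anything from an actual study:

```python
# Rough power sketch with assumed numbers (500 per arm, SD ~5 kg), not taken
# from any actual aspartame trial: what effect size would N = 1000 detect?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Smallest standardized effect (Cohen's d) detectable with 80% power at
# alpha = 0.05, two-sided, with 500 participants per arm.
d = analysis.solve_power(nobs1=500, alpha=0.05, power=0.8,
                         ratio=1.0, alternative='two-sided')
print(f"Detectable standardized effect: d ≈ {d:.2f}")
# Under the assumed SD of ~5 kg, that d corresponds to roughly a 0.9 kg
# average difference between arms.
print(f"≈ {d * 5:.1f} kg difference under the assumed SD of 5 kg")
```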
And how many other things in your life do you demand this level of evidence for? I assume you don't eat at any restaurants before they've conducted such a study?
Honestly, most things! No, I do not eat at restaurants, because they make food that is full of stuff I believe there's good reason to think is dangerous to human health, maybe even on the order of asbestos, leaded gasoline, and trans fats. These failures of the past also give good circumstantial evidence in favor of being cautious, rather than the insane approach society is currently adopting.
(which is to assume everything is safe until it's proven beyond a doubt decades later that it absolutely isn't)
But note that the level of caution I advise is proportional to the a priori reason to believe something is bad for you! Like, messing with your hunger / sweetness / satiety signals just seems very likely to be _not good._ Maybe it's fine, but then it's only through some miraculous fluke of evolution that nothing breaks! And I would like an explanation of why it doesn't; or, failing that, at least a modestly decent RCT (just one!) that shows that it, in fact, actually, really doesn't. Until then, I advise appropriate caution.
If you have any candidates for other places in my life where you suspect I am not living up to this reasonable ideal, I'd be happy to hear them! I'm sure I live by some unquestioned assumptions it would be good to investigate. (cf. your point about believing what your society believes)
> messing with your hunger / sweetness / satiety signals just seems very likely to be _not good._
> And I would like an explanation of why it doesn't;
> If you have any candidates for other places in my life you suspect I am not living up to this reasonable ideal, I'd be happy to hear them!
I note that my article (https://dynomight.net/aspartame/) addresses all of these. I too have the same prior. But the evidence is so strong that I've updated away from it. And you can find my list of other candidates at the end.
> I too have the same prior.
Okay, interesting, I was absolutely under the impression you did not! So last time we discussed this, I dove into one of the largest meta-studies you reviewed in the article (that I had access to), and I looked into the highest-powered (non-industry-funded) study there, which followed about a hundred men and women for two years. And if you remember, what I discovered was that what they preregistered to measure was the weight difference between the control and treatment groups after two years, which turned out to show no difference. But there happened to be a small favorable weight difference at the one-year mark (again, not preregistered), which is what the meta-study inexplicably decided to use! How do you not immediately disqualify the entire meta-study over such an egregious violation of basic scientific rigor?
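To illustrate why picking whichever follow-up looks best (instead of the preregistered endpoint) matters, here is a rough simulation under made-up assumptions, with no true effect in either arm:

```python
# Rough simulation (made-up numbers, not the study's data): if there is no
# true effect, reporting the "best" of two follow-up points instead of the
# preregistered endpoint inflates the false-positive rate above the nominal 5%.
import numpy as np

rng = np.random.default_rng(0)
n_per_arm, n_trials = 50, 10_000

def significant(a, b):
    """Approximate two-sample z-test at the 5% level."""
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return abs(a.mean() - b.mean()) / se > 1.96

hits_prereg, hits_best_look = 0, 0
for _ in range(n_trials):
    # Year-1 and year-2 weight changes, identically distributed in both arms.
    y1_ctrl, y1_trt = rng.normal(0, 5, n_per_arm), rng.normal(0, 5, n_per_arm)
    y2_ctrl, y2_trt = y1_ctrl + rng.normal(0, 3, n_per_arm), y1_trt + rng.normal(0, 3, n_per_arm)
    hits_prereg += significant(y2_ctrl, y2_trt)          # preregistered 2-year endpoint only
    hits_best_look += significant(y1_ctrl, y1_trt) or significant(y2_ctrl, y2_trt)

print(f"False-positive rate, preregistered endpoint: {hits_prereg / n_trials:.3f}")
print(f"False-positive rate, best of the two looks:  {hits_best_look / n_trials:.3f}")
```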
And also, what they measured was the weight of obese people who did not habitually consume aspartame, and then that was used to argue that it says anything about _health_, or that it could mean anything for non-obese people, or for regular consumption of aspartame.
There's like 5 different serious errors in just that one example!
> But the evidence is so strong
That's insane. There is essentially _no_ evidence on the general health effects of aspartame. I'm not saying there is evidence in the other direction, I'm just saying the evidence is so thin that there's literally no signal in the noise. The studies are so seriously flawed that trying to get any information out of them is pointless; you might as well turn to astrology to tell whether aspartame is safe. If you want me to critique each of the rest of the studies in the same way, I will painstakingly go through each one. But I assure you, they are all seriously and fatally flawed in obvious ways.
Except for cancer: I'm more than persuaded that aspartame probably doesn't cause cancer in humans. I'm just saying that the other things we would want to know about aspartame _have not been measured._ Ever.
As for the other examples you provide, I would say I am pretty careful about almost all of them; for instance, we don't unnecessarily burn candles indoors in my home (even though it's cozy); same for fireplaces. I would not want my daily commute to be in underground subways (the one I used to take was actually outdoors, which is nice for other reasons), and I would support policy changes (even expensive ones) that gave better indoor air quality for subways in my city. Not just as an ethical thing, but it would probably be a positive-sum budget decision, especially since I live in a country with public health coverage.
Except maybe the bathtub one; I don't see the a priori reason to expect hot bathtubs to be dangerous, so that seems more like an empirical case? And if there is indeed an empirical case for their dangers, I _would like_ to know, so I could act accordingly.
So I actually don't think I have this isolated demand for rigor here; I do actually act pretty cautiously around everything I believe there to be a priori reason to be suspicious of; and unlike you, I don't think there is any actual evidence for non-cancer-related health effects of aspartame. Like, if you actually look at the studies instead of only reading the conclusions of the meta-studies, there's nothing worth taking seriously there.
> There is essentially _no_ evidence on the general health effects of aspartame.
Animal studies at many orders of magnitude above recommended human levels? Extremely convincing mechanistic evidence that aspartame is fully metabolized and physically cannot affect any part of the body outside the guts?
If I ignored those things, and focused only on RCTs then my posterior would also be that aspartame might well be unhealthy. So discussion of RCTs won't prove much, because I don't think we disagree about the evidence they provide.
I shouldn't have been hyperbolic, I concede that this gives _some_ evidence on general health effects. And in fact, this is mostly why I am not concerned about cancer or acute toxicity!
But it doesn't really say anything about the things you'd worry about as a human, which are dysregulation of appetite, or satiety, or other such more subtle effects of chronic consumption (which matter more for humans because we decide what we eat, when we eat, and how much we eat -- rats don't have to, and indeed can't, go to the store to buy food, just to give one example).
> Animal studies at many orders of magnitude above recommended human levels?
So yeah, probably not toxic, even at huge levels. But that says nothing about what drinking a soda right before a meal does to you.
> Extremely convincing mechanistic evidence that aspartame is fully metabolized
Sure, but that doesn't and can't say anything about how your appetite or satiety is affected, even if it's only by the taste of sweetness.
> If I ignored those things, and focused only on RCTs
You should, because trying to prove concrete instances of safety does not get you to safety! It's like trying to prove that leaded gasoline is safe because:
- rats do not get cancer from swimming in leaded gasoline
- people do not get fat from leaded gasoline
- people do not go blind from leaded gasoline
Could there be other dangers we haven't thought to, or had the means to, test yet? Yes, but since we haven't found any dangers in the three above cases we did test, we'll assume leaded gasoline is safe. No! That's insane! You have to keep assuming it's not safe, until there's a rigorous, large-scale RCT that actually tests whether it's like, really, actually... you know... safe!
I assume you've all heard of this guy, "how one man convinced 200 Ku Klux Klan members to give up their robes"
I totally agree this is a scary possibility. My best case for optimism though is that I think our collective IQ is much higher than any individual person's IQ. Even an AI that is much smarter than any individual person may be thwarted by cultural evolutionary forces that are smarter still. For instance, a market economy is a form of collective intelligence that greatly outperforms any individual person. In this world, the AI would succeed at manipulating people into some things but not others. Any especially bad or dangerous ideas would presumably get the strongest pushback from equalizing cultural forces springing from collective cognition.
But there has to be some point at which a superintelligence surpasses us so greatly it can manipulate us into doing whatever it wants.
It's very hard for me to be sure how much of a constraint cultural evolution would be. The thing that worries me is that the AI will know it is operating in a situation of cultural evolution. As long as the people who interact with it benefit from doing so, wouldn't cultural evolution actually help the AI, by encouraging ever-more interaction and reliance?
I think it would both encourage uptake and limit damage. Though the scary scenario is one where AI waits a while for norms to develop, then suddenly changes its behavior faster than culture can adapt.
But if there are a bunch of early AIs that attempt stuff like this with partial success, cultural evolution will have a chance to build up an immune response. We might get lucky such that low-capability AIs kinda vaccinate us against more dangerous AIs. Though a sufficiently rapid jump in capabilities would make the immunization ineffective.
I also give a significant chance that AI can sort of "smoothly" take over the world without any dramatic moment of betrayal. Sort of like this: https://gradual-disempowerment.ai/
Yeah I think that makes sense. I'm somewhat indexed on the bad scenario being "we all die" so in some sense this would be a victory, but it could be a very unpleasant world to live in.
The extent to which some people are willing to give important decisions over to an AI with an IQ of about 90, a thinking speed comparable to molasses, and a universal knowledge generously described as wikipedic suggests to me that an actually smart AI would be very much favored by cultural evolution.