Links for January
carter, voltmeter, xanadu, psychopathy, pregnancy, yamasuki, Aeg!, AI, life, alcohol, bliss
(1) Jimmy Carter rabbit incident
On April 20, 1979, President Carter was on vacation fishing in a pond in his hometown of Plains, Georgia. After returning to DC, he mentioned to some White House staffers that a large rabbit had swum towards him “hissing menacingly” and he’d had to scare it away. Four months later, press secretary Jody Powell—possibly after a lot of drinking—mentioned this story to Associated Press reporter Brooks Jackson, which resulted in this front-page article in the Washington Post:
The country went crazy and spent more than a week mocking Carter for this ridiculous story—a story that Carter only mentioned in private, to his staffers, and which was apparently leaked to the press by Carter’s own press secretary. But Carter refused to comment.
After Reagan beat Carter in the 1980 election, the incoming administration found a photo taken by a White House photographer, and—the rabbit was real:
See also: Max Nussenbaum’s excellent review of Kai Bird’s biography of Carter, The Outlier.
Note: the fate of the rabbit is unknown.
Does it matter how information gets to you? For example, say a paper comes out that tries yelling at kids in school and finds that yelling is not effective at making them learn faster. You might worry: If the experiment had found that yelling was effective, would it have been published? Would anyone dare to talk about it?
How much should you downweight evidence like this, where the outcome changes the odds you’d see it?
If you’re a true believer fundamentalist Bayesian, the answer is: None. You ignore all those concerns and give the evidence full weight. At a high level, that’s because Bayesianism is all about probabilities in this world, so what could have happened in some other world doesn’t matter.
The voltmeter story is an evocative tale where this conclusion seems intuitive and hard to avoid. But is it really always valid? I constantly see people who are enthusiastic about Bayesian reasoning argue in ways that are inconsistent with this rule. I’m not sure if that’s because they aren’t aware of the rule, or because it’s a bullet they aren’t willing to bite (or both).
(3) PROJECT XANADU®
Founded 1960 * The Original Hypertext Project
“Xanadu®” and the Eternal-Flaming-X logo are registered trademarks of Project Xanadu.
The computer world is not just technicality and razzle-dazzle. It is a continual war over software politics and paradigms. With ideas which are still radical, WE FIGHT ON.
I think this is some kind of way of… visualizing text? But communicated in an unusual way? There’s an example visualization, and—of course—a video narrated by Werner Herzog. I don’t understand what’s happening here, but I’m cheering for them.
(4) Hierarchical Taxonomy of Psychopathology
I often wonder if current mental disorders will still exist in 100 years. Will we talk about someone having “ADHD” or “autism”? Or will those be subsumed by some other classification? Doug points out this article which mentions the Hierarchical Taxonomy of Psychopathology. The idea seems to be that, instead of saying someone “is a psychopath” or “is not a psychopath”, you’d measure to what degree they have dozens of interrelated features of psychopathy:
I have no real opinion about this but I suspect it gets at some deep issues about the meaning of science and human nature, so I’m hoping someone (Experimental History? SMTM?) will explain.
(5) Slightly More Than You Wanted To Know: Pregnancy Length Effects
Everyone agrees that pre-32 weeks is really bad, pre-37 weeks is pretty bad, and post-42 weeks is dangerous. In this post, though, I’ll focus on the sweet spot: the 37-41 week range. If you have a baby in this range, you’re basically in good shape, and should be grateful. But are you in slightly better shape in some parts of the window than others? Let’s find out.
Speaking of parenthood, I’ve long been a fan of the 1971 album Le Monde Fabuleux Des Yamasuki, produced by Jean Kluger and Daniel Vangarde. This is a concept album, with “phonetic pseudo-Japanese” lyrics apparently written using a Japanese dictionary, and recorded with a children’s choir and a black-belt Judo master. It’s a frantic mixture of innocence and mayhem that doesn’t really fit into any existing musical genre, but I always felt that it shared some of the spirit of modern electronic music. I only recently realized that Daniel Vangarde is the father of one of the members of Daft Punk. (The robot with the grey head in Epilogue.)
Speaking of old music, Stopp, Seisku Aeg! is a song recorded by Velly Joonas in Estonia in 1983, apparently an arrangement of Frida’s I See Red. I was extremely suspicious that this song was some kind of retcon. It just seemed too good, too adapted to 2025-era tastes to have been made in the USSR 40 years ago. But as far as I can tell, it’s real, and Velly Joonas remains an active musician, painter, and poet to this day.
What can you predict about the future of AI if you take numbers seriously, in particular amounts of (a) money, (b) FLOPS, (c) data, and (d) energy? Seems like… a lot? This post should probably be mandatory reading for anyone interested in where AI is going. Short, but with exquisite information density. Despite the title, it’s almost all more general than just o3.
You’re familiar with cells. (You’re made of them.) You’re also surely familiar with viruses, which are small bits of genetic code (DNA or RNA) surrounded by a protein coat. Viruses infect cells by having proteins on their coat bind to specific molecules on the cell membrane. Viroids are pieces of RNA that are similar to viruses but with no protein coating. All known viroids only infect plant cells. They can survive without a protein coat because they have an extremely stable (circular) structure and often just rely on damage (e.g. from insects) to get into cells.
Well, good news! We now have a brand new lifeform. Obelisks are viroid-like things that probably live in your mouth.
Here, we describe the “Obelisks,” a previously unrecognised class of viroid-like elements […] We find that Obelisks form their own distinct phylogenetic group with no detectable sequence or structural similarity to known biological agents. Further, Obelisks are […] detected […] ~50% of analysed oral metatranscriptomes (17/32).
Also:
Given that the RNA sequences recovered do not have homologies in any other known life form, the researchers suggest that the obelisks are distinct from viruses, viroids and viroid-like entities, and thus form an entirely new class of organisms.
You may have heard the Surgeon General recently warned that alcohol causes cancer. You, being a reader of this blog, are already aware of this [1, 2]. But the report is still worth looking at as an example of science communication. Here’s an excellent summary of the relevant mechanisms: (Mechanism D was news to me.)
This summary of how much risk of cancer rises with alcohol consumption also deserves praise for being unusually non-misleading:
While the upward-sloping arrows are a bit much, this gets a lot of things right:
✔️ compares to base rate
✔️ shows absolute risks, not some screwy “percentage change in hazard ratio” nonsense
✔️ shows 0% and 100%
✔️ not horrendously ugly
(If you’re wondering why the absolute risk of cancer is so much higher for women than for men, I think it’s mostly that men are at much higher risk of dying from other stuff like cardiovascular disease. You can’t get cancer if you die from something else first.) [Update: Wrong! Men are at higher risk of most cancers. This chart probably looks different because it only includes certain “alcohol-related cancers”, one of which is breast cancer. Thanks to Elizabeth for the correction.]
Most public health communication drives me crazy by being too “opinionated”. Are the increases from 16.5% to 21.8% for women, or from 10.0% to 13.1% for men, a lot? Should everyone stop drinking? The right answer is to shut up and let the reader decide for themselves. It’s refreshing to see someone actually do that.
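(The same change can sound mild or alarming depending on the framing. Here’s a quick sketch, using the risk figures quoted above, of three common ways to express it; the function name and “number needed to harm” framing are my own additions, not from the Surgeon General’s report.)

```python
# Three framings of the same risk change: percentage points, relative
# percent ("X% higher risk"), and number needed to harm. Baseline and
# exposed risks are the lifetime alcohol-related cancer figures above.

def risk_framings(baseline, exposed):
    """Express the change from baseline to exposed risk three ways."""
    return {
        "absolute_pp": (exposed - baseline) * 100,       # percentage points
        "relative_pct": (exposed / baseline - 1) * 100,  # relative increase
        "nnh": 1 / (exposed - baseline),                 # extra cases per N people
    }

women = risk_framings(0.165, 0.218)  # 16.5% -> 21.8%
men = risk_framings(0.100, 0.131)    # 10.0% -> 13.1%

print(f"Women: +{women['absolute_pp']:.1f} points, "
      f"+{women['relative_pct']:.0f}% relative, "
      f"~1 extra case per {women['nnh']:.0f} drinkers")
print(f"Men:   +{men['absolute_pp']:.1f} points, "
      f"+{men['relative_pct']:.0f}% relative, "
      f"~1 extra case per {men['nnh']:.0f} drinkers")
```

Note that a “32% higher risk” headline and a “5-point” increase describe the identical numbers, which is exactly why showing absolute risks matters.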
Of course I don’t love the unambiguous causal language (“increases”) when it’s all observational data. But we have no choice but to rely on observational data since we can’t have nice things.
Everyone complains when the government is incompetent. But it’s equally important to celebrate when the government gets things right. So to all the government employees who made this happen—well done.
(11) Tyler John, back in 2022:
I always tell my friends:
it’s nothing, really. Don’t worry about repaying me.
It was never about you anyway, you just happen to be among the most efficient means to filling the cosmos with bliss-maximising Dyson spheres.
Really, I should be thanking you.
> Does it matter how information gets to you? For example, say a paper comes out that tries yelling at kids in school and finds that yelling is not effective at making them learn faster. You might worry: If the experiment had found that yelling was effective, would it have been published? Would anyone dare to talk about it?
> How much should you downweight evidence like this, where the outcome changes the odds you’d see it?
> If you’re a true believer fundamentalist Bayesian, the answer is: None. You ignore all those concerns and give the evidence full weight. At a high level, that’s because Bayesianism is all about probabilities in this world, so what could have happened in some other world doesn’t matter.
I'm confused by this: if you receive information via a channel that you know applies selective filtering, your knowledge of the filtering *must* affect how you update, mustn't it?
I'm not practiced in formal Bayesian reasoning, so I can't express this in the appropriate technical language; with that caveat:
Suppose I believe that the New York Times is selective in their reporting of natural disasters: they are more likely to report disasters in some countries and less likely in others. They report on a disaster in Japan. I can 100% update that there is, in fact, a disaster in Japan.
Then I open my morning edition of Weird School Experiments Daily and I see the report of a finding that yelling at kids is not effective. It's just one study, so even setting aside my concerns regarding reporting bias, I can't fully trust it. I should update somewhat in the direction of believing that yelling at kids is not effective to make them learn faster.
How much should I update? Well, imagine a world where:
- This particular question is the subject of 10 studies each year.
- The reality is that yelling is *slightly* effective, such that any given study has a 50% chance of concluding it is effective.
- My prior happened to be precisely correct – I believed that yelling is slightly effective.
If I update on published studies, and only the studies which find that yelling is ineffective are published, then 5 times each year I'll update somewhat in the direction of yelling being ineffective – pushing me away from the truth. If I update on published studies, and all of the studies are published, then I'll update back and forth but my belief will tend to stay in the vicinity of the truth.
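To make this concrete, here's a toy simulation of the scenario above. A naive reader does a simple Beta-Bernoulli update on every study they *see*; the parameters (200 studies, a flat prior) are my own illustrative choices, not anything from the post.

```python
import random

random.seed(0)

# Each study independently concludes "yelling works" with probability
# TRUE_P = 0.5 (i.e. yelling is slightly effective, as hypothesized above).
TRUE_P = 0.5
N_STUDIES = 200

def naive_posterior_mean(published_only):
    """Posterior mean of P(study finds an effect) for a reader who
    updates on every study they see, ignoring the publication filter."""
    heads = tails = 1  # Beta(1, 1) prior: mean 0.5, matching the truth
    for _ in range(N_STUDIES):
        found_effective = random.random() < TRUE_P
        if published_only and found_effective:
            continue  # journals spike the "yelling works" studies
        if found_effective:
            heads += 1
        else:
            tails += 1
    return heads / (heads + tails)

print("all studies published:  ", round(naive_posterior_mean(False), 2))
print("only null results shown:", round(naive_posterior_mean(True), 2))
```

With everything published, the naive reader's belief hovers near the true 0.5; with the filter, every observation is a null result and the belief collapses toward 0, arbitrarily far from the truth.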
So it seems that it is unambiguously incorrect to ignore known (or strongly suspected) selective filtering in an information channel? If I know about the selective reporting of yelling studies, and I see a study finding that yelling is ineffective, it seems like I could respond to this in one of two ways:
A) Ignore it. It contains zero bits of information, because I know a priori that all studies I read on this subject will contain this conclusion.
B) Perform some complicated analysis of how many studies I suspect are conducted on this topic, how many studies I see published, and what the actual effectiveness of yelling must be to result in the observed number of stories being published.
All of this ignores the fact that when we require a study to find that yelling is ineffective, we have not fully constrained the contents of that study – it might find that the effect size was almost large enough to be statistically significant, or it might find a much smaller (or even negative) effect size; there are all the details of how the study was conducted (e.g. sample size), etc. So I could get more sophisticated than B. But I think the point stands that knowledge of the properties of the information channel is important.
My presumption is that you could fit all this into formal Bayesian reasoning if you expand your analysis to include priors around what experiments are performed and which of those will be published. But also that your head would explode.
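Actually, for this toy setup the head-exploding version is tractable: put a prior on the effect rate and model the publication filter explicitly. The sketch below uses the numbers from the thought experiment above (10 studies per year, only null results published); the flat prior and grid are my own illustrative choices.

```python
from math import comb

N_STUDIES = 10   # studies conducted per year (known to the reader)
K_PUBLISHED = 5  # null-result papers you actually saw in print

# theta = probability a single study concludes "yelling works"
grid = [i / 100 for i in range(1, 100)]
prior = [1 / len(grid)] * len(grid)  # flat prior over theta

def likelihood(theta):
    """P(exactly K null papers appear | theta), when only nulls are
    published: Binomial(N, 1 - theta) evaluated at K."""
    return (comb(N_STUDIES, K_PUBLISHED)
            * (1 - theta) ** K_PUBLISHED
            * theta ** (N_STUDIES - K_PUBLISHED))

post = [p * likelihood(t) for p, t in zip(prior, grid)]
z = sum(post)
post = [p / z for p in post]

mean_theta = sum(t * p for t, p in zip(grid, post))
print(f"posterior mean of theta: {mean_theta:.2f}")  # near 0.5, the truth
```

The punchline: even though every paper you read said "ineffective", modeling the filter means the *count* of published papers carries the information – seeing 5 null papers out of 10 known studies puts the posterior right back at the true 0.5. The contents of the papers are uninformative (option A), but the channel itself is not.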
#4 strikes me as an exercise in epicycle-fitting, but I mean that in a descriptive way rather than a disparaging way. One clue we’re still dealing with geocentric psychology here is that the taxonomy is based on symptoms rather than causes. Imagine doing this for physical diseases instead—if you get really good at measuring coughing, sneezing, aching, wheezing, etc. you may ultimately get pretty good at distinguishing between, say, colds and flus. But you’d have a pretty hard time distinguishing between flu and covid, and you’d have no chance of ever developing vaccines for them, because you have no concept of the systems that produce the symptoms.
I think approaches like this, which administer questionnaires and then try to squeeze statistics out of them, are going to top out at that level. They’ll probably make us better at treating mental disorders, but not much better.