I can certainly say as a today-minted dynomight fan (ACX sent me here on the causal-structure prediction markets post) that this blog is well above replacement. Words are very very cheap to produce already, and I suspect the special sauce in making a good blog is instead persistent good taste, closer to being a museum curator than being a carpenter. The same "taste" organ seems to be necessary to develop an AI dynomight too!
On the philosophy question, this is because mathematicians can write a two-sentence definition in a logical language where either you get it immediately or know that you don't understand what they mean. For philosophers, a two-sentence definition very frequently causes people to believe they have understood without actually understanding.
Thanks! Re philosophy, can you sketch this out a bit better? I agree that it's much easier to misunderstand a two-sentence definition in philosophy, but why does it follow that you should stick to original sources?
The pithy answer is that the entire book is the definition. There's no shorter way to define the idea than to read out the entire book to someone.
This is because there are two kinds of debates in philosophy: debates about the actual important substance that we're trying to discover facts about, and debates about what a word means.
The second is such a humongous proportion of philosophy debates that early Wittgenstein thought this was the *only* kind of debate philosophers had, and once you clear up all the definitions and the way people use words, philosophy is done, and we can move on to empirical facts.
I don't think this is the case; the first type of argument is the kind I want to get to, and in order to make that possible, we need some way of settling arguments of the second type.
So we treat the original source as the canonical handle. Philosophy students still often only read excerpts from the original sources or lecture summaries, but it's possible for these to have incorrect definitions in a way that's impossible for the original.
So say we want to create a New Encyclopedia of Philosophy and declare it the canonical set of definitions, like Bourbaki did for mathematics.
I'll try to argue here why the new encyclopedia could be essentially no shorter or clearer than the original because in order to fully define a concept X, we need every sentence of the book.
I'll avoid messy issues of interpretation and the socially-constructed meaning of words by assuming that the book could be unambiguously converted into a logical language where each sentence is a proposition about X.
X is in the set of possible concepts C, and the set is extremely "dense" in philosophy unlike in mathematics.
That is, the book is a set of propositions {P_1(X), P_2(X), ..., P_n(X)}, and we know P_1(Y) and P_2(Y) and P_3(Y) and ... and P_k(Y) for k<n, but we still can't be sure that Y=X.
In math, if we know one P_1(Y) (the definition), we can usually be pretty sure Y=X.
And since people disagree on what set C we're even working with, it's difficult to say "my new definition 'Q(X)' is equivalent to 'P_1(X) and ... and P_n(X)'" without Q(X) being the trivial conjunction.
Of course, some books are easier to summarize, but since it's so hard to close the argument on whether your definition is complete (at least hard compared to in math), it's easier to just refer to the infallible-by-construction original source.
PS: Often, the book will still be valid for a number of different concepts, forming a subset C' of C. The question "Does each x in C' form its own valid interpretation of the book, or do we need to bring in further propositions from the author's other work and social context?" forms the basis of the theory of interpretation.
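To make the k < n point concrete, here is a minimal toy sketch in Python (my own framing, with made-up features, not a claim about any real philosophical text): concepts are bit-vectors, each proposition of the book constrains one feature, and only the full conjunction singles out X.

```python
from itertools import product

N_FEATURES = 10
concepts = list(product([0, 1], repeat=N_FEATURES))  # the concept space C (very "dense")

X = concepts[693]  # the concept the book is actually about (arbitrary pick)

# Each proposition P_i of the book constrains just one feature of X.
book = [lambda y, i=i: y[i] == X[i] for i in range(N_FEATURES)]

# Math-like regime: a single sharp definition pins down X exactly.
definition = lambda y: y == X
print(sum(definition(y) for y in concepts))  # -> 1 candidate

# Philosophy-like regime: knowing P_1(Y), ..., P_k(Y) for k < n still leaves
# more than one candidate, so we can't be sure that Y = X.
k = N_FEATURES - 1
candidates = [y for y in concepts if all(P(y) for P in book[:k])]
print(len(candidates))  # -> 2 candidates; only the full book singles out X
```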
I'm not sure I find your argument at the end convincing. It appears to presuppose that we are interested in the *exact same* concept as in the original writing. But that's not what happens in other fields—the core concepts also evolve.
Hmm, I'm used to mathematics, where when I use the term "group" I want to be sure that I'm talking about the same objects that Cayley was talking about, even if we are looking at them from a different perspective.
Like groups used to be thought of as sets of permutations closed under composition, but Cayley showed that we could instead understand them as an algebra with three axioms.
This is why Cayley's Theorem was so important, in proving that we could let go of the old baggage and work with a new presentation since we're sure they refer to the same objects.
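As a concrete toy version of the two presentations (a sketch only, using the cyclic group Z_3 as an arbitrary example): the same group can be given abstractly via the axioms, or as permutations closed under composition, and Cayley's construction translates one into the other.

```python
# The cyclic group Z_3, presented abstractly: a set with a binary operation.
elements = [0, 1, 2]
op = lambda a, b: (a + b) % 3

# The axioms: associativity, an identity element, and inverses.
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a in elements for b in elements for c in elements)
identity = next(e for e in elements if all(op(e, a) == a for a in elements))
assert all(any(op(a, b) == identity for b in elements) for a in elements)

# Cayley's theorem: letting each element act by left multiplication turns the
# abstract group into a set of permutations closed under composition,
# recovering the older "permutations" presentation of the same objects.
perm = {a: tuple(op(a, x) for x in elements) for a in elements}
compose = lambda p, q: tuple(p[i] for i in q)  # apply q, then p
assert all(compose(perm[a], perm[b]) == perm[op(a, b)]
           for a in elements for b in elements)
print(perm)  # {0: (0, 1, 2), 1: (1, 2, 0), 2: (2, 0, 1)}
```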
In philosophy, if you want to move the discourse forward, usually you either have to argue that your reuse of old terminology is actually identical, or if you're interested in changing the conceptual framework, you invent a whole basket of neologisms to get your work done.
Do you have an example in mind of this kind of deliberate concept drift?
I do like what you write, how you write, and what you select to explain to us. And I will keep liking it even in the AI era.
I find the chess comparison fascinating, because we didn't all stop playing chess in 1997. Thanks to streaming and the pandemic, it's more popular than ever! And some people play against computers, but far more often, they use technology to play against other people, even though it's far less efficient/convenient to do so.
Yeah, so far this analogy is looking pretty good! https://dynomight.net/llms/#2-chess-humans-and-chess-ais
Personally, I'm looking forward to AI readers :V
What did short stories do to qualify for the obsolete list? It seems to be a reasonably popular prose form with a long history and unique structural advantages in exploring focused ideas that might not sustain a longer treatment. I agree they aren't as popular as novels or short form non-fiction but they do seem to have a niche.
In the early 1900s the short story was a primary form of mass entertainment. Whereas now they're sort of a niche art form. They still exist but so do horses and paper letters. I think it's fair to say that *as a form of mass entertainment*, the short story is 99% dead.
(Although I am confused why novels survive as a form of mass entertainment and short stories don't.)
I suspect that novels persist because they have the space to do more world building, character development and narrative complexity, all in one package, but with the possibility of adding more packages. Short stories can pull that off too (see e.g. Robert E. Howard for world building), but since the packages are very small it is difficult to fit all the content and a complete self-contained narrative into the package and make it coherent. If you try to string it along like old-form comics or serials, people start to ask "Can I just wait and buy the collection all at once?" because reading a week's worth of content and then waiting a month for the next is annoying; you lose track of the story, especially if the narrative isn't all contained in the smaller package and you have to remember all the plot beats from the previous months when you read the new one.
If you are a serious reader, chances are you would like to consume a whole lot of writing at your pace, which is presumably much faster than a person can write, and thus prefer as much story as can fit in your hands and be purchased reasonably. Just like how if you drink a lot of milk (or have kids) you don't buy milk in pints every few hours but get a gallon or two at a time. A 50 gallon drum in a standalone cooler is probably more than you want (ahem War and Peace ahem) but a few gallons is more manageable and strikes a good balance.
I would also note that the worst thing about a really good novel is that it ends. Adding more endings via the short story can be annoying when you have a good one and constantly think "I would have liked to learn more about that."
This all sounds plausible. But if this was right, then shouldn't the novel have always dominated the short story as a form of mass entertainment? If the novel was basically just better, then why were they both mass entertainment 100 years ago? Did it just take things this long to sort of reach equilibrium?
I think, and I will admit that this is based very heavily on the market for sci-fi/fantasy/horror so might be way off generally, that the market for writing in the early 20th century worked a lot more like broadcast TV/movies did in the mid to late 20th century. The demand was somewhat niche for many writers, and consisted both of serious readers and people who wanted something to read while being on a train for 2-3 hours. That latter category and niche markets lent themselves to short-form work that could be produced reasonably quickly and distributed in periodicals. Authors could make a steady living from many small works if they became fairly popular, either with standalone stories or chunks of novels in installments. Much lower upfront investment to get the very unreliable gains. Sometimes authors made the move from short stories to novels, or a mix of both, but short stories were a good way to get a start, since publishing houses seemed unwilling to take the risk of novels on unknown authors.
Likewise with TV vs. movies: TV shows were aimed at the audience who wanted something to watch on the regular, and were willing to go with low-production-value stuff if it was readily available in bite-sized chunks, while movies were bigger investments with unreliable payoffs if they didn't appeal to a wide audience. If movies were usually 3-4 hours long it would be a better analogy, mind you.
"Is this related to the fact that philosophers go to conferences and literally read their papers out loud?"
At least among analytic philosophers these days, presenting by reading a paper aloud is pretty rare and generally frowned upon.
English and historical philosophy folks still did this in 2016. I was kind of shocked when I was invited to present work on Adam Smith at a conference and everyone else was just reading their texts... "So... in my tribe we make some slides and talk through the high level ideas, so that's what I did. Sorry."
What do you consider to be an "explainer post"? It's not entirely clear to me how large a category that is meant to be / where the boundaries fall, and that in turn seems importantly upstream of a lot of the issues you discuss.
Yes, you'd think I would have explained that! I was thinking of it as a fairly narrow category, something like "an explanation of a well-understood topic", narrow enough that it wouldn't apply to most posts on this blog. The closest would be things like:
- https://dynomight.net/ethylene/
- https://dynomight.net/death-penalty-france/
I wanted a narrow definition because it seems to me that "explanation of a well-understood topic" is the category of short-form writing that's most likely to go extinct soon, or at least the category where that argument is the easiest to make. Some of these arguments may also apply to varying degrees to broader definitions, or other types of writing, or other human-created things, but this is a case where I thought it was most productive to sort of zoom in.
. "But the upside is all of humanity having access to more accurate and accessible explanations of basically everything". Who is this "all of humanity" that you refer to? The humanity around me mostly does not want accurate explanations about much. And so far, in my experiences with AI, if I ask it about hazardous material shipping regulations to New Zealand, I still have to snag a couple of neurons from the breakroom to determine if what AI has told me is correct. I don't know how there could ever be an AI that can be trusted without checking its response on anything. AI is a fine tool, but context is a human thing.
I don't suppose that unlimited accurate explanations would turn Earth into a utopia. But they would be cool! (I agree current AI is wildly inaccurate, but in that paragraph we're assuming that gets solved.)
Somebody has to write the posts or papers that the AI is trained on so that it "knows" things and can answer questions.
Philosophers want you to read the original texts because they want you to derive your own interpretations of texts. If you read someone else's modernized rephrasing of a text, it will be colored by their own interpretation, or the consensus interpretation. Even philosophical texts that are theorem heavy are not *only* sets of theorems with a single objective interpretation, because all texts fall within a wider conversation or school.
And that leads to another reason why I might want a human explainer post: because it's a particular human's interpretation or point of view, especially if it's a topic with strong political, artistic or humanistic elements where there is no single objectively correct explainer possible. If I'm reading an explainer on "What is love?" or "Why is the Mona Lisa considered beautiful?" or "How do the Republicans differ from the Democrats?" I don't necessarily want the sort of consensus "on the one hand, on the other hand, on the third hand" explanations that AI produces. Nor would I want an opinion from an opinionated AI. I want *this guy's* explainer.
Maybe it's because I have a parasocial relationship with him. Maybe it's because I agree with him or I violently disagree with him. But mostly because the whole point of reading how someone else explains "What is love?" is to enjoy a brief connection with another human being. To share the human condition for a moment. That's not replicable by AI. That's also why AI art tends to be so disappointing when you find out it's AI art, even if it's pixel perfect flawless. The connection is not there.
Re philosophy, I'm sure what you say is correct, but what I don't understand is what's different about philosophy vs. physics that makes this a productive attitude in philosophy but not in physics? Approximately 0% of students learning Newtonian physics read the Principia. That would be considered insane, because however smart Newton was, people today can write things that can convey the ideas much more efficiently.
Put another way, I guess it's not obvious that deriving interpretations of old texts would necessarily be a valuable thing to do? In many fields it is not considered important. In some (say history) it seems obviously valuable. What places philosophy in the latter category?
First, science actually has a mechanism to progress through falsification. Old theories can be proven wrong and superseded by new theories. So sometimes the old texts are not worth reading again because we've since shown they're wrong. The same isn't true of philosophical or humanistic or artistic texts. There is no superseding work for Wittgenstein; there are only responses to Wittgenstein.
Second, I think that old scientific texts actually are worth reading in the original form. The Principia contains a lot more than the laws of motion and gravitation that you can get from a high school textbook. It contains the proofs, and how Kepler's laws can be derived from them, and the masses of the bodies of the solar system, the motion of the sun, and a lot more cool stuff. All that is elided from our modern texts because it isn't considered useful or valuable. But what's useful or valuable is subject to... interpretation. A lot of cool stuff is actually lost when people stop reading the Principia or On the Origin of Species (Darwin's original concept of evolution has more nuance than textbook versions) or the original Einstein Annus Mirabilis papers. So I actually don't agree that newer versions capture everything in the old versions, just the parts filtered through writers' interpretations, or the consensus on what's most useful.
I think there are two aspects to explainers: 1) they’re generally useful for the person writing them as a thinking tool, and it feels better to have other people looking at them; 2) in most situations we don’t just want an explanation, we want a point of view. I think that’s a little more than curation - like I’ve read enough of your blog to have a sense of your general alignment with relation to mine, and so an explainer from your point of view is interesting to me as it informs my own. I don’t think that’s parasocial - I think it’s understanding that there are many lenses to see the same information through, and I can more easily model your perspective than an LLM’s (in the same way I can trust the perspective of a person more than a corporation, if that makes sense).
Good point, well said! There's an interesting connection here to the three "voices" you can take when writing. (https://dynomight.net/writing-advice/#:~:text=three%20styles)
> 1. “This is the Truth, only fools disagree.”
> 2. “Here’s what I think and why I think it.”
> 3. “Here’s a bunch of evidence, about which I supposedly have no opinion.”
I personally strongly prefer voice #2. So it's interesting that—I think—that's also the voice that best supports the kind of "multiple lenses" view you're talking about.
I find the thought of a wonderful internet newsletter like yours dying off as people become ever more isolated and solipsistic incredibly depressing--so, reason eight, maybe? I don't know. I have a very hard time seeing the bright side of these changes, if they are to occur, and I'm shocked at how dispassionate you remain when discussing possible outcomes.
Don't worry, I'm not going anywhere!
There's another reason, which is that if people simply outsource thought to AIs, then there's no real point reading anything or learning anything. It would be a waste of time. You could acquire knowledge over years of reading and thinking, but that knowledge would be obsolete compared to the latest model, trained on the latest data. So why bother?
People read explainer posts (and non-fiction books, although not much any more), because there is some value in having knowledge. That value is partly social (status, attention), but also personal. It can be useful to understand the world. But as general knowledge is devalued, there is less reason to invest effort in acquiring it.
The tragic consequence might be that people become even more narrowly focused on local, situated knowledge that retains value, and don't bother acquiring general knowledge. Wikipedia killed paper encyclopedias sitting on shelves. AI might kill general knowledge in the brain.
English dominance is killing the need to learn other second languages (not completely, but to a great degree).
Wikipedia killed paper encyclopedias sitting on shelves.
AI might kill general knowledge in the brain.
Will genetic engineering kill or disrupt biological purpose as we know it?
I don't see why it would.
In the other cases, one thing is a substitute for another.
It is convenient to have a global language, so English has become the de facto world language, and thus the default second language. It didn't replace some other world language, but it replaced other second languages.
Wikipedia is a superior replacement for paper encyclopedias.
AI knowledge can be a superior substitute for knowledge in the brain.
Biological purpose is not an instrument, and it has no replacement. Genetic engineering is a technology. In some cases it might prevent premature death, and in the long run it might replace natural selection with artificial selection. But that doesn't change or disrupt biological purpose.
I didn't phrase my question correctly. Biological purpose would technically not be disrupted as long as organisms continue to reproduce.
I meant to ask about artificial selection replacing natural selection. What would be the purpose of reproducing if essentially any kind of human or organism could be created to have whatever genes and traits that the creators want it to have, independent of whatever genes and traits the parents have?
Intuitively, I feel like genetic engineering would fundamentally alter the meaning of life. For example, there are some aspects of the human form that impose constraints on how intelligent humans can be: there is probably a genetic adaptive coherence between the size of the human pelvis and the size of the human brain, owing to trade-offs between ease of childbirth, running speed, intelligence, etc.
I hypothesize that a centaur-like form might be optimal to create maximally intelligent humanoids. Such a physical form could facilitate having a bigger pelvis size without imposing limitations on mobility and such, or so I'm guessing. I wonder if said life forms could be born with bigger (and thus more intelligent) brains than what bipedal humans have to potentially offer.
That's just an example of something that I've been wondering about with respect to genetic engineering. It might be cool to genetically engineer ultra-intelligent centaur-like humanoids who are naturally more intelligent than what our bipedal species could ever physically be. But I think it would feel odd and bizarre to have them replace bipedal humans and our way of life.
That's not to say that we want or should try to have them replace us. But if we had the choice and the agency to feasibly create such a new species of beings, I think it would open debates about whether we should create them or not. More generally, I don't know what paths humanity would take if artificial selection completely replaced natural selection. I don't think I'd want to live in such a world.
Creating organisms by design makes them into technology, and it does raise interesting issues about the distinction between biology and technology.
Most varieties of grape and apple are "reproduced" by cutting and grafting. They don't even reproduce by cloning, because a cutting is grafted onto the roots of another plant. So, even without genetic engineering, we can create hybrid forms that are somewhere between biology and technology.
I can imagine producing designer pets in a factory, without the normal chain of reproduction and selection. That would make them more like technological products than organisms. We already neuter most pets, making them "lifeless" parodies of real organisms, essentially.
We can imagine a race of beings that reproduces by designing and producing their offspring. However, that would still be reproduction, as long as the parents transmit the complex trait of designing and producing new organisms to their offspring. There would still be natural selection.
On the other hand, if the beings were created by a system, such as a factory, and they didn't inherit information from parent beings, then they would be technology. They would be part of the system, not reproducing machines themselves. The anime Ergo Proxy touches on this idea a bit.
Anyway, lots of questions/issues to think about.
I've heard about grafting before. That's actually how *most* commercially produced fruits are produced (not just grapes and apples), and why seedless fruits are so common these days. It's really terrible what humans are doing to the world's food supply. https://www.youtube.com/watch?v=H8c1ObYSlQI
Indeed, there are a lot of things to think about here. I am only an ant compared to the world's greater geniuses.
I think you massively underestimate parasocial relationships.
Gen Z and onwards often get their news and facts from TikTok. This is a terrible source for every reason except that it's incredibly tuned for building parasocial rapport.
And quite regularly someone is discovered due to being hot or doing something funny or being in an interestingly incongruous situation, and then goes on to do a lot of explaining.
What is dead, though, and I mourn its passing, is the Tech Blog. People used to do these for CV brownie points / networking for opportunities, they were incredibly useful to stumble across, and they're dead because it's easy to make something that looks like one via AI and nobody can now wade through all the AI slop to find the actually useful people.
Re 2: yes, I care a _lot_ about explainers being written in an engaging style. I care about the quality of the author’s writing at least as much as the choice of topic. I routinely DNF explainers on fascinating-sounding topics because the author isn’t quite good enough at making them fun. I would read an explainer by you on all kinds of topics that I wouldn’t bother to click on from a less talented writer. Maybe this is because I am dumb and intellectually incurious and therefore unable to focus on learning things if they’re not entertaining enough. Maybe it is because there isn’t a bright line between style and content—a good writer understands what is intellectually interesting and valuable in the same way they understand what is entertaining, because those two features are secretly kind of the same thing. Whatever. It is what it is. Not saying AI won’t figure this out, but merely being able to present true facts and explanations is definitely not enough.
I'd add that a lot of explainer posts are something along the lines of "This thing is different from what everyone thinks", which might not strictly be an explainer (and rather an opinion/contribution). But still, very often they are framed as explainers, and it's a major reason why people read them. Often an explainer will have an intro that counter-positions it against the status quo of understanding.
Example: your heritability explainer starts by quoting Wikipedia as the status-quo of the (mis)understanding around heritability, giving a justification for what you are about to write.
One of the reasons current AI couldn't write your post is that it will give the distilled status-quo understanding, which will be close to the Wikipedia mumbo-jumbo.
So argument k+1: Alpha over status-quo