20 Comments
Comment deleted
author

Thank you, if you want to be me, I'm 💯% down for trading.

I've always wanted to be able to do this for social media feeds. Like, hey man, you want to swap profiles for an hour?

Like imagine if you could browse Spotify impersonating the profile of your friend who has really awesome taste in music - you'd get a glimpse into what the AI is recommending to them, which is likely outside your filter bubble.

Comment deleted

Cool, sounds good! I guess alternatively you could make your Discover Weekly or Release Radar public somehow (or maybe Liked Songs?). Not entirely sure how to do that, but that might be minimal effort/risk to achieve something similar.

Are you familiar with the 'smiling curve' of value add? Ben Thompson writes about this a lot.

https://stratechery.com/concept/aggregation-theory/smiling-curve/

I suspect that what these LLMs and image generation models will do is end up reducing a lot of 'intellectual grunt work' involved in a content pipeline that currently looks roughly like:

ideation -> implementation -> distribution

One big change recently was distribution going from 'expensive broadcast media' (newspaper, radio, TV) to 'cheap multicast media' (email, social media). That caused all kinds of problems.

What the LLMs might do is simply cheapen the _implementation_ portion of the pipeline, which could have the effect of moving all the value creation to the 'ideation' portion, so that it looks more like this:

ideation -> (neural network) -> (social media)

For example, I have a novel, kind of written. By this, I mean I have a one-sentence form, a one-paragraph form, a one-page form, etc. If I could feed this into an LLM trained on _my writing_ personally, that would be awesome.

This suggests something like the horses and railroads section for written content.

I seriously doubt the 'LLMs get better and everyone gets fired' scenario plays out, because most of what I pay for, when I pay for content, is the relationship itself. The thing I think I'm really paying for is "whatever that thing is that drives ideation", which I think is, itself, a function of personal values and lived experience.

It's also far easier to imagine lots of indie games with graphics that look pretty decent, since a HUGE portion of game development costs involves the absurd quantities of content developers need to create. I think you might even see an interplay of LLMs and art generation software.

An artist puts into the LLM: "a description of a science lab in a high school in the 1980s, in the dark, with one computer screen on." The LLM then turns this description into a bunch of text describing all the objects and their positions. Then this text is parsed by a 3D image generation tool, which uses the text to generate a scene graph, and then renders all the individual objects in the scene graph in parallel.
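
Here's a rough sketch of what that pipeline could look like in code. Every function is a hypothetical stand-in; no single tool works this way today:

```python
# Hypothetical text-to-scene pipeline, following the description above.
# All three stages are toy stand-ins for an LLM, a parser, and a renderer.
from concurrent.futures import ThreadPoolExecutor

def describe_scene(prompt: str) -> str:
    # Stand-in for an LLM call that expands the prompt into object descriptions.
    return "desk at (0, 0); CRT monitor at (0, 1), glowing; beakers at (2, 0)"

def parse_scene_graph(description: str) -> list[dict]:
    # Stand-in for a parser that turns the LLM's text into a scene graph.
    return [{"object": part.strip()} for part in description.split(";")]

def render_object(node: dict) -> str:
    # Stand-in for a renderer; objects are independent, so they parallelize.
    return f"rendered <{node['object']}>"

def render_scene(prompt: str) -> list[str]:
    scene_graph = parse_scene_graph(describe_scene(prompt))
    with ThreadPoolExecutor() as pool:  # render all objects in parallel
        return list(pool.map(render_object, scene_graph))

print(render_scene("a high school science lab in the 1980s, dark, one screen on"))
```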

The end result is we could be in a world where someone could write up a movie script and turn it into a decently produced feature-length film, or write up a description of a short game: "players cooperate to herd cats through an Escher-like landscape while pursued by the sleepwalking ghost of Werner Herzog" and a future version of Unity spits out an actual playable binary.

Combine 'really cheap prototypes' with modern entrepreneurial models, and I think we'll likely see an explosion of cool indie shit coming around the corner.

author

I wasn't familiar with the smiling curve, thanks for the reference! I'm really hesitant to make predictions, but I do feel like people tend to underrate (1) that LLMs might mostly act as a productivity enhancer and (2) that writing is fundamentally a social activity and people might decide that they just don't want to read LLM-created content.

I can imagine these reversing. Like, if LLMs can do a better job than humans of summarizing all information and giving accurate predictions, then I guess leaders will be basically forced to use them. I can also imagine the social thing reversing, but it would require us to feel like we are in a relationship with the LLM. I guess that's possible, and maybe LLMs even have an advantage in that they could actually talk to everyone as much as people want. But both of these appear to require LLMs far beyond what we have now, and it's hard to predict when they might be reached. (Sometime between 3 years and a century?)

> (Sometime between 3 years and a century?)

Hah!

To me, these LLMs are more or less lookup tables. I think a sufficiently large lookup table is indistinguishable from true intelligence. The question is just how big that lookup table has to be to fool most people.
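
For what it's worth, here's a toy version of that intuition: a bigram lookup table built from a tiny corpus, then used to generate text. (This illustrates the lookup-table framing, not how transformers actually work.)

```python
# Build a lookup table mapping each word to the words observed to follow it,
# then generate text by repeatedly looking up the last word emitted.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat saw the dog".split()

table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        followers = table.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat and the cat"
```

Scale that table up by many orders of magnitude and the output starts to look fluent; the open question is whether fluency at that scale shades into intelligence.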

Completely agree on the productivity enhancer and relationship importance points, but to borrow apxhard's "ideation -> implementation -> distribution" model, I wouldn't feel cheated on the relationship aspect if an LLM did the implementation, as long as the ideas still came from my favorite authors.

If we break writing down into relationship-based (e.g. having a following on Substack) and not relationship-based (e.g. clickbait), the latter will resemble the human calculator model, where an editor at BuzzFeed can just tell an LLM "give me the top 10 smergs that will blerg your merg" and it gets made, no clunky payrolled human writer needed. But the former *might* be a productivity boost akin to either tractors or mass production if:

1) LLMs get good enough at copying an author's existing style

2) most readers are like me and would be satisfied with the level of authenticity as long as the author ideated it and trained the model to write in their style

If both of those assumptions are true, budding writers will have extra pressure when creating their portfolios, since they'll be locking in their style to be created by LLMs forever, unless they consciously decide to write more manually for re-training purposes down the line.
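
A minimal sketch of what assumption 1 might involve in practice, using the Hugging Face transformers library. The base model (gpt2) and the corpus file (my_writing.txt) are placeholder choices, and convincingly copying a style would likely need far more than this:

```python
# Fine-tune a small causal language model on an author's own writing.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load the author's corpus and tokenize it.
dataset = load_dataset("text", data_files={"train": "my_writing.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="style-model", num_train_epochs=3),
    train_dataset=tokenized,
    # mlm=False gives plain next-token (causal) language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```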

Interesting.

Maybe this stuff ends up pushing writing into something like a barbell distribution. You have content where the relationship doesn't matter at all ("where is the trunk release on a 2015 Honda CR-V?"), and then content where basically all there is, is the relationship.

Totally! And that distinction has elements of the painting vs. photography story: if you want to accurately represent something visually, use the easy new technology, but the original retains some aesthetic/social appeal and is still done, just less often and by fewer people.

I am wondering about the unforeseen consequences of reducing the friction that currently acts as a kind of flood valve on human-produced writing and content. Are we to expect a deluge of bad ideas written in generically acceptable forms? If it is relatively pain-free to have the LLM whip up a screenplay or novel or blog post, will we all be buried in an avalanche of shitty content? Is this already happening? How will we wade through it? Will we have to rely on creative curators - in which case, are we destined to revert to having aesthetic gatekeepers, just like the ones we thought we were circumventing with the advent of the internet and self-publishing? Is that irony?

That is already happening. Even with human production, computers now allow pretty much anyone in the world to produce text, audio, or video and distribute it. So curation occurs both algorithmically and socially. The "Algorithm" is already having a big day in the culture. LLMs will only exacerbate this trend.

Vernor Vinge covers this obliquely in several of his novels with regard to education. High school stops being about learning any particular thing and becomes about learning to navigate an information-rich world. How do you synthesize, analyze, and identify truth? School projects become exercises in the applied generation of good content. If you can synthesize a custom pet on demand with an LLM that understands gene-phenotype mapping, how do you crowdsource good ideas, know what is trending socially, 3D-print a pet womb, or identify the instructions for such a thing without getting taken in by quackery?

Value-aligned personal models, trained on your preferences and knowing your stated values (with a little bit of common good mixed in), will read all the generated (and manually written) stuff on the internet and produce a ranked list of the best works, optionally summarized and translated into your preferred style.
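
A toy sketch of what such a curator might boil down to. Keyword overlap stands in for a real learned preference model, and the values and articles here are made up:

```python
# Rank incoming articles by how well they match a user's stated values.
stated_values = {"open source", "privacy", "long-form analysis"}

def score(article: str) -> int:
    # Count how many stated interests the article touches (toy preference model).
    text = article.lower()
    return sum(1 for value in stated_values if value in text)

articles = [
    "A long-form analysis of privacy regulation in the EU",
    "Top 10 celebrity diets you won't believe",
    "Why open source maintainers burn out",
]
for article in sorted(articles, key=score, reverse=True):
    print(score(article), article)
```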

Great post. Exposing the many paths forward is a great thought exercise and has opened my mind to a structured way to explore these different progress metaphors. Thank you.

Great post, thank you! This question is a bit normative, but which of these outcomes do you think would be the best for humanity? Which would you prefer to see? Feel free to interpret "best" in whatever way aligns with your values.

author

Interesting question, I honestly never thought about it. I suppose that artisanal goods and mass production is the most clearly positive of the scenarios: basically, a lot of writing that people don't like doing gets automated, but everything else continues as now. So there don't seem to be any clear losers. There are also a bunch that sound vaguely negative when we write them down, but the analogy is clearly positive in retrospect and basically no one would suggest getting rid of them (e.g. tractors, photography, calculators).

Yeah, I think your last point here is a good one. For almost all of these analogies, the technologies were either adopted and improved humanity, or weren't great and weren't adopted (scissor doors, Segways). I know these examples are not exhaustive, but that alone gives me a bit of hope.

In the early days of desktop publishing (aka 30 years ago) there was the great JPEG debate. Scanned images for four-color catalogs and magazines (there was very little digital photography yet) were stored as TIFF files with lossless compression. They were huge, particularly for the storage and network standards of the time. But there was a fight on, with some people advocating JPEG compression, which sacrifices some accuracy for the ability to make the image smaller. At the time, the algorithms used for compression were crude compared to today's, and they could leave ugly, visible artifacts in the images, particularly when they were printed at large sizes on high-quality presses. But the people pushing the tech said it would get better. Or that the general public wouldn't care, because they wouldn't notice the difference. And by the time digital cameras (which all use lossy compression) were commonly available a decade later, they were right. Now everyone uses some form of lossy compression for images, from the ones you see on screen to glossy magazine covers. The original images taken by the camera are lossy, and each resizing or resaving just introduces new artifacts, and nobody ever notices.
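
You can see that generation loss for yourself by re-saving a JPEG repeatedly. A sketch with the Pillow library, where original.tiff is any lossless source image you have on hand:

```python
# Re-save a JPEG 20 times and measure how far it drifts from the original.
from PIL import Image, ImageChops

original = Image.open("original.tiff").convert("RGB")
current = original
for generation in range(20):
    current.save("resaved.jpg", quality=75)   # each save discards detail
    current = Image.open("resaved.jpg").convert("RGB")

diff = ImageChops.difference(original, current)
# getextrema() returns (min, max) per channel; report the worst error.
print("max per-channel error:", max(mx for _, mx in diff.getextrema()))
```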

The chess analogy does not quite go through yet, because chess is a competitive game with a fixed and fully transparent metric for success. There is never any doubt who won, and you can aggregate performance over multiple games to get a robust and undeniable sense of who or what is doing best. There are many domains that are like this, but writing isn't one of them. You could operationalize writing by making it into a competition, but that would always be vulnerable to people saying "that's not the kind of writing I want."

In that vein, there are some experiments that I'd like to do, such as formal debates or moot court sessions. Once LLMs start winning those competitions against humans, we could start to say that they really are better. And, as you say, "who cares?" becomes an appropriate question. What if lawyers who rely heavily on LLMs begin to outperform those who don't? What if that starts to happen in cases argued before real courts? I think we would HAVE to care. I'm not sure how that would play out.

Arguably, being able to make appropriate use of technical means is a reasonable requirement for lawyers and other professionals. Indeed, some professions have specific rules about being up to speed technically. After all, professionals have a duty to their clients, and if technical means can help, they should use them. A civil engineer who insisted on using only pencil and paper for calculations that a computer can do more easily and reliably would lose their license in a heartbeat.

One book I read years ago was by a woman economic historian (name forgotten) whose topic was the history of technological advancements and how their usage and application evolved over many years. Her main point was that initially, humans would just hook the new technology up to the old process or plant and equipment and carry on. For example, we would take the old flour mill that water had driven and plug it into an electric motor. Real breakthroughs, often many years later, only came when we started to understand the distinctive essence of the new technology and designed and built our processes from the ground up to take into account its full potential. So what LLMs transform may take a while to emerge…

Vaclav Smil (How the World Really Works) said that forecasting the effects of new technology is a mug's game, but if you have to do it, start by throwing out the Armageddon scenarios and the techno-utopian ones (like the Singularity), and then basically do what you did in this post: look at what happened before when radically new technologies came along.

Even if forecasting is a mug's game, I agree with apxhard that LLMs will be used to do all the essential writing that no one wants to do - user manuals come to mind. The speed of LLMs might also make them valuable for generating up-to-the-minute news content; there's money in getting the news out first, after all. Once you remove all the tedious and stressful writing, why would we want to stop doing the enjoyable kind?

Writing, when it comes down to it, is about communicating the thoughts in one person's brain via text into someone else's brain. We don't have a reason to care what an LLM is thinking, so if we know something is written by an LLM, will we care enough to make the effort to read it, once the novelty wears off?

You could argue that it's the quality of the idea that makes something interesting, but that's only true in some circumstances. Scientific ideas are judged on how well they represent reality and how much predictive power they have. Writing with emotional content, though, will probably always seem fake coming from an LLM. I think humans are likely to be almost as good at detecting fake emotional writing as fake facial expressions, but even if we fall for it at first, as soon as we find out there was no human behind it, I think we'll want to ditch it and go looking for the real thing.

Just a side note re #8: check out the pre-fab housing market again before making your next real estate purchase. The state of the economy aside, housing seems almost as ripe for technological disruption as writing. The number of new companies and products coming out is almost as interesting to watch as the LLM space.
