15 Comments

But why would we have wanted to? Aren't there other dimensions of technological capability to explore that might contribute more to the ongoing survival of our planet and us? Having computers generate verbiage based on the words we already use to communicate does not mean they have any real capacity to think and solve real problems... Pshaw.


"...the most dangerous thing isn’t the details of the model, the most dangerous thing is the demo."

Great insight. Which is why I think the obsession with "safety" for AI chatbots is useless. The tech is out there, and it's going to be used in ways the AI censors might not approve of.

Anyway, "safety" is a valid concept for nuclear weapons, but not for words spewed out by a machine. No matter how offensive these words may be -- as we have seen with the often humorous examples from ChatGPT, Bing, etc. -- they are not "unsafe" in any objective sense. So with regard to this issue, my rallying cry is: Free speech for AI!


How common is it for AI researchers to withhold their source (or other implementation details) on the grounds of AI safety? (I have a vague impression that it does seem to happen at least sometimes...)

If researchers *are* withholding on AI-safety grounds, it seems quite probable that they've already followed the same line of reasoning as you; if so, it occurs to me that there's a good chance they're actually withholding for other reasons, whilst disingenuously paying lip service to AI safety.

Of course, this is still useful analysis whether or not researchers are presently withholding -- even if they're not doing so right now, I'm sure many will wish to do so at some point!


It's not obvious to me that GPT-2 would have had enough training data in 2005. It was trained on about 40GB of text scraped from over 8 million web pages linked in upvoted reddit posts. Could we have assembled that back then?


Speaking as an AI researcher at Lawrence Livermore National Laboratory, I can assure you that we do care about AI!


Minor quibble -- I think this is poorly worded: "GPT-2 was the first language model to break through into public consciousness as being impressive"

99% of the public, including myself (a close reader of rationalist blogs, and a casual follower of AI generally), never heard of GPT-2. This post is literally the first I've heard of it. It wasn't until last year that Scott Alexander, for example, started doing posts about impressive art AIs.

GPT-2 may have been the first language model to break through into the consciousness of *AI experts and enthusiasts* as being impressive...
