15 Comments

But why would we have wanted to? Aren't there other dimensions of technological capability to explore that might contribute more to the ongoing survival of our planet and us? Having computers generate verbiage based on the words we already use to communicate doesn't mean they have any real capacity to think and solve real problems... Pshaw

"...the most dangerous thing isn’t the details of the model, the most dangerous thing is the demo."

Great insight, and it's why I think the obsession with "safety" for AI chatbots is useless. The tech is out there, and it's going to be used in ways the AI censors might not approve of.

Anyway, "safety" is a valid concept for nuclear weapons, but not for words spewed out by a machine. No matter how offensive these words may be -- as we have seen with the often humorous examples from ChatGPT, Bing, etc. -- they are not "unsafe" in any objective sense. So with regard to this issue, my rallying cry is: Free speech for AI!

How common is it for AI researchers to withhold their source (or other implementation details) on AI-safety grounds? (I have a vague impression that it does seem to happen at least sometimes...)

If researchers *are* withholding on AI-safety grounds, it seems quite probable that they've already followed the same line of reasoning as you; if so, it occurs to me that there's a good chance they're actually withholding for other reasons while disingenuously paying lip service to AI safety.

Of course, this is still useful analysis whether or not researchers are presently withholding - even if they're not doing so right now, I'm sure many will wish to do so at some point!

It's historically been rare, but has become much more common in the last few years. The first big example of this happening was... umm... GPT-2. But "safety" can mean lots of things. I think the concern for GPT-2 wasn't that it would gain consciousness and steal all the nuclear launch codes, but that people would misuse it for spam or harassment or something.

Good point. I would suppose that withholding source/implementation would indeed improve "safety" in the sense of raising the barrier to development enough that low-level malicious actors like spammers and trolls would have trouble clearing it, yes -- but also, following that line of thinking does seem to take us some way towards (*furtive glance over shoulder*) ...security through obscurity.

It's not obvious to me that GPT-2 would have had enough training data in 2005. It was trained on 40GB of text scraped from links in upvoted reddit posts -- over 8 million documents. Could we have done that back then?

I *think* so, though I don't have any easy proof of this. My intuitive argument is that modern language models are trained on datasets with more than a trillion tokens, while GPT-2 was trained on only about 21 billion. It *seems* unlikely that there's 50x as much text accessible now as there was in 2005, which would suggest a GPT-2-sized corpus could have been assembled back then. But I'm not aware of anyone who actually built a text dataset that big in 2005.
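
For what it's worth, here's a rough Python sketch of that back-of-envelope arithmetic, using the approximate token counts from this thread (both figures are assumptions, not exact numbers):

```python
# Rough back-of-envelope check of the ~50x figure discussed above.
# Both token counts are approximate figures from this thread.
gpt2_tokens = 21e9      # ~21 billion tokens in GPT-2's training set (WebText, ~40GB)
modern_tokens = 1e12    # ~1 trillion tokens in a modern LLM pretraining set

ratio = modern_tokens / gpt2_tokens
print(f"Modern pretraining sets are roughly {ratio:.0f}x larger than GPT-2's")
# If the amount of accessible text hasn't grown ~50x since 2005, then a
# GPT-2-sized corpus (~1/50th of a modern one) was plausibly collectible back then.
```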

Speaking as an AI researcher at Lawrence Livermore National Laboratory, I can assure you that we do care about AI!

I maintain that LLNL probably wouldn't have wanted to dedicate all its compute resources for months on end back in 2005 to experimenting with language models, but I take the correction! :)

Yes, I think you’re right in that regard!

Updated to clarify that I was speaking of 2005, thanks.

Minor quibble -- I think this is poorly-worded: "GPT-2 was the first language model to break through into public consciousness as being impressive"

99% of the public, including myself (a close reader of rationalist blogs and a casual follower of AI generally), never heard of GPT-2. This post is literally the first I've heard of it. It wasn't until last year that Scott Alexander, for example, started doing posts about impressive art AIs.

GPT-2 may have been the first language model to break through into the consciousness of *AI experts and enthusiasts* as being impressive...

Well there is this: https://slatestarcodex.com/2019/02/19/gpt-2-as-step-toward-general-intelligence/

But I don't necessarily dispute your broader point: GPT-2 definitely didn't make nearly as big a splash as GPT-3, which in turn wasn't nearly as big as ChatGPT. I still think GPT-2 was much bigger than the things that came before, but the attention is sort of exponentially increasing and there's no real argument for why you should set the threshold at GPT-2. I think I did that because that was when LLMs came to my personal attention, which isn't the best justification...

Makes sense!

Updated to "In 2019, GPT-2 was the start of large language models breaking through into public consciousness as being impressive." (I originally had more hedging but I don't want to get too far into the weeds...)
