Discussion about this post

DH:

"...the most dangerous thing isn’t the details of the model, the most dangerous thing is the demo."

Great insight, and it's why I think the obsession with "safety" for AI chatbots is useless. The tech is out there, and it's going to be used in ways the AI censors might not approve of.

Anyway, "safety" is a valid concept for nuclear weapons, but not for words spewed out by a machine. No matter how offensive these words may be -- as we have seen with the often humorous examples from ChatGPT, Bing, etc. -- they are not "unsafe" in any objective sense. So with regard to this issue, my rallying cry is: Free speech for AI!

M M:

It's not obvious to me that GPT-2 would have had enough training data in 2005. It was trained on roughly 40GB of text from over 8 million documents, scraped from outbound links in upvoted Reddit posts. Could we have done that back then?

