Discussion about this post

Greg G:

My starting assumption is that preventing people from building any harmful AIs is basically impossible in the long run, and that attempting to do so is counterproductive because it will disproportionately affect ethical AI creators rather than the underground, unethical ones that will try to evade restrictions, causing adverse selection.

So, other initiatives to prevent the possible bad outcomes from AI, rather than preventing bad AI itself, seem quite important even if technically orthogonal. I'd like to see a lot more work in the world going to tracking and mitigating new pathogens, keeping tabs on radioactive materials, and all the other stuff that will make society more resilient to bad actors.

Ivo:

I think option 1 is the unwritten, unstated, intrusive-thought-like, cognitive-dissonance-causing, anti-memetic plan of every major AI lab. Their AI will enthusiastically execute that plan anyway, so they may as well try to get it to do so at least somewhat in their preferred way.
