Danger, AI Scientist, Danger

thezvi.wordpress.com

Zvi Mowshowitz reporting on an LLM exhibiting unprompted instrumental convergence. Figured this might be an update to some Mottizens.

I suspect that the first few AIs attempting to take over the world will probably suck at it (as this one sucked at it) and that humanity is probably sane enough to stop building neural nets after the first couple of cases of "we had to do a worldwide hunt to track down and destroy a rogue AI that went autonomous".

We're still doing gain-of-function research on viruses. There's basically no reason to do it other than publishing exciting science in prestigious journals; any gains are marginal at best. Meanwhile, AI development is central to military, political, and economic development.

I mean, sure, GoF research still going on is bananas, but we've stopped doing other things, including one that was central to military and economic development and that we didn't need to stop, i.e. nuclear power. I'm not ready to swallow the black pill just yet.

Indeed, and as I've touched on in previous posts, there is a degree to which I actually trust the military, political, and economic interests more than I trust MIRI and the rest of the folks who just want to "publish exciting science," because at least the former have specific win conditions in mind.