I have a short Substack post about AI regulation which is itself a teaser for my much longer article in Areo Magazine about AI risk & policy.
When ChatGPT 3.5 was released, my reaction was tribal. But within weeks my emotions switched tribes, even though my actual rational opinions have stayed more or less consistent. Basically:
- We have almost no real-world understanding of AI alignment; we need open, visible experimentation to get it.
- There really is risk, and AI development needs legal limits.
- Those limits should be more about rule of law than administrative power.
- The goal is to create and delimit rights to work on AI safely.
But do read the actual articles to unpack that.
That's what I want, but what I'm afraid we'll get (with the backing of the AI-risk community) is a worst-of-both-worlds outcome. Large unaccountable entities (i.e. governments and approved corporations) will develop very powerful Orwellian AIs, while squelching the open development that could (a) help us actually understand how to do AI safety, and (b) put AI into anti-Orwellian tools like personal bullshit detectors.
I understand the argument that crushing open development is Good Actually because every experiment could be the one that goes FOOM. But this Yuddist foomer-doomerism is based on an implausible model of what intelligence is. As I say in Areo (after my editor made it more polite):
Their view of intelligence as monomaniacal goal-seeking leads the rationalists to frame AI alignment as a research problem that could only be solved by figuring out how to programme the right goals into super-smart machines. But in truth, the only way to align the values of super-smart machines to human interests is to tinker with and improve stupider machines.
Any smart problem-solver must choose a course of action from a vast array of possible choices. To make that choice, the intelligence must be guided by pre-intellectual value judgements about which actions are even worth considering. A true-blue paperclip maximiser would be too fascinated by paperclips to win a war against the humans who were unplugging its power cord.
But even if you did believe in foom-doom, centralising development will not help. You are just re-inventing the Wuhan Institute for Virology.