
The Nuclear Non-Proliferation Treaty, which entered into force in 1970, has been relatively successful in limiting nuclear proliferation.  When it comes to nuclear weapons it is hard to find good news, but the treaty has acted as one deterrent among many against nation-states acquiring nuclear arms.  Of course the treaty works, in large part, because the United States (working with allies) has lots of nuclear weapons, a powerful non-nuclear military, de facto control of SWIFT, and so on.  We strongly discourage nations from acquiring nuclear weapons; just look at the current sanctions on Iran, even if that policy does not always succeed.

One approach to AI risk is to treat it the way we treat nuclear weapons, and also their delivery systems.  Let the United States get a lead, and then hope the U.S. can (in conjunction with others) enforce “OK enough” norms on the rest of the world.

Another approach to AI risk is to try to enforce a collusive agreement amongst all nations not to proceed with AI development, at least along certain dimensions, or perhaps altogether.

The first of these two options seems obviously better to me.  But I am not here to argue that point, at least not today.  Conditional on accepting the superiority of the first approach, all the arguments for AI safety are arguments for AI continuationism.  (And no, this doesn’t mean building a nuclear submarine without securing the hatch doors.)  At least for the United States.  In fact I do support a six-month AI pause — for China.  Yemen too.

It is a common mode of presentation in AGI circles to offer wordy, swirling tomes of multiple concerns about AI risk.  If some outside party cannot sufficiently assuage all of those concerns, the writer is left with the intuition that so much is at stake, indeed the very survival of the world, and so we need to “play it safe,” and thus they are led to measures such as AI pauses and moratoriums.

But that is a non sequitur.  The stronger the safety concerns, the stronger the arguments for the “America First” approach.  Because that is the better way of managing the risk.  Or if somehow you think it is not, that is the main argument you must make and persuade us of.
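To make the collective-action point concrete, here is a minimal sketch in Python.  The strategy labels and payoff numbers are my own invented assumptions, chosen purely for illustration: if developing while the rival pauses is the most tempting outcome for each side, then a mutual pause is not self-enforcing without outside teeth.

```python
# Illustrative sketch of the collective-action problem behind a collusive
# "pause" agreement.  The payoffs below are invented for exposition only.
from itertools import product

# Two rival states each choose "pause" or "develop".
# Payoffs are (row player, column player): developing while the rival pauses
# is assumed best, pausing while the rival develops is assumed worst.
PAYOFFS = {
    ("pause", "pause"):     (3, 3),
    ("pause", "develop"):   (0, 4),
    ("develop", "pause"):   (4, 0),
    ("develop", "develop"): (1, 1),
}

def is_nash_equilibrium(a, b):
    """Neither player gains by unilaterally switching strategies."""
    ua, ub = PAYOFFS[(a, b)]
    best_a = all(ua >= PAYOFFS[(alt, b)][0] for alt in ("pause", "develop"))
    best_b = all(ub >= PAYOFFS[(a, alt)][1] for alt in ("pause", "develop"))
    return best_a and best_b

for a, b in product(("pause", "develop"), repeat=2):
    if is_nash_equilibrium(a, b):
        print(f"Nash equilibrium: ({a}, {b}) with payoffs {PAYOFFS[(a, b)]}")

# Under these assumed payoffs the only equilibrium is (develop, develop):
# a mutual pause is better for both sides, yet neither can trust the other
# to honor it absent external enforcement.
```

This is a toy model, not anyone’s empirical estimate; it is only meant to show why “everyone agrees to stop” is the hard-to-enforce option, which is the premise behind preferring the first approach.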

(Scott Alexander has a new post, “Most technologies aren’t races,” but he neither chooses one of the two approaches listed above nor outlines a third alternative.  Fine if you don’t want to call them “races,” but you still have to choose.  As a side point, once you consider delivery systems, nuclear weapons are less of a yes/no thing than he suggests.  And this postulated take is a view that nobody holds, nor did we practice it with nuclear weapons: “But also, we can’t worry about alignment, because that would be an unacceptable delay when we need to ‘win’ the AI ‘race’.”  On the terminology, Rohit is on target.  Furthermore, good points from Erusian.  And this claim of Scott’s shows how far apart we are in how we weigh institutional, physical, and experimental constraints: “In a fast takeoff, it could be that you go to sleep with China six months ahead of the US, and wake up the next morning with China having fusion, nanotech, and starships.”)

Addendum:

As a side note, if the real issue in the safety debate is “America First” vs. “collusive international agreement to halt development,” who are the actual experts?  It is not, in general, “the AI experts”; rather, it is people with experience in and study of:

1. Game theory and collective action

2. International agreements and international relations

3. National security issues and understanding of how government works

4. History, and so on.

There is a striking tendency amongst AI experts, EA types, AGI writers, and “rationalists” to think they are the experts in this debate.  But they are experts only on some issues, and many of those issues (“new technologies can be quite risky”) are not so contested.  And because these individuals do not frame the problem properly, they do relatively little to consult what the actual “all things considered” experts think.

The post The Nuclear Non-proliferation Treaty and existential AGI risk appeared first on Marginal REVOLUTION.
