More broadly, I think AI Alignment ideas/the EA community/the rationality community played a pretty substantial role in the founding of the three leading AGI labs (Deepmind, OpenAI, Anthropic), and man, I sure would feel better about a world where none of these would exist, though I also feel quite uncertain here. But it does sure feel like we had a quite large counterfactual effect on AI timelines.

That is from habryka and Ben Pace, writing on the LessWrong blog.  As you might expect, I would give those comments a different valence, nonetheless they are insightful.  Here are my points:

1. It is truly remarkable how much influence the cited movements have had.  Whether or not you agree in full (or at all), this should be recognized and respected.  Kudos to them!  And remember, so often ideas lie behind technology.

2. Anthropic has announced a $5 billion raise and is promoting its intention to compete with OpenAI and indeed outdo them.  The concept “Solve for the equilibrium” should rise in status.

3. You cannot separate “interest in funding AI safety” (which I am all for) from “AI progress.”  That by now should be obvious.  No progress, no real interest in safety issues.

4. To this day, the Doomsters are de facto the greatest accelerationists.  Have you noticed how the Democrats (or Republicans) “own” certain political issues?  For instance, voters trust the Democrats more with Social Security, and the mere mention of the topic helps them, even if a Republican has a good point to make.  Well, the national security establishment “owns” the ideas of existential risk and risk from foreign powers.  The more you talk about doomsday issues, the more AI risk gets slotted into their purview, for better or worse.  And they ain’t Brussels (thank goodness).  To the extent the Doomsters have impact, their net effect will be to place the national security types in charge, or at least to raise their influence.  And how do they think that is going to work out (on their own terms)?  Perhaps they would do better to focus on mundane copyright and libel issues with LLMs, but that is not their nature.

The post EA, AI, and the rationality community appeared first on Marginal REVOLUTION.

