Elon Musk Joins Vitalik Buterin in Crucial AI Development

  u.today 27 August 2024 12:57, UTC

The rise of artificial intelligence (AI) has been tremendous. From OpenAI’s ChatGPT to major firms like Apple jumping on the bandwagon, we appear to be living in the golden age of AI. However, it should be noted that “with great power comes great responsibility,” a message famously delivered in the Spider-Man movies.

The rise of AI is also expected to bring significant risks for people around the globe. Consequently, industry leaders like Elon Musk and Vitalik Buterin have been vocal about the need for AI regulation to counter these risks. Musk, who also runs an AI firm known as X.AI Corp., believes in the potential of the technology while advocating for industry regulation.

Need for AI regulation

California has drafted an AI safety bill, SB 1047, to counter the risks and dangers of artificial intelligence. If passed, the bill would reportedly hold developers who invest over $100 million in training an AI model more responsible for it, requiring them to follow certain measures such as safety testing of their models.

This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill.

For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk…

— Elon Musk (@elonmusk) August 26, 2024

Reacting to this development, Tesla CEO Elon Musk tweeted that California should pass the SB 1047 bill. Although Musk acknowledged that his view may upset some people, he noted that he has advocated for AI regulation for more than 20 years. The billionaire believes the AI industry should be regulated just as other tech-related sectors are, in order to mitigate potential risks.

Vitalik Buterin’s concerns

Commenting on Musk’s tweet, Ethereum (ETH) founder Vitalik Buterin also spoke about the need for regulation in the AI space, but he raised concerns about the effectiveness of such bills. Buterin questioned whether SB 1047 would be used to go after open-weight models, that is, pretrained models released for further development.

What’s the best evidence that the bill is going to be used to go after open weights?

I know that earlier versions of the bill had a full shutdown req that’s incompatible with open weights, but that’s been removed. And I know that some AI safety people have expressed support for…

— vitalik.eth (@VitalikButerin) August 27, 2024

The Ethereum founder expressed support for the bill’s “critical harm” category. He said that a charitable reading of SB 1047 suggests its aim is to put safety testing procedures in place: if developers or firms discover “world-threatening capabilities/behavior” in their models, they will not be allowed to release them.
