Vitalik Buterin warns about dangers of superintelligent AI

Buterin advocates for a cautious approach to AI development, focusing on open, accessible models.

Ethereum co-founder Vitalik Buterin warned about the rapid development of artificial intelligence (AI), particularly superintelligent AI. He believes that such an AI, a hypothetical system surpassing human intelligence in all domains, poses a significant danger.

“Superintelligent AI is very risky and we should not rush into it,” Buterin said in a recent post on X.

He also criticized Sam Altman’s proposal for a $7 trillion investment in an AI semiconductor super farm.

According to a February WSJ report, the CEO of OpenAI sought to secure between $5 trillion and $7 trillion to boost global capacity to produce advanced chips specifically designed for AI. The current shortage of these chips is seen as a bottleneck for AI development, including OpenAI’s projects.

Beyond the inherent risks of superintelligent AI, Buterin's concerns center on the concentration of power. He advocates for a decentralized AI ecosystem, favoring open-source AI models that can run on ordinary consumer hardware.

He considers this approach a safer alternative to highly concentrated AI controlled by a few powerful entities, arguing that open models are less likely to pose an existential threat than AI controlled by corporations.

“A strong ecosystem of open models running on consumer hardware are an important hedge to protect against a future where value captured by AI is hyper-concentrated and most human thought becomes read and mediated by a few central servers controlled by a few people. Such models are also much lower in terms of doom risk than both corporate megalomania and militaries,” he added.

The Ethereum co-founder also voiced support for drawing regulatory distinctions between "small" and "large" AI models. However, he cautioned that regulatory overreach could eventually push everything into the "large" category, hindering the development of open, accessible AI.