Vitalik fears AGI and ASI: humanity should focus on intelligence-enhancement tools, not on letting AI replace humans
Ethereum co-founder Vitalik Buterin shared his perspective on AGI (Artificial General Intelligence) and ASI (Artificial Superintelligence) on Twitter, arguing that development should focus on AI tools rather than on pursuing superintelligent life forms that could replace humans. He said he is deeply wary of the unchecked development of AGI and ASI.
AGI: an AI that could maintain civilization on its own
Vitalik defines AGI as an AI powerful enough that, if all humans suddenly disappeared and it were installed into robots, it could operate independently and keep an entire civilization running. He added that this would mark a shift from the "instrumentality" of traditional AI to something like a "self-sustaining life form."
Vitalik pointed out that current technology cannot simulate such a scenario, so there is no way to actually test whether an AI could sustain civilization without humans. It is even harder to define what "civilizational development" means, or which conditions would count as civilization continuing to operate. These questions are complex in themselves, but he suggested this capability may be the clearest line by which people can distinguish AGI from ordinary AI.
(Note: A self-sustaining life form is an organism or life system able to obtain and use resources to maintain its own activity, adapt to environmental changes, and continue to survive under given conditions.)
Build smart assistant tools; don't replace humans with AI
Vitalik defines Artificial Superintelligence (ASI) as the stage at which AI's progress exceeds the value human participation can add, becoming fully autonomous and more efficient than any human-AI combination. He cited chess as an example: only in roughly the past decade has AI truly reached this stage, surpassing the best results achieved through human-AI collaboration. He admitted that ASI frightens him, because it implies humans could genuinely lose control over AI.
Vitalik argues that rather than developing superintelligent life forms, effort is better spent on tools that enhance human intelligence and capability. AI should assist humans, not replace them. In his view, this development path reduces the risk of uncontrollable AI while improving society's overall efficiency and stability.
This article, "Vitalik fears AGI and ASI: humanity should focus on intelligence-enhancement tools, not on letting AI replace humans," first appeared in ChainNews ABMedia.