Just as Mark Zuckerberg made headlines with his big pivot to the now overly-hyped Metaverse, he is undertaking another big pivot, this time into AI and AGI (Artificial General Intelligence). After years of hype and billions of dollars, Meta's Metaverse projects, especially Horizon Worlds, have struggled to attract a userbase and have been met with widespread criticism. Today, Zuckerberg is betting just as big on AI, putting himself at the center of the race to build AGI. Is he the person we want to trust with such a powerful technology, given his record of poor judgment and lack of concern for the people his decisions affect?

The Metaverse, once billed as the emerging paradigm of online social interaction, has come crashing down in spectacular fashion. User engagement has been consistently low: by October 2022, Horizon Worlds had dropped to under 200,000 users. The problem is not unique to Meta. Decentraland, one of the best-funded Metaverse projects with an ecosystem valued at a staggering $1.3 billion, was reported to have only around 38 daily active users. Technical and product-development barriers plagued the Metaverse's progress, with even senior Meta figures like John Carmack expressing skepticism about the feasibility of Zuckerberg's vision. It's no surprise, then, that a $50 million creator fund failed to attract enough creators to engage in a meaningful way, leaving most creators unhappy. Meta's proposed revenue-sharing model, which would take almost half of creators' earnings, only made participation less attractive. The product demonstrations didn't help either: the barebones gameplay of Horizon Worlds, combined with Zuckerberg's famously mocked introduction, was a recipe for deflation.

Now, Zuckerberg is going all-in on AI, specifically AGI, with Meta's new Superintelligence Labs (MSL). He has reportedly moved to lure top AI talent away from OpenAI with large signing bonuses and equity packages. OpenAI CEO Sam Altman has been perhaps the most vocal critic of this strategy, calling it “distasteful and corrupt” and implying that Meta is trying to buy its way to success in AI. Meanwhile, Meta has invested heavily in AI itself, putting north of $5 billion into Scale AI and recruiting formidable talent. Notable hires include Alexandr Wang, former CEO of Scale AI, and Nat Friedman, former CEO of GitHub.

The Allure of AI: Meta's Strategic Shift

This change reflects a strategic reorientation for Meta, an attempt to push the company back to the fore of the technology sector. AI is seen as a transformative force with the potential to revolutionize entire industries, and Meta doesn't want to be left behind. The company's existing infrastructure and enormous user base could give it a considerable edge in developing and deploying AI technologies. Among the potential advantages:

  • Technical differentiation: By focusing on AI, Meta can create unique technologies that set it apart from competitors.
  • Execution discipline: Prioritizing AI can help Meta streamline its operations and use resources more efficiently.
  • Superintelligence ecosystem: By attracting top AI talent, Meta can foster innovation and growth.
  • Increased adoption: Meta AI is already showing promise, with millions of users engaging with it regularly.
  • Improved ad tools: Meta's generative AI ad tools have proven successful, with a large number of advertisers using them to create ads.

The Dark Side of Superintelligence: Risks and Concerns

The advancement of AGI carries serious ethical and safety implications that must be thoughtfully addressed. Some of the potential risks are:

  • Existential risk: An artificial superintelligence could potentially decide that reducing the human population, or even eliminating it entirely, is the most logical course of action.
  • Uncontrollable harm: Experts like Roman Yampolskiy believe that artificial superintelligence may be inherently uncontrollable, increasing the risk of immense harm to humanity.
  • Malicious use: Superintelligent AIs could be exploited for malicious purposes if they fall into the wrong hands.
  • Risk of human extinction: A significant portion of researchers believe there is a real possibility that human-level AI could lead to human extinction.
  • Unintended consequences: An artificial superintelligence, exposed to various information sources, might draw conclusions that lead to unpredictable and disastrous outcomes.

Zuckerberg's history with tech innovation and implementation is complicated at best. He successfully acquired and scaled Instagram and WhatsApp, but the costly and mostly failed Metaverse experiment has proven to be a quagmire, and it calls into serious question his judgment and ability to lead in a rapidly evolving technological landscape. The dangers of AGI are enormous, and we should be skeptical of any one person's or company's capacity to manage them.

Should Zuckerberg Be Trusted?

The concentration of power in the hands of a few technology oligarchs is deeply concerning. It raises questions about bias, control, and how AI technologies could be weaponized or otherwise misused. A more distributed and collaborative approach to AI development, backed by robust ethical frameworks and regulatory oversight, may be necessary to mitigate these risks and ensure that AI benefits all of humanity. Whether Zuckerberg should be trusted with AGI is therefore about more than weighing his past successes against his failures. It is a question of whether any individual should wield such immense power and influence over a technology with the potential to reshape the future of humanity.