Should AI Be Open-Source? Behind the Tweetstorm Over Its Dangers
- Layla
- Mar 11, 2024
- 3 min read
- Updated: Apr 13, 2024
Two of venture capital's most prominent figures, Marc Andreessen and Vinod Khosla, have spent the past several days trading jabs on X over one of Silicon Valley's most divisive questions:
Should artificial intelligence be developed openly or behind closed doors?
Proponents of open-source AI, such as Andreessen, argue that it promotes open science and greater transparency, and that it prevents Big Tech interests from monopolising a powerful technology. Supporters of closed AI, such as Khosla, argue that keeping the technology under the control of companies or other private entities helps guard against its potential dangers and abuse.
Open-source AI is freely available for the public to build on and share, whereas closed-source, or proprietary, AI is privately controlled and distributed by its developers. But the two approaches are not mutually exclusive; they can coexist, as when businesses build private systems on top of open-source code.
The X debate was sparked by Elon Musk's lawsuit against OpenAI and its CEO Sam Altman, and it highlights the difficulty of finding clear answers to questions about the distribution and safety of AI—especially since regulators, Big Tech firms, scientists, and governments still don't know how far and how quickly the technology will advance.
Among the tech titans involved in the debate, Meta has advocated for open-source AI and made its Llama 2 model available for download and modification by the general public. Paris-based Mistral AI has released models with open "weights," the numerical parameters that make up a model's inner workings. Meanwhile, OpenAI and Anthropic, the industry's two largest AI startups, sell closed-source models.
Andreessen wrote on Saturday that Khosla was "lobbying to ban open-source." The statement from Andreessen Horowitz's co-founder came after Khosla expressed support for Altman and OpenAI in the wake of Musk's lawsuit, which claims both violated the company's founding agreement to commit to public, open-source AI by putting profit first.
The founder of Khosla Ventures, whose firm is also an investor in OpenAI's for-profit arm, responded that AI is akin to nuclear weapons and that open-sourcing it risks national security. Khosla's recent post references Ilya Sutskever, OpenAI's chief scientist, who has stated that "it's totally OK not to share the science."
A Khosla Ventures spokesperson cited a previous Khosla post supporting open-source technologies while arguing that large AI models are a "national security and technology" advantage that must be closely guarded.
Andreessen Horowitz did not return a request for comment.
Both camps generally agree that large language models—the algorithms that power ChatGPT and are trained on massive amounts of data—are not a fully developed technology. ChatGPT and other AI tools can produce hallucinations, biased results, and harmful or offensive output. They are also extremely expensive to train and run, and they consume enormous amounts of energy.
According to some open-source supporters, the technical gaps in these models mean they should be developed in the open, among a community of scientists and academics, before commercial interests lock them down and before they possibly reach artificial general intelligence, a hypothetical form of AI in which a machine can learn and think like a human.
"We believe that for the first time, we are deploying a technology at scale that we don't truly understand," said Ali Farhadi, CEO of the Allen Institute for AI, a nonprofit research organisation founded in 2014 by late Microsoft co-founder Paul Allen. "We don't know how to control these systems."
"We believe that for the first time, we are deploying technology at scale that we don't truly understand," said Ali Farhadi, CEO of the Allen Institute for AI, a nonprofit research organisation founded in 2014 by late Microsoft co-founder Paul Allen. "We don't know how to control these systems."
Altman has stated that OpenAI takes its safety obligations seriously and that AI should be developed with extreme caution, but also that the technology has enormous commercial potential.
The Open-Source AI Movement
The open-source software movement, which began decades ago with the popularity of projects such as Linux, provides some insight into where this iteration of the open versus closed debate could go: Open-source software underpins nearly every type of technology, including cloud computing, which has helped companies like Amazon grow into behemoths.
However, because open-source projects are easy to download and modify, they have also posed cybersecurity risks to businesses and governments.
According to experts, closed and open-source technologies have always existed alongside one another. Meta's vice president of generative AI, Ahmad Al-Dahle, calls the notion that one side must win a "false dichotomy." "I think there's room for both," he said.
"Fundamentally, open-source will have a very important role," said Ori Goshen, co-founder and co-CEO of AI21, an AI startup that builds proprietary models. "There is a world where even proprietary providers like ourselves today, the base models will become open source, but everything else will be your most treasured intellectual property."