
OpenAI Announces Plans to Release a New Open‑Source Large Language Model

OpenAI is set to launch its first open‑source large language model in four years, sparking debate over how this move could reshape the competitive landscape of AI, affect models like LLaMA, and intensify the open‑source versus closed‑source rivalry with Google.

OpenAI is preparing to release a brand‑new open‑source language model, its first such release since GPT‑2 four years ago.

This development raises questions about whether it will alter the competitive dynamics of large‑model AI, especially concerning the LLaMA family of models.

Since the debut of ChatGPT, numerous open‑source initiatives have emerged, many built on Meta’s models, including Stanford’s Alpaca, Berkeley’s Vicuna and Koala, ColossalChat, and a LLaMA variant fine‑tuned on Chinese medical knowledge by Harbin Institute of Technology. Some of these models can even run on mobile devices.

According to the UC Berkeley‑run Chatbot Arena rankings, many open‑source models now trail only GPT‑4 and Claude.

It remains uncertain whether the upcoming model will serve as a true “alternative” until its official release, and it is unclear if OpenAI will position it to compete directly with other open‑source projects.

Reports from The Information, citing insiders, suggest the new open‑source model is unlikely to compete head‑to‑head with GPT.

At the same time, some observers note that this development could increase pressure on Google.

Open Source vs. Closed Source

The question of whether AI models should be open‑source or closed‑source has become a hot topic.

A leaked internal Google document warned that open‑source large models are rapidly eroding the market positions of OpenAI and Google.

The document warned that unless the closed‑source stance changes, open‑source alternatives could eventually render models like ChatGPT obsolete.

In this arms race, both Google and OpenAI appear to lack a clear moat.

The open‑source community has already tackled many of the hard problems, such as running models on low‑power devices, building scalable personal AI, and adding multimodal capabilities.

Although OpenAI and Google currently hold quality advantages, the gap is narrowing quickly.

Recent weeks have seen continuous progress across open‑source AI teams, both in model development and applications.

For example, AI startup Together raised $20 million in seed funding after building an open‑source LLaMA‑based model and cloud platform.

The open‑source movement is also manifesting offline, with large gatherings celebrating the community.

HuggingFace, the “open‑source hub,” has launched a suite of large‑model tools and hosted the “Woodstock of AI” event, attracting over 5,000 participants.

Stability AI, behind Stable Diffusion, and Lightning AI, creators of PyTorch Lightning, are also planning open‑source meetups.

Critics argue that by keeping their models closed to outside scrutiny, both OpenAI and Google set a risky precedent, and the dangers are real.

Previously, while the core components of large‑tech models were not fully replicable, the open‑source community understood the basic “secret sauce.” Now, that transparency is disappearing.

What are your thoughts on this development?

References:
[1] https://www.reuters.com/technology/openai-readies-new-open-source-ai-model-information-2023-05-15/
[2] https://www.theinformation.com/articles/open-source-ai-is-gaining-on-google-and-chatgpt
[3] https://venturebeat.com/ai/open-source-ai-continues-to-celebrate-as-big-tech-mulls-over-moats/

Tags: artificial intelligence, large language models, OpenAI, open‑source AI, AI competition
Written by

DataFunSummit

Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.
