OpenAI Launches Mini Version of Its Most Powerful Model Yet


New Model Announcement: OpenAI CEO Sam Altman announced the launch of a new AI model, “GPT-4o mini,” on Thursday. This release represents OpenAI’s latest effort to expand the use of its popular chatbot.

Model Capabilities: OpenAI describes GPT-4o mini as “the most capable and cost-efficient small model available today.” The company says it plans to add image, video, and audio capabilities to the model in the future.
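Per OpenAI’s launch announcement, the model is also available to developers through the company’s API. As a minimal sketch of what a request might look like, assuming the official openai Python SDK and an API key set in the OPENAI_API_KEY environment variable (the prompt is illustrative; check OpenAI’s current documentation for exact usage):

```python
# Minimal sketch: querying GPT-4o mini via OpenAI's chat completions API.
# Assumes the official `openai` Python SDK (v1+) is installed and an API key
# is set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment by default

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model identifier from OpenAI's launch announcement
    messages=[
        {"role": "user", "content": "Explain what a small language model is in one sentence."},
    ],
)

print(response.choices[0].message.content)
```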

About GPT-4o: GPT-4o mini is derived from GPT-4o, OpenAI’s fastest and most powerful model, launched in May. The “o” in GPT-4o stands for “omni,” reflecting the model’s ability to handle audio, video, and text across 50 different languages, with improved speed and quality.


Company Background and Goals: OpenAI, backed by Microsoft and valued at over $80 billion by investors, was founded in 2015. The company faces pressure to maintain its leadership in the generative AI market while managing significant expenditures on processors and infrastructure necessary for building and training its models.

Multimodality Focus: The introduction of the mini AI model aligns with OpenAI’s aim to lead in “multimodality,” offering various types of AI-generated media—text, images, audio, and video—within a single tool: ChatGPT.

Executive Insights: Last year, OpenAI Chief Operating Officer Brad Lightcap emphasized the importance of multimodality: “The world is multimodal. If you think about the way we as humans process the world and engage with the world, we see things, we hear things, we say things—the world is much bigger than text. So to us, it always felt incomplete for text and code to be the single modalities, the single interfaces that we could have to how powerful these models are and what they can do.”

Availability: GPT-4o mini will be accessible starting Thursday to free users of ChatGPT, as well as ChatGPT Plus and Team subscribers. It will be available to ChatGPT Enterprise users next week, according to OpenAI’s press release.