Last night, OpenAI unveiled two new reasoning AI models, o3 and o4-mini. These models work through a problem from multiple angles before responding, with the goal of producing the best possible answer.
The o3 model is OpenAI’s most advanced reasoning model to date. It outperforms the company’s previous models on assessments of math, coding, science, reasoning, and visual perception. The o4-mini model, by contrast, is aimed at developers who need to balance capability against speed and cost when choosing a model for their applications.
Unlike previous versions, these two models can use the tools available in ChatGPT, such as web search, Python code execution, image analysis, and image generation. Starting today, the models are available to users on the Pro, Plus, and Team plans, along with a variant of o4-mini called o4-mini-high, which spends more time reasoning to produce more accurate responses.
These models are part of OpenAI’s effort to stay ahead in the intense global AI competition against companies such as Google, Meta, xAI, Anthropic, and DeepSeek. Although OpenAI pioneered reasoning models with the release of o1, competitors quickly shipped models with similar or even better performance. Reasoning models now lead the field as AI labs race to squeeze more performance out of their systems. OpenAI CEO Sam Altman said in February that the company planned to fold o3’s technology into GPT-5 rather than release it as a standalone model, but competitive pressure pushed the company to reverse that decision.
OpenAI said that the o3 model achieved state-of-the-art performance on SWE-bench, a benchmark that measures coding ability without custom scaffolding, scoring 69.1%. The o4-mini model came close behind at 68.1%. For comparison, the previous o3-mini model scored 49.3%, and Claude 3.5 Sonnet scored 62.3%.
OpenAI claims that o3 and o4-mini are the first of its models capable of “thinking in pictures.” Users can upload images to ChatGPT, such as whiteboard sketches or diagrams from PDF files, and the models incorporate them into their chain-of-thought process before responding. The models can even interpret blurry or low-quality images and perform operations like zooming in or rotating them.
In addition, the o3 and o4 mini can run Python code directly in the browser via ChatGPT’s Canvas feature and search the web for live information.
The three models (o3, o4-mini, and o4-mini-high) are available to developers via the Chat Completions and Responses APIs, allowing engineers to build applications on top of them with usage-based pricing.
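As a rough illustration, calling one of these models through the OpenAI Python SDK’s Responses API might look like the sketch below. The prompt and the `build_request` helper are invented for this example, and the exact SDK surface may change over time:

```python
# Sketch: calling o4-mini through the OpenAI Python SDK's Responses API.
# The helper and prompt are illustrative, not from OpenAI's docs.
import os


def build_request(model: str, prompt: str) -> dict:
    """Assemble keyword arguments for client.responses.create()."""
    return {"model": model, "input": prompt}


if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.responses.create(
        **build_request("o4-mini", "Summarize SWE-bench in one sentence.")
    )
    print(response.output_text)
```

The request is only sent when an API key is present; the helper just shows the shape of the payload the Responses API expects.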
Using the o3 model costs $10 per million input tokens (equivalent to about 750,000 words, longer than the Lord of the Rings book series) and $40 per million output tokens. The o4 mini model costs the same as the o3 mini: $1.10 per million input tokens and $4.40 per million output tokens.
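Those per-million-token prices translate into per-request costs as follows. This is a back-of-the-envelope sketch: the price table simply restates the figures above, and the token counts in the example are invented:

```python
# Cost calculator for the per-token prices quoted in the article,
# in USD per 1 million tokens: (input price, output price).
PRICES = {
    "o3": (10.00, 40.00),
    "o4-mini": (1.10, 4.40),
}


def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the quoted per-million-token rates."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000


# Example: a 2,000-token prompt with a 500-token reply on o3
# costs 2000*10/1e6 + 500*40/1e6 = 0.02 + 0.02 = $0.04
print(cost_usd("o3", 2000, 500))
```

The same million tokens in and out would cost $50 on o3 but only $5.50 on o4-mini, which is the speed/cost trade-off the article describes.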
OpenAI has announced that it will release a version called o3 Pro in the coming weeks. This version will use more computational resources to provide more accurate answers. It will be available exclusively to Pro plan users on ChatGPT.
Sam Altman noted that o3 and o4 mini will likely be OpenAI’s last standalone reasoning models before the introduction of GPT-5. The company plans to merge traditional models, such as GPT-4.1, with reasoning models in GPT-5.