OpenAI Replaces ChatGPT's Default Model
As of May 5, 2026, GPT-5.5 Instant replaces GPT-5.3 Instant as the default model in ChatGPT. OpenAI announced the change in a blog post. The update is being rolled out to all users and is presented as the most significant leap in accuracy and user-friendliness since the previous generation.
OpenAI describes the model as an “instant” variant of the larger flagship GPT-5.5 – optimized for faster responses in everyday use.
Half as Many Hallucinations – According to OpenAI
The most striking claim concerns factual accuracy. The company states that in internal evaluations, GPT-5.5 Instant produces 52.5% fewer hallucinated claims than its predecessor on high-risk questions in medicine, law, and economics. In demanding conversations, the number of erroneous claims is said to have fallen by 37.3%.
It is worth emphasizing that these figures come from OpenAI's own internal tests, not from independent third parties; no independent verification of the claims has yet been published.
Shorter and More Precise Answers
The model is also trained to respond more concisely. OpenAI states that GPT-5.5 Instant uses 30.2% fewer words and 29.2% fewer lines than GPT-5.3 Instant. The company describes this as a conscious choice to reduce superfluous formatting and what is characterized as unnecessary use of emojis – without the tone becoming cold or impersonal.
New Memory Control Feature
One of the most concrete new features is the introduction of so-called “memory sources.” This function allows users to see which context – such as stored memories or previous conversations – has influenced a response. Users can delete or correct outdated information.
However, OpenAI clarifies that the model does not necessarily show all factors that have shaped a response, which limits full transparency.
This feature is activated across all ChatGPT models, not just GPT-5.5 Instant.
How GPT-5.5 Instant Measures Up Against Gemini 1.5 Pro
Research sources indicate that Google's Gemini 1.5 Pro still holds a significant advantage in at least one important area: the context window. Gemini 1.5 Pro supports up to one million tokens – equivalent to over 700,000 words of text, one hour of video, or eleven hours of audio – while GPT models have traditionally had a window of 128,000 tokens. OpenAI has not specified the exact context size for GPT-5.5 Instant.
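As a rough illustration of the gap, the article's figures imply an average of about 0.7 English words per token (one million tokens ≈ 700,000 words). A minimal Python sketch, assuming that ratio holds (in practice it varies by tokenizer and language):

```python
# Assumed average derived from the article's own figures
# (1,000,000 tokens ~ 700,000 words); not an official number.
WORDS_PER_TOKEN = 0.7

def approx_words(tokens: int) -> int:
    """Estimate how many English words fit in a context of `tokens` tokens."""
    return int(tokens * WORDS_PER_TOKEN)

GEMINI_15_PRO = 1_000_000   # context window in tokens, per the article
GPT_TRADITIONAL = 128_000   # traditional GPT context window, per the article

print(approx_words(GEMINI_15_PRO))    # 700000
print(approx_words(GPT_TRADITIONAL))  # 89600
```

By this estimate, the traditional 128,000-token GPT window covers roughly 90,000 words, a bit under an eighth of Gemini 1.5 Pro's stated capacity.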
Both models support multimodality – meaning they can process text, images, and more. However, Gemini 1.5 Pro has placed particular emphasis on the integration of audio and video, while GPT-5.5 Instant, according to OpenAI, particularly strengthens its image analysis and STEM-related questions.
On the cost side, direct comparison is difficult since GPT-5.5 Instant is primarily offered as a consumer service through ChatGPT, while Gemini 1.5 Pro is also available via API with documented prices.
What This Means for Users
For most ChatGPT users, the transition happens automatically – no manual update is necessary. The focus on reduced hallucinations and more transparent memory management is especially relevant for professional users in healthcare, law, and finance, where misinformation can have serious consequences.
But ultimately, independent benchmarks and real user experience will determine whether OpenAI's promises of dramatically improved accuracy are actually met in practice.