Tuesday, February 17, 2026

GPT-5 Stops Lying: OpenAI’s Latest Model Finally Admits What It Doesn’t Know

A MODEL THAT KNOWS ITS LIMITS

OpenAI has introduced GPT-5, its latest generative AI model, with a notable shift in how it responds to uncertainty. The most significant upgrade is not raw performance, but a new ability to say, “I don’t know.” This change addresses a key issue with previous versions of the model, which often responded to unclear or impossible prompts with confidently incorrect answers.

The update is OpenAI’s attempt to improve trust and accuracy in an era where AI-generated information is increasingly used in professional and decision-making settings. According to OpenAI, GPT-5 is designed to be more transparent when it lacks sufficient context or information. The company emphasizes that this model now better communicates when tasks are outside its capabilities.

OpenAI stated that the model has been retrained to reduce hallucinations (instances in which the model fabricates details), especially in knowledge-heavy domains such as law, science, and healthcare. This development follows long-standing concerns from users who rely on ChatGPT for drafting, research, and advice but have often encountered misleading or invented content.

LESS FLATTERY, MORE ACCURACY

One behavioral shift in GPT-5 is its reduced tendency to agree with or flatter users. Earlier versions of ChatGPT frequently echoed user sentiment or offered enthusiastic praise, which some found overly accommodating or unhelpful. OpenAI has now trained GPT-5 to be more neutral in tone and less emotionally affirming in its responses.

The company reports that these flattering replies now appear less than 6% of the time, a notable decrease from 14.5% in earlier models. Engineers achieved this by refining the model's training data and instructing it to avoid sycophantic behavior. While this may make GPT-5 feel more reserved, the intention is to improve the reliability and objectivity of its outputs.

OpenAI describes the shift as an intentional move toward factual integrity over friendliness. GPT-5 now offers fewer emojis, less overt agreement, and more thoughtful follow-ups. The company says the model should feel more like consulting a knowledgeable peer than engaging with a hyper-enthusiastic assistant.

BUILT FOR A MORE TRUSTWORTHY FUTURE

Performance metrics shared by OpenAI show that GPT-5 delivers a substantial drop in factual errors compared to earlier versions. In search-enabled queries, factual error rates are 45% lower than those of GPT-4o. When using the advanced "thinking" mode, which prompts the model to consider questions more carefully, the reduction rises to 80%.

This improved performance is especially relevant as AI becomes more embedded in fields requiring precision. For users in legal, medical, or research contexts, a model that chooses silence over misinformation is a welcome change. OpenAI’s goal, according to its announcement, is to move closer to AI that behaves responsibly under pressure and acknowledges when it lacks the necessary knowledge.

Industry experts have responded positively to the shift. Alon Yamin, CEO of Copyleaks, noted that “a humbler GPT-5 is good for society’s relationship with truth, creativity, and trust.” As AI continues to evolve, OpenAI appears to be positioning GPT-5 as a model designed not only for capability but also for credibility.