OpenAI has launched its largest and best model to date, GPT-4.5. The Sam Altman-led company claims that GPT-4.5 improves on previous models in its ability to recognise patterns, draw connections, and generate creative insights without reasoning.
OpenAI said it used an unsupervised learning paradigm for GPT-4.5, which gives the model broader knowledge and a deeper understanding of the world while reducing hallucinations and improving reliability.
“It is the first model that feels like talking to a thoughtful person to me. I have had several moments where I’ve sat back in my chair and been astonished at getting actually good advice from an AI,” Altman said in an X post announcing GPT-4.5.
Although GPT-4.5 is not a reasoning model like o3-mini or DeepSeek-R1, Altman says there is a “magic” to this model that he has not seen before.
GPT-4.5 currently does not support multimodal features such as Voice Mode, video, and screen sharing. However, OpenAI has suggested that these features may arrive in future updates.
Who can use GPT-4.5? What are its use cases?
GPT-4.5 is currently available only to ChatGPT Pro users. It will roll out to Plus and Team users next week, with Enterprise and Edu users following the week after.
OpenAI said GPT-4.5 has a broader knowledge base, an improved ability to follow user intent, and a greater EQ, or emotional quotient. All of this makes GPT-4.5 useful for tasks such as improving writing, programming, and solving real-world problems, with reduced hallucinations.