OpenAI launches GPT-4.5, its next general-purpose large language model
- OpenAI announced the research preview of GPT-4.5, a general-purpose large language model, which is intended for everyday tasks like writing and problem-solving.
- GPT-4.5 has a lower hallucination rate and improved emotional intelligence compared to its predecessors, GPT-4o and o1.
- Sam Altman noted that GPT-4.5 would initially be available for Pro users and is planned to roll out to Plus, Team, Edu, and Enterprise users over the following weeks.
- Microsoft's CEO Satya Nadella confirmed that GPT-4.5 is also available in preview through Azure AI Foundry, highlighting the collaboration between OpenAI and Microsoft.
166 Articles
OpenAI releases GPT-4.5, a new version of its AI model: how it works and how it differs from previous versions
The company reported that the model produces fewer hallucinations, giving users greater confidence in the accuracy of its responses.
A reporter tries out the popular agent AI "Operator"
Agent AI services that carry out jobs and tasks on behalf of humans are expected to become widespread in 2025. What can they do, and how are they useful? Will they bring surprises like the release of ChatGPT did? Here, we tried out "Operator", the agent AI service launched in the United States by OpenAI, and assessed its capabilities. Agent AI is also referred to as an "AI agent".
Sam Altman’s OpenAI launches GPT-4.5 with fewer ‘hallucinations’ as AI race heats up
Sam Altman’s OpenAI released GPT-4.5, an upgraded version of the artificial intelligence model that powers ChatGPT, to select users on Thursday as it looks to stave off challenges from rivals like Elon Musk’s xAI.
“It’s a lemon”—OpenAI’s largest AI model ever arrives to mixed reviews
The verdict is in: OpenAI's newest and most capable traditional AI model, GPT-4.5, is big, expensive, and slow, providing marginally better performance than GPT-4o at 30x the cost for input and 15x the cost for output. The new model seems to prove that longstanding rumors of diminishing returns in training unsupervised-learning LLMs were correct and that the so-called "scaling laws" cited by many for years have possibly met their natural end. An…
Coverage Details
Bias Distribution
- 40% of the sources lean Left and 40% are Center