OpenAI Will Not Release GPT-5 This Year But Some Very Good Releases Are Coming, Says CEO Sam Altman
Is OpenAI’s GPT-5 an Ambitious Leap or a Costly Misstep?
Reports a few weeks ago suggested that the GPT-5 upgrade we’ve been waiting for could be rolled out in December. In the meantime, OpenAI’s ChatGPT platform and Sora video generator have gone offline and are not responding to user queries. Knowing I have access to these tools expands my willingness to use them, but we need to move from the technical aspects of these systems to what they actually do.
Altman also addressed a question regarding OpenAI’s ongoing loss of some of its co-founders and top researchers, arguing that in many ways the current setup works better than having a founding team of 10 people (less coordination overhead, for example). OpenAI, Anthropic, and Google have been in an AI arms race, each working to unlock the next major AI breakthrough. While OpenAI has continued to iterate on GPT-4, it no longer has a dominant lead, with Anthropic’s Claude going toe-to-toe with ChatGPT and besting it at times. In a Reddit AMA with OpenAI’s Sam Altman, Kevin Weil, Srinivas Narayanan, and Mark Chen, Altman pointed to compute constraints as the reason newer AI models have been slower to arrive.
And even that is more of a security risk than something that would compel me to upgrade my laptop. The report notes that training runs for large language models can cost tens of millions of dollars and require hundreds of chips to run simultaneously. Some of the AI researchers and developers out there worry about the dangers of unsafe AI. The faster these AI innovations drop, the riskier it might be, as some companies might be pressured to prioritize profitability over safety.
For all that we’re a year into the AI PC life cycle, the artificial intelligence software side of the market is still struggling to find its footing. Few AI features and applications are truly unique, and only a handful are compelling enough to justify the AI PC label. Sure, AI PCs may have Neural Processing Units with some impressive performance, but outside of getting you better battery life and better hardware acceleration, there hasn’t been a “Killer App” for the AI market. The New York Times indicates that the cost of housing and running OpenAI’s LLM has started to sour the relationship between OpenAI and Microsoft. The AI company has requested access to more of Microsoft’s servers, particularly those housing the powerful Nvidia H100 GPUs.
That supposedly coincided with OpenAI researchers celebrating the end of Orion’s training. Llama-3 will also be multimodal, which means it is capable of processing and generating text, images, and video. Therefore, it will be capable of taking an image as input and providing a detailed description of the image’s content. Equally, it can automatically create a new image that matches the user’s prompt, or text description. Altman specifically said that he would not be releasing GPT-5 this year and would instead focus on shipping OpenAI o1. That model, previously called ‘Project Strawberry,’ differs from other models by taking a more methodical, slower approach.
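To make the multimodal idea concrete, here is a minimal sketch of the text-in, image-out direction using OpenAI’s existing images endpoint via the official Python SDK. It is illustrative only: Llama-3 is not served through this SDK, the model shown is today’s DALL-E 3, and the prompt and size are arbitrary placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Text in, image out: generate a picture from a plain-language prompt.
result = client.images.generate(
    model="dall-e-3",  # currently available image model; future models are speculation
    prompt="A bowl of ripe strawberries on a wooden table in soft morning light",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # URL of the generated image
```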
OpenAI CEO Sam Altman says new AI model is taking a while because ‘we can’t ship’ as quickly as hoped (CNBC, 31 October 2024).
They require an order of magnitude more computing operations, meaning many more chips and much more power. The supply of chips, data centres, power, and financial capital could be a bottleneck for the most advanced frontier models over the next four to five years. So I reckon a ‘generation’ is more likely to look like Schmidt’s four years. Sam Altman’s confirmation that GPT-5 won’t launch in 2024 reflects a measured approach to AI development. This decision, while affecting market expectations, underscores a commitment to technological integrity over rapid deployment.
Reports initially put the release date sometime in the summer of 2024, but once that passed, the goalposts were moved to this fall. On the other hand, former Chief Technology Officer Mira Murati said in an interview this past June (before leaving the company) that the “next-gen” model was not due out for another year and a half. In a recent podcast with Lex Fridman, OpenAI CEO Sam Altman said that the company will release “an amazing new model” this year. He didn’t state that it was GPT-5, but it certainly corroborates the Business Insider report that GPT-5 is likely coming this year.
Instead, the company intends to hand the new model over to select businesses and partners, who will then use it as a platform to build their own products and services. This is the same strategy that Nvidia is pursuing with its NVLM 1.0 family of large language models (LLMs). OpenAI has released some major AI models since it first launched ChatGPT, and its most powerful one is GPT-4o.
What Industries Will Benefit the Most from GPT-5?
It might generate quizzes, essay prompts, and even multimedia content for interactive teaching. GPT-5’s ability to adapt to context and user needs could help educators and students in ways that were once unimaginable. Researchers could also rely on GPT-5 to sift through vast amounts of data, identifying trends and correlations that are not immediately visible to humans. OpenAI has hinted at significant improvements in GPT-5, making it one of the most anticipated updates in AI technology.
OpenAI announced publicly back in May that training on its next-gen frontier model “had just begun.” As to when it will launch, however, we’re still in the dark. Sam Altman revealed that ChatGPT’s underlying models have become more complex, hindering OpenAI’s ability to work on as many updates in parallel as it would like to. Apparently, computing power is another big hindrance, forcing OpenAI to face many “hard decisions” about which great ideas it can execute.
OpenAI ended its “12 Days of ChatGPT” announcements on Friday with a bang, unveiling the next-gen reasoning model that will power ChatGPT, called o3. That’s where Altman said OpenAI will launch o3-mini around the end of January, with the full o3 model to follow shortly after. This kind of AI will affect the way people work, learn, receive healthcare, and communicate with the world and each other. It will make businesses and organisations more efficient and effective, more agile to change, and so more profitable.
With GPT-5, it’d be nice to see that change with more integrations for other services. Given that plugin support seems to be waning in favor of custom GPTs, I’m expecting that the list of advantages that OpenAI has is beginning to dwindle, especially given that Copilot has custom GPTs, too. I’d love to see OpenAI partner with other companies to introduce exclusive features. GPT-4 with vision is a model that already exists, and it can interpret visual data to then use in decision-making.
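Image understanding already works through the existing chat interface. Below is a minimal sketch, assuming the current OpenAI Python SDK and today’s GPT-4o; the image URL and question are placeholders, and whatever interface GPT-5 exposes is unknown.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Image in, text out: ask a vision-capable model to interpret a picture.
response = client.chat.completions.create(
    model="gpt-4o",  # existing vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image and flag anything unusual."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```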
- The WSJ’s report also nodded to the sheer number of high-profile executive exits that have plagued OpenAI this year, including co-founder and Chief Scientist Ilya Sutskever and Chief Technology Officer Mira Murati.
- Answering a question about the timeline for GPT-5 or its equivalent’s release, Altman said, “We have some very good releases coming later this year!”
- In the best case scenario, the sources said, Orion outperforms OpenAI’s current offerings, but hasn’t progressed enough to justify the massive cost of running the new model.
- Considering that most strawberries in that image are ripe, Altman might be telling the world that Strawberry could launch soon.
- The team is also looking into improving the model’s performance post-training.
OpenAI and Microsoft declined to comment on the U.S. newspaper article. In November, Altman said the startup would not release anything under the GPT-5 name in 2024.
Those techniques could involve using synthetic training data, such as what Nvidia’s Nemotron family of models can generate. The team is also looking into improving the model’s performance post-training. OpenAI’s response to these challenges demonstrates the complexity of modern AI development. Moving beyond traditional internet-based training data, the company initiated an innovative approach to dataset creation. While this methodology promises improved results, it has significantly extended the development timeline. This means the AI can autonomously browse the web, conduct research, plan, and execute actions based on its findings.
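The synthetic-data idea mentioned above is easy to sketch, although what OpenAI or Nvidia actually do is not public. The example below simply uses an existing model through the OpenAI Python SDK to generate labelled math problems and saves them as JSONL, a common fine-tuning format; the topics, prompt, and file name are invented for illustration.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

topics = ["unit conversion", "basic algebra", "percentage discounts"]  # illustrative
records = []

for topic in topics:
    # Ask an existing model to produce a problem/solution pair in JSON form.
    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in for whatever generator model is really used
        messages=[{
            "role": "user",
            "content": (
                f"Write one short {topic} word problem and its worked solution "
                "as JSON with keys 'problem' and 'solution'."
            ),
        }],
        response_format={"type": "json_object"},  # request machine-readable output
    )
    records.append(json.loads(response.choices[0].message.content))

# JSONL is a common format for fine-tuning datasets.
with open("synthetic_math.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```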
OpenAI has recently been in the spotlight with its ambitious Project Strawberry, which aims to bring AI closer to human-level reasoning. As detailed by various reports, including a recent one from Reuters, Project Strawberry represents a significant leap in AI capabilities. This article delves into what Project Strawberry is, its potential implications, and whether it signals the arrival of GPT-5. However, while the model is expected to edge closer to human-level intelligence, experts caution that it still falls short of true AGI. GPT-5, or Orion, promises to outperform its predecessor, GPT-4o, in several key areas, including a larger context window, expanded knowledge base, and superior reasoning abilities.
When another user asked about the value that SearchGPT, or the ChatGPT Search feature, brings, Altman said that he finds it to be a faster and easier way to get to the information he needs. He also highlighted that the web search functionality will be more useful for complex research. “I also look forward to a future where a search query can dynamically render a custom web page in response,” he added. Last year, Shane Legg, Google DeepMind’s co-founder and chief AGI scientist, told Time Magazine that he estimates there to be a 50% chance that AGI will be developed by 2028. Dario Amodei, co-founder and CEO of Anthropic, is even more bullish, claiming last August that “human-level” AI could arrive in the next two to three years.
And while there’s ongoing work to address that shortcoming, there are plenty of situations where authorship, trust, and accountability really matter, and a ChatGPT summary without citations won’t suffice. Given the talk of OpenAI pitching partnerships with publishers, the AI biz may be looking to show off how it can summarize current news content in its chatbot replies, which would be search-adjacent. There’s been a lot of talk lately that the major GPT-5 upgrade, or whatever OpenAI ends up calling it, is coming to ChatGPT soon. As you’ll see below, a Samsung exec might have used the GPT-5 moniker in a presentation earlier this week, even though OpenAI has yet to make this designator official. The point is the world is waiting for a big ChatGPT upgrade, especially considering that Google also teased big Gemini improvements that are coming later this year.
The internet, which OpenAI and others mined for data during the training phases of previous AI models, is finite. Called Orion internally, GPT-5 has been in development for 18 months. It was initially expected to drop in 2024, but OpenAI encountered unexpected delays while burning through cash.
Regardless of what product names OpenAI chooses for future ChatGPT models, the next major update might be released by December. It will be different from GPT-4o and o1, and could be more powerful. But this GPT-5 candidate, reportedly called Orion, might not be available to regular users like you and me, at least not initially. The technology behind these systems is known as a large language model (LLM). These are artificial neural networks, a type of AI designed to mimic the human brain. They can generate general purpose text, for chatbots, and perform language processing tasks such as classifying concepts, analysing data and translating text.
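Those task descriptions map onto code developers already write today. As a toy illustration, here is what text generation, classification, and translation look like with small open models via the Hugging Face transformers library; this is a stand-in for the general technique, not how OpenAI’s own models are built or accessed.

```python
from transformers import pipeline

# Text generation: the core ability behind chatbots.
generator = pipeline("text-generation", model="gpt2")
print(generator("The next generation of AI models will", max_new_tokens=20)[0]["generated_text"])

# Classifying concepts: here, the sentiment of a sentence.
classifier = pipeline("text-classification")
print(classifier("The GPT-5 delay is disappointing but understandable."))

# Translating text from English to French.
translator = pipeline("translation_en_to_fr")
print(translator("Large language models are artificial neural networks."))
```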
The delay in releasing Orion also underscores a growing emphasis on responsible AI. OpenAI CEO Sam Altman has previously stressed the importance of ensuring that new models are safe and beneficial before being released to the public.
That’s especially true now that Google has announced its Gemini language model, the larger variants of which can match GPT-4. In response, OpenAI released a revised GPT-4o model that offers multimodal capabilities and an impressive voice conversation mode. While it’s good news that the model is also rolling out to free ChatGPT users, it’s not the big upgrade we’ve been waiting for. The technical hurdles facing GPT-5’s development stem from fundamental challenges in its training process. Initial training rounds exposed unexpected limitations in the model’s ability to process and synthesize information effectively.
This approach echoes how previous models like GPT-4o were handled, with enterprise solutions taking priority over consumer access. The WSJ also reports that rather than just relying on publicly available data and licensing deals, OpenAI has also hired people to create fresh data by writing code or solving math problems. It’s also using synthetic data created by another of its models, o1. OpenAI’s Orion has been in development for 18 months, and the process has included at least two major training runs. These runs, which involve training the model on vast datasets, are designed to refine and improve its capabilities.
GPT-5 has been a hot topic for quite a while now, and OpenAI CEO Sam Altman recently made comments regarding the future of the GPT model on Lex Fridman’s podcast. In that podcast, he stated that GPT-4 “kind of sucks” now and that he’s looking forward to what comes next. He refused to refer to it as “GPT-5”, but a recent report from Business Insider did name it as such, with people familiar with the LLM referring to it as “materially better” when compared to GPT-4.
OpenAI CEO Sam Altman predicted that GPT-5 will be a significant leap forward. GPT-5 should enable new scientific discoveries and perform routine tasks such as booking appointments or flights. Researchers hope it will make fewer mistakes than current AI, or at least be able to acknowledge doubt.
Following past trends, we can expect GPT-5 to become more accurate in its responses, as it will be trained on more data. Generative AI models depend on training data to fuel the answers they provide. Therefore, the more data a model is trained on, the better the model’s ability to generate coherent content, leading to better performance. What’s more, The Verge reports that Microsoft is planning to host the new model beginning in November. There is no confirmation yet that Orion will actually be called GPT-5 when it is released, though the model is reportedly considered by its engineers to be GPT-4’s successor.
We’ve been on the 4-series of GPT models for a while, and people are starting to wonder when the next major upgrade is coming. Well, if you’re on the edge of your seat waiting for GPT-5, you’re going to be disappointed. This is all speculation based on a series of reports from various sources that have all heard the same thing. Reuters said a few days ago that AI companies face delays in training new large language models, identifying a key problem: the supply of fresh training data is running out.
Google’s Gemini is gaining attention for how well it fits into everyday tools. One of its standout features is the inclusion of sources for responses, and it connects seamlessly with Google Workspace and Search, making it easy for users to adopt. While the details about GPT-5’s pricing have not been officially announced, examining current trends provides some likely scenarios. In industries like retail, travel, and banking, this level of automation could cut costs while maintaining high service standards.
OpenAI almost certainly won’t be complacent with being second-best, and I suspect the company will be pushing further and further towards its goal of eventual AGI. We assumed that the company was going to release it in May, but we eventually got GPT-4o. GPT-4 is a major leap over GPT-3, and the company released the former not too long after ChatGPT’s big unveil. Since then, we’ve gotten several variants of GPT-4 like GPT-4 Turbo, GPT-4o, and GPT-4o mini.
GPT-5 is also expected to show higher levels of fairness and inclusion in the content it generates, thanks to additional efforts by OpenAI to reduce biases in the language model. While OpenAI is working arduously to evolve its technology, Sam Altman has shared the company’s vision of making AI capable enough to act as “agents” on users’ behalf and perform tasks autonomously. If the company achieves this, it could mark another giant breakthrough, taking the technology beyond merely providing information to accomplishing tasks with minimal human input. When these training runs began, researchers reportedly found that the data in question “wasn’t as diversified as they had thought,” which limited how much Orion would learn.
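OpenAI hasn’t published how the “agents” vision mentioned above will work, but a building block already exists in its API as function calling. The sketch below is a minimal, hypothetical agent loop under that assumption: the book_appointment tool is an invented stub, gpt-4o stands in for whatever model eventually ships the feature, and a real system would need error handling and user confirmation.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# One hypothetical tool the model may call; the schema tells it what arguments to supply.
tools = [{
    "type": "function",
    "function": {
        "name": "book_appointment",
        "description": "Book an appointment on a given date and time.",
        "parameters": {
            "type": "object",
            "properties": {
                "date": {"type": "string", "description": "YYYY-MM-DD"},
                "time": {"type": "string", "description": "HH:MM, 24-hour clock"},
            },
            "required": ["date", "time"],
        },
    },
}]

def book_appointment(date: str, time: str) -> str:
    # Stub: a real integration would call a calendar or booking service here.
    return f"Confirmed appointment on {date} at {time}."

messages = [{"role": "user", "content": "Book me a dentist appointment on 2025-01-14 at 10:00."}]

# Minimal agent loop: let the model decide when to call the tool, feed the result back,
# and stop once it replies in plain text (with a hard cap so the loop always ends).
for _ in range(5):
    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in; a future agentic model would slot in here
        messages=messages,
        tools=tools,
    )
    msg = response.choices[0].message
    if not msg.tool_calls:
        print(msg.content)
        break
    messages.append(msg)
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = book_appointment(**args)
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
```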
Training GPT-5 might cost up to $500 million per run, and so far the results reportedly haven’t been exciting enough to justify that expense. Training GPT-4 cost the company over $100 million, according to Altman. Sam Altman & Co. detailed the capabilities of the o3 models during a short live stream on Friday.