With ChatGPT already driving groundbreaking changes across both personal and professional spheres worldwide, OpenAI's newly released flagship model, GPT-4o, aims for more natural conversation between humans and AI.
Introduced on Monday, GPT-4o moves beyond the limitations of text-based chat, letting users interact with GPT-4-level intelligence through audio and visual inputs, such as images and real-time voice conversations. Emphasising the naturalness of human conversation, GPT-4o promises to capture its nuances, such as the tonal shifts and interruptions common in everyday speech. Of particular significance is API access that lets developers build apps at scale on their own, alongside the added ability to interpret and analyse both simple math equations and code bases.
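For developers, that access comes through OpenAI's existing chat completions API. The snippet below is a minimal sketch, assuming the official openai Python SDK and an OPENAI_API_KEY set in the environment; it asks GPT-4o to work through a simple equation, one of the capabilities highlighted at launch.

```python
from openai import OpenAI

# The SDK reads OPENAI_API_KEY from the environment by default.
client = OpenAI()

# Ask GPT-4o to interpret and solve a simple math equation.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a concise math tutor."},
        {"role": "user", "content": "Solve for x: 3x + 4 = 19. Show your steps."},
    ],
)

print(response.choices[0].message.content)
```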
To mimic human emotions
GPT-4o’s edge lies in its ability to accurately mimic human emotions via voice.
This plays a particularly important role in real-time translation. By giving the feel of interacting with an actual person, rather than the familiar 'machine-voice' of older translation tools, GPT-4o could potentially stand in for human language interpreters.
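As an illustration of the translation use case (a hedged sketch using the same text API, not OpenAI's own live-voice demo), a developer could frame GPT-4o as an interpreter with a system prompt:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical example: steer GPT-4o into an interpreter role.
# The launch demo did this with live voice; the text API mirrors the idea.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a live interpreter. Translate everything the "
                    "user says from English into Spanish, preserving tone."},
        {"role": "user", "content": "Could you tell me where the nearest train station is?"},
    ],
)

print(response.choices[0].message.content)
```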
Furthermore, in its endeavour to make advanced AI tools accessible to everyone, everywhere, OpenAI has not only done away with ChatGPT's sign-up requirement but has also introduced a desktop app.
As a plus, the new model is available through the API at half the price of GPT-4 Turbo, with rate limits five times higher. Coupled with an improved UI, the faster, smarter and cheaper model promises even bigger breakthroughs than its predecessors.