ChatGPT now laughs, quite creepy indeed

  • mayask
  • AI
  • September 24, 2024
  • 0 Comments

ChatGPT 4o Update: A Multimodal Chatbot Revolution


OpenAI

We all saw it coming, and the day has finally arrived – ChatGPT is gradually transforming into your friendly neighborhood AI, capable of laughing eerily along with you when you say something funny or offering an “aww” when you’re being kind. And that’s just the tip of the iceberg of today’s announcements. OpenAI recently held a special Spring Update event, during which it unveiled its latest large language model (LLM) – GPT-4o. With this update, ChatGPT gains a desktop app, becomes faster and more capable, and, most importantly, turns fully multimodal.

The event began with an introduction by Mira Murati, OpenAI’s CTO, who revealed that these updates aren’t just for paid users – GPT-4o is being rolled out across the platform for both free users and paid subscribers. “The remarkable thing about GPT-4o is that it brings the intelligence of GPT-4 to everyone, including our free users,” Murati said.


GPT-4o is said to be significantly faster, but the truly impressive part is that it takes the model’s capabilities to a whole new level, spanning text, vision, and audio. Developers can also access it through OpenAI’s API, where it is reportedly up to twice as fast and 50% cheaper than GPT-4 Turbo, with rate limits five times higher.
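For developers, GPT-4o is exposed through the same Chat Completions endpoint as earlier models. As a rough illustration, a multimodal request mixing text and an image could be built like this – the prompt and image URL are placeholders, not examples from OpenAI:

```python
import json

# Sketch of a multimodal Chat Completions request for GPT-4o.
# The prompt text and image URL below are illustrative placeholders.
payload = {
    "model": "gpt-4o",
    "messages": [
        {
            "role": "user",
            "content": [
                # A text part and an image part in one user message,
                # following the documented content-part message format.
                {"type": "text", "text": "What emotion does this face convey?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/face.jpg"},
                },
            ],
        }
    ],
}

# Actually sending it requires an API key; with the official `openai` package:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   response = client.chat.completions.create(**payload)
#   print(response.choices[0].message.content)

print(json.dumps(payload, indent=2))
```

The request body is the only model-specific piece: switching from GPT-4 Turbo to GPT-4o is just a change of the `model` string, which is why the price and rate-limit differences matter more than any code migration.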


Alongside the new model, OpenAI is launching a ChatGPT desktop app along with a refresh of the website’s user interface. The aim is to make communicating with the chatbot as seamless as possible. “We are envisioning the future of interaction between ourselves and machines, and we believe that GPT-4o is truly shifting this paradigm into the future of collaboration, where the interaction becomes much more natural,” Murati said.

To that end, the new enhancements – which Murati showcased with the help of OpenAI’s Mark Chen and Barret Zoph – indeed seem to make the interaction much more seamless. GPT-4o is now capable of analyzing videos, images, and speech in real time and accurately pinpointing emotions in all three. This is particularly remarkable in ChatGPT Voice, which has become so human-like that it almost skates on the edge of the uncanny valley.

Say “hi” to ChatGPT and you get an enthusiastic, friendly response with just the slightest hint of a robotic undertone. When Mark Chen told the AI that he was conducting a live demo and needed help calming down, it responded sympathetically and suggested he take a few deep breaths. It also noticed when his breaths were too rapid – more like panting – and guided Chen through the correct way to breathe, first cracking a small joke: “You’re not a vacuum cleaner.”

The conversation flows naturally, as you can now interrupt ChatGPT without having to wait for it to finish, and the responses come quickly without any awkward pauses. When asked to tell a bedtime story, it responded to requests regarding the tone of voice, ranging from enthusiastic to dramatic to robotic. The second half of the demo showcased ChatGPT’s ability to accurately read code, assist with math problems through video, and read and describe the content on the screen.

The demo wasn’t flawless – the bot seemed to cut off at times, and it was difficult to tell whether this was because someone else was speaking or due to latency. Still, it sounded about as lifelike as one could expect from a chatbot, and its ability to read human emotion and respond in kind is both thrilling and anxiety-provoking. ChatGPT laughing wasn’t something I expected to hear this week, but here we are.

GPT-4o, with its multimodal design, and the desktop app will roll out gradually over the next few weeks. A few months ago, Bing Chat told us it wanted to be human; now, we are about to get a version of ChatGPT that might be as close to human as anything we have seen since the AI boom began.
