Sunday, September 8, 2024
Apps

OpenAI introduces new lightweight GPT-4o mini


Key Takeaways

  • GPT-4o mini is a cost-effective model outperforming GPT-3.5 Turbo
  • Available in the ChatGPT app, it replaces GPT 3.5 for Free, Plus, Team users
  • GPT-4o mini supports new safety features such as instruction hierarchy



OpenAI has introduced a new smaller version of GPT-4o that is intended to make it cheaper for developers to incorporate the company’s AI models into their apps. GPT-4o mini is a smaller version of GPT-4o that is capable of outperforming GPT 3.5 Turbo while being 60% cheaper. This makes it much more accessible to smaller developers who may previously have been put off by the cost of incorporating OpenAI’s intelligence into their apps. The new model is also already available in the ChatGPT app, replacing GPT 3.5 for Free, Plus, and Team users, with the model coming to Enterprise users next week.

Related

How I upgraded Siri with ChatGPT to get smarter AI responses on my iPhone

I can still talk to Siri, but now I get better answers generated by ChatGPT. It’s the best of both worlds.

What is GTP-4o mini?

A cost-efficient small model aimed at developers

ChatGPT Plus vs Gemini Advanced vs Microsoft Copilot Pro

Pocket-lint


AI models can be used by developers to make their apps far more intelligent. Using models such as GPT-3.5, you can create customer support chatbot apps, or search and summarize large instruction manuals. However, each query and response costs money, with OpenAI charging a fee for the use of its tech. For some developers, these costs outweighed the benefits.

GPT-4o mini is a small model that is designed to be much more cost-effective to use while still maintaining a level of performance beyond that of GPT-3.5 Turbo. In the LMSYS Chatbot Arena, in which anyone can compare two anonymous chatbots and vote for which is superior, GPT-4o mini currently has a higher score than GPT-4 and is only just behind GPT4-Turbo.


The model has a context window of 128K tokens, outputs up to 16K tokens per request, and was trained on data up to October 2023. It costs just 15 cents per million input tokens and 60 cents per million output tokens. In comparison, GPT-3.5 Turbo costs 50 cents per million input tokens, and $1.50 per million output tokens.

Can I use GPT-4o mini in the ChatGPT app?

GPT-4o mini has replaced GPT 3.5

new ChatGPT model options in the ChatGPT app

The new GPT-4o mini model is already available in the ChatGPT app. It replaces the GPT-3.5 option, which is no longer available. OpenAI says that Enterprise users will have access starting from next week. The three available models in the ChatGPT app are now GPT-4, described as the legacy model, GPT-4o, described as being best for complex tasks, and GPT-4o mini, described as being faster for everyday tasks. We may see GPT-4o mini being used by Siri at some point in the future, too.


GPT-4o mini supports text and vision in the API, with support for text, image, video, and audio inputs coming in the future.

Unlike its bigger brother, GPT-4o mini currently doesn’t offer support for anything other than text inputs; you can’t upload images or files like you can in GPT-4o. You also can’t generate images like you can with GPT-4o. According to OpenAI, “GPT-4o mini supports text and vision in the API, with support for text, image, video, and audio inputs coming in the future.” It doesn’t explicitly state that these features will be coming to the ChatGPT app, although it would seem likely.


One big difference with GPT-4o mini, however, is the introduction of new safety features. It’s the first model to utilize a new technique called instruction hierarchy to try to counteract some of the jailbreaks and prompt injections that have been used on other models. This feature appears to give greater weight to some prompts than others, so that an instruction not to reveal the system prompt, for example, would take priority over an instruction asking to reveal it. It will be interesting to see how well this works in the wild.



READ SOURCE

This website uses cookies. By continuing to use this site, you accept our use of cookies.