At OpenAI's first developer conference on November 6, 2023, the company announced GPT-4 Turbo, a major upgrade to the model that powers ChatGPT and the OpenAI API. Key highlights include a context window of up to 128,000 tokens, new modalities such as image input and text-to-speech, blended pricing up to 2.75x cheaper, and ways for anyone to build customized "GPTs" - AI assistants tailored for specific use cases. The upgrades also apply to ChatGPT itself. While not all features have fully rolled out yet, some can already be accessed via OpenAI's API playground.
How To Access Massive GPT-4 Upgrades and New ChatGPT Features
🤯 GPT-4 Turbo has 6 major capability upgrades over the previous GPT-4, including a massively increased context length of 128,000 tokens (roughly 96,000 words)!
💻 JSON mode and other controls give developers more fine-grained control over model behavior and outputs when using the API.
🖼️ New modalities expand capabilities: vision lets the model process images for captioning and classification, and a text-to-speech API generates natural-sounding audio.
💸 Input token prices drop 3x (from $0.03 to $0.01 per 1K tokens) and output prices 2x, making it much cheaper to leverage GPT-4 Turbo capabilities.
⚡️ ChatGPT now runs on GPT-4 Turbo, with all the latest capabilities and an updated knowledge cutoff of April 2023.
🤖 Anyone can now build customized 'GPTs' - AI assistants tailored for specific use cases and publish them for others.
💰 GPT creators can even sell access to their customized assistants via a new GPT marketplace and receive a revenue share.
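The JSON mode mentioned above is set through the `response_format` field of a Chat Completions request. A minimal sketch of the request payload, built as a plain dictionary so no API key is needed (the prompt content is illustrative):

```python
import json

# Sketch of a Chat Completions request that enables JSON mode.
# The payload shape follows the API announced at DevDay; the prompt is illustrative.
request = {
    "model": "gpt-4-1106-preview",               # GPT-4 Turbo preview model id
    "response_format": {"type": "json_object"},  # constrains output to valid JSON
    "messages": [
        # JSON mode requires the word "JSON" to appear somewhere in the prompt
        {"role": "system", "content": "Reply in JSON with keys 'name' and 'year'."},
        {"role": "user", "content": "Who founded OpenAI and when?"},
    ],
}

# The payload is plain data, so it serializes cleanly for an HTTP POST body.
body = json.dumps(request)
```

With JSON mode enabled, the model's reply can be parsed directly with `json.loads` instead of scraping JSON out of free-form text.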
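For the new vision modality, an image is passed as a content part alongside text in the same messages array. A sketch of that payload, with an illustrative URL and prompt:

```python
# Sketch of a vision request: the "content" field becomes a list of typed
# parts, mixing text and image_url entries. URL and prompt are illustrative.
vision_request = {
    "model": "gpt-4-vision-preview",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    "max_tokens": 100,
}
```

The typed-part structure is what lets a single request interleave several images with text instructions.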
GPT-4 Turbo's increased context length of 128,000 tokens (roughly 96,000 words) is an order-of-magnitude improvement over the original 8K-token GPT-4 that allows far more conversational and referential dialog.
Building customized GPT AI assistants allows almost anyone to create helpful bots that combine instructions, data/knowledge, and actions into easy-to-use tools.
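GPTs themselves are built through a no-code interface, but the same combination of instructions, knowledge, and actions maps onto the Assistants API announced at the same event. A hedged sketch of such a definition, with an illustrative name, file id, and a hypothetical `check_pantry` function:

```python
# Sketch of how instructions, knowledge files, and actions combine in an
# assistant definition. The shape mirrors the Assistants API announced at
# DevDay; the name, file id, and function are illustrative, not real.
assistant = {
    "model": "gpt-4-1106-preview",
    "name": "Recipe Helper",                        # illustrative name
    "instructions": "Suggest recipes using only the attached cookbook.",
    "tools": [
        {"type": "retrieval"},                      # search uploaded knowledge files
        {"type": "function",                        # a custom action
         "function": {
             "name": "check_pantry",                # hypothetical function
             "description": "List ingredients the user has on hand.",
             "parameters": {"type": "object", "properties": {}},
         }},
    ],
    "file_ids": ["file-abc123"],                    # illustrative uploaded file id
}
```

The three tool types (retrieval over uploaded data, custom functions, and built-in capabilities like code interpreter) are what let one assistant combine knowledge and actions behind a single set of instructions.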
Lower pricing and a new revenue-share model incentivize more creators and developers to build AI apps and assistants using GPT-4 Turbo and publish them via OpenAI's marketplace.
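The price drop can be quantified from the published per-1K-token rates: GPT-4 (8K) charged $0.03 input / $0.06 output, while GPT-4 Turbo charges $0.01 / $0.03. A quick cost comparison for an example request (the token counts are illustrative):

```python
# Cost comparison using the per-1K-token prices published at DevDay.
GPT4_8K = {"input": 0.03, "output": 0.06}      # USD per 1K tokens
GPT4_TURBO = {"input": 0.01, "output": 0.03}

def cost(prices: dict, input_tokens: int, output_tokens: int) -> float:
    """Total USD cost of one request at the given per-1K-token rates."""
    return (prices["input"] * input_tokens + prices["output"] * output_tokens) / 1000

# Example: a request with a 6K-token prompt and a 1K-token completion.
old = cost(GPT4_8K, 6_000, 1_000)     # 0.24 USD
new = cost(GPT4_TURBO, 6_000, 1_000)  # 0.09 USD
print(round(old / new, 2))
```

The exact savings factor depends on the input/output mix, since input prices fell 3x but output prices only 2x.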
While exciting, OpenAI's continual expansion of capabilities risks obsoleting some early AI startups; maintaining proprietary data and capabilities seems key to staying defensible.
The upgrades to ChatGPT and the expanding ways to build assistants signal that OpenAI wants ecosystem growth firmly rooted in its platform.