Last week, OpenAI announced that GPT-4 — their most advanced large language model (LLM) yet — is now available to paid ChatGPT+ subscribers and within the OpenAI API, which has a waitlist.
In the hours after launch, early users tweeted in amazement as they used GPT-4 to recreate the game of Pong in under 60 seconds, generate one-click lawsuits, and turn hand-drawn sketches into working websites.
Advanced input capabilities: GPT-4 is multimodal and can accept both text and image inputs. However, image input is not yet available in the ChatGPT+ version of GPT-4 or in the API; OpenAI says it is working with a partner, Be My Eyes, to prepare the feature for broader availability.
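For readers who clear the API waitlist, a text-only GPT-4 request through the `openai` Python library looks roughly like the sketch below. The model name, prompt, and helper function are illustrative; the snippet only builds the request payload and keeps the network call commented out, since running it requires an API key and GPT-4 access.

```python
# Sketch of a text-only GPT-4 chat request via the OpenAI Python library.
# Assumes API access (currently waitlisted) and OPENAI_API_KEY in the environment.
import os

# Build the chat payload. Messages are text-only here, since image input
# is not yet exposed in the API.
payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the rules of Pong in one sentence."},
    ],
}

def send(payload):
    """Send the request if the openai package and an API key are available."""
    import openai  # pip install openai
    openai.api_key = os.environ["OPENAI_API_KEY"]
    return openai.ChatCompletion.create(**payload)

# With access, you would uncomment these two lines:
# response = send(payload)
# print(response["choices"][0]["message"]["content"])
```

The same `messages` format works with `"model": "gpt-3.5-turbo"`, so code written against the chat endpoint can switch models with a one-line change once GPT-4 access is granted.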
“We’re also open-sourcing OpenAI Evals, our framework for automated evaluation of AI model performance, to allow anyone to report shortcomings in our models to help guide further improvements,” said the announcement.
Superior outputs and user experience: OpenAI used exams like the bar exam and the LSAT to show that GPT-4 outperforms GPT-3.5, but stressed that it is not fully reliable and will still “hallucinate facts and make reasoning errors.”
“The difference comes out when the complexity of the task reaches a sufficient threshold—GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5.”
OpenAI, Microsoft, and monetization: Microsoft revealed that Bing AI has been using a tuned version of GPT-4 all along. By licensing GPT-4 for integration into Microsoft products, collaborating on joint research projects, and leveraging Microsoft's cloud infrastructure, OpenAI is positioned to generate revenue while expanding its reach.
Have you tried GPT-4? Let the community know if you think it’s worth the 20 bucks a month for instant access to try out the new model’s text-only input capabilities.