Introduction to GPT-4o
This notebook is adapted from the OpenAI Cookbook and enhanced with Portkey observability and features.
The GPT-4o Model
GPT-4o (“o” for “omni”) is designed to handle a combination of text, audio, and video inputs, and can generate outputs in text, audio, and image formats.
Current Capabilities
Currently, the API supports `{text, image}` inputs only, with `{text}` outputs, the same modalities as `gpt-4-turbo`. Additional modalities, including audio, will be introduced soon.
This guide will help you get started with using GPT-4o for text, image, and video understanding.
Getting Started
Install OpenAI SDK for Python
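If you’re running this as a notebook, a cell like the following installs the SDKs used here (the `portkey-ai` package is assumed because this notebook routes requests through Portkey):

```python
%pip install --upgrade openai portkey-ai
```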
Configure the OpenAI Client
First, grab your OpenAI API key here. Now, let’s start with a simple input to the model for our first request. We’ll use both `system` and `user` messages, and we’ll receive a response from the `assistant` role.
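A minimal sketch of the setup, assuming you route requests through Portkey’s gateway for observability; the `OPENAI_API_KEY` and `PORTKEY_API_KEY` environment variables are placeholders, and the prompt is illustrative:

```python
import os

from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# Route OpenAI requests through Portkey's gateway so every call is logged and traced.
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        provider="openai",
        api_key=os.environ["PORTKEY_API_KEY"],  # placeholder: your Portkey API key
    ),
)

MODEL = "gpt-4o"

# First request: a system message plus a user message; the reply comes back
# from the assistant role.
completion = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello! Could you solve 2 + 2 for me?"},
    ],
)
print(completion.choices[0].message.content)
```

With this client in place, the rest of the notebook’s calls are plain OpenAI SDK calls, and each one shows up in your Portkey logs.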
Image Processing
GPT-4o can directly process images and take intelligent actions based on the image. We can provide images in two formats:
- Base64 Encoded
- URL
Let’s first view the image we’ll use, then try sending it to the API both as a Base64 encoding and as a URL link.
Base64 Image Processing
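A sketch of the Base64 path, reusing the `client` and `MODEL` from above; `IMAGE_PATH` and the question are placeholders for the image shown earlier:

```python
import base64

IMAGE_PATH = "triangle.png"  # placeholder: path to the image shown above


def encode_image(image_path):
    # Read the image bytes and return them as a Base64 string
    with open(image_path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


base64_image = encode_image(IMAGE_PATH)

response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": "You are a helpful assistant that responds in Markdown."},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{base64_image}"},
                },
            ],
        },
    ],
)
print(response.choices[0].message.content)
```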
URL Image Processing
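The URL variant only changes the `image_url` content part; the URL below is a placeholder for any publicly accessible image:

```python
# Same request shape, but pointing the model at a hosted image instead of Base64 data.
response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "system", "content": "You are a helpful assistant that responds in Markdown."},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/path/to/image.png"},
                },
            ],
        },
    ],
)
print(response.choices[0].message.content)
```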
Video Processing
While it’s not possible to send a video directly to the API, GPT-4o can understand videos if you sample frames and provide them as images. It performs better at this task than GPT-4 Turbo.
Since GPT-4o in the API does not yet support audio-in (as of May 2024), we’ll use a combination of GPT-4o and Whisper to process both the audio and the visuals of a provided video, and showcase two use cases:
- Summarization
- Question and Answering
Setup for Video Processing
We’ll use two Python packages for video processing: opencv-python and moviepy. These require ffmpeg, so make sure to install it beforehand. Depending on your OS, you may need to run `brew install ffmpeg` or `sudo apt install ffmpeg`.
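In a notebook, the two Python packages can be installed with a cell like this (ffmpeg itself still has to be installed at the OS level as noted above):

```python
%pip install opencv-python moviepy
```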
Process the video into two components: frames and audio
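One way to do this, sketched with cv2 and moviepy: sample a frame every couple of seconds, Base64-encode each frame for the API, and write the audio track out to an MP3. `VIDEO_PATH` and the sampling rate are placeholders.

```python
import base64

import cv2
from moviepy.editor import VideoFileClip  # moviepy 1.x import path

VIDEO_PATH = "keynote_recap.mp4"  # placeholder: path to your video


def process_video(video_path, seconds_per_frame=2):
    # Sample frames with OpenCV and Base64-encode them for the API
    base64_frames = []
    video = cv2.VideoCapture(video_path)
    fps = video.get(cv2.CAP_PROP_FPS)
    frames_to_skip = int(fps * seconds_per_frame)
    total_frames = int(video.get(cv2.CAP_PROP_FRAME_COUNT))

    frame_idx = 0
    while frame_idx < total_frames:
        video.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
        success, frame = video.read()
        if not success:
            break
        _, buffer = cv2.imencode(".jpg", frame)
        base64_frames.append(base64.b64encode(buffer).decode("utf-8"))
        frame_idx += frames_to_skip
    video.release()

    # Extract the audio track with moviepy (requires ffmpeg)
    audio_path = video_path.rsplit(".", 1)[0] + ".mp3"
    clip = VideoFileClip(video_path)
    clip.audio.write_audiofile(audio_path, bitrate="32k")
    clip.close()

    return base64_frames, audio_path


base64_frames, audio_path = process_video(VIDEO_PATH, seconds_per_frame=2)
print(f"Extracted {len(base64_frames)} frames and audio at {audio_path}")
```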
Example 1: Summarization
Now that we have both the video frames and the audio, let’s run a few different tests to generate a video summary and compare the results of using the model with different modalities. We should expect the summary generated with context from both the visual and audio inputs to be the most accurate, since the model can use the entire context from the video.
- Visual Summary
- Audio Summary
- Visual + Audio Summary
Visual Summary
The visual summary is generated by sending the model only the frames from the video. With just the frames, the model is likely to capture the visual aspects, but will miss any details discussed by the speaker.
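A sketch of the frames-only request, reusing `base64_frames` from the processing step; the prompt wording is illustrative:

```python
# Send only the sampled frames: no transcript, no audio.
response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {
            "role": "system",
            "content": "You are generating a video summary. Respond in Markdown.",
        },
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "These are the frames from the video. Please summarize it."},
                *[
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{frame}", "detail": "low"},
                    }
                    for frame in base64_frames
                ],
            ],
        },
    ],
)
print(response.choices[0].message.content)
```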
The model is able to capture the high-level aspects of the video visuals, but misses the details provided in the speech.
Audio Summary
The audio summary is generated by sending the model the audio transcript. With just the audio, the model is likely to bias towards the audio content, and will miss the context provided by the presentations and visuals.
`{audio}` input for GPT-4o isn’t currently available but will be coming soon! For now, we use our existing `whisper-1` model to process the audio.
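A sketch of the transcription-plus-summary flow, reusing `audio_path` from the processing step. It assumes the client set up earlier also proxies the audio endpoint; if not, a plain OpenAI client works the same way. The prompts are illustrative.

```python
# Transcribe the extracted audio with whisper-1, then summarize the transcript.
with open(audio_path, "rb") as audio_file:
    transcription = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {
            "role": "system",
            "content": "You are generating a transcript summary. Respond in Markdown.",
        },
        {
            "role": "user",
            "content": f"Summarize this video transcript: {transcription.text}",
        },
    ],
)
print(response.choices[0].message.content)
```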
The audio summary is likely to be biased towards the content discussed during the speech, and it comes out with much less structure than the visual summary.
Visual + Audio Summary
The visual + audio summary is generated by sending the model both the frames and the audio transcript from the video at once. When sending both of these, the model is expected to produce a better summary, since it can perceive the entire video at once.
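A sketch of the combined request: the same frames as before, plus the Whisper transcript, in a single user message (prompt wording is illustrative):

```python
# Send the frames and the transcript together in one request.
response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {
            "role": "system",
            "content": "You are generating a video summary. Respond in Markdown.",
        },
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "These are the frames from the video."},
                *[
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{frame}", "detail": "low"},
                    }
                    for frame in base64_frames
                ],
                {"type": "text", "text": f"The audio transcription is: {transcription.text}"},
            ],
        },
    ],
)
print(response.choices[0].message.content)
```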
After combining both the video and audio, you’ll be able to get a much more detailed and comprehensive summary for the event which uses information from both the visual and audio elements from the video.
Example 2: Question and Answering
For the Q&A, we’ll use the same concept as before to ask questions of our processed video while running the same three tests to demonstrate the benefit of combining input modalities (a sketch of the combined request follows the list below):
- Visual Q&A
- Audio Q&A
- Visual + Audio Q&A
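As an example of the strongest variant, here’s a sketch of the visual + audio Q&A request, reusing the frames and transcript from earlier; the frames-only and transcript-only variants simply drop the corresponding content parts. The question string matches the comparison discussed below and is otherwise illustrative.

```python
QUESTION = "Question: Why did Sam Altman give an example about raising windows and turning the radio on?"

# Visual + audio Q&A: give the model the frames, the transcript, and the question.
qa_response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {
            "role": "system",
            "content": "Use the video frames and the transcription to answer the question.",
        },
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "These are the frames from the video."},
                *[
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{frame}", "detail": "low"},
                    }
                    for frame in base64_frames
                ],
                {"type": "text", "text": f"The audio transcription is: {transcription.text}"},
                {"type": "text", "text": QUESTION},
            ],
        },
    ],
)
print(qa_response.choices[0].message.content)
```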
Comparing the three answers, the most accurate answer is generated by using both the audio and the visuals from the video. Sam Altman did not discuss raising the windows or turning the radio on during the keynote, but referenced an improved capability for the model to execute multiple functions in a single request while those examples were shown behind him.
Conclusion
Integrating multiple input modalities, such as audio, visual, and text, significantly enhances the model’s performance on a diverse range of tasks. This multimodal approach allows for more comprehensive understanding and interaction, mirroring more closely how humans perceive and process information.