“OpenAI’s ChatGPT can now ‘see, hear and speak,’ or, at least, understand spoken words, respond with a synthetic voice and process images, the company announced Monday.
The update to the chatbot — OpenAI’s biggest since the introduction of GPT-4 — allows users to opt into voice conversations in ChatGPT’s mobile app and choose from five synthetic voices for the bot to respond with. Users will also be able to share images with ChatGPT and highlight areas for it to focus on or analyze (think: ‘What kinds of clouds are these?’).
The changes will roll out to paying users over the next two weeks, OpenAI said. While voice functionality will be limited to the iOS and Android apps, the image-processing capabilities will be available on all platforms.”