Every update to ChatGPT, Gemini, Perplexity, Copilot and Google Search can change how your brand appears in AI-generated answers. We track release notes, API changes and documentation updates across major LLM platforms so you don't have to.
Total updates tracked: 475 (Gemini 264, ChatGPT 233, Perplexity 12, Google AI 9, Copilot 6).
August 7, 2025 — GPT-5
GPT-5 is slowly rolling out to all users on ChatGPT Plus, Pro, Team, and Free plans worldwide across web, mobile, and desktop. GPT-5 will be available to ChatGPT Enterprise and Edu plans soon.
GPT-5 in ChatGPT is our next flagship model and the new default for all logged-in users. It simplifies ChatGPT to a single auto-switching system that brings together the best of our previous models into a smart, fast model.
GPT-5 is available to all ChatGPT tiers. Users on paid tiers (Plus, Pro, and Team) have access to the model picker, which lets you manually select GPT-5 or GPT-5 Thinking. Pro and Team users also have access to GPT-5 Thinking Pro, which takes a bit longer to think but delivers the accuracy you need for complex tasks.
Learn more about GPT-5 in ChatGPT.
You can now choose from four distinct personalities or use the Default personality in your Customize ChatGPT settings. Default is the standard ChatGPT style: clear, neutral, and adaptable. The other personalities each have their own style and tone, described below.
Cynic – Sarcastic and dry, delivers blunt help with wit. Often teases, but provides direct, practical answers when it matters.
Robot – Precise, efficient, and emotionless, delivering direct answers without extra words.
Listener – Warm and laid-back, reflecting your thoughts back with calm clarity and light wit.
Nerd – Playful and curious, explaining concepts clearly while celebrating knowledge and discovery.
Please note that these personalities will not apply to Voice mode.
You can now set an accent color that applies to elements in ChatGPT, including your conversation bubbles, the Voice button, and highlighted text.
On web: Click your profile icon at the bottom left, select Settings, go to the General tab, and choose an option from the Accent color drop-down.
On mobile (iOS and Android): Tap your profile icon at the bottom, go to Personalization, then select Color Scheme to pick your accent color.
Today we’re rolling out improvements to Voice Mode to make it more accessible and useful for everyone. It also now works with custom GPTs. We’re expanding access with near-unlimited use for Plus users and hours each day for Free users. To simplify the experience, Standard Voice Mode will be retired in 30 days, unifying all users onto our latest voice experience.
For paid users, Voice now adapts to your instructions, adjusting its speaking style (length, speed, tone, and more) to fit the moment.
Study effectively with helpful exam prep tools
What: Dive deeper into your studies by generating unlimited custom quizzes. You can also create flashcards and study guides based on your quiz, or other class materials, for an easy way to review and reinforce your learning. This experience is open to all users over the age of 18.
Why: Effective studying is not one-size-fits-all. We want to provide a variety of ways for learners to master the topics they are studying, whether that is sitting down for a long, challenging quiz, quickly reviewing concepts with flashcards, or taking a comprehensive study guide with them on the go.
Understand complex topics faster with integrated visuals and YouTube videos
What: Gemini now provides a richer learning experience by automatically integrating high-quality images, diagrams, and YouTube videos directly into its responses. When you ask about complex topics like photosynthesis or the parts of a cell, Gemini will seamlessly weave in visuals and YouTube videos alongside the text to help you understand the information more easily. Integration of high-quality images, diagrams, and YouTube videos is available to all users.
Why: We believe seeing a concept is key to truly understanding it. Gemini now uses visuals to make complex information more digestible and memorable. This helps turn abstract ideas into concrete knowledge, making learning faster and more effective for everyone.
Go beyond the answer with step-by-step Guided Learning
What: Gemini now offers Guided Learning, a new mode designed to help you build a deeper understanding of whatever you're trying to learn or do. Instead of providing a single answer, this feature guides you through subjects step by step, breaking down concepts and providing interactive help along the way. It's ideal for understanding complex information or developing a new skill. You can activate this tool using the 'Guided Learning' chip on desktop or the 'Learn' chip on mobile. This experience is open to all users.
Why:
Solve your most complex problems with Gemini 2.5 Deep Think
What: Starting today, Google AI Ultra subscribers get early, limited access to Gemini 2.5 Deep Think, Gemini's most advanced reasoning mode. Deep Think is capable of thinking for longer and generating multiple parallel streams of thought simultaneously, much like how humans brainstorm to tackle complex problems, making it excel at iterative development and design, scientific and mathematical research, and coding. Try it next time you need more brain power for your most complex tasks, especially those requiring creativity, strategic planning, and step-by-step improvements. Select 2.5 Pro from the model drop-down, and "Deep Think" will appear in the prompt bar. Submit your task with Deep Think enabled and Gemini will let you know when your response is ready, generally in a few minutes.
Why: We're committed to bringing Google's latest AI innovations faster to Google AI Ultra users, including our most powerful reasoning capabilities to help you tackle even the most complex problems.
- Launched image-to-video generation for the Veo 3 Preview model.
- Released Veo 3 Fast Preview model.
- To learn more about Veo 3, visit the [Veo](https://ai.google.dev/gemini-api/docs/video) page.
Study mode is currently available to users on Free, Plus, Pro, and Team plans globally and will expand to Edu plans in the coming weeks. Study mode works with any model available in ChatGPT on iOS, Android, web, and desktop.
Study mode is a new learning experience in ChatGPT designed to help you build a deeper understanding of any topic. When you turn it on, ChatGPT will ask interactive questions to understand your goals and skill level, then work with you to reach the answer together.
With Study mode enabled, ChatGPT can:
- Guide understanding with Socratic-style questions.
- Break concepts into easy-to-follow sections, starting simple and adding complexity as you progress.
- Personalize responses based on your past chats if memory is on, using examples and tips tailored to you.
- Check your understanding with open-ended prompts and feedback.
- Work with your materials by referencing images or PDFs you upload.
You can enable study mode at any time by selecting Tools in the prompt window and choosing Study and learn from the drop-down menu, or go to chatgpt.com/studymode.
Study mode is powered by custom system instructions, so its behavior may be inconsistent and it can make mistakes across conversations. We plan to train this behavior directly into our main models once we've learned what works best through iteration and user feedback.
Learn more about study mode.
Pro users can now connect to Canva and Notion for both chat search and deep research.
Note this feature is currently limited to users located outside of the EEA, Switzerland, and the UK.
Learn more about connectors.
July 24, 2025 — Chat search for HubSpot and custom connectors (MCP) (Pro)
Pro users can now use chat search with HubSpot and custom connectors (MCP), in addition to deep research.
Note this feature is currently limited to users located outside of the EEA, Switzerland, and the UK.
Learn more about connectors, as well as how to create a custom connector using MCP.
- Launched `veo-3.0-generate-preview`, the latest update to Veo introducing
video with audio generation. To learn more about Veo 3, visit the [Veo](https://ai.google.dev/gemini-api/docs/video) page.
- Increased rate limits for Imagen 4 Standard and Ultra. Visit the
[Rate limits](https://ai.google.dev/gemini-api/docs/rate-limits) page for more details.
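Even with the increased limits, bursty clients can still receive HTTP 429 responses, and the usual remedy is client-side retry with exponential backoff. A minimal sketch, where the flaky endpoint and error shape are invented stand-ins rather than anything from the Gemini SDK:

```python
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` on rate-limit errors, doubling the wait each time."""
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError as err:  # stand-in for an HTTP 429 error
            if "429" not in str(err) or attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Simulated flaky endpoint: fails twice with a 429, then succeeds.
attempts = {"n": 0}
def fake_generate():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429: rate limit exceeded")
    return "image bytes"

result = with_backoff(fake_generate, base_delay=0.01)
```

In real code the `call` would wrap the actual Imagen request; respecting any `Retry-After` header the API returns is a further refinement.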
July 17, 2025 — Updates to Advanced Voice Mode for Free tier users
The Advanced Voice upgrades we announced on June 7th are now rolling out to ChatGPT Free users too. With the same improvements available to paid users, ChatGPT sounds more natural and expressive, and can translate more effectively. Rate limits for free users stay the same.
July 16, 2025 — Record mode in ChatGPT macOS desktop app (Plus)
Record mode is now available to ChatGPT Plus users globally in the macOS desktop app.
Record live conversations like team meetings or voice notes, and turn those into an editable summary in canvas. Available on the macOS desktop app only.
Learn more about record mode.
Note: Record mode originally rolled out to Team users on June 4, and Enterprise & Edu plans on June 18.
Stay on top of your work with the Productivity Planner Gem
What: The new Productivity Planner Gem seamlessly brings together information from your favorite productivity apps like Gmail, Calendar, and Drive to help you stay organized. Whether it's getting an overview of your urgent emails or your calendar events for the week, this pre-made Gem gives you useful context to start your day and prioritize your most important tasks. Get started by asking it to summarize your key projects for the week, create a daily work brief, or recommend action items to focus on.
Availability: The Gem is available to Google AI Pro and Ultra subscribers, as well as qualifying Google Workspace Business and Education plans. Google Workspace apps must be enabled for all users to access the Gem.
Why: The Productivity Planner Gem helps you save time by connecting to your Workspace apps like Gmail, Calendar, and Drive to deliver insights right when you need them. Schedule updates so that you can prompt less while staying informed on the things that matter most to you.
- Released `gemini-embedding-001`, the stable version of our text embedding model. To learn more, see [embeddings](https://ai.google.dev/gemini-api/docs/embeddings). The `gemini-embedding-exp-03-07` model will be deprecated on August 14, 2025.
- Launched Gemini API Batch Mode. Batch up requests and send them to process asynchronously. To learn more, see [Batch Mode](https://ai.google.dev/gemini-api/docs/batch-mode).
- The preview models `gemini-2.5-pro-preview-05-06` and
`gemini-2.5-pro-preview-03-25` are now redirecting to
the latest stable version `gemini-2.5-pro`.
- Released Imagen 4 Ultra and Standard Preview models. To learn more, see the [Image generation](https://ai.google.dev/gemini-api/docs/image-generation) page.
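For `gemini-embedding-001`, the request shape and a typical downstream comparison can be sketched as follows. The endpoint field names reflect my reading of the v1beta REST `embedContent` docs, and the vectors are made-up stand-ins for real embedding responses, so verify both against the embeddings page linked above:

```python
import json
import math

# Assumed request body for models/gemini-embedding-001:embedContent
# (field names taken from the v1beta REST docs; verify before use).
payload = {
    "model": "models/gemini-embedding-001",
    "content": {"parts": [{"text": "What is the capital of France?"}]},
}
body = json.dumps(payload)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for API responses: related texts score near 1.
v_paris = [0.9, 0.1, 0.0]
v_france = [0.8, 0.2, 0.0]
v_python = [0.0, 0.1, 0.9]
```

Cosine similarity is the standard way to compare embedding vectors for search and clustering; the actual vectors returned by the model have far more dimensions than this toy example.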
Pro users are now able to use chat search connectors (in addition to deep research connectors) for the following integrations: Dropbox, Box, Google Drive (synced and non-synced), Microsoft OneDrive (Business), Microsoft SharePoint.
Note this feature is currently limited to users located outside of the EEA, Switzerland, and the UK.
Learn more about Connectors.
Plan ahead with scheduled actions
What: You can now schedule actions in the Gemini app so you can plan ahead for future and recurring tasks. Ask Gemini to help you get organized with a daily summary of your calendar, to-dos, and important unread emails. Or get weekly creative ideas for a project and updates on topics you're passionate about. You can schedule recurring actions, like finding local weekend music events every Friday, or one-time tasks, like asking for a recap of a keynote the next morning. Just tell Gemini what you need and when; it'll handle it for you. Scheduled actions are available to Google AI Pro and Ultra subscribers and qualifying Google Workspace business and education plans.
Why: We want Gemini to be your personal, proactive, and powerful AI assistant, and scheduled actions can help boost your productivity in new and exciting ways with proactive help that frees up your time.
Capture meetings, brainstorms, or voice notes. Launching today for Pro, Enterprise, and Edu users. Previously launched for Team users on June 4, 2025.
ChatGPT will transcribe, summarize, and turn them into helpful outputs like follow-ups, plans, or even code. Available on the macOS desktop app only.
Learn more about ChatGPT Record.
- Released `gemini-2.5-pro`, the stable version of our most powerful model, now with adaptive thinking. To learn more, see [Gemini 2.5 Pro](https://ai.google.dev/gemini-api/docs/models#gemini-2.5-pro) and [Thinking](https://ai.google.dev/gemini-api/docs/thinking). `gemini-2.5-pro-preview-05-06` will be redirected to `gemini-2.5-pro` on June 26, 2025.
- Released `gemini-2.5-flash`, our first stable 2.5 Flash model. To learn more, see [Gemini 2.5 Flash](https://ai.google.dev/gemini-api/docs/models#gemini-2.5-flash). `gemini-2.5-flash-preview-04-17` will be deprecated on July 15, 2025.
- Released `gemini-2.5-flash-lite-preview-06-17`, a low-cost, high-performance Gemini 2.5 model. To learn more, see [Gemini 2.5 Flash-Lite
Preview](https://ai.google.dev/gemini-api/docs/models#gemini-2.5-flash-lite).
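Client code pinned to the preview IDs keeps working through the redirect and deprecation dates above, but updating explicitly avoids surprises. A small, entirely illustrative helper, with the mapping taken from the notes above:

```python
# Preview-to-stable model ID mapping, per the release notes above.
STABLE_IDS = {
    "gemini-2.5-pro-preview-05-06": "gemini-2.5-pro",
    "gemini-2.5-pro-preview-03-25": "gemini-2.5-pro",
    "gemini-2.5-flash-preview-04-17": "gemini-2.5-flash",
}

def resolve_model(model_id: str) -> str:
    """Return the stable model ID, passing through IDs with no redirect."""
    return STABLE_IDS.get(model_id, model_id)
```

Centralizing the mapping in one place means a deprecation becomes a one-line change instead of a scattered search-and-replace.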
June 16, 2025 — Expanded Model Support for Custom GPTs
Enterprise and Edu users can also now choose from the full set of ChatGPT models (GPT-4o, o3, o4-mini and more) when building Custom GPTs. This was launched to Plus, Pro and Team users earlier this month.
Key details:
- GPTs without Custom Actions can use the model picker to select from all models available to the user.
- GPTs with Custom Actions currently support GPT-4o and GPT-4.1.
- Available on web for users on all paid plans (Plus, Pro, Team, Enterprise, Edu).
June 13, 2025 — Improvements to the ChatGPT search response quality
We’ve upgraded ChatGPT search for all users to provide even more comprehensive, up-to-date responses. In testing, we found users preferred these search improvements over our previous search experience.
Improved quality
- Smarter responses that better understand what you're asking and provide more comprehensive answers.
- Handles longer conversational contexts, maintaining intelligence in longer conversations.
Improved search capability and instruction following
- More robust ability to follow instructions, especially in longer conversations, significantly reducing repetitive responses.
- Capability to run multiple searches automatically for complex or difficult questions.
- Search the web using an image you've uploaded.
Known limitations
- Users may notice longer responses with this new search experience.
- In some cases, "chain of thought" reasoning will show up unexpectedly for simple queries. A fix for this is rolling out to users shortly.
- ChatGPT may still make occasional mistakes; please double-check responses.
June 12, 2025 — Expanded Model Support for Custom GPTs
Creators can now choose from the full set of ChatGPT models (GPT-4o, o3, o4-mini and more) when building Custom GPTs—making it easier to fine-tune performance for different tasks, industries, and workflows. Creators can also set a recommended model to guide users.
Key details:
- GPTs without Custom Actions can use the model picker to select from all models available to the user.
- GPTs with Custom Actions currently support GPT-4o and GPT-4.1.
- Available on web for users on Plus, Pro, and Team plans. Enterprise and Edu rollout coming soon.
June 12, 2025 — Adding More Capabilities to Projects
Starting today, we’re adding several updates to projects in ChatGPT to help you do more focused work. These updates are available for Plus, Pro, and Team users.
- Deep research and voice mode support
- Improvements to memory to reference past chats in a project*
- Sharing chats from projects
- Starting a new project directly from a chat
- Upload files and access the model selector on mobile
Learn more about projects.
*Memory improvements are available for Plus and Pro users.
June 10, 2025 — Today, we're launching OpenAI o3-pro—available now for Pro users in ChatGPT and in our API.
Like o1-pro, o3-pro is a version of our most intelligent model, o3, designed to think longer and provide the most reliable responses. Since the launch of o1-pro, users have favored this model for domains such as math, science, and coding—areas where o3-pro continues to excel, as shown in academic evaluations. Like o3, o3-pro has access to tools that make ChatGPT useful—it can search the web, analyze files, reason about visual inputs, use Python, personalize responses using memory, and more. Because o3-pro has access to tools, responses typically take longer than o1-pro to complete. We recommend using it for challenging questions where reliability matters more than speed, and waiting a few minutes is worth the tradeoff.
In expert evaluations, reviewers consistently prefer o3-pro over o3 in every tested category and especially in key domains like science, education, programming, business, and writing help. Reviewers also rated o3-pro consistently higher for clarity, comprehensiveness, instruction-following, and accuracy.
Academic evaluations show that o3-pro consistently outperforms both o1-pro and o3.
To assess the key strength of o3-pro, we once again use our rigorous "4/4 reliability" evaluation, where a model is considered successful only if it correctly answers a question in all four attempts, not just one:
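The 4/4 criterion is stricter than single-attempt accuracy: a question counts only if all four sampled attempts are correct. A toy scoring function makes the definition concrete (the attempt data is invented for illustration):

```python
def four_of_four(results):
    """Fraction of questions answered correctly on all four attempts.

    `results` maps each question ID to a list of four booleans,
    one per attempt.
    """
    solved = sum(1 for attempts in results.values() if all(attempts))
    return solved / len(results)

# Invented example: q2 fails the criterion because one attempt missed,
# even though it was right 3 times out of 4.
example = {
    "q1": [True, True, True, True],
    "q2": [True, True, False, True],
    "q3": [True, True, True, True],
    "q4": [False, False, False, False],
}
```

Because a single wrong attempt disqualifies a question, 4/4 reliability rewards consistency, not just peak capability.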
o3-pro is available in the model picker for Pro and Team users starting today, replacing o1-pro. Enterprise and Edu users will get access the week after.
As o3-pro uses the same underlying model as o3, full safety details can be found in the o3 system card.
Limitations
At the moment, temporary chats are disabled for o3-pro as we resolve a technical issue.
Image generation is not supported within o3-pro—please use GPT-4o, OpenAI o3, or OpenAI o4-mini to generate images.
Canvas is also currently not supported within o3-pro.
June 7, 2025 — Updates to Advanced Voice Mode for paid users
We're upgrading Advanced Voice in ChatGPT for paid users with significant enhancements in intonation and naturalness, making interactions feel more fluid and human-like. When we first launched Advanced Voice, it represented a leap forward in AI speech—now, it speaks even more naturally, with subtler intonation, realistic cadence (including pauses and emphases), and more on-point expressiveness for certain emotions including empathy, sarcasm, and more.
Voice also now offers intuitive and effective language translation. Just ask Voice to translate between languages, and it will continue translating throughout your conversation until you tell it to stop or switch. It’s ready to translate whenever you need it—whether you're asking for directions in Italy or chatting with a colleague from the Tokyo office. For example, at a restaurant in Brazil, Voice can translate your English sentences into Portuguese, and the waiter’s Portuguese responses back into English—making conversations effortless, no matter where you are or who you're speaking with.
This upgrade to Advanced Voice is available for all paid users across markets and platforms—just tap the Voice icon in the message composer to get started.
This update is in addition to improvements we made earlier this year to ensure fewer interruptions and improved accents.
Known Limitations
In testing, we've observed that this update may occasionally cause minor decreases in audio quality, including unexpected variations in tone and pitch. These issues are more noticeable with certain voice options. We expect to improve audio consistency over time.
Additionally, rare hallucinations in Voice Mode persist with this update, resulting in unintended sounds resembling ads, gibberish, or background music. We are actively investigating these issues and working toward a solution.
- Released `gemini-2.5-pro-preview-06-05`, a new version of our most powerful model, now with adaptive thinking. To learn more, see [Gemini 2.5 Pro Preview](https://ai.google.dev/gemini-api/docs/models#gemini-2.5-pro-preview-06-05) and [Thinking](https://ai.google.dev/gemini-api/docs/thinking). `gemini-2.5-pro-preview-05-06` will be redirected to `gemini-2.5-pro` on June 26, 2025.
June 4, 2025 — Connectors in beta for deep research (Plus, Pro, Team, Enterprise, Edu)
ChatGPT Team, Enterprise, and Edu customers globally, along with Pro and Plus users (excluding users in Switzerland, the EEA, and the UK), can use connectors in deep research to generate long-form, cited responses that draw on your company's internal tools.
- Supported connectors: Google Drive, SharePoint, Dropbox, Box, Outlook, Gmail, Google Calendar, Linear, GitHub, HubSpot, and Teams
- Combines internal and web sources for synthesis
Learn more about Connectors in ChatGPT.
June 4, 2025 — Custom connectors via Model Context Protocol (Pro, Team, Enterprise, Edu)
Admins and users can now build and deploy custom connectors to proprietary systems using Model Context Protocol (MCP).
- Requires a remote MCP server
- Available only in deep research
- Admin-published connectors appear in the connector list for all users
Learn more about building custom connectors with MCP. For Team, Enterprise, and Edu plans, only admins can build and deploy custom connectors.
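Under the hood, a remote MCP server speaks JSON-RPC 2.0, with methods such as `tools/list` and `tools/call` defined by the MCP specification. In practice you would build on an MCP SDK, but the wire format can be sketched with the standard library alone; the `search_tickets` tool below is a hypothetical example, not anything ChatGPT ships:

```python
import json

def handle_rpc(raw: str) -> str:
    """Answer a JSON-RPC 2.0 request for a toy MCP-style server."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = {"tools": [{
            "name": "search_tickets",  # hypothetical internal tool
            "description": "Search the internal ticket tracker.",
            "inputSchema": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
            },
        }]}
        return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
    # Unknown method: standard JSON-RPC error response.
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                       "error": {"code": -32601, "message": "Method not found"}})

reply = handle_rpc('{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}')
```

The tool list is what deep research sees when it connects; each tool's `inputSchema` tells the model what arguments a `tools/call` request should carry.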
June 3, 2025 — Memory is now more comprehensive for Free users
Memory improvements are starting to roll out to Free users. In addition to the saved memories that were there before, ChatGPT now references your recent conversations to deliver responses that feel more relevant and tailored to you.
Free users must be logged in and on up-to-date apps (iOS/Android v1.2025.147+).
Opt‑in reminders
Free users in the EEA (EU + UK), Switzerland, Norway, Iceland, or Liechtenstein will see a prompt to enable this setting, or can visit Settings > Personalization > Memory > Reference chat history to enable it. Outside the European regions listed above, all Free users that have memory enabled will receive the upgrade automatically.
You can turn off memory anytime in settings. Learn more in our Memory FAQ.
- The last available tuning model, Gemini 1.5 Flash 001, has been shut down. Tuning is no longer supported on any models. See [Fine tuning with the Gemini API](https://ai.google.dev/gemini-api/docs/model-tuning).
- Launched support for [custom video preprocessing](https://ai.google.dev/gemini-api/docs/video-understanding#customize-video-processing) using clipping intervals and configurable frame rate sampling.
- Launched multi-tool use, which supports configuring [code execution](https://ai.google.dev/gemini-api/docs/code-execution) and [Grounding with Google Search](https://ai.google.dev/gemini-api/docs/grounding) on the same `generateContent` request.
- Launched support for [asynchronous function calls](https://ai.google.dev/gemini-api/docs/live-tools#async-function-calling) in the Live API.
- Launched an experimental [URL context tool](https://ai.google.dev/gemini-api/docs/url-context) for providing URLs as additional context to prompts.
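Multi-tool use is configured per request: both tools go into the same `tools` array of a single `generateContent` call. A sketch of the JSON body, where the camel-case field names are my reading of the v1beta REST docs and should be checked against the grounding and code-execution pages linked above:

```python
import json

# Assumed generateContent request body enabling both tools at once
# (verify field names against the REST documentation before use).
payload = {
    "contents": [{"parts": [{"text": "Chart last week's rainfall in Oslo."}]}],
    "tools": [
        {"googleSearch": {}},    # Grounding with Google Search
        {"codeExecution": {}},   # server-side code execution
    ],
}
body = json.dumps(payload)
```

With both tools enabled, the model can first ground itself with a search and then run generated code over the retrieved data within the same turn.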
**Model updates:**
- Released `gemini-2.5-flash-preview-05-20`, a Gemini [preview](https://ai.google.dev/gemini-api/docs/models#model-versions) model optimized for price-performance and adaptive thinking. To learn more, see [Gemini 2.5 Flash Preview](https://ai.google.dev/gemini-api/docs/models#gemini-2.5-flash-preview) and [Thinking](https://ai.google.dev/gemini-api/docs/thinking).
- Released the [`gemini-2.5-pro-preview-tts`](https://ai.google.dev/gemini-api/docs/models#gemini-2.5-pro-preview-tts) and [`gemini-2.5-flash-preview-tts`](https://ai.google.dev/gemini-api/docs/models#gemini-2.5-flash-preview-tts) models, which are capable of [generating speech](https://ai.google.dev/gemini-api/docs/speech-generation) with one or two speakers.
- Released the `lyria-realtime-exp` model, which [generates music](https://ai.google.dev/gemini-api/docs/music-generation) in real time.
- Released `gemini-2.5-flash-preview-native-audio-dialog` and `gemini-2.5-flash-exp-native-audio-thinking-dialog`, new Gemini models for the Live API with native audio output capabilities. To learn more, see the [Live API guide](https://ai.google.dev/gemini-api/docs/live-guide#native-audio-output) and [Gemini 2.5 Flash Native Audio](https://ai.google
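For the TTS models, speech output is requested through the generation config rather than a separate endpoint. A single-speaker request body might look like the sketch below; the `responseModalities`/`speechConfig` field names and the `Kore` voice name are assumptions drawn from the speech-generation docs, so confirm them there before relying on this:

```python
import json

# Assumed gemini-2.5-flash-preview-tts request body (field names and
# voice name unverified; check the speech-generation documentation).
payload = {
    "contents": [{"parts": [{"text": "Say cheerfully: have a wonderful day!"}]}],
    "generationConfig": {
        "responseModalities": ["AUDIO"],
        "speechConfig": {
            "voiceConfig": {"prebuiltVoiceConfig": {"voiceName": "Kore"}}
        },
    },
}
body = json.dumps(payload)
```

The response then carries audio data instead of text, which the client decodes and plays or saves as a WAV file.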
Create custom practice quizzes with Gemini
What: Starting today, you can generate custom practice quizzes to help you prepare for an upcoming exam or simply increase your knowledge of any topic, big or small. Create quizzes based on documents you want to study, such as PDFs or class notes, or ask Gemini to create a quiz on a specific topic, and you'll get a dynamic quiz experience, complete with hints, explanations for right and wrong answers, and a helpful summary at the end highlighting where you did well and where you may need to study a little harder. This experience is available to users over the age of 18 and to qualifying Google Workspace education plans.
Why: With Gemini's intuitive and streamlined practice quizzes, learners can lean on the advantages of generative AI to help them prepare: unlimited quiz generation, personalized responses, and conversational learning experiences.
Create with Canvas
What: Discover new ways to create in Canvas! Starting today, the new Create menu lets you transform text into a variety of dynamic content: custom web pages, visual infographics, engaging quizzes, and immersive Audio Overviews. Or describe anything that you want to create and watch Gemini generate code to build a working prototype. Then collaborate with Gemini to customize it to your needs. Vibe coding apps in Canvas just got better too! With just a few prompts, you can now build fully functional personalized apps in Canvas that can use Gemini-powered features, save data between sessions, and share data between multiple users. You can even save a shortcut to your apps on your phone home screen for easy access. Lastly, if there are errors in the app, Canvas will automatically try to resolve them for you.
Why: Whether you're writing, visualizing information, or vibe coding personal apps, Canvas helps you transform a blank slate into a share-worthy creation in minutes. Focus on your vision to create something awesome and leave the heavy lifting of generating, editing, and fixing
May 15, 2025 — Dropbox connector for deep research for Plus/Pro/Team
ChatGPT deep research with Dropbox is available globally to Team users. It is also gradually rolling out to Plus and Pro users, except for those in the EEA, Switzerland, and the UK. Enterprise user access will be announced at a later date.
See: Connect apps to ChatGPT deep research
May 14, 2025 — Releasing GPT-4.1 in ChatGPT for all paid users
Since its launch in the API in April, GPT-4.1 has become a favorite among developers—by popular demand, we’re making it available directly in ChatGPT.
GPT-4.1 is a specialized model that excels at coding tasks. Compared to GPT-4o, it's even stronger at precise instruction following and web development tasks, and offers an alternative to OpenAI o3 and OpenAI o4-mini for simpler, everyday coding needs.
Starting today, Plus, Pro, and Team users can access GPT-4.1 via the "more models" dropdown in the model picker. Enterprise and Edu users will get access in the coming weeks. GPT-4.1 has the same rate limits as GPT-4o for paid users.
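Since GPT-4.1 remains available in the API alongside its ChatGPT rollout, a minimal sketch of a Chat Completions request body targeting the model may be useful; the prompt text is illustrative, and an API key would still be needed to actually send the request:

```python
import json

# Minimal Chat Completions request body selecting GPT-4.1.
# This only builds the payload; sending it requires an Authorization
# header and a POST to https://api.openai.com/v1/chat/completions.
payload = {
    "model": "gpt-4.1",
    "messages": [
        {
            "role": "user",
            "content": "Refactor this function for readability.",
        }
    ],
}

print(json.dumps(payload, indent=2))
```

The same body with `"model": "gpt-4.1-mini"` would target the smaller variant described in the entry below.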
May 14, 2025 — Introducing GPT-4.1 mini, replacing GPT-4o mini, in ChatGPT for all users
GPT-4.1 mini is a fast, capable, and efficient small model, delivering significant improvements compared to GPT-4o mini—in instruction-following, coding, and overall intelligence. Starting today, GPT-4.1 mini replaces GPT-4o mini in the model picker under "more models" for paid users, and will serve as the fallback model for free users once they reach their GPT-4o usage limits. Rate limits remain the same.
Evals for GPT-4.1 and GPT-4.1 mini were originally shared in the blog post accompanying their API release. They also went through standard safety evaluations. Detailed results are available in the newly launched Safety Evaluations Hub.
May 12, 2025 — Microsoft SharePoint and OneDrive connector for deep research for Plus/Pro/Team
ChatGPT deep research with SharePoint and OneDrive is available globally to Team users. It is also gradually rolling out to Plus and Pro users, except for those in the EEA, Switzerland, and the UK. Enterprise user access will be announced at a later date.
See: Connecting SharePoint and Microsoft OneDrive to ChatGPT deep research
May 12, 2025 — Export Deep Research as PDF for Plus/Pro/Team
You can now export your deep research reports as well-formatted PDFs—complete with tables, images, linked citations, and sources.
To use, click the share icon and select 'Download as PDF.' It works for both new and past reports.
May 8, 2025 — GitHub connector for deep research for Plus/Pro/Team
ChatGPT deep research with GitHub is available globally to Team users. It is also gradually rolling out to Plus and Pro users, except for those in the EEA, Switzerland, and the UK. Enterprise user access will be announced at a later date.
See: Connecting GitHub to ChatGPT deep research.
May 8, 2025 — Enhanced Memory in ChatGPT (including EU) on Plus/Pro
Enhanced memory is rolling out to all Plus and Pro users (including the EU). The new memory features are available in the EEA (EU + UK), Switzerland, Norway, Iceland, and Liechtenstein. These features are OFF by default and must be enabled in Settings > Personalization > Reference Chat History.
Plan differences
• Saved memories and Chat history are offered only to Plus and Pro accounts.
• Free‑tier users have access to Saved memories only.
Opt‑in reminders
• Outside the European regions listed above, all Plus and Pro accounts that have memory enabled will receive the upgrade automatically.
• If you previously opted out of memory, ChatGPT will not reference past conversations unless you opt back in.
See Memory FAQ.
- Released `gemini-2.0-flash-preview-image-generation`, a preview model for generating and editing images. To learn more, see [Image generation](https://ai.google.dev/gemini-api/docs/image-generation) and [Gemini 2.0 Flash Preview Image Generation](https://ai.google.dev/gemini-api/docs/models#gemini-2.0-flash-preview-image-generation).
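The preview model is exercised through the Gemini API's `generateContent` endpoint, with image output enabled via response modalities. A sketch of the JSON request body follows; the prompt is illustrative, and sending the request would additionally require an API key:

```python
import json

MODEL = "gemini-2.0-flash-preview-image-generation"
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent"
)

# Request body asking the preview model to return both text and an
# image; responseModalities is how image output is switched on.
body = {
    "contents": [{"parts": [{"text": "A watercolor fox in a pine forest"}]}],
    "generationConfig": {"responseModalities": ["TEXT", "IMAGE"]},
}

print(json.dumps(body))
```

The image comes back as inline base64 data in the response parts, which a client would decode and save.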
The LLM Changelog is a live, automated tracker that monitors release notes and documentation updates across the largest AI platforms — including OpenAI's ChatGPT, Google's Gemini, Perplexity AI, Microsoft Copilot and Google Search. Changes are crawled daily and presented in a single, searchable feed.
Why AI Platform Changes Matter for Brands
When AI models update their training data, adjust ranking signals or change how they cite sources, the brands and products they recommend can shift overnight. An API change to Gemini or a new feature in ChatGPT can alter which businesses get mentioned in millions of AI-generated responses.
Platforms We Track
We currently monitor ChatGPT release notes, OpenAI developer and model changelogs, Gemini API and app updates, Perplexity product and API changes, Microsoft 365 Copilot release notes, and Google Search Central documentation updates. New sources are added as the AI landscape evolves.
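The daily crawl boils down to fetching each source page and checking whether it differs from the last snapshot. A minimal sketch of that change-detection step, using content hashing; the sample strings are illustrative, not reconnAI's actual pipeline:

```python
import hashlib

def content_fingerprint(page_text: str) -> str:
    """Stable fingerprint of a release-notes page, used to detect edits."""
    return hashlib.sha256(page_text.encode("utf-8")).hexdigest()

def has_changed(previous_fingerprint: str, page_text: str) -> bool:
    """True when today's fetch differs from the stored snapshot."""
    return content_fingerprint(page_text) != previous_fingerprint

# Illustrative usage: compare yesterday's snapshot against today's fetch.
yesterday = content_fingerprint("GPT-4.1 is available in ChatGPT.")
print(has_changed(yesterday, "GPT-5 is available in ChatGPT."))  # True
```

Pages whose fingerprint changes are then diffed and surfaced as entries in the feed above.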
Built by reconnAI
reconnAI tracks how AI systems represent brands in their responses. We monitor mentions, sentiment and citations across ChatGPT, Gemini, Perplexity and more — helping businesses understand and improve their visibility in the age of AI search. Learn more →