LLM Changelog

Updated 9 Feb 2026

AI Platform Changes That Affect Brand Visibility

Every update to ChatGPT, Gemini, Perplexity, Copilot and Google Search can change how your brand appears in AI-generated answers. We track release notes, API changes and documentation updates across major LLM platforms so you don't have to.

Total updates: 340 (ChatGPT: 181, Gemini: 176, Perplexity: 6, GoogleAI: 5)

Showing 151–200 of 340 updates.
ChatGPT

June 13, 2025 — Improvements to the ChatGPT search response quality

We’ve upgraded ChatGPT search for all users to provide even more comprehensive, up-to-date responses. In testing, we found users preferred these search improvements over our previous search experience.
Improved quality
- Smarter responses that better understand what you’re asking and provide more comprehensive answers.
- Handles longer conversational contexts, improving intelligence in longer conversations.
Improved search capability and instruction following
- More robust ability to follow instructions, especially in longer conversations, significantly reducing repetitive responses.
- Capability to run multiple searches automatically for complex or difficult questions.
- Search the web using an image you’ve uploaded.
Known limitations
- Users may notice longer responses with this new search experience.
- In some cases, “chain of thought” reasoning will show up unexpectedly for simple queries. A fix for this is rolling out to users shortly.
- ChatGPT may still make occasional mistakes, so please double-check responses.
ChatGPT

June 12, 2025 — Expanded Model Support for Custom GPTs

Creators can now choose from the full set of ChatGPT models (GPT-4o, o3, o4-mini and more) when building Custom GPTs—making it easier to fine-tune performance for different tasks, industries, and workflows. Creators can also set a recommended model to guide users.
Key details:
- GPTs without Custom Actions can use the model picker to select from all models available to the user.
- GPTs with Custom Actions currently support GPT-4o and GPT-4.1.
- Available on web for users on Plus, Pro, and Team plans. Enterprise and Edu rollout coming soon.
ChatGPT

June 12, 2025 — Adding More Capabilities to Projects

Starting today, we’re adding several updates to projects in ChatGPT to help you do more focused work. These updates are available for Plus, Pro, and Team users.
- Deep research and voice mode support
- Improvements to memory to reference past chats in a project*
- Sharing chats from projects
- Starting a new project directly from a chat
- Upload files and access the model selector on mobile

Learn more about projects.
*Memory improvements are available for Plus and Pro users.
ChatGPT

June 10, 2025 — Today, we're launching OpenAI o3-pro—available now for Pro users in ChatGPT and in our API.

Like o1-pro, o3-pro is a version of our most intelligent model, o3, designed to think longer and provide the most reliable responses. Since the launch of o1-pro, users have favored this model for domains such as math, science, and coding—areas where o3-pro continues to excel, as shown in academic evaluations. Like o3, o3-pro has access to tools that make ChatGPT useful—it can search the web, analyze files, reason about visual inputs, use Python, personalize responses using memory, and more. Because o3-pro has access to tools, responses typically take longer than o1-pro to complete. We recommend using it for challenging questions where reliability matters more than speed, and waiting a few minutes is worth the tradeoff.
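Since o3-pro is also available in the API, a quick way to try it is a plain HTTPS request. A minimal sketch, not official sample code: the endpoint and field names follow OpenAI's Responses API, while the prompt and the `OPENAI_API_KEY` environment variable are placeholders.

```python
import json
import os
import urllib.request

# Minimal Responses API body selecting o3-pro. The prompt is a placeholder;
# expect responses to take noticeably longer than non-pro models.
payload = {
    "model": "o3-pro",
    "input": "Prove that the square root of 2 is irrational.",
}

def call_o3_pro(body: dict) -> dict:
    req = urllib.request.Request(
        "https://api.openai.com/v1/responses",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:  # may run for minutes on hard problems
        return json.load(resp)
```

For long-running prompts like these, a generous client-side timeout is worth setting explicitly.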
In expert evaluations, reviewers consistently prefer o3-pro over o3 in every tested category and especially in key domains like science, education, programming, business, and writing help. Reviewers also rated o3-pro consistently higher for clarity, comprehensiveness, instruction-following, and accuracy.
Academic evaluations show that o3-pro consistently outperforms both o1-pro and o3.
To assess the key strength of o3-pro, we once again use our rigorous "4/4 reliability" evaluation, where a model is considered successful only if it correctly answers a question in all four attempts, not just one.
o3-pro is available in the model picker for Pro and Team users starting today, replacing o1-pro. Enterprise and Edu users will get access the week after.
As o3-pro uses the same underlying model as o3, full safety details can be found in the o3 system card.
Limitations
At the moment, temporary chats are disabled for o3-pro as we resolve a technical issue.
Image generation is not supported within o3-pro—please use GPT-4o, OpenAI o3, or OpenAI o4-mini to generate images.
Canvas is also currently not supported within o3-pro.
ChatGPT

June 7, 2025 — Updates to Advanced Voice Mode for paid users

We're upgrading Advanced Voice in ChatGPT for paid users with significant enhancements in intonation and naturalness, making interactions feel more fluid and human-like. When we first launched Advanced Voice, it represented a leap forward in AI speech—now, it speaks even more naturally, with subtler intonation, realistic cadence (including pauses and emphases), and more on-point expressiveness for certain emotions including empathy, sarcasm, and more.
Voice also now offers intuitive and effective language translation. Just ask Voice to translate between languages, and it will continue translating throughout your conversation until you tell it to stop or switch. It’s ready to translate whenever you need it—whether you're asking for directions in Italy or chatting with a colleague from the Tokyo office. For example, at a restaurant in Brazil, Voice can translate your English sentences into Portuguese, and the waiter’s Portuguese responses back into English—making conversations effortless, no matter where you are or who you're speaking with.
This upgrade to Advanced Voice is available for all paid users across markets and platforms—just tap the Voice icon in the message composer to get started.
This update is in addition to improvements we made earlier this year to ensure fewer interruptions and improved accents.
Known Limitations
In testing, we've observed that this update may occasionally cause minor decreases in audio quality, including unexpected variations in tone and pitch. These issues are more noticeable with certain voice options. We expect to improve audio consistency over time.
Additionally, rare hallucinations in Voice Mode persist with this update, resulting in unintended sounds resembling ads, gibberish, or background music. We are actively investigating these issues and working toward a solution.
Gemini API

June 05, 2025

- Released `gemini-2.5-pro-preview-06-05`, a new version of our most powerful model, now with adaptive thinking. To learn more, see [Gemini 2.5 Pro Preview](https://ai.google.dev/gemini-api/docs/models#gemini-2.5-pro-preview-06-05) and [Thinking](https://ai.google.dev/gemini-api/docs/thinking). `gemini-2.5-pro-preview-05-06` will be redirected to `gemini-2.5-pro` on June 26, 2025.
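To pin a request to this dated snapshot rather than a redirecting alias, put the full model id in the REST path. A minimal sketch against the public generateContent endpoint; the prompt and `GEMINI_API_KEY` handling are placeholders.

```python
import json
import os
import urllib.request

# Pin to the dated preview id; aliases like gemini-2.5-pro-preview-05-06
# get redirected to other versions over time.
MODEL = "gemini-2.5-pro-preview-06-05"
ENDPOINT = f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent"

payload = {
    "contents": [
        {"parts": [{"text": "Summarize in one sentence what adaptive thinking changes."}]}
    ]
}

def generate(body: dict) -> dict:
    req = urllib.request.Request(
        f"{ENDPOINT}?key={os.environ['GEMINI_API_KEY']}",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```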
ChatGPT

June 4, 2025 — Connectors in beta for deep research (Plus, Pro, Team, Enterprise, Edu)

ChatGPT Team, Enterprise, and Edu customers globally, as well as Pro and Plus users (excluding users in Switzerland, the EEA, and the UK), can use connectors in deep research to generate long-form, cited responses that draw on your company’s internal tools.
Supported connectors: Google Drive, SharePoint, Dropbox, Box, Outlook, Gmail, Google Calendar, Linear, GitHub, HubSpot, and Teams. Deep research combines internal and web sources for synthesis.
Learn more about Connectors in ChatGPT.
ChatGPT

June 4, 2025 — Custom connectors via Model Context Protocol (Pro, Team, Enterprise, Edu)

Admins and users can now build and deploy custom connectors to proprietary systems using Model Context Protocol (MCP).
- Requires a remote MCP server
- Available only in deep research
- Admin-published connectors appear in the connector list for all users

Learn more about building custom connectors with MCP. For Team, Enterprise, and Edu plans, only admins can build and deploy custom connectors.
ChatGPT

June 3, 2025 — Memory is now more comprehensive for Free users

Memory improvements are starting to roll out to Free users. In addition to the saved memories that were there before, ChatGPT now references your recent conversations to deliver responses that feel more relevant and tailored to you.
Free users must be logged in and on up-to-date apps (iOS/Android v1.2025.147+).
Opt‑in reminders
- Free users in the EEA (EU + UK), Switzerland, Norway, Iceland, or Liechtenstein will see a prompt to enable this setting, or can enable it under Settings > Personalization > Memory > Reference chat history.
- Outside the European regions listed above, all Free users who have memory enabled will receive the upgrade automatically.

You can turn off memory anytime in settings. Learn more in our Memory FAQ.
Gemini API

May 27, 2025

- The last available tuning model, Gemini 1.5 Flash 001, has been shut down. Tuning is no longer supported on any models. See [Fine tuning with the Gemini API](https://ai.google.dev/gemini-api/docs/model-tuning).
Gemini API

May 20, 2025

**API updates:**

- Launched support for [custom video preprocessing](https://ai.google.dev/gemini-api/docs/video-understanding#customize-video-processing) using clipping intervals and configurable frame rate sampling.
- Launched multi-tool use, which supports configuring [code execution](https://ai.google.dev/gemini-api/docs/code-execution) and [Grounding with Google Search](https://ai.google.dev/gemini-api/docs/grounding) on the same `generateContent` request.
- Launched support for [asynchronous function calls](https://ai.google.dev/gemini-api/docs/live-tools#async-function-calling) in the Live API.
- Launched an experimental [URL context tool](https://ai.google.dev/gemini-api/docs/url-context) for providing URLs as additional context to prompts.
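The multi-tool change means a single request body can now carry both tools. A sketch of such a `generateContent` payload; the field names follow the REST API's camelCase convention, the prompt is illustrative, and which models accept both tools together should be checked against the docs.

```python
# One generateContent body enabling Google Search grounding and code
# execution in the same request.
payload = {
    "contents": [
        {"parts": [{"text": "Find the latest US CPI release and chart the last 12 readings."}]}
    ],
    "tools": [
        {"googleSearch": {}},   # Grounding with Google Search
        {"codeExecution": {}},  # server-side code execution sandbox
    ],
}
```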

**Model updates:**

- Released `gemini-2.5-flash-preview-05-20`, a Gemini [preview](https://ai.google.dev/gemini-api/docs/models#model-versions) model optimized for price-performance and adaptive thinking. To learn more, see [Gemini 2.5 Flash Preview](https://ai.google.dev/gemini-api/docs/models#gemini-2.5-flash-preview) and [Thinking](https://ai.google.dev/gemini-api/docs/thinking).
- Released the [`gemini-2.5-pro-preview-tts`](https://ai.google.dev/gemini-api/docs/models#gemini-2.5-pro-preview-tts) and [`gemini-2.5-flash-preview-tts`](https://ai.google.dev/gemini-api/docs/models#gemini-2.5-flash-preview-tts) models, which are capable of [generating speech](https://ai.google.dev/gemini-api/docs/speech-generation) with one or two speakers.
- Released the `lyria-realtime-exp` model, which [generates music](https://ai.google.dev/gemini-api/docs/music-generation) in real time.
- Released `gemini-2.5-flash-preview-native-audio-dialog` and `gemini-2.5-flash-exp-native-audio-thinking-dialog`, new Gemini models for the Live API with native audio output capabilities. To learn more, see the [Live API guide](https://ai.google.dev/gemini-api/docs/live-guide#native-audio-output) and [Gemini 2.5 Flash Native Audio](https://ai.google
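As a concrete sketch of the new TTS models, a single-speaker speech request can be expressed as a `generateContent` body. The voice name and prompt are illustrative; the field names follow the speech-generation docs and are worth verifying against the current reference.

```python
# generateContent body for a TTS model such as gemini-2.5-flash-preview-tts:
# request audio output and pick a prebuilt voice ("Kore" is illustrative).
tts_request = {
    "contents": [{"parts": [{"text": "Say cheerfully: have a wonderful day!"}]}],
    "generationConfig": {
        "responseModalities": ["AUDIO"],
        "speechConfig": {
            "voiceConfig": {"prebuiltVoiceConfig": {"voiceName": "Kore"}}
        },
    },
}
```

The response carries the synthesized audio as inline data rather than text.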
Gemini app

2025.05.20

Create custom practice quizzes with Gemini
What: Starting today, you can generate custom practice quizzes to help you prepare for an upcoming exam or simply increase your knowledge of any topic, big or small. Create quizzes based on documents you want to study, such as PDFs or class notes, or ask Gemini to create a quiz on a specific topic, and you’ll get a dynamic quiz experience, complete with hints, explanations for right and wrong answers, and a helpful summary at the end highlighting where you did well and where you may need to study a little harder. This experience is available to users over the age of 18 and to qualifying Google Workspace education plans.
Why: With Gemini’s intuitive and streamlined practice quizzes, learners can lean on the advantages of generative AI to help them prepare: unlimited quiz generation, personalized responses, and conversational learning experiences.
Create with Canvas
What: Discover new ways to create in Canvas! Starting today, the new Create menu lets you transform text into a variety of dynamic content: custom web pages, visual infographics, engaging quizzes, and immersive Audio Overviews. Or describe anything that you want to create and watch Gemini generate code to build a working prototype, then collaborate with Gemini to customize it to your needs.
Vibe coding apps in Canvas just got better too! With just a few prompts, you can now build fully functional personalized apps in Canvas that can use Gemini-powered features, save data between sessions, and share data between multiple users. You can even save a shortcut to your apps on your phone’s home screen for easy access. Lastly, if there are errors in the app, Canvas will automatically try to resolve them for you.
Why: Whether you’re writing, visualizing information, or vibe coding personal apps, Canvas helps you transform a blank slate into a share-worthy creation in minutes. Focus on your vision to create something awesome and leave the heavy lifting of generating, editing, and fixing things to Gemini.
ChatGPT

May 15, 2025 — Dropbox connector for deep research for Plus/Pro/Team

ChatGPT deep research with Dropbox is available globally to Team users. It is also gradually rolling out to Plus and Pro users, except for those in the EEA, Switzerland, and the UK. Enterprise user access will be announced at a later date.
See: Connect apps to ChatGPT deep research.
ChatGPT

May 15, 2025 — GitHub connector for deep research

The GitHub connector is now available globally to Plus/Pro/Team users, including those in the EEA, Switzerland, and the UK.
ChatGPT

May 14, 2025 — Releasing GPT-4.1 in ChatGPT for all paid users

Since its launch in the API in April, GPT-4.1 has become a favorite among developers—by popular demand, we’re making it available directly in ChatGPT.
GPT-4.1 is a specialized model that excels at coding tasks. Compared to GPT-4o, it's even stronger at precise instruction following and web development tasks, and offers an alternative to OpenAI o3 and OpenAI o4-mini for simpler, everyday coding needs.
Starting today, Plus, Pro, and Team users can access GPT-4.1 via the "more models" dropdown in the model picker. Enterprise and Edu users will get access in the coming weeks. GPT-4.1 has the same rate limits as GPT-4o for paid users.
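On the API side, GPT-4.1 is selected by its model id. A minimal Chat Completions request body as a sketch; the prompt is a placeholder.

```python
# Chat Completions body selecting GPT-4.1 for an everyday coding task.
payload = {
    "model": "gpt-4.1",
    "messages": [
        {
            "role": "user",
            "content": "Write a Python function that deduplicates a list while preserving order.",
        }
    ],
}
# POST this as JSON to https://api.openai.com/v1/chat/completions
# with an "Authorization: Bearer <OPENAI_API_KEY>" header.
```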
ChatGPT

May 14, 2025 — Introducing GPT-4.1 mini, replacing GPT-4o mini, in ChatGPT for all users

GPT-4.1 mini is a fast, capable, and efficient small model, delivering significant improvements compared to GPT-4o mini—in instruction-following, coding, and overall intelligence. Starting today, GPT-4.1 mini replaces GPT-4o mini in the model picker under "more models" for paid users, and will serve as the fallback model for free users once they reach their GPT-4o usage limits. Rate limits remain the same.
Evals for GPT-4.1 and GPT-4.1 mini were originally shared in the blog post accompanying their API release. They also went through standard safety evaluations. Detailed results are available in the newly launched Safety Evaluations Hub.
ChatGPT

May 12, 2025 — Microsoft SharePoint and OneDrive connector for deep research for Plus/Pro/Team

ChatGPT deep research with SharePoint and OneDrive is available globally to Team users. It is also gradually rolling out to Plus and Pro users, except for those in the EEA, Switzerland, and the UK. Enterprise user access will be announced at a later date.
See: Connecting SharePoint and Microsoft OneDrive to ChatGPT deep research.
ChatGPT

May 12, 2025 — Export Deep Research as PDF for Plus/Pro/Team

You can now export your deep research reports as well-formatted PDFs—complete with tables, images, linked citations, and sources.
To use, click the share icon and select 'Download as PDF.' It works for both new and past reports.
ChatGPT

May 8, 2025 — GitHub connector for deep research for Plus/Pro/Team

ChatGPT deep research with GitHub is available globally to Team users. It is also gradually rolling out to Plus and Pro users, except for those in the EEA, Switzerland, and the UK. Enterprise user access will be announced at a later date.
See: Connecting GitHub to ChatGPT deep research.
ChatGPT

May 8, 2025 — Enhanced Memory in ChatGPT (including EU) on Plus/Pro

Enhanced memory is rolling out to all Plus and Pro users (including the EU). The new memory features are available in the EEA (EU + UK), Switzerland, Norway, Iceland, and Liechtenstein. In those regions, these features are OFF by default and must be enabled in Settings > Personalization > Reference Chat History.
Plan differences
• Saved memories and Chat history are offered only to Plus and Pro accounts.
• Free-tier users have access to Saved memories only.
Opt‑in reminders
• Outside the European regions listed above, all Plus and Pro accounts that have memory enabled will receive the upgrade automatically.
• If you previously opted out of memory, ChatGPT will not reference past conversations unless you opt back in.
See the Memory FAQ.
Gemini API

May 7, 2025

- Released `gemini-2.0-flash-preview-image-generation`, a preview model for generating and editing images. To learn more, see [Image generation](https://ai.google.dev/gemini-api/docs/image-generation) and [Gemini 2.0 Flash Preview Image Generation](https://ai.google.dev/gemini-api/docs/models#gemini-2.0-flash-preview-image-generation).
Gemini API

May 6, 2025

- Released `gemini-2.5-pro-preview-05-06`, a new version of our most powerful model, with improvements on code and function calling. `gemini-2.5-pro-preview-03-25` will automatically point to the new version of the model.
Gemini app

2025.05.06

2.5 Pro (experimental) got a major upgrade for coding
What: We’re rolling out an updated version of Gemini 2.5 Pro with improved capabilities for coding. This model more intuitively understands coding prompts and produces stronger outputs, enabling users to quickly create compelling web apps in Canvas. This improvement builds on the overwhelmingly positive feedback to 2.5 Pro’s coding and multimodal reasoning capabilities.
Why: We believe in rapid iteration and bringing the best of Gemini to the world. Your feedback helps us improve these models over time, and learning from experimental launches informs how we release models more widely.
ChatGPT

May 6, 2025 — Mobile UI (iOS/Android) changing on Free/Plus/Pro plans

We've removed the row of individual tool icons from the mobile composer and replaced it with a new sliders-style icon that opens the Skills menu; tapping it opens a bottom-sheet menu where users can choose tools like Create an image or Search the web.
No tools are deprecated—access is simply consolidated to clear space and reduce on‑screen clutter.
ChatGPT

May 1, 2025 — Sunsetting Monday GPT

Launched on April 1 as a one-month surprise, our “Monday” personality (available in both voice and text) has now been sunset. We hope you enjoyed Monday's irreverent style, and we’re already working on new personalities for future use. Please stay tuned for what’s next!
ChatGPT

April 29, 2025 — Updates to GPT-4o

We've reverted the most recent update to GPT-4o due to issues with overly agreeable responses (sycophancy).
We’re actively working on further improvements. For more details, check out our blog post explaining what happened and our initial findings, and this blog post where we expand on what we missed with sycophancy and the changes we're going to make going forward.
ChatGPT

April 25, 2025 — Improvements to GPT-4o

We’re making additional improvements to GPT-4o, optimizing when it saves memories and enhancing problem-solving capabilities for STEM. We’ve also made subtle changes to the way it responds, making it more proactive and better at guiding conversations toward productive outcomes. We think these updates help GPT-4o feel more intuitive and effective across a variety of tasks–we hope you agree!
Gemini app

2025.04.22

Create and share videos with Veo 2 in Gemini Advanced
What: Starting today, Gemini Advanced subscribers can generate high-quality, 8-second videos using Veo 2. Simply describe what you have in mind and watch your ideas come to life in motion, whether you're creating for fun, sharing with friends, or adding a dynamic element to your projects.
How to access: Select ‘Veo 2’ from the model dropdown.
How to share: Download the MP4 file, generate a public link, or share the video directly (mobile only).
This feature is available globally on web and mobile to Gemini Advanced subscribers as part of the Google One AI Premium Plan.
Why: Veo 2 in Gemini puts powerful video creation tools directly into your hands. It offers an intuitive way to visualize concepts, tell stories, and bring creative ideas to life dynamically, without requiring complex software or prior video-editing experience. Just describe it and Gemini will create it.
Gemini app

2025.04.19

Update to 2.0 Flash in Gemini
What: An enhanced version of Gemini 2.0 Flash is now available to Gemini app users. This update enables a more natural, collaborative, and adaptive conversational style, providing users with more fluid and engaging interactions that feel in tune with specific topics and tasks. Try chatting about your interests; working through a problem at school, work, or home; or asking for a new, more creative perspective to feel the difference.
Why: We’re committed to making Gemini a more insightful, collaborative partner. When it’s better at understanding context and engaging in helpful dialogue, it’s easier for you to create, interact, and accomplish your goals.
Gemini API

April 17, 2025

- Released `gemini-2.5-flash-preview-04-17`, a Gemini [preview](https://ai.google.dev/gemini-api/docs/models#model-versions) model optimized for price-performance and adaptive thinking. To learn more, see [Gemini 2.5 Flash Preview](https://ai.google.dev/gemini-api/docs/models#gemini-2.5-flash-preview) and [Thinking](https://ai.google.dev/gemini-api/docs/thinking).
Gemini app

2025.04.17

Try our 2.5 Flash (experimental) model
What: Starting today, Gemini app users have access to our 2.5 Flash (experimental) model, a fast and efficient thinking model that also provides strong performance. It’s ideal for tasks that require advanced reasoning and benefit from faster processing, like summarization, document analysis, and data extraction. This experimental model is meant to be an early preview; it can have unexpected behaviors and may make mistakes.
Why: We believe in rapid iteration and bringing the best of Gemini to the world. Your feedback helps us improve these models over time, and learning from experimental launches informs how we release models more widely.
Gemini API

April 16, 2025

- Launched context caching for [Gemini 2.0 Flash](https://ai.google.dev/gemini-api/docs/models#gemini-2.0-flash).
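Context caching is a two-step flow: store the large shared prefix once, then reference it from later `generateContent` calls. A sketch of the two REST bodies; the document text and cache name are placeholders, and exact limits for `gemini-2.0-flash` should be checked in the docs.

```python
# Step 1: body for POST .../v1beta/cachedContents — store the shared prefix.
cache_request = {
    "model": "models/gemini-2.0-flash",
    "contents": [
        {"role": "user", "parts": [{"text": "<a long reference document goes here>"}]}
    ],
    "ttl": "3600s",  # keep the cache for one hour
}

# Step 2: body for generateContent — reuse the cached prefix by name.
# The name is returned by step 1; "cachedContents/abc123" is illustrative.
generate_request = {
    "cachedContent": "cachedContents/abc123",
    "contents": [{"parts": [{"text": "List the key dates mentioned in the document."}]}],
}
```

Cached input tokens are billed at a reduced rate, which is the point of the feature for repeated large prompts.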
ChatGPT

April 16, 2025 — o3 and o4-mini in ChatGPT

Today, we’re releasing OpenAI o3 and o4-mini, the latest in our o-series of models trained to think for longer before responding. These are the smartest models we’ve released to date, representing a step change in ChatGPT's capabilities for everyone from curious users to advanced researchers. For the first time, our reasoning models can agentically use and combine every tool within ChatGPT—this includes searching the web, analyzing uploaded files and other data with Python, reasoning deeply about visual inputs, and even generating images.
Critically, these models are trained to reason about when and how to use tools to produce detailed and thoughtful answers in the right output formats, typically in under a minute, to solve more complex problems. This allows them to tackle multi-faceted questions more effectively, a step toward a more agentic ChatGPT that can independently execute tasks on your behalf. The combined power of state-of-the-art reasoning with full tool access translates into significantly stronger performance across academic benchmarks and real-world tasks, setting a new standard in both intelligence and usefulness.
ChatGPT

April 16, 2025 — Memory with Search

ChatGPT can also use memories to inform search queries when ChatGPT searches the web using third-party search providers. Learn more about search.
ChatGPT

April 15, 2025 — ChatGPT Image Library

All images you create with ChatGPT are now automatically saved to a new Library in the sidebar, giving you one place to browse, revisit, and reuse your work without digging through past conversations. The Library is rolling out today on Web, iOS, and Android for Free, Plus, and Pro users (Enterprise / Edu support coming soon). For now, it displays images generated with 4o Image Generation while we backfill older creations, and you can remove an image by deleting the conversation where it was made.
ChatGPT

April 10, 2025 — Sunsetting GPT‑4 in ChatGPT

Effective April 30, 2025, GPT‑4 will be retired from ChatGPT and fully replaced by GPT‑4o.
GPT‑4o is our newer, natively multimodal model. In head‑to‑head evaluations it consistently surpasses GPT‑4 in writing, coding, STEM, and more.
Recent upgrades have further improved GPT‑4o’s instruction following, problem solving, and conversational flow, making it a natural successor to GPT‑4.
GPT-4 will still be available in the API.
GPT‑4 marked a pivotal moment in ChatGPT’s evolution. We’re grateful for the breakthroughs it enabled and for the feedback that helped shape its successor. GPT‑4o builds on that foundation to deliver even greater capability, consistency, and creativity.
Gemini API

April 9, 2025

**Model updates:**

- Released `veo-2.0-generate-001`, a generally available (GA) text- and image-to-video model, capable of generating detailed and artistically nuanced videos. To learn more, see the [Veo docs](https://ai.google.dev/gemini-api/docs/video).
- Released `gemini-2.0-flash-live-001`, a public preview version of the [Live API](https://ai.google.dev/gemini-api/docs/live) model with billing enabled.

- **Enhanced Session Management and Reliability**
  - **Session Resumption:** Keep sessions alive across temporary network disruptions. The API now supports server-side session state storage (for up to 24 hours) and provides handles (`session_resumption`) to reconnect and resume where you left off.
  - **Longer Sessions via Context Compression:** Enable extended interactions beyond previous time limits. Configure context window compression with a sliding window mechanism to automatically manage context length, preventing abrupt terminations due to context limits.
  - **Graceful Disconnect Notification:** Receive a `GoAway` server message indicating when a connection is about to close, allowing for graceful handling before termination.
- **More Control over Interaction Dynamics**
  - **Configurable Voice Activity Detection (VAD):** Choose sensitivity levels or disable automatic VAD entirely and use new client events (`activityStart`, `activityEnd`) for manual turn control.
  - **Configurable Interruption Handling:** Decide whether user input should interrupt the model's response.
  - **Configurable Turn Coverage:** Choose whether the API processes all audio and video input continuously or only captures it when the end-user is detected speaking.
  - **Configurable Media Resolution:** Optimize for quality or token usage by selecting the resolution for input media.
- **Richer Output and Features**
  - **Expanded Voice & Language Options:** Choose from two new voices and 30 new languages for audio output.
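Several of the session-control options above surface as fields in the Live API setup message. An illustrative config fragment; the field names are our best reading of the Live API docs and should be verified against the current reference.

```python
# Illustrative Live API setup fragment combining manual voice-activity
# control, session resumption, and context-window compression.
setup = {
    "model": "models/gemini-2.0-flash-live-001",
    "realtimeInputConfig": {
        # Disable automatic VAD and drive turns yourself with
        # activityStart / activityEnd client events.
        "automaticActivityDetection": {"disabled": True},
    },
    # Ask the server to issue resumption handles so the session can be
    # re-established after a dropped connection.
    "sessionResumption": {},
    # Sliding-window compression avoids abrupt context-limit cutoffs.
    "contextWindowCompression": {"slidingWindow": {}},
}
```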
Gemini API

April 4, 2025

- Released `gemini-2.5-pro-preview-03-25`, a public preview Gemini 2.5 Pro version with billing enabled. You can continue to use `gemini-2.5-pro-exp-03-25` on the free tier.
Gemini app

2025.03.29

Our 2.5 Pro (experimental) model is now available to all Gemini users, with Canvas
What: Today we’re expanding access to our most intelligent AI model, 2.5 Pro (experimental), to all Gemini users. This state-of-the-art model has thinking capabilities natively built in, with exceptional performance in coding, math, image understanding, and more.
Canvas, our new interactive space that makes it easy to create, refine, and share your work, is now available to try with 2.5 Pro (experimental). With 2.5 Pro’s improved coding capabilities, coupled with Canvas, you can quickly create compelling web apps or generate code for immediate use.
Gemini users will be able to try 2.5 Pro (experimental) with rate limits, and Gemini Advanced users will continue to have expanded access and a significantly larger context window. Being an experimental model, it can have unexpected behaviors and may make mistakes.
Why: We want to bring the best model in the world to all Gemini users. Your feedback helps us improve these models over time, and learning from experimental launches informs how we release models more widely.
ChatGPT

March 27, 2025 — Improvements to GPT-4o

We’ve made improvements to GPT-4o—it now feels more intuitive, creative, and collaborative, with enhanced instruction-following, smarter coding capabilities, and a clearer communication style.
Smarter problem-solving in STEM and coding:
GPT-4o has further improved its capability to tackle complex technical and coding problems. It now generates cleaner, simpler frontend code, more accurately thinks through existing code to identify necessary changes, and consistently produces coding outputs that successfully compile and run, streamlining your coding workflows.
Enhanced instruction-following and formatting accuracy:
GPT-4o is now more adept at following detailed instructions, especially for prompts containing multiple or complex requests. It improves on generating outputs according to the format requested and achieves higher accuracy in classification tasks.
“Fuzzy” improvements:
Early testers say that the model seems to better understand the implied intent behind their prompts, especially when it comes to creative and collaborative tasks. It’s also slightly more concise and clear, using fewer markdown hierarchies and emojis for responses that are easier to read, less cluttered, and more focused. We're curious to see if our users also find this to be the case.
This model is now available in ChatGPT and in the API as the newest snapshot of chatgpt-4o-latest. We plan to bring these improvements to a dated model in the API in the coming weeks.
Gemini API

March 25, 2025

- Released `gemini-2.5-pro-exp-03-25`, a public experimental Gemini model with thinking mode always on by default. To learn more, see [Gemini 2.5 Pro Experimental](https://ai.google.dev/gemini-api/docs/models#gemini-2.5-pro-preview-03-25).
Gemini app

2025.03.25

Priority access with Gemini Advanced: Try our latest 2.5 Pro (experimental) model
What: Today we’re introducing Gemini 2.5, our most intelligent AI model. Our first 2.5 release is a chat-optimized version of Gemini-2.5-Pro-Exp-03-25, which is state-of-the-art on a wide range of benchmarks and debuts at #1 on LMArena by a significant margin. This model also has thinking capabilities natively built in, with improved performance in complex tasks like coding, math, and image understanding.
2.5 Pro (experimental) is now rolling out to the Gemini web and mobile app and is available to qualifying Google Workspace business and education plans. This experimental model is meant to be an early preview and can have unexpected behaviors and may make mistakes.
Why: We believe in rapid iteration and bringing the best of Gemini to the world, and we want to give Gemini Advanced subscribers priority access to our latest AI innovations. Your feedback helps us improve these models over time, and learning from experimental launches informs how we release models more widely.
Gemini
Gemini 18 Mar 2025 gemini_app

2025.03.18

Try Canvas, a new way to collaborate with Gemini

What: Starting today, you can collaborate with Gemini 2.0 Flash to write documents and code in Canvas, a new interactive space that makes refining and sharing your work easy.
Co-create documents: Generate a first draft, then rapidly refine it and ask Gemini for feedback on your edits. Update specific sections or the whole draft, and use the quick editor tools to change the tone, length, or formatting. From essays to blog posts to reports, uplevel your documents in Canvas.
Generate & iterate on code: Easily convert your ideas into working prototypes for web apps, Python scripts, and more. Ask Gemini to generate and preview React or HTML code directly in Canvas in a familiar code editor, and review Gemini's changes on each turn.
Canvas is available globally across all Gemini-supported languages. Select Canvas in the prompt bar to start creating!

Why: Canvas lets you experience the power of collaboration with Gemini because you can see your ideas and iterations take shape in real time. Go from blank slate to a share-worthy creation in minutes: focus on your vision and leave the heavy lifting of generating, editing, and fixing things to Gemini.

Introducing Audio Overview, now in Gemini

What: Leveraging the same technology that powers NotebookLM’s Audio Overviews, Gemini users can now generate podcast-style conversations based on documents, slides, and Deep Research reports. Upload files about topics you want to explore and enjoy dynamic discussions between two AI hosts with unique perspectives.

Why: Stay informed while you’re on the go and gain new insights by experiencing your content in a fresh way.
ChatGPT
ChatGPT 18 Mar 2025 chatgpt

March 18, 2025 — Web and the Windows desktop app

1. In-line message error retries: If you encounter a message error, you can retry or just continue chatting in the same conversation.
2. o1 and o3-mini now offer Python-powered data analysis in ChatGPT: You can now ask these models to perform tasks like running regressions on test data, visualizing complex business metrics, and conducting scenario-based simulations.
3. Conversation Drafts: Unsubmitted messages in the prompt box are now saved as drafts.
4. New UI for Temporary Chats: We’ve made accessing and viewing these private conversations clearer.
ChatGPT
ChatGPT 18 Mar 2025 chatgpt

March 18, 2025 — Android App

1. Increased the default size for in-line generated images.
2. A clearer, more private Incognito keyboard for Temporary Chats.
ChatGPT
ChatGPT 18 Mar 2025 chatgpt

March 18, 2025 — iOS App

1. Improved table-copying functionality out of ChatGPT responses on iOS and macOS.
2. Long-press your message to open menu actions: Copy, Edit.
3. Improved response display of nested blockquote content.
4. Improved display and faster streaming of conversations and message menu actions.
Gemini
Gemini 13 Mar 2025 gemini_app

2025.03.13

Update to Gemini 2.0 Flash Thinking (experimental) in Gemini

What: Starting today, an improved version of Gemini 2.0 Flash Thinking (experimental) will become available to Gemini app users. Built on the foundation of 2.0 Flash, this model delivers improved performance and stronger advanced reasoning, with greater efficiency and speed. Starting in English, 2.0 Flash Thinking (experimental) now works with your favorite Gemini features and connected apps such as YouTube, Maps, Search and more. Gemini Advanced users will also have access to a 1M-token context window with this model.

Why: We're investing in thinking and reasoning capabilities because we believe they unlock deeper intelligence and deliver enhanced performance on tasks requiring complex reasoning, such as coding, scientific discovery, and advanced math.

Deep Research is now powered by Gemini 2.0 Flash Thinking (experimental) and available to more Gemini users at no cost

What: Today, we're upgrading Gemini Deep Research to use the 2.0 Flash Thinking (experimental) model. We're also giving Gemini users the ability to try Deep Research at no cost. Gemini Advanced and qualifying enterprise users have expanded access to Deep Research to save even more time on their most complex projects. With Deep Research, you can save hours of work as Gemini analyzes relevant information on your behalf to create comprehensive multi-page reports on any topic in minutes. With advanced reasoning from 2.0 Flash Thinking, Gemini is even better at every stage of research, from planning to searching to reporting.

Why: By powering Deep Research with this upgraded model, and expanding its availability, Deep Research takes one step forward in Google's mission to organize the world's information and make it universally accessible and useful.

Gemini gets personal, with tailored help based on your Search history

What: Starting today, Gemini can now use your Search history to give you even more helpful and personalized responses. When you select "Per
Gemini
Gemini 12 Mar 2025 gemini_api

March 12, 2025

**Model updates:**

- Launched an experimental [Gemini 2.0 Flash](https://ai.google.dev/gemini-api/docs/image-generation#gemini) model capable of image generation and editing.
- Released `gemma-3-27b-it`, available on [AI Studio](https://aistudio.google.com) and through the Gemini API, as part of the [Gemma 3](https://ai.google.dev/gemma/docs/core) launch.

**API updates:**

- Added support for [YouTube URLs](https://ai.google.dev/gemini-api/docs/vision#youtube) as a media source.
- Added support for including an [inline video](https://ai.google.dev/gemini-api/docs/vision#inline-video) of less than 20MB.
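As a rough illustration of the YouTube-URL update above, the sketch below assembles a `generateContent`-style request body that attaches a public YouTube URL as a `file_data` part alongside a text prompt, and records the 20 MB ceiling for inline video. The exact request shape should be confirmed against the linked docs, and the video URL here is a placeholder, not a real reference.

```python
import json

# Inline videos (sent as raw bytes) must stay under 20 MB per the update;
# larger videos would need another path, e.g. a URL-based file_data part.
MAX_INLINE_VIDEO_BYTES = 20 * 1024 * 1024

def build_youtube_request(prompt: str, youtube_url: str) -> dict:
    """Assemble a generateContent-style JSON body with a YouTube media part.

    This mirrors the documented parts structure: a text part for the prompt
    and a file_data part whose file_uri points at the YouTube video.
    """
    return {
        "contents": [{
            "parts": [
                {"text": prompt},
                {"file_data": {"file_uri": youtube_url}},
            ]
        }]
    }

body = build_youtube_request(
    "Summarize the key points of this video.",
    "https://www.youtube.com/watch?v=PLACEHOLDER",  # hypothetical URL
)
print(json.dumps(body, indent=2))
```

The body would then be POSTed to the model's `generateContent` endpoint with an API key; only the payload construction is shown here.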
Gemini
Gemini 11 Mar 2025 gemini_api

March 11, 2025

**SDK updates:**

- Released the [Google Gen AI SDK for TypeScript and JavaScript](https://googleapis.github.io/js-genai) to public preview.
Gemini
Gemini 7 Mar 2025 gemini_api

March 7, 2025

**Model updates:**

- Released `gemini-embedding-exp-03-07`, an [experimental](https://ai.google.dev/gemini-api/docs/models/experimental-models) Gemini-based embeddings model in public preview.
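To show what an embeddings model like `gemini-embedding-exp-03-07` is typically used for, here is a minimal sketch of scoring semantic similarity between two embedding vectors with cosine similarity. The vectors below are small placeholders, not real model output; obtaining actual embeddings would go through the Gemini API's embedding endpoint (consult the linked model docs for the exact call).

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Stand-in vectors; real embeddings from the model have far more dimensions.
query_vec = [0.1, 0.7, 0.2]
doc_vec = [0.1, 0.6, 0.3]
score = cosine_similarity(query_vec, doc_vec)
print(f"cosine similarity: {score:.3f}")
```

In a retrieval setting, documents would be ranked by this score against the query embedding.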

What is the LLM Changelog?

The LLM Changelog is a live, automated tracker that monitors release notes and documentation updates across the largest AI platforms — including OpenAI's ChatGPT, Google's Gemini, Perplexity AI, Microsoft Copilot and Google Search. Changes are crawled daily and presented in a single, searchable feed.

Why AI Platform Changes Matter for Brands

When AI models update their training data, adjust ranking signals or change how they cite sources, the brands and products they recommend can shift overnight. An API change to Gemini or a new feature in ChatGPT can alter which businesses get mentioned in millions of AI-generated responses.

Platforms We Track

We currently monitor ChatGPT release notes, OpenAI developer and model changelogs, Gemini API and app updates, Perplexity product and API changes, Microsoft 365 Copilot release notes, and Google Search Central documentation updates. New sources are added as the AI landscape evolves.

Built by reconnAI

reconnAI tracks how AI systems represent brands in their responses. We monitor mentions, sentiment and citations across ChatGPT, Gemini, Perplexity and more — helping businesses understand and improve their visibility in the age of AI search. Learn more →