In March 2023 OpenAI opened the ChatGPT plugins beta with an ambitious promise: turn the chat into a platform where ChatGPT could talk to external services — banks, hotels, data APIs, internal tools — the same way it talked to the user. Three months later, the ecosystem has around 500 plugins available and a clear pattern of which use cases work and which don’t.
What a Plugin Does
A plugin is, in essence, an OpenAPI spec + a manifest that describes to the model when to invoke it. ChatGPT reads both, and when the user asks a relevant question, the model decides to call the plugin with the right parameters, interprets the response, and integrates it into the conversation.
Typical flow:
- User installs the plugin from ChatGPT's plugin store (a distribution model similar to mobile app stores).
- Asks a question: “What’s the best time to fly from Madrid to Rome on July 15?”
- ChatGPT decides the Kayak plugin is relevant and calls it.
- Plugin returns structured data (flights, prices, schedules).
- ChatGPT presents them to the user in natural language.
Key concept: the model doesn't learn the plugin. It decides whether to invoke it case by case, based on the manifest description and the conversation context.
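As a rough sketch, the field names below follow the ai-plugin.json format OpenAI documented for the beta; the plugin itself (names, descriptions, URL) is hypothetical:

```python
import json

# Hypothetical minimal plugin manifest. Field names follow OpenAI's
# documented ai-plugin.json format; all values here are illustrative.
manifest = {
    "schema_version": "v1",
    "name_for_human": "Flight Finder",   # shown to users in the plugin store
    "name_for_model": "flight_finder",   # identifier the model uses internally
    "description_for_human": "Search flights by route and date.",
    # This is the text the model reads to decide *when* to invoke the plugin:
    "description_for_model": (
        "Use this plugin when the user asks about flight availability, "
        "prices, or schedules between two cities on a given date."
    ),
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        # Hypothetical URL: points at the OpenAPI spec describing the endpoints.
        "url": "https://example.com/openapi.yaml",
    },
}

print(json.dumps(manifest, indent=2))
```

Note that the model never sees your backend code: it only sees `description_for_model` plus the OpenAPI spec, which is why the quality of those two texts largely determines whether the plugin gets invoked at the right moments.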
What Works Well
Three scenarios where plugins shine:
- Live data lookup. Plugins like WolframAlpha, Kayak, OpenTable, or Instacart address a real limitation of ChatGPT: access to dynamic or specific information the training doesn’t cover. Finding an available restaurant tonight in a particular city is a textbook case.
- Low-transaction-risk actions. Generating a document with the Zapier plugin, creating a diagram with Show Me Diagrams, summarising a PDF: tasks where an error is easy to detect and doesn’t cost money.
- Internal data exposure. Companies that already publish OpenAPI specs can build private plugins (within an Enterprise plan) so their teams can interact with corporate data in natural language.
Where Friction Appears
But there are areas where plugins, as of this article’s date, don’t deliver on the promise:
- Orchestrating multiple plugins in a flow. ChatGPT allows up to three active plugins per conversation, but its choice between them is fragile. Ask "find a flight and book a restaurant at the destination" and the model often stalls or invokes the wrong plugin.
- Real-money transactions. Though technically possible, commercial plugins are cautious: Kayak searches flights but redirects to its website for payment; Instacart builds the basket but you complete the purchase yourself. The friction of leaving the chat to pay reduces the perceived value.
- Cumulative latency. Each plugin adds 2-5 seconds to response time. Conversations with two or three invocations go from instant to noticeably slow.
- Discovery. The plugin store is a very long list without reliable rankings. Finding the right plugin for a specific case is manual work.
Plugins vs. Function Calling
In June 2023 OpenAI introduced function calling in its public API: a more granular capability that lets developers define functions the model can invoke, without going through the plugin store. This raises the question: is function calling the new plugin?
In practice, both coexist with different roles:
- Plugins are an end-consumer experience inside ChatGPT. They must pass an OpenAI review and are aimed at non-technical users.
- Function calling is a direct API for developers building their own experiences. No store, no gatekeeper, no predefined UX.
For a team integrating AI into their product, function calling is almost always the better primitive because it maintains control over UX and transactions. Plugins shine only when the goal is for ChatGPT itself to be the interface.
This fits what we argued in our piece on prompt engineering as a mature discipline: the function-calling API has absorbed many of the structured-output patterns previously handled with manual prompts.
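To make the contrast concrete, here is a sketch of the function-calling pattern. The schema format matches what the June 2023 chat API expects; the flight-search function and the simulated model reply are hypothetical (in real use, the payload would come from a call such as openai.ChatCompletion.create with a `functions` argument):

```python
import json

# Function schema in the JSON Schema format the chat API expects.
# In real use this would be passed as functions=[search_flights_schema].
# The function itself is a hypothetical example.
search_flights_schema = {
    "name": "search_flights",
    "description": "Search flights between two cities on a given date.",
    "parameters": {
        "type": "object",
        "properties": {
            "origin": {"type": "string"},
            "destination": {"type": "string"},
            "date": {"type": "string", "description": "ISO date, e.g. 2023-07-15"},
        },
        "required": ["origin", "destination", "date"],
    },
}

def search_flights(origin: str, destination: str, date: str) -> dict:
    # Stand-in for a real flight-API call: your code, your UX, your error handling.
    return {"origin": origin, "destination": destination, "date": date, "results": []}

# Simulated model response: the API returns the chosen function name and
# its arguments as a JSON *string*, which the caller parses and dispatches.
model_reply = {
    "function_call": {
        "name": "search_flights",
        "arguments": '{"origin": "MAD", "destination": "FCO", "date": "2023-07-15"}',
    }
}

call = model_reply["function_call"]
args = json.loads(call["arguments"])
result = {"search_flights": search_flights}[call["name"]](**args)
print(result)
```

The design difference is visible in the last three lines: with function calling, your code decides what actually runs and what the user sees, whereas a plugin hands that control to ChatGPT's own interface.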
Where the Ecosystem Is Heading
Three reasonable predictions for the coming months:
- Consolidation. Of the 500+ current plugins, those solving a clear and maintainable use case will survive. The long tail will fade.
- Enterprise vertical plugins. Instead of aiming to be general like ChatGPT, the next successful plugins will be specific: ChatGPT inside tools like Notion or Salesforce, with business context.
- Convergence with agents. The distinction between “plugin” and “agent” will blur. The next systems will likely orchestrate multiple plugins in multi-step plans, not just answer a single question.
Also see our analysis of Code Interpreter as a case where the plugin is more deeply integrated and solves more complex tasks.
Conclusion
ChatGPT plugins are an interesting experiment in how a conversational assistant can open its behaviour to external services. Three months of real adoption show the potential (live data lookup) and the limits (complex orchestration, real-money transactions). As an industry standard, the momentum is with function calling; plugins are more of a consumer product.
Follow us on jacar.es for more on AI product integration, APIs, and conversational-assistant architecture.