GPTs and what they mean for companies building with AI

OpenAI recently announced their new “GPTs” feature. This article looks at how non-coders can now create tailored AI tools, and why innovators and developers should take note of this significant shift.

GPTs allow anyone to tailor ChatGPT to be more helpful for daily tasks, work, learning, or fun by giving it customised instructions, knowledge, and skills. People can build GPTs through the chat interface without needing to code, and a GPT can search the web, generate images, analyse data, and more, depending on the capabilities it is given.

Essentially, this is a simple way for people to share their custom instructions (or system prompts). Previously, if you wanted to show someone how they could use ChatGPT in different ways, you had to either send long system prompts back and forth, or use OpenAI’s API to build a “GPT wrapper” and host it online.
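For anyone curious, that “wrapper” was rarely more than a few lines of code. Here is a minimal sketch using the openai Python client; the system prompt and model name are placeholder assumptions, not anyone’s production setup:

```python
# A minimal "GPT wrapper": one custom system prompt in front of the chat API.
# Sketch only; SYSTEM_PROMPT and the model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = "You are a patient Socratic maths tutor. Never give answers directly."

def ask(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(ask("How do I solve x^2 - 5x + 6 = 0?"))
```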

The reality is that most people probably never knew you could ask ChatGPT to act as different characters or respond in different ways. So we expect this feature to be hugely successful, much as character.ai became an internet sensation by letting anyone create a custom character for role-playing conversations without writing code.

Just as ChatGPT Plugins made a lot of AI-based products obsolete, GPTs will displace bespoke prompt-sharing tools and become the de facto way to share prompts and create custom assistants. So what does this mean for companies and startups building AI tools?

At the most basic level, an AI tool can’t just be a chatbot with custom instructions anymore. Even adding domain knowledge through embeddings will be possible without code in OpenAI’s GPTs, so that won’t be enough either.
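For context, “adding domain knowledge through embeddings” usually amounts to something like the following sketch, assuming the text-embedding-ada-002 model and a made-up in-memory document list (a real system would use a vector database):

```python
# Rough sketch of embeddings-based retrieval: embed documents once,
# then pull the closest ones into the prompt at question time.
# Illustrative only; the documents here are invented.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Our refund policy allows returns within 30 days.",
    "Shipping to the EU takes 3-5 business days.",
]

def embed(texts: list[str]) -> np.ndarray:
    response = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([item.embedding for item in response.data])

doc_vectors = embed(documents)

def retrieve(question: str, top_k: int = 1) -> list[str]:
    q = embed([question])[0]
    # ada-002 vectors are unit-length, so the dot product is cosine similarity
    scores = doc_vectors @ q
    best = np.argsort(scores)[::-1][:top_k]
    return [documents[i] for i in best]

context = "\n".join(retrieve("How long do EU deliveries take?"))
```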

AI products need to go beyond this: they need to be AI-enabled workflows, processes that integrate and combine inputs and outputs from different sources and models, with multiple steps and output parsing.
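As a rough illustration, assuming the openai Python client and entirely made-up prompts, such a workflow might chain two model calls around ordinary parsing code:

```python
# Sketch of a multi-step workflow: one model call produces a list,
# plain code parses it, and a second call processes each item.
# The prompts and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def complete(system: str, user: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content

# Step 1: extract discrete items from unstructured input.
raw = complete("List each customer complaint on its own line, nothing else.",
               "The app crashes on login and the invoices are all wrong.")
complaints = [line.strip() for line in raw.splitlines() if line.strip()]

# Step 2: run a separate, focused prompt per item and combine the outputs.
replies = [complete("Draft a one-sentence apology for this complaint.", c)
           for c in complaints]
print("\n".join(replies))
```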

For example, suppose you need to customise how a prompt template combines multiple embeddings datasets. With GPTs you can upload documents for context, but you can’t group them into categories or specify how and when each group should be included, as the sketch below shows.
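To make the gap concrete, here is a hypothetical prompt builder with the kind of control GPTs don’t expose; retrieve_by_category() is an invented stand-in for per-category retrieval along the lines of the embeddings sketch above:

```python
# Hypothetical custom prompt assembly: retrieved snippets are grouped by
# category and included conditionally; GPTs offer no hook for this.
# retrieve_by_category() is a placeholder for a real retrieval step.

def retrieve_by_category(question: str) -> dict[str, list[str]]:
    # Placeholder: imagine a separate embeddings index per category.
    return {
        "policies": ["Returns are accepted within 30 days."],
        "pricing": [],
        "troubleshooting": ["Clear the cache if login fails."],
    }

def build_prompt(question: str) -> str:
    groups = retrieve_by_category(question)
    sections = []
    for category, snippets in groups.items():
        if not snippets:  # skip empty categories entirely
            continue
        body = "\n".join(f"- {s}" for s in snippets)
        sections.append(f"## {category.title()}\n{body}")
    context = "\n\n".join(sections)
    return f"Answer using only this context:\n\n{context}\n\nQuestion: {question}"

print(build_prompt("How do I return an item?"))
```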

Another example is using LLMs as AI algorithms within a machine-to-machine system. Some algorithms are incredibly difficult to express in conventional code but easy to describe in natural language and hand to an LLM. Using the latest JSON mode feature, the LLM’s result can be fed directly into another (non-AI) system to continue the data workflow.
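Here is a rough sketch of that pattern, assuming the JSON mode shipped with the gpt-3.5-turbo-1106 and gpt-4-1106-preview models; the deduplication “algorithm” and its output schema are invented for illustration:

```python
# Sketch: an "algorithm" stated in natural language, with JSON mode
# producing machine-readable output for the next (non-AI) system.
import json
from openai import OpenAI

client = OpenAI()

ALGORITHM = (
    "You deduplicate contact records. Given a list of names, merge entries "
    "that refer to the same person despite typos or nicknames. "
    'Return JSON: {"merged": [[indices of duplicates], ...]}.'
)

def dedupe(names: list[str]) -> dict:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        # JSON mode guarantees syntactically valid JSON, not this exact
        # schema, so a production system should still validate the result.
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": ALGORITHM},
            {"role": "user", "content": json.dumps(names)},
        ],
    )
    return json.loads(response.choices[0].message.content)

# The parsed result can feed a conventional pipeline directly.
result = dedupe(["Jon Smith", "Jonathan Smith", "Ana Gomez"])
print(result["merged"])
```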

In summary, companies building with AI need to create AI-enabled products, not AI-based products (i.e. chatbots). The basic use case of LLMs will only get easier for anyone to customise and tweak without programming knowledge, while the complex use cases, where AI makes a product more resilient and better at handling edge cases, are the ones that will need custom solutions.