AI Agents
Prior to developing AI Agents, I used to rank a model’s reasoning capabilities as the most important factor in choosing the appropriate LLM for my tasks. While cost is an important factor to consider, it is less of an issue at this stage for us since our clients don't have massive Gen AI bills. Moreover, cost per token is falling rapidly as frontier labs release new models every quarter or so. Consequently, at the time I ranked speed as the least important factor when choosing an LLM.
Until now.
Since I started developing my own AI Agent library, I have realised how important it is to have access to a model running on hardware that can output a high number of tokens/second. This realisation turned into conviction upon trying Llama 3 on Groq a few weeks ago. I even went as far as implementing a Groq Agent in my AI Agent library so that I can use it as part of my workflow.
In this post, I will briefly explain what Groq is. Then I will expand on what benefits fast inference unlocks for AI Agent systems.
Groq is a hardware company that was founded in 2016 by Jonathan Ross. Before founding Groq, Ross led the development of Google’s Tensor Processing Unit (TPU). During his time at Google, Jonathan saw first-hand how much larger the inference market would become compared to the market for training deep learning and large language models.
Groq is developing a new kind of chip called the Language Processing Unit (LPU). In this new age of LLMs, the LPU is not designed for training new models but to maximise their inference speed. Groq’s chip aims to overcome the compute density and memory bandwidth bottlenecks observed when running LLMs on traditional GPUs. It does so by clustering together lots of LPUs, which enables faster computation. Additionally, Groq doesn’t use external memory; the models are fully loaded into its chips’ on-die memory, which removes the interruptions traditional GPUs must contend with when fetching model parameters from external RAM.
To demonstrate the impressive capabilities of their hardware, Groq has created a chat interface as well as an API that lets users interact with the best open-source models they host.
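To give a feel for what this looks like in practice, here is a minimal sketch of calling a Llama 3 model through Groq’s Python SDK. The model identifier and prompt are illustrative placeholders; my own library wraps this kind of call inside a Groq Agent rather than using it directly.

```python
# Minimal sketch: calling a Groq-hosted open-source model via the Groq Python SDK.
# Assumes `pip install groq` and a GROQ_API_KEY environment variable.
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

response = client.chat.completions.create(
    # Model ID is illustrative; use whichever Llama 3 variant Groq currently hosts.
    model="llama3-70b-8192",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise why inference speed matters for AI Agents."},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```

The interface mirrors the familiar chat-completions pattern, so swapping an existing agent onto Groq-hosted models is mostly a matter of changing the client and model name.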
I recently tried Llama 3 through both the web interface and the API and was blown away by the speed at which I was getting answers back. These experiences completely changed my views on how important speed is among the criteria for delivering amazing AI-based applications.
One of the common issues when building AI Agent systems is their low reliability. This problem becomes even more acute in enterprise settings, where low reliability is a deal breaker.
We expect upcoming models with more powerful reasoning capabilities to help mitigate these reliability issues, but I doubt this will be sufficient for agentic systems due to their more complex nature.
At the moment, adding a reflection step enables AI Agent systems to (self-)correct their answers. However, this extra step, which could involve multiple exchanges between agents, can add significant latency to an application. As a result, this step is sometimes skipped in AI Agent systems, thereby negatively impacting reliability.
With hardware like Groq that can deliver breathtaking tokens/second, implementing a reflection step becomes a very attractive proposition because it won’t significantly impact the latency of an agentic system. As a result, this encourages developers of AI systems to build more elaborate reflection modules, which will lead to even more reliable systems.
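As an illustration, here is a rough sketch of what such a reflection step can look like. The `call_llm` helper, the model name, and the prompts are hypothetical stand-ins for whatever client and agent roles a given system uses; Groq appears here only because its speed makes multiple reflection rounds cheap in wall-clock time.

```python
# Rough sketch of a generate -> critique -> revise reflection loop.
import os
from groq import Groq

_client = Groq(api_key=os.environ["GROQ_API_KEY"])

def call_llm(system: str, user: str, model: str = "llama3-70b-8192") -> str:
    # Hypothetical helper: any chat-completions client would work here.
    response = _client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content

def answer_with_reflection(task: str, max_rounds: int = 2) -> str:
    # Generate an initial draft, then let a "reviewer" prompt critique it.
    draft = call_llm("You are a capable assistant.", task)
    for _ in range(max_rounds):
        critique = call_llm(
            "You are a strict reviewer. List concrete errors or gaps, or reply APPROVED.",
            f"Task: {task}\n\nDraft answer:\n{draft}",
        )
        if "APPROVED" in critique:
            break  # Reviewer is satisfied; no further revisions needed.
        draft = call_llm(
            "Revise the draft to fully address the critique. Return only the revised answer.",
            f"Task: {task}\n\nDraft:\n{draft}\n\nCritique:\n{critique}",
        )
    return draft
```

Each reflection round adds two extra LLM calls, which is exactly why fast inference determines whether this pattern is practical or gets skipped.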
Most agentic workflows are currently very slow. This is because, unlike traditional chat exchanges with LLMs like ChatGPT, where only a few messages are sent and received, AI Agent-based systems involve a multitude of messages exchanged across multiple agents.
Currently, the latency of a call to a standard, non-accelerated LLM is measured in hundreds of milliseconds or even seconds. Since AI Agent-based systems exchange dozens (and sometimes more) of messages, it can take several minutes to get an output from the system.
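A quick back-of-envelope calculation makes the gap concrete. The message count and per-call latencies below are illustrative assumptions, not measurements.

```python
# Back-of-envelope estimate of end-to-end latency for a sequential multi-agent workflow.
# All numbers are illustrative assumptions.
messages_per_workflow = 30      # dozens of agent-to-agent exchanges
standard_latency_s = 2.0        # ~seconds per call on a typical GPU-backed API
accelerated_latency_s = 0.2     # ~hundreds of ms per call on faster inference hardware

print(f"Standard:    {messages_per_workflow * standard_latency_s:.0f} s total")
print(f"Accelerated: {messages_per_workflow * accelerated_latency_s:.0f} s total")
# Standard:    60 s -> stretches into minutes once retries and longer outputs pile up
# Accelerated:  6 s -> close to an interactive experience
```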
This is not a great user experience.
According to research from the usability experts at the Nielsen Norman Group, users implicitly anchor to 3 response time limits:
- 0.1 seconds: the response feels instantaneous.
- 1 second: the user notices the delay, but their flow of thought stays uninterrupted.
- 10 seconds: the limit for keeping the user's attention on the task.
In many usability studies, a delay of over 10 seconds could mean a user leaving a website or abandoning their interaction with a system. And even when users stick around for more than 10 seconds, they may have trouble understanding what is going on.
In a nutshell - speed matters to users.
With faster inference, a dozen AI Agent calls could complete about as fast as a single one does today, thereby reducing the overall latency of the system. Workloads that previously had to be presented as asynchronous to manage users’ expectations (and patience) could now be completed synchronously in a timely fashion. This would provide a much more natural, seamless user experience.
The truth is, more tokens/second unlocks so many more use cases that we haven’t yet fully envisioned.
For instance, OpenAI recently released their new flagship model, GPT-4o. While this model retains the “GPT-4” prefix, it differs in nature from its predecessor because it is natively multimodal.
GPT-4o was trained on text, images, and audio simultaneously. In contrast, the current GPT-4 was trained on text, so when you ask it on ChatGPT to analyse an input image it delegates to another model trained specifically on images. GPT-4o does all of this with a single model, without delegating to another one. There are, of course, other optimisations and tricks that the clever folks at OpenAI incorporated into this model too.
The result of all this hard work is a model that is 50% faster than GPT-4-turbo. Additionally, this native multimodality enables novel use cases like real-time audio translation, tutoring, and other low-latency interactions.
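As a concrete example, here is a minimal sketch of sending text and an image to GPT-4o in a single request via OpenAI’s Python SDK; the image URL is a placeholder, and the prompt is illustrative.

```python
# Minimal sketch: one request mixing text and an image, handled by a single
# natively multimodal model. Assumes `pip install openai` and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this image."},
                # Placeholder URL; point this at a real, publicly accessible image.
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The point is that one model and one call handle both modalities, which keeps latency low compared to routing the image through a separate vision model.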
With the current pace of innovation on model reasoning capabilities, speed, and competitive price, it feels like we are inching ever closer to a world where interactions with AI Agents won’t be as slow, clumsy, and pricey as in our present reality.
Of those 3 core criteria, I believe speed is the one that has been downplayed for too long. But I am glad to see that more people (including me) are realising how critical it is to creating better AI Agent systems.
And just like Steve Jobs used to proclaim “1,000 songs in your pocket” for the iPod, we may soon be able to claim “1,000 co-workers in the cloud” for AI Agents. In the near future we could even go as far as claiming infinite intelligence in your pocket, once we can run very powerful models at low latency on our smartphones and other devices.
This is a brave new world. We at Kiseki Labs are excited to be a part of it and contribute to it.