Multi-Agent LLMs: Unlocking New Possibilities Beyond ChatGPT and Gemini

Srinath Sridharan
4 min read · Sep 22, 2024


From LLMs to Agent-Based Systems

Many of us have used LLMs such as ChatGPT, Gemini, and Claude to answer questions, summarize information, or draft content. These models respond fluently to prompts and can process vast amounts of text, making them invaluable for tasks like medical research or drafting clinical notes.

However, traditional LLMs have a few key limitations:

  • One prompt, one response: Each interaction is independent, so for complex workflows you must supply a fresh prompt at every step.

Imagine you’re using ChatGPT to troubleshoot a medical device issue or handle patient queries in a hospital. For every new piece of information, you’d need to enter a new prompt, manually feeding the AI more details, adjusting, and iterating. This quickly becomes inefficient, especially when the task involves multiple steps or requires input from different specialists.

  • No collaboration between outputs: Traditional LLMs are single agents: each response is isolated, and the model has no built-in ability to coordinate work across different tasks.
  • No built-in tools or memory: While LLMs excel at generating responses, they can’t search the web, read documents, or store memories across interactions without…
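The first limitation is easy to see in code. A minimal Python sketch of the "one prompt, one response" pattern, where `call_llm` is a hypothetical stand-in for a real chat-completion API (not any specific vendor's SDK): the model itself is stateless, so the caller must re-send the entire conversation history on every turn.

```python
def call_llm(messages):
    """Placeholder for a chat API call; returns a canned reply for illustration."""
    last = messages[-1]["content"]
    return f"(model reply to: {last})"

# The caller, not the model, owns the memory.
history = []

def ask(prompt):
    history.append({"role": "user", "content": prompt})
    reply = call_llm(history)  # the full history is re-sent on every turn
    history.append({"role": "assistant", "content": reply})
    return reply

ask("Summarize the device error log.")
ask("Now draft a note for the on-call clinician.")

# After two turns the caller is tracking four messages;
# the model itself has remembered none of them.
print(len(history))  # 4
```

Every turn grows the payload the caller must manage and transmit, which is exactly the bookkeeping that agent frameworks automate by wrapping the model in a loop with persistent state and tool access.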
