From Oops to Awesome: 5 Common Prompting Mistakes with LLMs

Srinath Sridharan
3 min read · Jun 14, 2024

Getting the most out of Large Language Models (LLMs) like ChatGPT, Gemini, or LLaMA is like baking the perfect batch of cookies: one wrong step and the whole thing is a mess! Even the best data scientists can make mistakes that turn great questions into confusing answers.

So, let’s have some fun and learn how to avoid five common mistakes. I will show you how to turn your prompts from “oops” to “awesome” with easy examples.

Image generated by OpenAI’s DALL-E

1. Being Too Vague

Mistake: Being too vague is like asking a friend, “What’s up?” and expecting a life story. LLMs need details to give you a good answer.

Ineffective Prompt: Tell me about healthcare.

Effective Prompt: Provide a summary of the latest advancements in telemedicine for chronic disease management.

Explanation: The first prompt is so broad that the model could wander across almost any aspect of healthcare. The second specifies the area of interest (telemedicine) and the context (chronic disease management), which leads to a far more targeted and useful response.
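The same rule applies when you call a model from code instead of a chat window. Here is a minimal sketch comparing the two prompts above, assuming the OpenAI Python SDK and an OPENAI_API_KEY in your environment; the model name is just a placeholder, so swap in whichever model you actually use.

```python
# Minimal sketch: send a vague prompt and a specific prompt and compare replies.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Tell me about healthcare."
specific_prompt = (
    "Provide a summary of the latest advancements in telemedicine "
    "for chronic disease management."
)

for label, prompt in [("vague", vague_prompt), ("specific", specific_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content[:300])  # preview the first 300 characters
```

Run it once and you will see the difference immediately: the vague prompt tends to produce a grab-bag overview, while the specific one stays on topic.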

2. Using Complex Language

Mistake: Using fancy words can confuse LLMs. Keep it simple, and you’ll get better answers.
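As a quick, hypothetical sketch (the prompts below are illustrative placeholders, not examples from a real project, and the setup again assumes the OpenAI Python SDK with an OPENAI_API_KEY set), compare a jargon-heavy request with a plain-language version of the same question:

```python
# Hypothetical sketch: the same request phrased two ways.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; model name is a placeholder.
from openai import OpenAI

client = OpenAI()

# Jargon-heavy phrasing: harder for the model to pin down what you actually want.
jargon_prompt = (
    "Elucidate the multifactorial determinants underpinning suboptimal "
    "medication adherence in polypharmacy cohorts."
)

# Plain phrasing of the same request: shorter words, same intent.
plain_prompt = (
    "Explain the main reasons patients on many medications stop taking "
    "them as prescribed."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": plain_prompt}],
)
print(response.choices[0].message.content)
```

The plain version usually gets a clearer, better-organized answer, and it is also easier for the next person reading your prompt to understand.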

