Understanding How AI Models Work
Large language models (LLMs) like GPT-4, Google's Gemini, Claude, and Notion AI are trained on extensive datasets containing millions of books, web pages, and other texts. They analyze language by breaking it into small parts, called tokens, and learning statistical patterns in which tokens tend to follow one another. When you prompt an AI, it uses these patterns to predict a likely response, one token at a time. For straightforward prompts, this process is quite simple.
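To make the idea concrete, here is a deliberately tiny sketch of next-token prediction. Word-level bigram counts over a toy corpus stand in for the far richer statistics a real LLM learns over subword tokens; the corpus and function names are illustrative only.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on vastly larger datasets and use
# subword tokens rather than whole words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token follows each other token (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(token):
    """Return the most frequent follower of `token` in the toy corpus."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often (2 of 4 times)
```

A real model ranks every token in its vocabulary by probability and samples from that ranking, which is one reason its responses can be creative rather than purely deterministic.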
However, LLMs sometimes produce creative and unexpected responses. While we may not fully understand how these models work, we can still learn to guide them effectively.
Talking to Your Model Like It’s Human
Speak Normally
Generative AI models are designed to understand natural language, unlike Siri or Google Assistant, which rely on specific phrases. Since these models are trained on conversational dialogue, speaking to them as you would to a person will yield more human-like responses.
Be Concise
Keep your prompts clear and to the point. Plain language reduces the chance the model misinterprets you. Avoid negative phrasing like "Don't use negative phrases," because the AI might focus on the "do" and miss the "not." Instead, phrase your prompts positively:
BAD: Do not include incomplete lists.
GOOD: Only include complete lists.
Providing Complete Information
Giving Your Model an Identity
To get your AI to perform tasks relevant to a specific role, assign it an identity. For example, if you want it to act as a market analyst, start your prompt with "You're a market analyst." This approach works because LLMs emphasize token patterns associated with the given identity.
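In chat-style APIs, an identity like this is typically placed in a dedicated "system" message rather than mixed into the question itself. A minimal sketch, using the common OpenAI-style message schema purely as an illustration (no API call is made here):

```python
# Many chat APIs accept a list of role-tagged messages. The "system"
# message is the conventional place for the model's identity; the
# "user" message carries the actual request.
identity = "You're a market analyst."
question = "Where in the U.S. should my company sell our camping gear products?"

messages = [
    {"role": "system", "content": identity},
    {"role": "user", "content": question},
]

# This `messages` list is what you would pass to a chat completion call.
print(messages[0]["content"])  # You're a market analyst.
```

Keeping the identity in its own message makes it easy to reuse across a whole conversation while the user messages change.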
Be Specific
Ensure your prompts are specific. General prompts may not provide meaningful responses, but adding detail helps. For instance:
General: You’re a market analyst. Where should we sell our products?
Specific: You’re a market analyst. Where in the U.S. should my company sell our camping gear products?
Refining prompts with minor adjustments can lead to significantly better responses.
Avoiding Errors and Producing Great Results
Be Thorough
While concise prompts are important, being thorough is equally crucial. LLMs can handle vast amounts of data, so providing detailed instructions helps them generate more accurate responses. For example, instead of a brief prompt, use something like:
"You’re a market analyst. Tell me which U.S. cities are the best to sell a new line of camping gear in, including evidence to support your choices and suggesting which camping items should sell best in each city."
Add Lines to Prevent Bad Results
Anticipate potential errors by adding clarifying sentences. For instance:
"You’re a market analyst. Tell me which U.S. cities are the best to sell a new line of camping gear in, including evidence to support your choices and suggesting which camping items should sell best in each city. I only want to sell gear in cities that get at least six inches of snow per year."
This helps the model focus on relevant criteria and avoid irrelevant results.
Using Examples to Guide AI Responses
Input-Output Example (Few-Shot Example)
LLMs excel at language manipulation. To refine their responses, provide an example of the input and the desired output. For example, if you need a report, start with a sample prompt and desired response, then ask the AI to follow that format.
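One way to apply this is to assemble the example pairs and the new input into a single prompt string, so the model can infer the format from the examples. A minimal sketch; the helper name and sample report text are hypothetical:

```python
def build_few_shot_prompt(examples, new_input):
    """Assemble a few-shot prompt from (input, output) example pairs.

    The final "Output:" is left blank so the model completes it in the
    same format as the examples.
    """
    parts = []
    for sample_in, sample_out in examples:
        parts.append(f"Input: {sample_in}\nOutput: {sample_out}")
    parts.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(parts)

examples = [
    ("Q3 revenue rose 12% on strong camping gear sales.",
     "Summary: Revenue up 12%, driven by camping gear."),
]
prompt = build_few_shot_prompt(
    examples, "Q4 revenue fell 3% amid supply delays.")
print(prompt)
```

Even one or two well-chosen examples are often enough to lock the model into the desired output format.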
Summing Up
Artificial intelligence holds incredible potential, but the key to unlocking its power lies in crafting precise and thoughtful prompts. AI serves as a co-pilot, assisting and enhancing our capabilities, but the ultimate control and direction come from us. By continually updating our knowledge and refining our communication with AI, we can achieve remarkable outcomes. Remember, the more effort you invest in guiding and training AI, the more beneficial and impressive the results will be.