
Anatomy of a Prompt
- Persona – the role the model should play
- Instructions – the task you want it to perform
- Input content – the data it should work from
- Format – what you want the output to look like
- Additional information – e.g. any constraints
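The five parts above can be sketched as a prompt template. This is a minimal illustration; the persona, task, and review text are made-up placeholders.

```python
# Assembling a prompt from the five parts: persona, instructions,
# input content, format, and additional constraints (all illustrative).
persona = "You are an experienced customer-support analyst."
instructions = "Summarize the main complaints in the reviews below."
input_content = "Review 1: The app crashes on startup.\nReview 2: Login is slow."
output_format = "Answer as a bulleted list, one complaint per bullet."
constraints = "Keep the summary under 50 words."

# Join the parts with blank lines so each section is visually distinct.
prompt = "\n\n".join([persona, instructions, input_content, output_format, constraints])
print(prompt)
```

Keeping the parts as separate variables makes it easy to swap in different personas or constraints without rewriting the whole prompt.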
Categories of Prompts
Zero-shot prompting – no examples are provided, so the model relies on its general knowledge to complete the response, e.g. summarization and sentiment analysis.
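A zero-shot prompt states the task with no worked examples. A minimal sketch, with illustrative article text:

```python
# Zero-shot: the task instruction alone, relying on the model's
# general knowledge (the article text is a made-up placeholder).
article = "The city council voted 7-2 to expand the bike lane network downtown."
prompt = f"Summarize the following article in one sentence:\n\n{article}"
print(prompt)
```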
Few-shot prompting – you provide a handful of examples as input and ask the model to learn from them and answer new cases/questions. The model will imitate the examples' style and output format.
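A few-shot prompt can be built by prepending labeled examples before the new case. A sketch, assuming a simple sentiment-labeling task with made-up reviews:

```python
# Few-shot: the labeled examples teach the model the expected
# "Review: ... / Sentiment: ..." format before the new case is posed.
examples = [
    ("The product arrived broken.", "negative"),
    ("Absolutely love this phone!", "positive"),
]
new_input = "Delivery was fast but the box was damaged."

lines = ["Classify the sentiment of each review as positive or negative.", ""]
for text, label in examples:
    lines.append(f"Review: {text}")
    lines.append(f"Sentiment: {label}")
    lines.append("")
lines.append(f"Review: {new_input}")
lines.append("Sentiment:")  # left open for the model to complete

prompt = "\n".join(lines)
print(prompt)
```

Ending the prompt mid-pattern ("Sentiment:") encourages the model to continue in the same format rather than write free-form prose.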
Chain-of-thought prompting – you ask the model to work through a problem step by step to produce a logical, well-reasoned plan or answer.
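A chain-of-thought prompt simply adds an explicit instruction to show intermediate reasoning. A sketch with an illustrative word problem:

```python
# Chain-of-thought: the "step by step" instruction nudges the model
# to show its intermediate reasoning before the final answer.
question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"
cot_prompt = (
    f"{question}\n"
    "Work through this step by step, showing each calculation, "
    "then state the final answer on its own line."
)
print(cot_prompt)
```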
Common Issues
The following issues can often be mitigated through better prompting:
- Accuracy – ask the model to cite its sources, or to say it doesn't know when it is unsure.
- Coherence – use chain-of-thought prompting to guide it, or ask it to structure its response.
- Bias – ask it to use inclusive language, and to do a self-review of its output.
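The self-review idea above can be implemented as a second prompt that asks the model to audit its own draft. A sketch; the draft text is a made-up example:

```python
# Self-review pass: a follow-up prompt asks the model to audit a draft
# for non-inclusive language and unsupported claims (draft is illustrative).
draft = "Every salesman should update his pipeline daily."
review_prompt = (
    "Review the text below. Flag any non-inclusive language and any claims "
    "made without a cited source, then suggest a corrected version.\n\n"
    f"Text: {draft}"
)
print(review_prompt)
```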
Common Parameters
Most models, such as Gemini and ChatGPT, expose settings that can be adjusted to produce different output. Some that are common across popular models are:
Max tokens – caps the length (and therefore the cost) of the response.
Temperature – controls how creative or predictable the AI is. Higher = more creative.
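Under the hood, temperature divides the model's logits before the softmax, which is why low values sharpen the distribution and high values flatten it. A minimal sketch with made-up logits:

```python
import math

# Temperature scaling: logits are divided by T before softmax.
# Low T sharpens the distribution; high T flattens it (more random).
def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 0.5))  # sharper: top token dominates
print(softmax_with_temperature(logits, 2.0))  # flatter: choices more even
```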
Top P (nucleus sampling) – controls randomness by limiting the model's choices to the smallest set of candidate tokens whose cumulative probability reaches P.
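The filtering step of nucleus sampling can be sketched in a few lines: sort tokens by probability, keep the smallest set whose cumulative probability reaches P, and renormalize. The token probabilities here are made up for illustration:

```python
# Minimal sketch of top-p (nucleus) filtering: keep the smallest set of
# tokens whose cumulative probability reaches p, then renormalize so the
# kept probabilities sum to 1. Sampling would then draw from this set.
def top_p_filter(probs: dict[str, float], p: float) -> dict[str, float]:
    kept = {}
    total = 0.0
    for token, prob in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[token] = prob
        total += prob
        if total >= p:
            break
    return {t: pr / total for t, pr in kept.items()}

probs = {"cat": 0.5, "dog": 0.3, "emu": 0.15, "yak": 0.05}
print(top_p_filter(probs, 0.9))  # keeps "cat", "dog", "emu"; drops "yak"
```

With p = 0.9, the cumulative probability passes 0.9 only after "emu" is included (0.5 + 0.3 + 0.15 = 0.95), so the long-tail "yak" is cut off.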
Stop sequence – specify a string that tells the LLM to stop generating once it appears. Useful, for example, if you want short answers rather than long, wordy ones with extra explanation.
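The effect of a stop sequence can be sketched as simple truncation: everything from the stop string onward is cut off. The raw model output below is a made-up example:

```python
# Sketch of how a stop sequence behaves: generation halts when the
# sequence appears, so only text before it is returned.
def apply_stop(generated: str, stop: str) -> str:
    idx = generated.find(stop)
    return generated if idx == -1 else generated[:idx]

raw = "Paris.\n\nExplanation: Paris has been the capital of France since..."
print(apply_stop(raw, "\n\nExplanation:"))  # prints "Paris."
```

Here the stop sequence "\n\nExplanation:" trims the verbose follow-up, leaving just the short answer.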