Anyone can use ChatGPT to make their life easier.
All it takes is one question, and ChatGPT, or any other AI tool, can spew out pages of answers that you can copy and paste.
"The hottest new programming language is English." -
Andrej Karpathy (former OpenAI researcher and founding member)
Rather than worrying about learning Python, C++, or Java, you can now just speak and write in English to get top-quality results.
But whilst the output you’re already getting is good, it can come across as generic and still sound much like an AI.
You’ve probably heard all about prompting. Maybe you’ve even taken a course on it.
Prompting is like giving clear instructions or setting the stage for an AI to perform a task. It’s about telling the AI exactly what you want it to do or how to behave by writing a specific question, command, or scenario.
It’s getting easier to ‘prompt’ the AI, i.e. ask it to do something, but how you ask is a powerful lever for getting an output that’s even better than you anticipated. The better the prompt, the better the output.
If you really want to stand out, there are five techniques you can apply to improve the quality of what you’re getting back.
What are these techniques and how can you start using them to see an improvement in your GPT output?
Think of your AI as a very smart (and confident) intern. Speak to it as you would any other person, by being clear and concise with your request. Throughout this article, I will refer to your AI / Large Language Model as ‘your intern’ to make it simpler to follow.
Here’s an easy way to remember what I’m about to show you.
"Robots Teach Small Cats Every Night."
R = Role
T = Task
S = Specifics
C = Context
E = Examples
N = Notes
When creating your prompt, rather than just asking one simple question, you will add each of these layers on top of each other, to form a longer (and therefore clearer) prompt.
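If you like to build your prompts in code rather than by hand, here’s a minimal sketch of stacking the six layers into one prompt. The function name and the layer text are mine, purely for illustration; use whatever wording suits your task.

```python
def build_prompt(role, task, specifics, context, examples, notes):
    """Stack the RTSCEN layers, in order, into a single prompt string."""
    sections = [
        ("Role", role),
        ("Task", task),
        ("Specifics", specifics),
        ("Context", context),
        ("Examples", examples),
        ("Notes", notes),
    ]
    # Skip any layer left empty so the prompt stays concise.
    return "\n\n".join(f"{name}:\n{text}" for name, text in sections if text)

prompt = build_prompt(
    role="You are a highly skilled and creative proposal writer.",
    task="Generate 10 ideas to streamline my accounting services.",
    specifics="- Each idea should have a comparison to another industry.",
    context="Our company provides architectural solutions to businesses.",
    examples="Example idea: 'Automate invoice matching, as logistics firms do.'",
    notes="Format your ideas in bullet points.",
)
print(prompt.splitlines()[0])  # → Role:
```

Paste the resulting string into your chat window, or send it via whichever API you already use.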
1. Role
You may have heard of research showing that when you assign your intern a specific role to play in your interaction, it gives you better answers.
This is because it helps to immerse the model in the role and allows it to take on qualities that will help it perform better. This is called role-based prompting.
Example prompt:
You are a highly skilled and creative proposal writer with a knack for crafting engaging, informative and concise slide decks.
Let’s deconstruct it:
In this example, the role is established as a highly skilled and creative proposal writer.
And its key qualities (engaging, informative and concise) are highlighted to emphasise the model’s aptitude in this role.
How does this help?
Research shows that when you assign an advantageous role, accuracy increases by 10.3%. When you add complementary descriptions of its abilities, accuracy increases by a further 15-25%!
Be sure to choose a role that provides an advantage when it comes to completing a specific task, e.g. Maths teacher for maths problems.
2. Task
In ‘task’ we provide our intern with a direct description of what we want it to do.
Always start with a verb (e.g. generate, write, analyse), and keep it brief, descriptive, and precise.
Example prompt:
Generate 50 ideas to streamline my accounting services in the architecture industry, especially focusing on how to integrate AI tools to scale my business.
Make the content clear, concise and easy to understand for a general audience. Use this step-by-step process to ensure your ideas are well received:
Identify 3 adjacent markets to architecture and show me how they use AI to scale their accounting services.
Using these examples, identify 50 ideas across an entire fiscal year, which will allow me to streamline my services and scale them using AI.
Let’s deconstruct it:
Niche: architecture industry
Offer: integration of AI tools to scale accounting services in my business.
Here we are using a style of prompting called ‘chain-of-thought.’ This involves telling the intern to think step-by-step in our instructions, or better yet, provide it with a step-by-step process to work through each time.
How does this help?
Using this style of prompting in the ‘task’ part of your prompt can give you a 10% accuracy boost on simple problems, and a 90% accuracy boost on complex, multi-step problems!
The more complex the problem, the more dramatic the improvement from using chain-of-thought prompting.
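If you’re assembling tasks in code, the step-by-step instruction is easy to bolt on. This is a sketch only; the helper name and steps are mine, not a fixed recipe.

```python
def with_steps(task, steps):
    """Append a numbered step-by-step process to a task description,
    nudging the intern into chain-of-thought reasoning."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    return f"{task}\nUse this step-by-step process:\n{numbered}"

prompt = with_steps(
    "Generate 50 ideas to streamline my accounting services.",
    [
        "Identify 3 adjacent markets to architecture.",
        "Show how they use AI to scale their accounting services.",
        "Derive 50 ideas across a fiscal year from those examples.",
    ],
)
```

The key point is giving the model the process, not just the goal.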
3. Specifics
This is our chance to list out the most important points to do with how our intern completes our task.
By using a list format, we can easily add on new instructions as we test and improve our prompt.
Less is more here, so don’t pile on the fluff.
Example prompt:
Generate 50 ideas to streamline my accounting services in the architecture industry, especially focusing on how to integrate AI tools to scale my business.
Make the content clear, concise and easy to understand for a general audience. Use this step-by-step process to ensure your ideas are well received.
Identify 3 adjacent markets to architecture and show me how they use AI to scale their accounting services.
Using these examples, identify 50 ideas across an entire fiscal year, which will allow me to streamline my services and scale them using AI.
Below are the details you must generate for:
Each idea should have a comparison to another industry.
List out a key benefit for each idea.
Personalize the idea to my business by linking it to an area that I am struggling with.
Etc.
This task is very important to my career.
Let’s deconstruct it:
Here we’ve added extra points of clarification for the intern to think about when it produces its ideas. It’s at this point that you would do most of your tweaks if your intern isn’t giving you what you want.
We’ve also added a style called ‘EmotionPrompt’ (“This task is very important to my career”), which is a short sentence or phrase that contains an emotional stimulus to enhance the output.
How does this help?
Using emotional stimuli like “this is very important to my career,” or “this task is vital to my career, I greatly value your thorough analysis,” can improve your intern’s performance by 8% on simple tasks and a whopping 115% on complex tasks!
It also enhances the intern’s truthfulness by an average of 19% and informativeness by an average of 12%.
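In code, EmotionPrompt is just a one-line suffix. A tiny sketch, with stimuli taken from the phrases above (the helper name is mine):

```python
# Emotional stimuli shown to improve output quality ("EmotionPrompt").
EMOTION_STIMULI = [
    "This task is very important to my career.",
    "This task is vital to my career; I greatly value your thorough analysis.",
]

def with_emotion(task, stimulus=EMOTION_STIMULI[0]):
    """Append an emotional stimulus to the end of a task description."""
    return f"{task}\n\n{stimulus}"

prompt = with_emotion("Generate 50 ideas to streamline my accounting services.")
```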
4. Context
Providing context tells your intern what environment it’s operating in and why it’s doing this specific task.
Example prompt:
Our company provides architectural solutions to businesses across various industries. We receive a high volume of emails from potential clients through our website contact form. Your role in classifying these emails is essential for our sales team to prioritize their efforts and respond to inquiries in a timely manner. By accurately identifying opportunities and emails that need attention, you directly contribute to the growth and success of our company. We greatly value your careful consideration.
Here are the types of emails we want you to classify:
Email 1:
Email 2:
Etc.
Let’s deconstruct it:
In this part of the prompt, you can combine role prompting (further clarifying who your intern is, what it’s doing, and why it’s doing it) with EmotionPrompt, to explain your intern’s role in the success of your business or society as a whole.
In this example we’re also using a prompting style called ‘Few-Shot Prompting.’
This is where you provide your intern with 3-5 examples to increase its performance and fine-tune its response, whether that’s in tone, format, or length.
How does this help?
Studies have shown that the more examples you use, the greater the accuracy of the output.
GPT-3 (175B parameters) achieved an average 14.4% improvement over using no examples, compared to a 57.4% improvement when using 32 examples per task.
When you’re testing your prompt out, take the examples your intern finds most confusing and use those as the examples inside your prompt. This creates even better results.
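Here’s a minimal sketch of folding labelled examples into a classification prompt, matching the email-sorting scenario above. The emails, labels, and function name are all made up for illustration:

```python
def few_shot_prompt(instruction, examples, new_input):
    """Build a few-shot prompt: instruction, labelled examples,
    then the new input left for the intern to classify."""
    shots = "\n\n".join(
        f"Email: {text}\nCategory: {label}" for text, label in examples
    )
    return f"{instruction}\n\n{shots}\n\nEmail: {new_input}\nCategory:"

prompt = few_shot_prompt(
    "Classify each email as 'opportunity' or 'other'.",
    [
        ("We'd like a quote for a 40-storey office tower.", "opportunity"),
        ("Please unsubscribe me from your newsletter.", "other"),
        ("Can your team redesign our campus by Q3?", "opportunity"),
    ],
    "Is your firm available for a hospital extension project?",
)
```

Ending the prompt at “Category:” invites the intern to complete the pattern your examples establish.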
5. Notes
This is your last chance to remind your intern of key aspects of the task and add final details to tweak the outputs.
Example prompt:
When generating your ideas:
Format your ideas in bullet points.
Write your ideas in a friendly, yet science backed manner.
Remember to personalize the idea to the specific things I’m struggling with
Etc.
Let’s deconstruct it:
The notes section usually starts out skinny, and you add to it after a few rounds of testing out your prompt.
We include notes because of the ‘Lost-in-the-Middle’ effect. Your intern is not very good at remembering the things you said in the middle of your prompt and tends to focus its attention on the beginning and end of the prompt instead.
To counter this, we put the relevant info at the beginning (primacy) or the end (recency) of the prompt.
How does this help?
Even when your intern can take in long prompts, its performance significantly worsens when critical information is in the middle of the prompt.
By putting that information at the beginning or end instead, you can increase accuracy by 25% (GPT-3.5 Turbo).
Plus, less context or fluff will mean that the remaining instructions are more likely to be followed.
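One simple way to exploit primacy and recency in code is to “sandwich” the body of your prompt between two copies of the critical instruction. A sketch under my own naming, not a fixed rule:

```python
def sandwich(critical, body):
    """Place the critical instruction at both the start (primacy)
    and the end (recency) of the prompt, around the main body."""
    return f"{critical}\n\n{body}\n\nRemember: {critical}"

prompt = sandwich(
    "Only suggest ideas that involve AI tools.",
    "Generate 50 ideas to streamline my accounting services...",
)
```

The repetition costs a few tokens but keeps the must-follow instruction out of the weak middle zone.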
Two key things to consider as we wrap up.
Try not to make your prompts too long. Every time you run your prompt, you’re charged for both the prompt that you’ve inputted and the answer that your intern gives you.
The better you are at prompting, the lower-spec intern you can use. You can get a lot more for your money if you’re good at prompting!
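To see why length matters, here’s a back-of-the-envelope cost sketch. The per-token rates below are placeholders, not real pricing; check your provider’s current price list.

```python
def estimate_cost(prompt_tokens, output_tokens,
                  rate_per_1k_input=0.0005, rate_per_1k_output=0.0015):
    """Rough cost of one run: you pay per token for both the prompt
    you send in and the answer you get back. Rates are illustrative."""
    return (prompt_tokens / 1000) * rate_per_1k_input \
         + (output_tokens / 1000) * rate_per_1k_output

# A 2,000-token prompt that produces a 1,000-token answer:
cost = estimate_cost(2000, 1000)
```

Trim fluff from the prompt and you pay less on every single run.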
That’s it for today’s article.
Remember: Robots Teach Small Cats Every Night.
Happy prompting!
-Selda
P.S. If you enjoyed this article, you may enjoy my 60-minute masterclass on how to become the ‘go-to’ expert on AI within your workplace. I’ve distilled my proven consulting method on how to skill up fast, and applied it to AI.
P.P.S. Want to explore ways that AI can enhance your business? Book a free 15-minute call now.
Links to studies:
Boosting CI/CD Automation with AI: Role Prompting in DevOps: https://dev.to/charity_everett/boosting-cicd-automation-with-ai-role-prompting-in-devops-2a5d
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models: https://arxiv.org/abs/2201.11903
Self-Consistency Improves Chain of Thought Reasoning in Language Models: https://arxiv.org/abs/2203.11171
Large Language Models Understand and Can Be Enhanced by Emotional Stimuli: https://ar5iv.labs.arxiv.org/html/2307.11760
Prompt Input With Emotion Improves LLM Performance: https://ai-scholar.tech/en/articles/prompting-method%2Femotion-prompt
OpenAI GPT-3: Everything You Need to Know: https://www.springboard.com/blog/data-science/machine-learning-gpt-3-open-ai/