One of the key topics of discussion around generative AI is prompt engineering: how to ask large language model (LLM) tools the right questions to get the detailed, accurate responses you are looking for.
When providing prompts, be as clear and specific as possible with the tool:
- provide it with context, for example by giving it a job role and/or sector specialism, such as telling it that it is an accountant for a veterinary practice. This helps set it off on the right track.
- give it an example of what good looks like, for instance by asking it to provide the output in a certain form, such as “give me the output as if I’m putting this in my financial reports”. If possible, copy and paste an example of what you want and ask it to produce the output in that format, or “like this”.
- review the answer to verify whether it is answering the question you meant to ask, or only the question you literally asked. Human beings can interpret the intent behind a question even when it is worded in different ways, but generative AI won’t notice if the question itself contains a hidden assumption, so check that you are getting the answer to the question you wanted, not just the question you asked.
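The first two tips can be sketched as a simple prompt template. This is a minimal illustration, not a real API call: the function, role, example and task text are all hypothetical placeholders showing how context and a “what good looks like” example might be assembled into one prompt.

```python
# Minimal sketch: assembling a prompt that combines context (a role),
# an example of the desired output format, and the task itself.
# All names and text here are illustrative, not from a real engagement.

def build_prompt(role: str, format_example: str, task: str) -> str:
    """Combine a role, a 'what good looks like' example and the task."""
    return (
        f"You are {role}.\n\n"
        "Produce your answer in the same format as this example:\n"
        f"{format_example}\n\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    role="an accountant for a veterinary practice",
    format_example="Revenue: £120,000 | Costs: £85,000 | Net: £35,000",
    task="Summarise last quarter's figures for my financial reports.",
)
print(prompt)
```

The resulting string would then be pasted into (or sent to) whichever generative AI tool you use; the point is that role, example and task travel together in one clear prompt.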
These tips apply to human beings as well, so if your people are already used to working in a team and delegating tasks, prompting can be thought of in a similar way. The difference is that when prompting a generative AI model, you must assume the tool has no prior contextual, cultural or other knowledge, and provide all the information you think it would need to give the right answer.
AI prompt engineering webinar
Learn what we mean by prompt engineering and how we can use it to get the right responses from generative AI tools.