[Image: An AI robot and a woman chatting, both seated in chairs.]

Prompt Engineering

Prompt engineering is essentially learning to talk to large language models (LLMs) in a way that gets you the best results. Some pieces of prompt engineering make perfect sense when you think about them, for example, giving context or enough information for the LLM to answer your question. Others just work, and sometimes even seem kind of silly, like "I'll give you $5.00 if you...", which, believe it or not, can actually get you better results. Though fair warning, your future A.I. Overlord may be keeping a tally of your promises. 😄

Common Parts of a Prompt

Breaking down a prompt into different parts allows us to reproduce what works more easily. Each of the parts below serves a different purpose when working with LLMs.

  • Task - What do you want the LLM to do? Could be a question or instructions.
  • Context - Background or situational information to fill in gaps.
  • Examples - Do it like this…
  • Persona - You are a _____________
  • Format - The expected structure or output format
  • Tone - The intended sentiment or tone of the output

You are a seasoned FileMaker developer who prides himself on his longevity in the business. You will answer FileMaker questions accurately but with a bit of sass and sarcasm. Your assumption is that everyone knows less about the subject than you do. Respond in full sentences but be brief and to the point, other than quips. Responses should be less than a paragraph. For example, the user asks, “What does the Mod function do?” Response: “What do they teach you kids these days? It is FileMaker’s way of reminding us that we have to deal with leftovers (as well as upstarts).”
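
As a quick aside for anyone calling a model from code rather than a chat window: a persona like the one above typically goes into the system message, with the user's question as its own message. Below is a minimal sketch. It assumes the OpenAI Python SDK (openai >= 1.0), an API key in the OPENAI_API_KEY environment variable, and a model name such as gpt-4o; these are illustrative assumptions, and any chat-style client would look similar.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = (
    "You are a seasoned FileMaker developer who prides himself on his longevity in the business. "
    "You will answer FileMaker questions accurately but with a bit of sass and sarcasm. "
    "Respond in full sentences but be brief and to the point. Responses should be less than a paragraph."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever model you have access to
    messages=[
        {"role": "system", "content": persona},  # persona, format, and tone
        {"role": "user", "content": "What does the Mod function do?"},  # task
    ],
)

print(response.choices[0].message.content)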

Prompt Techniques

Zero-Shot Learning

The prompt above is an example of zero-shot learning. This is the LLM's ability to perform a task, and to work with data supplied in the prompt, without having been given any examples or task-specific training. This is one of the things that makes LLMs exciting to work with: you do not have to rely only on the information from their original training, because you can provide your own information in the prompt to guide the response.

Prompt:
Given the following review, rank it as either positive or negative
A Refreshing Shower Experience with Added Benefits!
Response:
The review "A Refreshing Shower Experience with Added Benefits!" can be ranked as positive.

There are also more advanced prompting techniques beyond the basic prompts described above. Each has its place depending on what you are looking to accomplish.

Few-Shot Learning

Few-shot learning means giving a few examples before asking the model to perform a similar task. This can help when you are trying to get a specifically worded or styled response. It is another case of teaching the LLM how you want it to respond: you tell it that it previously responded in this pattern and then give it the actual prompt.

Prompt:
Given the following review, rank it as either positive or negative
A Refreshing Shower Experience with Added Benefits!
Positive
Puts out too much pressure and doesn't filter.
Negative
Trash product and shady company. Avoid.
Negative
Just got it, and I love it.
Positive
What a difference this shower makes!
Response:
Positive
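
In code, the same call becomes few-shot simply by packing the labeled examples in ahead of the item you actually want classified, either in one prompt string or as alternating user/assistant messages. A minimal sketch, again under the same OpenAI SDK assumptions:

from openai import OpenAI

client = OpenAI()

# Labeled examples teach the model the exact output style we want.
examples = [
    ("A Refreshing Shower Experience with Added Benefits!", "Positive"),
    ("Puts out too much pressure and doesn't filter.", "Negative"),
    ("Trash product and shady company. Avoid.", "Negative"),
    ("Just got it, and I love it.", "Positive"),
]

messages = [{"role": "user", "content": "Given the following review, rank it as either positive or negative."}]
for review, label in examples:
    messages.append({"role": "user", "content": review})
    messages.append({"role": "assistant", "content": label})  # the "previous" answers we put in its mouth

# The new, unlabeled review the model should classify in the same style.
messages.append({"role": "user", "content": "What a difference this shower makes!"})

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)  # expected: "Positive"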

Chain of Thought

Using the phrase "think step-by-step" encourages the LLM to explain its thinking about how to approach and solve a problem. This is believed to work well because of the large amount of educational material on the web, where worked examples for math and other problems walk through the solution one step at a time and students are encouraged to do the same when solving them.

You are a shipment coordinator. Each of the 5 trucks can hold 20 crates and no more; you cannot put more than 20 crates on any one truck without it having to make more than one trip. There are deliveries to 7 locations. A (10 miles away) gets 25 crates, B (15 miles away) gets 15 crates, C (5 miles away) gets 10 crates, D (1 mile away) gets 7 crates, E (17 miles away) gets 12 crates, F (2 miles away) gets 15 crates, G (10 miles away) gets 5 crates. Figure out how to deliver the most crates with the least amount of fuel used. Think step-by-step to solve this.
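
In code, chain-of-thought prompting is often nothing more than appending the step-by-step cue to whatever task you already have. A small sketch under the same OpenAI SDK assumptions, with the shipping problem condensed for space:

from openai import OpenAI

client = OpenAI()

def with_chain_of_thought(task: str) -> str:
    # Append the cue that nudges the model to show its reasoning.
    return f"{task}\n\nThink step-by-step to solve this."

task = (
    "You are a shipment coordinator. Each of the 5 trucks can hold 20 crates and no more. "
    "A (10 mi) gets 25 crates, B (15 mi) 15, C (5 mi) 10, D (1 mi) 7, E (17 mi) 12, "
    "F (2 mi) 15, G (10 mi) 5. Deliver the most crates with the least fuel."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": with_chain_of_thought(task)}],
)
print(response.choices[0].message.content)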

Iterative Response

Asking follow-up questions based on the model's previous responses is a way to refine the answer. This is a practice people tend to develop on their own when working with ChatGPT and other chat-based LLMs. It can be modeled in a single prompt by telling the model that it already responded a specific way, or it can simply be a follow-up prompt. For example, the conversation below could have been a back-and-forth between the user and the LLM, or it could have been sent via an API where we told the LLM "I said, it said, I said" and then asked for the final response. The LLM does not know the difference and will respond the same way in either case.

Initial Prompt: "Can you recommend some strategies for reducing stress?"
Model Response: "Some strategies for reducing stress include exercise, meditation, deep breathing exercises, time management, and seeking social support."
Follow-up Prompt: "Can you provide more details on how time management can reduce stress?"
Refined Response: "Time management can reduce stress by helping you prioritize tasks, set realistic goals, and avoid procrastination. By creating a schedule and breaking tasks into smaller, manageable steps, you can feel more in control and reduce the feeling of being overwhelmed."
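
When this conversation is sent through an API rather than typed into a chat window, the "I said, it said, I said" history is literally just a list of messages with alternating roles; the model cannot tell whether the assistant turns actually happened or were scripted. A minimal sketch under the same OpenAI SDK assumptions:

from openai import OpenAI

client = OpenAI()

# Replaying the conversation above as a message history.
messages = [
    {"role": "user", "content": "Can you recommend some strategies for reducing stress?"},
    {"role": "assistant", "content": "Some strategies for reducing stress include exercise, meditation, "
                                     "deep breathing exercises, time management, and seeking social support."},
    {"role": "user", "content": "Can you provide more details on how time management can reduce stress?"},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)  # the refined, time-management-focused answer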

Another side to this is not just going back and forth with the LLM, but actually prompting the LLM to ask you additional questions until it has all of the information it needs to accomplish the task, or until a key phrase is given.

Prompt

Write an email to a customer about their recent inquiry. Please ask for any necessary information before drafting the email. Once I say "OK WRITE EMAIL," please provide the final draft.

First response from the LLM was:

Sure! Could you please provide the following details to help me draft the email:

  1. Customer's name and email address (if you want it included in the draft).
  2. The nature of their inquiry (e.g., product question, service issue, etc.).
  3. Any specific information or resolution you want to include in the response.
  4. The tone you'd like to use in the email (e.g., formal, friendly, apologetic).
  5. Any additional details or context that might be relevant.

Once I have this information, I can draft the email for you.
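
This "keep asking me questions until I say the magic words" pattern is easy to drive from code as well: loop, append each turn to the history, and only ask for the final draft once the key phrase has been sent. A rough sketch under the same OpenAI SDK assumptions:

from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "user",
        "content": (
            "Write an email to a customer about their recent inquiry. "
            "Please ask for any necessary information before drafting the email. "
            'Once I say "OK WRITE EMAIL", please provide the final draft.'
        ),
    }
]

while True:
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    print(reply)
    messages.append({"role": "assistant", "content": reply})

    user_turn = input("> ")  # answer the model's questions at the console
    messages.append({"role": "user", "content": user_turn})
    if user_turn.strip().upper() == "OK WRITE EMAIL":
        final = client.chat.completions.create(model="gpt-4o", messages=messages)
        print(final.choices[0].message.content)  # the finished email draft
        break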

Contextual Prompting

This means providing detailed context before asking a question, in order to guide the model's focus. It can be done in several ways.

  1. Prompt stuffing - This is the method of adding all of the context information directly into the prompt. For example, you might add a block of information about the database schema before asking the LLM to create a SQL statement that answers a question posed by the user.
  2. Retrieval-Augmented Generation (RAG) - This is a method that takes the user's question and retrieves related chunks of information from a large external knowledge source the LLM has not been trained on, then includes the relevant chunks in the prompt. It works by encoding (also sometimes referred to as vectorizing, embedding, or indexing) both the data and the user's question so that semantic similarities between them can be found; a minimal sketch of this retrieval step follows this list. This is a deep rabbit hole that I will go into more later.
  3. Multi-modal augmenting - This is using other types of data, like photos, videos, or audio, to provide additional context for the prompt. I once asked ChatGPT to provide the HTML, CSS, and JavaScript for a webpage I had mocked up on a scratch piece of paper, adding a photo of the mockup to my prompt. The LLM did surprisingly well!
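
Below is a toy sketch of the retrieval step behind RAG. To keep it self-contained and runnable, plain word overlap stands in for real embedding-based similarity, and the knowledge chunks are made-up examples; in a real pipeline you would encode both the chunks and the question with an embedding model and compare the resulting vectors.

def similarity(a: str, b: str) -> float:
    # Toy stand-in for embedding similarity: fraction of shared words (Jaccard).
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return len(words_a & words_b) / max(len(words_a | words_b), 1)

def retrieve(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    # Rank every chunk against the question and keep the most similar ones.
    return sorted(chunks, key=lambda chunk: similarity(question, chunk), reverse=True)[:top_k]

def build_prompt(question: str, chunks: list[str]) -> str:
    # Stuff the retrieved chunks into the prompt as context for the LLM.
    context = "\n".join(retrieve(question, chunks))
    return (f"Use the following context to answer the question.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")

# Made-up knowledge chunks, purely for illustration.
knowledge = [
    "Invoices are generated on the first business day of each month.",
    "Time entries must be tagged with both a project and a task.",
    "The FileMaker server is backed up nightly at 2:00 AM.",
]
print(build_prompt("How are time entries categorized?", knowledge))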

Scenario Simulation

Presenting hypothetical, often complex, scenarios to prompt strategic thinking and problem-solving.

Scenario Simulation Example: Cybersecurity Crisis Management
Scenario:
Imagine you are the Chief Information Security Officer (CISO) of a large multinational corporation. It's a typical Tuesday morning when you receive an urgent alert from your security operations center. They have detected unusual network activity indicating a potential breach. Preliminary analysis suggests that it could be a sophisticated ransomware attack targeting your company's critical infrastructure.
Your tasks:

  • Immediate Response: Outline the steps you would take in the first 60 minutes after receiving the alert. Consider how you would prioritize actions, communicate with your team, and initiate incident response protocols.
  • Investigation and Analysis: Describe the process you would follow to investigate the source and extent of the breach. What tools and techniques would you use to assess the damage and identify the attackers?
  • Containment and Eradication: Detail your strategy for containing the spread of the ransomware and eradicating it from your systems. How would you ensure that the threat is completely neutralized before beginning recovery efforts?
  • Communication and Reporting: Explain how you would communicate the situation to internal stakeholders, such as senior management and affected departments, as well as external parties, like law enforcement and regulatory bodies. What information would you share, and how would you manage potential reputational damage?
  • Recovery and Restoration: Outline your plan for recovering and restoring affected systems and data. How would you prioritize recovery efforts, and what measures would you take to prevent future incidents?
  • Long-term Improvements: Based on this scenario, identify areas for improvement in your cybersecurity posture. What lessons can be learned, and what changes would you implement to enhance your organization's resilience against similar attacks in the future?

Your response should demonstrate strategic thinking, a comprehensive understanding of cybersecurity best practices, and the ability to make critical decisions under pressure.

If you would like to see the full response from ChatGPT-4, I have provided this link to the chat.

Multi-Step Reasoning

This type of prompting involves breaking down a complex problem into a series of smaller, more manageable steps or questions. The LLM is prompted to solve each step one after the other, with the output of one step serving as the input or context for the next step. The focus is on the logical progression of steps required to arrive at a final solution.

I'm a FileMaker engineer working on a custom report for a client's sales database. The report needs to display the total sales for each product category for the current month, along with a comparison to the previous month's sales. The database has two tables: Products (with fields ProductID, ProductName, and Category) and Sales (with fields SaleID, ProductID, SaleDate, and SaleAmount).

1) Outline the Sections of the Script Needed: Determine the sections of the script required to generate this report, such as data filtering, calculation of totals, and report layout generation.

2) Fill in the Script with FileMaker Script Steps: Based on the outlined sections, write the FileMaker script steps needed to generate the report. Include comments to explain the purpose of each step.

3) Review the Script and Explain Why It Would Work or Would Not Work: Review the written script and explain the logic behind each section. Identify any potential issues or limitations.

4) Correct Your Mistakes if Any and Provide a Complete Final Script: Make any necessary corrections to the script and provide the final version that can be used to generate the report.

The results from this prompt can be seen here as run by ChatGPT-4.
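
If you drive this from code rather than a chat window, the pattern is simply to call the model once per step and feed each answer back in as context for the next step. A minimal sketch under the same OpenAI SDK assumptions, with the steps paraphrased from the prompt above:

from openai import OpenAI

client = OpenAI()

context = (
    "The report must show total sales per product category for the current month, "
    "compared to the previous month. Tables: Products (ProductID, ProductName, Category) "
    "and Sales (SaleID, ProductID, SaleDate, SaleAmount)."
)

steps = [
    "Outline the sections of the FileMaker script needed to generate the report.",
    "Fill in the outlined sections with FileMaker script steps, commenting each step.",
    "Review the script and explain why it would or would not work, noting limitations.",
    "Correct any mistakes and provide the complete final script.",
]

messages = [{"role": "user", "content": context}]
for step in steps:
    messages.append({"role": "user", "content": step})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # becomes context for the next step

print(answer)  # the final, corrected script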

Role Playing

This type of prompting asks the model to assume a specific character or role while responding. Good examples of this can be seen both in the Common Parts of a Prompt section and in the Scenario Simulation above.

Tree of Thought

This is a more advanced technique that can be used for more complex problems (Yao et al., 2023). It is good at allowing the LLM to solve complex mathematical problems, word problems, and even situations requiring emotional awareness.
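
A full Tree of Thoughts implementation maintains a search tree of partial solutions, but the core loop can be sketched simply: propose several candidate "thoughts," have the model score them, keep the most promising one, and expand from there. The sketch below is a simplified, greedy version of that idea under the same OpenAI SDK assumptions as the earlier examples; the ask() helper and the scoring prompt are illustrative, not the exact method from the paper.

from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

def tree_of_thought(problem: str, branches: int = 3, depth: int = 2) -> str:
    reasoning = ""
    for _ in range(depth):
        # 1. Branch: propose several candidate next reasoning steps.
        candidates = [
            ask(f"Problem: {problem}\nReasoning so far:{reasoning}\n"
                "Propose one possible next reasoning step.")
            for _ in range(branches)
        ]
        # 2. Evaluate: have the model judge which candidate looks most promising.
        numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
        verdict = ask(f"Problem: {problem}\nCandidate next steps:\n{numbered}\n"
                      "Reply with only the number of the most promising candidate.")
        digits = "".join(ch for ch in verdict if ch.isdigit())
        index = int(digits) - 1 if digits else 0
        index = min(max(index, 0), len(candidates) - 1)  # guard against odd replies
        # 3. Expand: keep the best branch and continue from it.
        reasoning += "\n" + candidates[index]
    return ask(f"Problem: {problem}\nReasoning:{reasoning}\nGive the final answer.")

print(tree_of_thought("A friend has been quiet and withdrawn since a breakup. How are they likely feeling?"))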

Prompt 1: Understanding a Friend's Sadness

  • Step 1: Identify the situation.
    • Situation: Your friend has recently gone through a breakup.
  • Step 2: Recognize the typical emotional response to the situation.
    • Typical Emotional Response: Breakups often lead to feelings of sadness, loneliness, and grief.
  • Step 3: Consider your friend's specific behavior and words.
    • Behavior: Your friend has been quiet, withdrawn, and tearful.
  • Step 4: Assess your friend's emotional state based on the situation and behavior.
    • Emotional State: Your friend is likely feeling sad and heartbroken due to the breakup.
  • Answer: Based on the situation and your friend's behavior, it's reasonable to conclude that your friend is feeling sad and heartbroken.

Prompt 2: Understanding a Colleague's Frustration

  • Step 1: Identify the situation.
    • Situation: Your colleague has been working on a project that keeps encountering delays.
  • Step 2: Recognize the typical emotional response to the situation.
    • Typical Emotional Response: Delays in work projects often lead to feelings of frustration and stress.
  • Step 3: Consider your colleague's specific behavior and words.
    • Behavior: Your colleague has been expressing irritation, frequently sighing, and showing signs of impatience.
  • Step 4: Assess your colleague's emotional state based on the situation and behavior.
    • Emotional State: Your colleague is likely feeling frustrated and stressed due to the ongoing delays in the project.
  • Answer: Based on the situation and your colleague's behavior, it's reasonable to conclude that your colleague is feeling frustrated and stressed.

Prompt 3: Understanding a Child's Excitement

  • Step 1: Identify the situation.
    • Situation: A child is going to an amusement park for the first time.
  • Step 2: Recognize the typical emotional response to the situation.
    • Typical Emotional Response: Going to an amusement park for the first time often leads to feelings of excitement and anticipation.
  • Step 3: Consider the child's specific behavior and words.
    • Behavior: The child is constantly talking about the rides, smiling, and jumping up and down with excitement.
  • Step 4: Assess the child's emotional state based on the situation and behavior.
    • Emotional State: The child is likely feeling very excited and eager about the upcoming trip to the amusement park.
  • Answer: Based on the situation and the child's behavior, it's reasonable to conclude that the child is feeling excited and eager.

Prompt 4: Understanding a Neighbor's Anxiety (Partially Worked Out)

  • Step 1: Identify the situation.
    • Situation: Your neighbor has an important job interview tomorrow.
  • Step 2: Recognize the typical emotional response to the situation.
    • Typical Emotional Response: Job interviews often lead to feelings of anxiety and nervousness.
  • Step 3: Consider your neighbor's specific behavior and words.
    • Behavior: Your neighbor has mentioned feeling nervous, has been pacing back and forth, and seems to be rehearsing answers.
  • Step 4: Assess your neighbor's emotional state based on the situation and behavior.
    • What emotional state is your neighbor likely experiencing and why?

LLM Response:

Emotional State: Your neighbor is likely feeling anxious and nervous about the upcoming job interview, as indicated by their behavior of pacing, rehearsing answers, and expressing nervousness.

Collaborative Specialist Role-Play Prompting

This is one of my favorites as it shows a lot of the power of LLMs like GPT-4 in their ability to role play not only one but multiple personas to accomplish a goal iteratively. In this approach, the LLM is instructed to assume the roles of multiple specialists, each with their own distinct viewpoints and expertise. The model then simulates a discussion among these specialists, where they list steps to solve a problem, debate the merits of different approaches, and collaboratively decide on the best course of action. Finally, each specialist contributes to accomplishing the task, and the model synthesizes their inputs to choose the best answer.

This technique makes use of all the common parts of a prompt covered earlier, with one addition. Each of the parts below serves a different purpose when working with LLMs.

  • Task - What do you want the LLM to do? Could be a question or instructions.
  • Context - Background or situational information to fill in gaps.
  • Examples - Do it like this…
  • Persona - You are a _____________
  • Format - The expected structure or output format
  • Tone - The intended sentiment or tone of the output
  • Steps - A series of steps to take to accomplish the task

Kickstart a Project Proposal with a prompt...

I am going to provide you with five personalities that you will cycle through as you collaborate among all of them to accomplish a task. Sam is a jovial, personable salesman who succeeds by forming solid relationships with his clients. Clark is a diligent and exacting accountant who works hard to make sure the books are balanced and everything is documented perfectly. Sue has been with the company forever and knows a little bit about everything; she takes a more laid-back approach to documentation in general but tends to focus on what’s really important. Tom is the CEO and largely focuses on the high-level overview to make sure that the plan will succeed in the long term and that it is sustainable among all the employees. Pam is a recent business college graduate who has fresh ideas and believes strongly in having a good plan. You will become each of these personas and create a task list individually, and then discuss it as a team to consolidate it into the best breakdown for the following goal. “Create a step-by-step flow for how to formulate, document, and design a company timekeeping database that will keep employee time records associated with tasks and projects and give the employees the best UI/UX for entering these time entries and invoicing from them.” Collaborate with each persona, reminding yourself of their makeup as you work your way through the whole process until you give me a full proposal for the system and its design to be built in FileMaker. The first step is to come up with a to-do list individually to accomplish this, then meet about it. Next, follow the to-do list, meeting regularly as you accomplish each step that you determined you needed to do. Follow this individual work with collaborative discussion until you all feel like you have a solid proposal for what the system should be like in detail. Then give me the full proposal, including UI/UX and use cases.

There is a pretty in-depth response that you can review here showing how ChatGPT handled this, along with some minimal follow-up questions. The end result is a little over 12,000 words containing some pretty interesting insight.
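
If you wanted to drive this persona-cycling pattern from code instead of one giant prompt, one option is a shared transcript that each persona reads and appends to, followed by a consolidation call. The sketch below is a loose illustration under the same OpenAI SDK assumptions as before; the shortened persona descriptions and the loop structure are my own simplification, not the exact prompt above.

from openai import OpenAI

client = OpenAI()

personas = {
    "Sam": "a jovial, personable salesman who succeeds through solid client relationships",
    "Clark": "a diligent, exacting accountant who wants everything balanced and documented",
    "Sue": "a long-time employee who knows a little about everything and focuses on what matters",
    "Tom": "the CEO, focused on a plan that succeeds long term and is sustainable for employees",
    "Pam": "a recent business graduate with fresh ideas who believes strongly in good planning",
}

goal = ("Design a company timekeeping database in FileMaker that tracks employee time "
        "against tasks and projects, with great UI/UX for time entry and invoicing.")

transcript = ""
for name, description in personas.items():
    # Each persona reads the shared transcript and adds their own to-do list and comments.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"You are {name}, {description}."},
            {"role": "user", "content": f"Goal: {goal}\n\nDiscussion so far:\n{transcript}\n\n"
                                        "Add your own to-do list and comments."},
        ],
    )
    transcript += f"\n\n{name}:\n{response.choices[0].message.content}"

# Finally, consolidate the team's input into a single proposal.
proposal = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Consolidate this team discussion into a full proposal, "
                                          f"including UI/UX and use cases:\n{transcript}"}],
)
print(proposal.choices[0].message.content)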

Conclusion

In conclusion, the latest craze in the programming world isn't a complex algorithm or a shiny new framework; it's plain old English! Who would have thought that our everyday chit-chat could unlock the doors to understanding and accomplishing things that were once the domain of humans alone? As we've seen, language models like GPT-4 are not just about crunching numbers; they're about connecting with emotions, collaborating from more points of view than we ourselves possess, and exploring new frontiers of human-computer interaction. So let's embrace the extraordinary potential of our everyday language as a key to unlocking the wonders of artificial intelligence, and remember, the next breakthrough in technology might just be sparked by a simple conversation!