Prompt engineering is a relatively new discipline for developing and
optimizing prompts to efficiently use language models (LMs) for a wide
variety of applications and research topics. Prompt engineering skills
help to better understand the capabilities and limitations of large
language models (LLMs). Researchers use prompt engineering to improve
the capacity of LLMs on a wide range of common and complex tasks such
as question answering and arithmetic reasoning. Developers use prompt
engineering to design robust and effective prompting techniques that
interface with LLMs and other tools.
DeepLearning.AI, in partnership with OpenAI, created a short course on
how to apply prompt engineering best practices to use a large language
model (LLM) more effectively.
You can access the course here.
The course is taught by Isa Fulford (Member of Technical Staff, OpenAI)
and Andrew Ng (Founder, DeepLearning.AI; Co-founder, Coursera).
This repository contains all the resources that were used during the
course, including:
- Jupyter notebooks
- Prompt samples used during lectures
- Transcripts of lectures, and
- Images capturing important concepts
The repository is organized in the following structure:

```
images/
    *.png
notebooks/
    *.ipynb
prompts/
    *.md
transcripts/
    *.txt
```
The images/ directory contains important images captured from the
lecture videos. They record key points made during the lectures, such
as the iterative prompt development cycle.
The notebooks/ directory contains all the Jupyter notebooks used in the
lecture videos. The course platform does not provide these notebooks,
so they may be useful if you want to download them and run them on your
own machine.
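The notebooks talk to the OpenAI API through a small completion helper. A minimal sketch of that kind of helper is shown below; it assumes the openai Python package (v1 or later) is installed and the OPENAI_API_KEY environment variable is set, and the exact helper defined in the course notebooks may differ.

```python
# Minimal chat-completion helper in the spirit of the one used in the
# course notebooks (assumption: openai>=1.0 and OPENAI_API_KEY is set).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def get_completion(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Send a single-turn prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # low temperature for more reproducible output
    )
    return response.choices[0].message.content
```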
The prompts/ directory contains all the prompt samples used in the
lecture videos. It makes it easier to find a specific prompt taught in
a lecture, and having all the samples in one place makes it easier to
compare them and pick the one that best suits your needs.
The transcripts/ directory contains transcripts of all the lecture
videos. They may be helpful if you do not want to watch the full videos
or are looking for something the instructors said during a lecture.
Principle 1: Write clear and specific instructions
Tactic 1: Use delimiters to clearly indicate distinct parts of the input (example after the list below).
Triple quotes: """
Triple backticks: ```
Triple dashes: ---
Angle brackets: < >
XML tags: <tag> </tag>
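A minimal sketch of this tactic; the sample text and prompt wording are illustrative, not taken verbatim from the lecture, and XML-style tags are used here as the delimiter:

```python
# Tactic: wrap the content in explicit delimiters (here, XML-style tags)
# so the model can tell the instruction apart from the text it acts on.
text = (
    "Prompt engineering is a discipline for developing and optimizing "
    "prompts to use language models effectively for many applications."
)

prompt = f"""
Summarize the text delimited by <text> tags into a single sentence.

<text>{text}</text>
"""
print(prompt)  # pass this string to your chat-completion helper
```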
Tactic 2: Ask for structured output like HTML, JSON, etc.
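For example (an illustrative prompt, not necessarily the exact one from the lecture):

```python
# Tactic: request machine-readable output so the reply can be parsed directly.
prompt = """
Generate a list of three made-up book titles along with their authors
and genres. Provide them in JSON format with the following keys:
book_id, title, author, genre.
"""
print(prompt)  # the reply can then be parsed with json.loads(...)
```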
Tactic 3: Check whether the conditions are satisfied; check
assumptions required to do the task.
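An illustrative sketch of this tactic (the sample text is a placeholder):

```python
# Tactic: have the model check an assumption first and fall back
# gracefully when the condition is not met.
text = "The sun is shining and the birds are singing. It is a lovely day."

prompt = f"""
You will be provided with text delimited by <text> tags.
If it contains a sequence of instructions, rewrite them as numbered steps.
If it does not contain a sequence of instructions, simply write
"No steps provided."

<text>{text}</text>
"""
print(prompt)
```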
Tactic 4: Few-shot prompting; give successful examples of completing
the task, then ask the model to perform it.
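An illustrative few-shot prompt in the spirit of the lecture example (wording is a sketch, not a verbatim quote):

```python
# Tactic: show one worked example of the task (answering in a consistent
# style), then ask the model to complete a new instance in the same style.
prompt = """
Your task is to answer in a consistent style.

<child>: Teach me about patience.

<grandparent>: The river that carves the deepest valley flows from a
modest spring; the grandest symphony originates from a single note.

<child>: Teach me about resilience.
"""
print(prompt)
```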
Principle 2: Give the model time to think
Tactic 1: Specify the steps required to complete a task (example after the list below):
Step 1: ...
Step 2: ...
...
Step N: ...
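A sketch of a step-by-step prompt (the text and steps are illustrative placeholders):

```python
# Tactic: spell out the intermediate steps and the expected output format.
text = "In a charming village, siblings Jack and Jill set out to fetch water."

prompt = f"""
Perform the following actions on the text delimited by <text> tags:
Step 1: Summarize the text in one sentence.
Step 2: Translate the summary into French.
Step 3: List each name that appears in the French summary.
Step 4: Output a JSON object with the keys: french_summary, num_names.

<text>{text}</text>
"""
print(prompt)
```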
Tactic 2: Instruct the model to work out its own solution before
rushing to a conclusion.
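An illustrative sketch of this tactic (the problem and the student's answer are made up for the example):

```python
# Tactic: make the model solve the problem itself before grading the
# student's answer, instead of anchoring on the (deliberately wrong) answer.
problem = "A bookshop sells 17 books per day. How many books does it sell in 24 days?"
student_answer = "398"  # incorrect on purpose; the right answer is 408

prompt = f"""
First work out your own solution to the problem below. Then compare your
solution to the student's answer and state whether the student's answer
is correct. Do not decide whether it is correct until you have solved
the problem yourself.

Problem: {problem}
Student's answer: {student_answer}
"""
print(prompt)
```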
Hallucination:
Makes statements that sound plausible but are not true.
Reducing hallucinations:
First find relevant information, then answer the question based on the
relevant information.
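A sketch of this grounding pattern (the source text and question are illustrative placeholders):

```python
# Tactic: ground the answer in quoted source material. The model is asked
# to extract relevant quotes first and to answer only from those quotes.
source = (
    "Delimiters such as triple backticks, triple dashes, angle brackets, "
    "or XML tags help separate instructions from the text they apply to."
)

prompt = f"""
You will be given a source document delimited by <doc> tags.
Step 1: Find the quotes from the document that are relevant to the question.
Step 2: Answer the question using only those quotes. If the document does
not contain the answer, reply "I could not find this in the document."

Question: Why are delimiters useful in prompts?
<doc>{source}</doc>
"""
print(prompt)
```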