
Prompt Engineering: The Complete Course

Go from prompt novice to confident practitioner with hands-on training in prompt engineering. Learn how to communicate effectively with large language models, structure complex problems into solvable tasks, and build production-ready AI applications that solve real-world challenges.
🏅 Included in the PRO membership!

108 Lectures

Comprehensive Knowledge

16 Hours

Video Duration

30+ Labs

Focus on Practice

Course Certificate

Validate Your Learning
What you are going to learn

Transform Your Skills with Practical AI Communication and Application Development

This course takes you from understanding how large language models work to building production-ready AI-powered applications. You will start with the essentials: API setup, tokenization, cost management, and the core principles of effective prompting. From there, you will develop expertise in techniques like few-shot prompting, chain-of-thought reasoning, persona patterns, and output formatting that give you reliable control over model behavior.

By the end of this course, you will be able to design multi-step AI pipelines using patterns like decomposition, self-critique, and self-consistency, implement function calling to connect models with external tools, and build complete AI features that are maintainable and testable. You will leave with both the prompting skills and the engineering mindset needed to ship AI-powered features in real applications.

By completing this course, you will be able to:

  • Set up Python development environments and securely manage API credentials for OpenAI, Anthropic, and other providers
  • Make API calls using unified provider libraries
  • Explain how tokenization works and calculate actual costs based on token usage and pricing models
  • Implement the three pillars of effective prompts
  • Use delimiters and structural formatting to organize prompts
  • Apply persona patterns to shape model outputs
  • Implement few-shot prompting
  • Design and execute chain-of-thought prompts
  • Build reusable prompt templates
  • Apply advanced patterns including flip-the-script, decomposition, and self-critique for sophisticated problem solving
  • Implement function calling and tool use
  • And much more!

Course Contents

01

Introduction

This section covers the foundational introduction to prompt engineering, exploring why it matters as a critical skill in generative AI development, and introducing the course structure and project overview. It discusses how effective communication with AI systems depends on clear, precise prompting, a skill that can be learned and mastered, along with practical considerations around AI Toolbox organization, costs, and the overall course project design.
Introduction
03:48
Why Learn Prompt Engineering?
03:18
Aligning Expectations
03:05
Course Project: What We Will Build
02:32
Course Project: Module Breakdown
04:20
Cost Overview for Using the OpenAI API
03:40
Let's Stay Connected!
00:28
Course Resources
01:45

02

Tools and Local Setup

This section covers essential setup and configuration tasks needed to begin working with generative AI APIs, including managing local Python environments, creating accounts with major AI providers, generating and securely storing authentication credentials, and understanding the various provider options available for model interaction.
Section Introduction
01:04
Python Local Setup
02:36
OpenAI Setup
05:17
Anthropic Setup
03:25
IMPORTANT! Setting Up Python and the Local Environment
04:19
A Note on Using GitHub Copilot for Our Project
02:31

03

Crash Course: The OpenAI Python Library

This section provides a comprehensive exploration of working with the OpenAI API, including environment setup, authentication, making API calls, using a unified library for multiple providers, understanding response objects, controlling model behavior through parameters, implementing real-time response streaming, and running models locally for zero-cost interaction.
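The kind of call this section builds up to can be sketched as follows. The snippet assumes the official `openai` Python package (v1+); the model name, temperature, and message contents are illustrative, and the network call is guarded so the snippet stays runnable without an API key:

```python
import os

# Build the request the way a first chat completion does: a system prompt
# that sets behavior, plus a user message. Model name is illustrative.
messages = [
    {"role": "system", "content": "You are a concise coding assistant."},
    {"role": "user", "content": "Explain what a token is in one sentence."},
]

# Only make the network call when an API key is configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=messages,
        temperature=0.2,      # low temperature for more predictable output
        max_tokens=100,
    )
    print(response.choices[0].message.content)
else:
    print("Set OPENAI_API_KEY to run the call. Request payload:", messages)
```

The same messages list can be sent to other providers through a unified library such as LiteLLM, which is how the course keeps provider switching cheap.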
Section Introduction
02:32
Setting Up the Environment
05:39
Handling Authentication
07:33
Your First Chat Completion Call
10:23
Introduction to LiteLLM
09:40
Using LiteLLM with Anthropic
10:18
The Response Object
12:26
Useful Parameters: temperature and max_tokens
10:04
IMPORTANT: Adapting to max_completion_tokens for GPT-5 Models
02:56
Useful Parameters: stop, n, response_format
11:54
Streaming
09:27
Running LLMs Locally with Ollama
11:01

04

AI Toolbox #1 - Setup and First Command

This section focuses on implementing the first module of the AI Toolbox project, which establishes a foundational Python package structure with a working CLI application that includes an AI-powered command. The implementation leverages generative AI tools and demonstrates project scaffolding, dependency management, and integration of API functionality.
Module 1: Overview
00:41
Scaffold the Project
11:08
Codebase Walkthrough
05:31
Implement a Minimal CLI with Click
12:06
AI-Powered Hello
08:00
Testing Our Feature
08:24

05

Foundations: How LLMs "Think"

This section explores the foundations of how large language models work internally, including tokenization, probability distributions, context windows, and cost implications. It covers comparing different model types by their capabilities and characteristics, and demonstrates how system prompts take priority in guiding model behavior throughout conversations.
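The cost implications covered here come down to simple arithmetic over token counts. A rough sketch, assuming the common ~4-characters-per-token rule of thumb for English text and illustrative (not current) prices:

```python
# Back-of-the-envelope cost estimate for an API call.
# Prices below are assumed for illustration, not real provider rates.
PRICE_PER_1M_INPUT = 0.15    # USD per 1M input tokens (assumed)
PRICE_PER_1M_OUTPUT = 0.60   # USD per 1M output tokens (assumed)

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token in English."""
    return max(1, len(text) // 4)

def estimate_cost(prompt: str, expected_output_tokens: int) -> float:
    """Estimated USD cost of one call: input plus expected output tokens."""
    input_tokens = estimate_tokens(prompt)
    return (input_tokens * PRICE_PER_1M_INPUT
            + expected_output_tokens * PRICE_PER_1M_OUTPUT) / 1_000_000

prompt = "Summarize the following article in three bullet points: ..." * 20
print(f"~{estimate_tokens(prompt)} input tokens, "
      f"estimated ${estimate_cost(prompt, 200):.6f} per call")
```

The labs replace this approximation with exact token counts and real pricing, but the structure of the calculation is the same.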
Section Introduction
01:54
What Is Tokenization?
05:08
Lab: Tokenization - Part 1
06:57
Lab: Tokenization - Part 2
05:09
Understanding Log Probabilities
03:43
Lab: How LLMs Build Sentences
12:15
The Context Window
04:38
Usage Cost Main Components
06:04
Lab: Usage Cost - Part 1
15:45
Lab: Usage Cost - Part 2
10:33
Model Classes
06:25
Prompt Roles
03:26
Lab: System Prompts
13:31

06

Core Prompting Patterns for Developers

This section covers core prompting techniques that form the foundation of effective prompt engineering, including the three pillars of effective prompts, structural delimiters, persona patterns, few-shot prompting, and advanced techniques for formatting and reasoning.
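Several of these patterns compose naturally. A minimal sketch combining a persona (system prompt), few-shot examples, and delimiters, with illustrative task and labels:

```python
# Few-shot prompt assembled as chat messages: the system prompt sets the
# persona and output constraint, two worked examples demonstrate the
# expected format, and triple-backtick delimiters separate user content
# from instructions.
def build_sentiment_prompt(text: str) -> list[dict]:
    system = (
        "You are a sentiment classifier. "
        "Reply with exactly one word: positive, negative, or neutral."
    )
    examples = [
        ("I love this product!", "positive"),
        ("The delivery was late and the box was damaged.", "negative"),
    ]
    messages = [{"role": "system", "content": system}]
    for sample, label in examples:
        messages.append({"role": "user", "content": f"```{sample}```"})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": f"```{text}```"})
    return messages

msgs = build_sentiment_prompt("It works, I guess.")
print(len(msgs))  # system + two user/assistant example pairs + final user
```

The examples do double duty: they teach the model the task and pin down the output format, which is what makes few-shot prompting reliable enough to parse programmatically.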
Section Introduction
01:48
Instruction, Context, and Constraints
05:40
Lab: Instruction, Context, and Constraints
13:18
The Power of Delimiters: Delimiter Overview
04:43
The Power of Delimiters: Refactoring a Prompt
07:49
Lab: Using Delimiters
09:24
The Persona Pattern
06:26
Lab: The Persona Pattern
11:49
Lab: Defining Behavioral Rules
05:59
Lab: Example - Defining a DBA Expert
06:50
Few-Shot Prompting
04:58
Lab: Few-Shot Prompting
16:10
Techniques for Formatting Output - Part 1
09:51
Techniques for Formatting Output - Part 2
07:25
The Chain-of-Thought Pattern
04:16
Lab: Practicing Chain-of-Thought
05:02
The Template Pattern
04:20
Lab: Practicing the Template Pattern
09:48

07

AI Toolbox #2 - Smart Commits

This section covers implementing the second module of the AI Toolbox project, building a smart commit feature that generates conventional commit messages from code changes with an interactive user feedback loop.
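The core of such a feature is a prompt template that fences the diff off from the instructions. A sketch in that style; the template text and function name are illustrative, not the course's actual code:

```python
# Commit-message prompt: delimiters isolate the diff, and the instruction
# pins the output to the Conventional Commits format so it can be used
# directly as a commit message.
COMMIT_PROMPT = """You are an assistant that writes git commit messages.

Write a single Conventional Commits message (type(scope): summary)
for the change below. Reply with the message only.

--- DIFF START ---
{diff}
--- DIFF END ---
"""

def build_commit_prompt(diff: str) -> str:
    return COMMIT_PROMPT.format(diff=diff.strip())

diff = """\
-    return a + b
+    return a + b + c
"""
print(build_commit_prompt(diff))
```

The interactive feedback loop then re-prompts with the user's adjustment request appended, keeping the same fenced diff as context.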
Module 2: Overview
01:01
Scaffold the Commit Feature Boilerplate Code
14:44
Retrieve the Git Diff
16:48
Generate the Prompt Templates
11:29
Implement the Commit Functionality
11:31
Allow Commit Message Adjustments
10:48
Improve the Test Suite
18:54
Add Proper Logging to the Commit Feature
14:59
Allow Choosing a Custom Model via the CLI
09:17
Add Documentation
04:56

08

Advanced Prompting Techniques

This section covers sophisticated prompting techniques that enable better interaction with large language models through strategic prompt design patterns. Students learn how to restructure tasks, generate optimized prompts through AI assistance, break down complex problems, iteratively improve outputs, extend model capabilities through function calling, and synthesize multiple diverse solutions into superior final results. The patterns demonstrated form a toolkit for handling ambiguity, complexity, and the need for refined outputs across various application domains.
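The self-consistency pattern, for example, reduces to sampling and voting. A miniature sketch with mocked model answers; in practice each answer would be the final line of a temperature > 0 chain-of-thought completion:

```python
from collections import Counter

# Self-consistency in miniature: ask the same question several times,
# extract each sample's final answer, and keep the majority answer.
def majority_answer(answers: list[str]) -> str:
    """Return the most common final answer across samples."""
    return Counter(answers).most_common(1)[0][0]

# Mocked final answers from five sampled reasoning chains.
samples = ["42", "42", "41", "42", "40"]
print(majority_answer(samples))  # -> 42
```

Voting over diverse reasoning paths is what turns an unreliable single completion into a more robust aggregate result.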
Section Introduction
02:57
Lab: Flip the Script Pattern - Part 1
11:28
Lab: Flip the Script Pattern - Part 2
06:37
Lab: Generate Prompts - Part 1
07:27
Lab: Generate Prompts - Part 2
15:55
Lab: The Decomposition Pattern
15:35
Lab: The Self-Critique Pattern
11:22
Lab: Implement Function Calling - Part 1
11:11
Lab: Implement Function Calling - Part 2
07:08
Lab: The Self-Consistency Pattern - Part 1
07:00
Lab: The Self-Consistency Pattern - Part 2
06:03

09

AI Toolbox #3 - Smart Code Reviews

This section guides students through building a sophisticated multi-step code review pipeline that orchestrates multiple AI personas, integrates external tools through function calling, and synthesizes diverse perspectives into comprehensive reports. The project demonstrates how to apply all the prompt engineering techniques from previous sections in an integrated, production-oriented system that uses decomposition, self-consistency, function calling, and iterative refinement to create intelligent analysis capabilities.
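The tool-registry idea at the heart of this pipeline can be sketched in a few lines; the registry shape, tool name, and dispatcher are illustrative, not the course's actual implementation:

```python
import json

# Lightweight tool registry: tools register themselves under a name, and
# a dispatcher routes the model's tool-call request (name + JSON-encoded
# arguments) to the matching Python function.
TOOLS: dict[str, callable] = {}

def tool(name: str):
    """Decorator that registers a function under a tool name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("line_count")
def line_count(text: str) -> int:
    return len(text.splitlines())

def dispatch(name: str, arguments_json: str):
    """Route a model tool call to the registered function."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**json.loads(arguments_json))

# Simulate the model requesting a tool call.
print(dispatch("line_count", '{"text": "a\\nb\\nc"}'))  # -> 3
```

The tool-calling loop then feeds each result back to the model as a tool message and repeats until the model stops requesting tools.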
Module 3: Overview
00:57
Module TODOs Overview
03:38
Migrate from Subprocess to GitPython
17:49
Refactor Exceptions
04:13
Scaffolding the Review Command
05:56
Implementing Dataclasses and Helper Functions
14:09
Integrating the Review Command into the CLI
04:31
Implementing Logic and Syntax Review Prompts
09:58
Leveraging LLMs for Reviews
12:11
Creating Relevant Personas for More Advanced Analysis
09:22
Implementing the Self-Consistency Workflow - Part 1
14:00
Implementing the Self-Consistency Workflow - Part 2
06:31
Define Functions for Tool Calling
15:33
Implement a Lightweight Tool Registry - Part 1
13:31
Implement a Lightweight Tool Registry - Part 2
10:00
Fix Typing Issues
10:22
Implement a First Draft for AI-Driven Tool Calling
06:36
Improve the Tool Calling Loop
12:16
Implement Tests for Tool Calling
08:45
Add a Self-Critique Step in the Review
13:41

10

AI Toolbox #4 - JSON and Markdown Output

This section transforms the code review system from a text-based to a data-driven architecture, migrating to structured data classes and JSON schemas throughout the pipeline. Students refactor the system into modular packages, update prompts to return structured JSON matching the data class schemas, and implement flexible output formatting that supports both JSON and Markdown representations. The section demonstrates how thoughtful data architecture enables robust, maintainable systems that can adapt to new requirements and integrate more effectively with external tools.
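The migration pattern in miniature: the model is asked to return JSON matching a dataclass schema, the JSON is parsed into the dataclass, and the same object renders as either JSON or Markdown. The schema below is illustrative, not the course's actual one:

```python
import json
from dataclasses import dataclass, asdict

# One structured review finding; the model is prompted to emit JSON
# with exactly these fields.
@dataclass
class ReviewFinding:
    severity: str   # e.g. "low", "medium", "high"
    file: str
    message: str

def parse_finding(raw_json: str) -> ReviewFinding:
    """Parse the model's JSON output into a typed dataclass."""
    return ReviewFinding(**json.loads(raw_json))

def to_markdown(finding: ReviewFinding) -> str:
    """Render the same finding as a Markdown list item."""
    return f"- **{finding.severity}** `{finding.file}`: {finding.message}"

raw = '{"severity": "high", "file": "app.py", "message": "Unvalidated input."}'
finding = parse_finding(raw)
print(json.dumps(asdict(finding)))  # JSON representation
print(to_markdown(finding))         # Markdown representation
```

Because the dataclass is the single source of truth, adding a new output format means adding one renderer rather than rewriting prompts or parsers.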
Module 4: Overview
01:09
Start with Output Migration
15:23
Split the Review Module into Multiple Files
16:35
Fix Failing Tests
08:49
Migrate Existing Prompts to Output JSON
12:54
Update Existing Tests
04:39
Migrate Review Pipeline and Helpers to Work with Dataclasses - Part 1
12:46
Migrate Review Pipeline and Helpers to Work with Dataclasses - Part 2
07:14
Complete Pipeline Migration
14:24
Fix Minor Bugs
05:38
Fix Existing Tests
07:19
Implement Output Parsing for the Review Command
10:08
Project Wrap-Up: Update Synthesis Logic and Project Documentation
13:14

11

Conclusion

Congratulations!
00:26
Certificate of Completion

Frequently asked questions

Who is this course designed for?

This course is designed for multiple technical roles:

Software Engineers and Developers who want to integrate AI capabilities into their applications will gain the prompt engineering skills needed to build AI-powered features, automate code generation tasks, and create intelligent systems that solve real problems with reliable outputs.

Data Scientists and AI Engineers seeking to deepen their understanding of how to work effectively with language models will learn systematic approaches to prompting, evaluation, and iteration that transform raw model outputs into production-ready solutions.

DevOps Engineers and Technical Leads managing or deploying AI-powered tools will understand how to architect maintainable systems, manage costs, integrate external tools, and build applications that scale from prototype to production.

Entrepreneurs and Product Managers exploring AI-powered products will gain hands-on technical knowledge to understand what is possible, evaluate different approaches, and make informed decisions about AI technology selection for their projects.

What prior knowledge do I need before taking this course?

You should have familiarity with Python programming, as you will be writing code throughout the course to interact with AI APIs and build applications. Basic comfort with running commands in the terminal and using the command line is highly recommended, as you will be working extensively with Python scripts and command-line tools.

No prior knowledge of prompt engineering, large language models, or AI APIs is required. The course starts from the fundamentals and builds your knowledge progressively through hands-on exercises and real-world projects.

Will I incur costs (AI provider APIs) while taking this course?

The course is designed to minimize costs while providing hands-on experience with real AI APIs.

Free Components:
  • Python, Git, and all development tools are completely free
  • OpenAI provides free trial credits for new accounts
  • Anthropic provides free trial credits and ongoing free tier options
  • All course materials and resources are included

Optional Costs:
Most exercises can be completed using free trial credits from OpenAI and Anthropic. The course demonstrates actual costs transparently: planning and recording all examples cost less than 20 cents. If you choose to continue using APIs after free credits are exhausted, costs are typically minimal (usually under $5) unless you scale to production usage.

Does this course cover specific models like Claude, GPT-4, or Llama?

This course teaches prompt engineering principles that apply across all major language models. While examples use models from OpenAI and Anthropic, the techniques you learn transfer directly to any model including Claude, GPT-4, Llama, Mistral, and others.

The course demonstrates how to use unified provider libraries that support multiple models, allowing you to write prompt-engineering code once and run it against different models. This means you can apply what you learn to whatever models your organization or projects use.

Understanding the fundamental principles of effective prompting is more valuable than learning model-specific syntax, as it enables you to adapt quickly to new models as they are released and evolve.

Can I run language models locally instead of using cloud APIs?

Yes. The course demonstrates how to download and run large language models locally using tools like Ollama, giving you complete privacy, zero cost, offline capability, and full control over your interactions. The unified provider library used throughout the course supports local models alongside cloud APIs.

You can use local models for development, testing, and learning without incurring any API costs. Many students use local models for initial development and switch to cloud APIs when they need more capability or production deployment. The choice is yours, and the course teaches both approaches.
