Creating AI Agents with Custom System Prompts in Pydantic AI - Part 4


Have you ever felt like your AI agent interactions were falling a bit flat? Like you weren’t quite getting the focused, tailored responses you needed? I certainly have. It’s a common frustration when working with large language models (LLMs). You know the potential is there, but sometimes it feels like you’re speaking a slightly different language. This is where the magic of system prompts comes in.

This post will dive deep into system prompts within the Pydantic-AI framework. We’ll explore how crafting the right system prompt can completely transform your agent’s behaviour, personality, and, ultimately, its effectiveness. By the end, you’ll have a solid understanding of both static and dynamic system prompts, and be confident in creating your own to build truly powerful and useful AI agents. My core belief, and the thesis of this article, is that effective use of system prompts is the key to unlocking the true power of any agent framework, especially Pydantic-AI.

Why System Prompts Matter: Shaping Your AI’s Persona

Think of a system prompt as the foundational instructions you give your AI agent. It’s not just about the specific question you ask (the user prompt); it’s about setting the stage, defining the role, and providing the context for how your agent should respond. It’s like giving an actor a character description and backstory before they step onto the stage.

While user prompts are essential, they often operate in a vacuum. A system prompt provides the crucial framework. It frames the agent’s:

  • Personality: Is your agent a seasoned business coach, a meticulous code reviewer, or a helpful historian?
  • Behaviour: Should your agent be concise and direct, or elaborate and explanatory?
  • Scope: What areas of knowledge is your agent an expert in? What is it not designed to handle?
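These three dimensions often show up as separate sentences within a single system prompt string. A minimal sketch (the wording here is my own, purely illustrative):

```python
# One system prompt string covering all three dimensions.
system_prompt = (
    "You are a meticulous code reviewer. "            # personality
    "Be concise and direct in your feedback. "        # behaviour
    "Only review Python code; decline other topics."  # scope
)
```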

I’ve learned through countless experiments (and a few frustrating failures!) that crafting a well-defined system prompt is often the difference between a generic, somewhat helpful response and a truly insightful, tailored one.

Static vs. Dynamic System Prompts: Two Sides of the Same Coin

In Pydantic-AI, we have two main types of system prompts to work with:

  • Static System Prompts: These are the workhorses. You define them upfront, usually when you’re creating your agent. They’re perfect for setting the overall tone and defining core capabilities.
  • Dynamic System Prompts: These are where things get really interesting. They allow you to inject information that might only be available at runtime. This could be anything from today’s date to a variable extracted from a user’s input.

An agent can, and often should, have both static and dynamic prompts. They work together to create a comprehensive and adaptable context. I’ll be going through several examples that utilize each.


Hands-On Examples: Building Better Agents, Step-by-Step

I learn best by doing, so let’s jump into some practical examples. I’ve put together a series of scenarios, each building on the previous one, to illustrate the power and flexibility of system prompts. You can follow along yourself – all the code examples can be adapted from the original video’s GitHub repository.

Example 1: The “Hello World” Business Coach

Our first example is a simple “Hello World” scenario. We’ll create a business coach agent designed to help technology startups. The system prompt is straightforward:

You’re an experienced business coach and startup mentor specializing in guiding technology startups from ideation to sustainable growth.

Even this basic prompt makes a difference. When asked about creating a startup strategy for a Software-as-a-Service (SaaS) business, the agent provides a reasonably comprehensive list of considerations: market research, validation, gamification, user acquisition, and retention. However, it’s still fairly generic. It’s better than nothing, but it’s not groundbreaking. This highlights the importance of iterative refinement – starting simple and building up complexity.

Example 2: The Basic Code-Writing Agent

Next, we’ll build an agent that can write code. We’ll ask it to create a functional React component that displays a user profile, using Zustand for state management and Tailwind CSS for styling. The system prompt is again relatively simple:

You are a coding assistant that creates code based on user requests.

The agent produces a decent result, including instructions for installing dependencies, defining the Zustand store, and creating the React component. It even integrates the component into a larger application structure.

Example 3: Leveling Up the Code-Writing Agent

Here’s where we start to see the real power of a well-crafted system prompt. We’ll significantly expand the prompt for our code-writing agent, adding details about documentation, styling, testing, optimization, and even the requirement to generate a readme.md file. This, from my own professional experience, is vital for any shared code. I actually used another LLM (like ChatGPT or Claude) to help me generate this more detailed prompt. This is a great technique – leverage the power of LLMs to help you build better LLMs!

The result is dramatically improved. The code is now in TypeScript (TSX), includes code comments, defines types and interfaces within the Zustand store, and, crucially, generates a comprehensive readme.md file with instructions, explanations, and even license information. This demonstrates a key principle: a more detailed and descriptive system prompt leads to a more detailed and useful output.
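One way to keep a long prompt like this manageable is to assemble it from labelled sections, so each requirement can be iterated on independently. A minimal sketch – the section names and wording below are my own, not taken from the video:

```python
# Each section covers one requirement: role, docs, styling, testing, readme.
sections = {
    "Role": "You are a coding assistant that generates production-quality TypeScript/React code.",
    "Documentation": "Add code comments and JSDoc for every exported function and component.",
    "Styling": "Use Tailwind CSS utility classes; no inline styles.",
    "Testing": "Include unit tests for the component and the Zustand store.",
    "Readme": "Generate a readme.md with setup instructions, usage notes, and license information.",
}
detailed_prompt = "\n\n".join(f"## {name}\n{text}" for name, text in sections.items())
```

The assembled string can then be passed as the agent’s `system_prompt` exactly like the shorter versions.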

Example 4: The Invoice-Writing Agent (with a Dynamic Twist)

Now, let’s create an agent that can write invoices. This example introduces a subtle but powerful concept: injecting variables into a static system prompt. We’ll use a template string within the system prompt to include today’s date. This ensures that the invoice always reflects the current date, regardless of when the agent is run. The system prompt looks something like this:

You are an invoice generation assistant. Generate an invoice with the following details… Use today’s date: {today_date} …

We then use Python’s date.today() function to populate the {today_date} placeholder. This is still technically a static prompt (because it’s defined before runtime), but it incorporates dynamic information. The resulting invoice includes the dynamically generated date, a due date (30 days later), and a breakdown of services based on the user’s input. This technique can be adapted to create all sorts of templates: emails, social media posts, presentations – anything that benefits from dynamic content.
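The same formatting trick works for any template. A quick sketch of the date and due-date pieces applied to a reminder email – the email wording is mine, purely illustrative:

```python
from datetime import date, timedelta

# Hypothetical reminder-email template; any str.format placeholders work the same way.
email_template = """Subject: Invoice for {customer}

Hi {customer},
Your invoice dated {today_date} is due on {due_date}."""

today = date.today()
email = email_template.format(
    customer="Customer X",
    today_date=today.isoformat(),
    due_date=(today + timedelta(days=30)).isoformat(),  # due 30 days later
)
```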

Example 5: Introducing Dynamic System Prompts

Now, let’s dive into true dynamic system prompts. These are defined using functions decorated with @agent.system_prompt. This allows us to inject information that’s only known at runtime. We’ll create an agent that provides information about capital cities. The system prompt sets the agent’s role as an experienced historian. We’ll then define a dynamic system prompt function, comparison_city, that takes a city name as a dependency and returns a string like “The city to compare is [city name]”. When we run the agent, we ask about the capital of the US and provide “Paris” as a dependency. The agent correctly identifies Washington DC, provides its history, and compares it to Paris. This opens up a wide, and frankly exciting, world of possibilities.

Example 6: Agents Writing Their Own System Prompts!

This is where things get really meta. We’ll create two agents:

  • Prompt Agent: This agent is an expert prompt writer. Its job is to generate a system prompt for the second agent based on the user’s initial question.
  • Assistant Agent: This agent uses the dynamically generated system prompt from the Prompt Agent to answer the user’s question.

This is a powerful concept. Instead of manually crafting system prompts for every possible scenario, we can let an LLM do the heavy lifting! We’ll run this in a loop to simulate a conversation. The first time through the loop, the Prompt Agent generates a system prompt based on the user’s question (e.g., “I want to learn how to cook”). Subsequent interactions use this generated prompt, along with message history, to maintain context.

The results are impressive. When asked about cooking, the Prompt Agent generates a system prompt that instructs the Assistant Agent to ask about specific cuisines, assess the user’s skill level, and suggest learning resources. When asked about learning Python, the Prompt Agent generates a completely different, tailored system prompt.

Bonus Example: Refining the Business Coach

Finally, let’s revisit our “Hello World” business coach. We’ll take the lessons learned and significantly expand the system prompt, adding details about product-market fit, raising venture capital, building high-performing teams, developing SaaS platforms, monetization strategies, launch and growth, and tech industry knowledge. The difference is night and day. The enhanced agent provides a much more detailed and relevant strategy, covering all the key areas outlined in the expanded system prompt. This drives home the core message: the power of great prompts, built up step by step.

Key Concepts

  • System Prompts: Instructions that frame the agent’s personality, behavior, and scope. They are crucial for getting desired results.
  • Static System Prompts: Defined beforehand using the system_prompt parameter in the agent’s constructor.
  • Dynamic System Prompts: Depend on runtime context and are defined using functions decorated with @agent.system_prompt.

Steps Common to the Examples

Import libraries.

from pydantic_ai import Agent, RunContext
from pydantic_ai.models.openai import OpenAIModel

Define the Model.

The examples shown use gpt-4o-mini. You can change models for different results.

model = OpenAIModel("gpt-4o-mini")

Example 1: Hello World - Basic Static System Prompt

Define the Agent and System Prompt: Create an agent and assign a simple system prompt.

system_prompt = "You're an experienced business coach specializing in guiding technology startups."
business_coach = Agent(model=model, system_prompt=system_prompt)

Run the Agent: Execute the agent with a basic user prompt.

result = business_coach.run_sync(user_prompt="Create a startup strategy for a SaaS business.")
print(result.data)

Example 2: Code Writing Agent - Basic Static System Prompt

Define Agent and System Prompt:

system_prompt = "You are a coding assistant that generates code based on user requests."
coding_agent = Agent(model=model, system_prompt=system_prompt)

Run with User Prompt: Ask the agent to create a React component.

result = coding_agent.run_sync(user_prompt="Create a functional React component that displays user profile (name, email, picture) using Zustand and Tailwind CSS.")
print(result.data)

Example 3: Enhanced Code Writing Agent - Detailed Static System Prompt

Craft a Detailed System Prompt: Use another LLM (like ChatGPT or Claude) to generate a comprehensive system prompt. Include requirements for documentation, styling, testing, and a readme.md file.

system_prompt = """You are a coding assistant. Generate well-documented code.
... (Include detailed instructions for the component, styling, testing, and readme.md generation) ...
"""
coding_agent = Agent(model=model, system_prompt=system_prompt)

Run with the Same User Prompt: As in Example 2. The difference will be in the output due to the enhanced system prompt.

result = coding_agent.run_sync(user_prompt="Create a functional React component that displays user profile (name, email, picture) using Zustand and Tailwind CSS.")
print(result.data)

Example 4: Invoice Writing Agent - Static System Prompt with Variable Injection

Import Date Libraries:

from datetime import date

Create a System Prompt Template: Use a triple-quoted string with curly braces {} as placeholders for variables.

system_prompt = """You are an invoice writing assistant.
... (Invoice details) ...
Use today's date: {today_date}
...
"""

Format the System Prompt: Inject the variable using .format().

formatted_prompt = system_prompt.format(today_date=date.today())
invoice_agent = Agent(model=model, system_prompt=formatted_prompt)

Run with User Prompt: Provide details for the invoice.

result = invoice_agent.run_sync(user_prompt="Create an invoice for Customer X. Services: Web Dev, AI Consulting, Strategy. Total: $50,000.")
print(result.data)

Example 5: Basic Dynamic System Prompt

Create Output Model (Pydantic Model):

from pydantic import BaseModel

class Capital(BaseModel):
    name: str
    year_founded: int
    history: str
    comparison: str

Define the Agent and Dynamic System Prompt: Use @agent.system_prompt to decorate a function that returns part of the prompt.

system_prompt = "You're an experienced historian. Provide capital city information and compare it to another city."
historian_agent = Agent(model=model, system_prompt=system_prompt, result_type=Capital, deps_type=dict)

@historian_agent.system_prompt
def comparison_city(ctx: RunContext[dict]) -> str:
    return f"The city to compare is {ctx.deps['comparison_city']}."

Run with User Prompt and Dependencies:

result = historian_agent.run_sync(
    user_prompt="What is the capital of the US?",
    deps={"comparison_city": "Paris"},
)
print(result.data)
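Because the agent was created with result_type=Capital, result.data arrives as a validated Capital instance rather than free text. A standalone sketch of that shape, with hypothetical values and no API call:

```python
from pydantic import BaseModel

class Capital(BaseModel):
    name: str
    year_founded: int
    history: str
    comparison: str

# Hypothetical values -- this is the validated structure the agent returns.
capital = Capital(
    name="Washington DC",
    year_founded=1790,
    history="Established as the seat of the US federal government.",
    comparison="Unlike Paris, it was planned as a capital from the outset.",
)
```

Fields are then accessed as plain attributes, e.g. `capital.comparison`.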

Example 6: Agent Writing its Own Dynamic System Prompt

Create Output Model (Pydantic Model):

from pydantic import BaseModel
from typing import List

class SystemPrompt(BaseModel):
    prompt: str
    tags: List[str]

Create the Prompt-Writing Agent:

prompt_writer_system_prompt = """You are an expert prompt writer. Create a system prompt for an AI agent
based on the user's question.  Do *not* answer the user's question, only generate the prompt.
Example: Start with 'You are a helpful assistant specialized in...'
"""
prompt_writer = Agent(model=model, system_prompt=prompt_writer_system_prompt, result_type=SystemPrompt)

Create the Assistant Agent:

assistant_agent = Agent(model=model, deps_type=dict)

@assistant_agent.system_prompt
def generated_prompt(ctx: RunContext[dict]) -> str:
    return ctx.deps["generated_prompt"]

@assistant_agent.system_prompt
def generated_tags(ctx: RunContext[dict]) -> str:
    return " ".join(ctx.deps["generated_tags"])

Run in a Loop:

message_history = []
user_question = input("Ask a question: ")

# Generate the system prompt once, based on the first question.
prompt_result = prompt_writer.run_sync(user_prompt=user_question)
generated_system_prompt = prompt_result.data.prompt
generated_tags = prompt_result.data.tags

while True:
    # Run the assistant with the generated prompt and the conversation so far.
    result = assistant_agent.run_sync(
        user_prompt=user_question,
        message_history=message_history or None,
        deps={"generated_prompt": generated_system_prompt, "generated_tags": generated_tags},
    )

    # Pydantic-AI tracks structured messages; carry the full history forward.
    message_history = result.all_messages()

    print(f"Assistant: {result.data}")
    user_question = input("Ask another question (or type 'exit' to quit): ")
    if user_question.lower() == "exit":
        break
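The history-carrying pattern in the loop can be exercised without any API calls by stubbing the agent. The stub below is mine, not part of Pydantic-AI; it only mimics a run_sync whose result exposes the accumulated conversation:

```python
# Stub standing in for assistant_agent.run_sync -- no LLM involved.
class StubResult:
    def __init__(self, data, history):
        self.data = data
        self._history = history

    def all_messages(self):
        return self._history

def stub_run_sync(user_prompt, message_history=None):
    history = list(message_history or [])
    history.append(("user", user_prompt))
    history.append(("assistant", f"echo: {user_prompt}"))
    return StubResult(f"echo: {user_prompt}", history)

message_history = []
for question in ["I want to learn how to cook", "Where do I start?"]:
    result = stub_run_sync(question, message_history=message_history)
    message_history = result.all_messages()  # carry context into the next turn
```

Each pass feeds the previous turns back in, which is exactly how the real loop maintains context across questions.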

Bonus Example - Enhanced Business Coach

Steps are identical to Example 1; the only change is that instead of using:

system_prompt = "You're an experienced business coach specializing in guiding technology startups."

A more detailed system_prompt is used.

system_prompt = """You're an experienced business coach specializing in guiding technology startups from ideation to sustainable growth.
... (Include detailed aspects such as Product-Market Fit, Venture Capital, Team Management, Monetization, etc.) ...
"""

Then the agent is created and executed as in Example 1.
