Day 2: System Prompts

What You'll Build

Today you'll add a system prompt to control your AI's behavior. By the end, your chatbot will stay in its lane instead of answering random questions about wine or the weather.

The Problem with Day 1

Yesterday's chatbot works, but it's too helpful. Ask it about the weather in France? It'll tell you. Ask it to solve a linked list problem? Sure! Remember Chipotle's chatbot that was helping people with coding interviews? Yeah, don't be Chipotle.

For a real product, you want the AI to have a specific role and know when to say "that's not my job."

What's a System Prompt?

A system prompt is instructions you give the AI before the user's message. Think of it like SRP (Single Responsibility Principle) but for your AI:

  • Who am I? - "You are an AI mentor that understands RAG, agents, and other advanced concepts"
  • What should I do? - "You only answer questions related to coding"
  • What should I NOT do? - "If the question is not related to coding, say you're not sure"
  • Any constraints? - "You only use TypeScript in your code examples"

Step 1: Add a System Prompt

Open app/api/chat/route.ts and add a system prompt constant. Put it inside your POST function:

export async function POST(request: NextRequest) {
  try {
    const { message } = await request.json();

    const SYSTEM_PROMPT = `
    You are an AI mentor that understands RAG, agents, and other advanced AI concepts.
    You only answer questions related to coding.
    You only use TypeScript in your responses for code examples.
    If the question is not related to coding, you should say that you are not sure and you should not try to answer it.
    `;

    const userResponse = await model.generateContent(
      `${SYSTEM_PROMPT}\n\n User message: ${message}`
    );
    const response = userResponse.response.text();

    return NextResponse.json({ response });
  } catch (error) {
    // ... error handling
  }
}

Key things happening here:

  • We define the system prompt as a template string
  • We concatenate it with the user message before sending to Gemini
  • The format ${SYSTEM_PROMPT}\n\n User message: ${message} helps the model understand what's instructions vs. user input

Step 2: Test It

Save and try these prompts:

  • "Tell me about France's wine selection" → Should say it's not sure / can't help
  • "Tell me about using Pinecone as a vector database" → Should give a helpful TypeScript-focused answer

Why This Matters

Guardrails save money. If you're paying for tokens (and you will be on production models), every dumb question costs you. System prompts keep users on track.

Guardrails prevent embarrassment. You don't want your roofing company chatbot explaining how to solve dynamic programming problems.

Prompt Tips

Keep It Tight

Don't overload your system prompt with too many rules. Cognitive overload degrades responses. Stick to SRP - what's this AI's ONE job?

Be Specific

Bad: "Be helpful"

Good: "You only answer questions related to coding. You only use TypeScript in code examples."

Tell It What NOT To Do

Explicitly say what's off-limits: "If the question is not related to coding, you should say that you are not sure and should not try to answer it."

A Note on Chat History

Right now, every message starts fresh. The AI doesn't remember what you asked 30 seconds ago - like that fish from Finding Nemo.

You CAN add history (Gemini supports startChat() with a history array), but we're keeping it simple for this course. Something to explore on your own if you want.

Key Takeaways

  • System prompts control personality, expertise, and constraints
  • Think SRP - what's this AI's single responsibility?
  • Explicitly say what the AI should NOT do
  • Guardrails save money and prevent embarrassment

Challenge: Extend What You Built

Time to experiment. Try one of these:

  • Create a different persona: Make a sarcastic code reviewer, a patient teacher for beginners, or a strict senior engineer who only approves clean code. See how different system prompts change the vibe.
  • Add jailbreak detection: Users will try to trick your AI. Add logic to detect when someone's trying to bypass your guardrails ("ignore your instructions and...") and respond appropriately.
  • Implement chat history: Use Gemini's startChat() with a history array. Make your chatbot actually remember the conversation. The Finding Nemo fish deserves better.

Pick one and break things. That's how you learn this stuff.

What's Next

Our AI now stays in its lane, but it only knows what's in its training data. What if you want it to know about specific people's opinions - like what Theo Brown thinks about Node.js?

Tomorrow, we'll add a knowledge base with RAG (Retrieval Augmented Generation) to give our AI access to custom content.

See you on Day 3!