Day 1: Let's Build an AI Advisor
What You're Building This Week
Over the next five days, you're building an AI advisory board. By the end of the week, you'll be able to ask questions like "What does Theo think about testing?" or "What's Primeagen's take on Vim?" and get responses grounded in what these creators actually said.
Pretty cool, right?
Meet Your Advisors
Before we start coding, let me introduce the experts you'll be adding to your advisory board:
- Theo (t3.gg) - A former Twitch engineer who now runs a popular YouTube channel about web development. He's known for strong opinions on TypeScript, Next.js, and the JavaScript ecosystem. If you've heard of the T3 Stack, that's him.
- ThePrimeagen - A former Netflix engineer famous for his Vim skills and no-nonsense takes on software engineering. He streams coding, talks about performance, and has very strong feelings about text editors.
- Brian (me) - I'm a senior engineer who's spent the last couple of years building AI products at startups. One got acquired; another helped record labels find TikTok influencers. I'll show you that at the end of the week.
You can swap these out for anyone you want - business people, fitness gurus, whatever. The concepts are the same.
What We'll Cover
- Day 1 (today): Make your first LLM API call
- Day 2: System prompts to control AI behavior
- Day 3: RAG (Retrieval-Augmented Generation) with a knowledge base
- Day 4: Add real data from YouTube transcripts
- Day 5: See a real product demo and what's next
Prerequisites
You should be comfortable with:
- JavaScript/TypeScript basics
- Running commands in a terminal
- Basic React concepts (helpful but not required)
Setup: Get the Starter Code
Clone the project and install dependencies:
git clone https://github.com/projectshft/ai-advisor.git
cd ai-advisor
git checkout student-starter
npm install
Important: Make sure you're on the student-starter branch, not main. The main branch has all the solutions, and that's no fun. The whole point is to make this your own.
Setup: Get Your Gemini API Key
We're using Google's Gemini API because it has a generous free tier - perfect for learning.
- Go to ai.google.dev
- Click "Get API key in Google AI Studio"
- Sign in with your Google account
- Click "Create API key"
- Copy your new API key
Create a .env file at the root of your project (if it doesn't exist) and add:
GEMINI_API_KEY=your_api_key_here
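A missing or misnamed key is the most common setup bug, and the SDK's auth error won't always make the cause obvious. Here's a small, hypothetical helper (not part of the starter code) you could drop into your route to fail fast with a clear message:

```typescript
// Hypothetical helper: throw a clear error if an env var is missing,
// instead of getting a confusing auth failure from the SDK later.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing environment variable: ${name}`);
  }
  return value;
}

// Usage: const apiKey = requireEnv('GEMINI_API_KEY');
```

Remember to restart your dev server after editing .env - environment variables are read at startup.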
About the Interface
You might notice the UI looks like an old AS/400 terminal - green screen, retro vibes. That's intentional. We're keeping the frontend simple so you can focus on the AI concepts, not CSS. Feel free to make it prettier if you want, but the styling isn't the point.
How LLM APIs Work
At its core, an LLM API is just like any other API:
- You send text (a "prompt") to the API
- The API processes it with a large language model
- You get text back (the "completion")
That's it. Text in, text out. Everything else - chatbots, AI assistants, code generators - is built on top of this basic pattern. Don't let it feel magical. It's just another API that returns text instead of JSON.
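To make that concrete: under the SDK, the request is a plain HTTP POST with a JSON body. This sketch shows roughly the body shape Google's REST docs describe at the time of writing (the exact shape may change, so check ai.google.dev before relying on it):

```typescript
// Sketch of the JSON body the Gemini REST API expects for a simple
// text prompt. The SDK builds this for you -- it's still just JSON.
function buildGeminiRequest(prompt: string) {
  return {
    contents: [{ parts: [{ text: prompt }] }],
  };
}

// You could send this yourself with fetch(), no SDK required, e.g.:
// fetch(
//   `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=${apiKey}`,
//   {
//     method: 'POST',
//     headers: { 'Content-Type': 'application/json' },
//     body: JSON.stringify(buildGeminiRequest('Hello')),
//   }
// );
```

Text goes in as JSON, text comes back as JSON. The SDK just saves you the boilerplate.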
Step 1: Build the API Route
Open app/api/chat/route.ts. This is where the magic happens (except it's not magic, it's just an API call).
// app/api/chat/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { GoogleGenerativeAI } from '@google/generative-ai';
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({ model: 'gemini-2.0-flash' });
export async function POST(request: NextRequest) {
  try {
    const { message } = await request.json();

    const userResponse = await model.generateContent(message);
    const response = userResponse.response.text();

    return NextResponse.json({ response });
  } catch (error) {
    console.error('Chat API error:', error);
    return NextResponse.json(
      { error: 'Failed to generate response' },
      { status: 500 }
    );
  }
}
Key points:
- GoogleGenerativeAI - The SDK that handles authentication and API calls
- gemini-2.0-flash - A fast model on the free tier (check docs for latest available models)
- generateContent() - Takes your prompt and returns a response
Heads up: Model names change! If you're using Cursor or Claude to help you code (and you should be), they might suggest models that don't exist anymore. Always check the Gemini docs for current models.
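For context, here's roughly what calling this route from the browser looks like. This is a sketch, not the starter repo's exact frontend code; the fetch function is injected as a parameter purely so the logic is easy to test (in the app you'd just use the global fetch):

```typescript
// Minimal sketch of a client-side call to the /api/chat route.
type FetchLike = (
  url: string,
  init?: RequestInit
) => Promise<{ json(): Promise<any> }>;

async function sendMessage(
  message: string,
  fetchFn: FetchLike
): Promise<string> {
  const res = await fetchFn('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // Must match what the route destructures: { message }
    body: JSON.stringify({ message }),
  });
  const data = await res.json();
  return data.response;
}
```

Note the contract on both ends: the client sends `{ message }`, the route returns `{ response }`. If either side changes shape, the other breaks.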
Step 2: Test It!
Make sure your dev server is running:
npm run dev
Open http://localhost:3000 and try sending a message. You should see the AI respond!
Try these prompts:
- "What is machine learning?"
- "Write a haiku about coding"
- "Explain APIs to a 5 year old"
Step 3: Look Under the Hood
Add some console logging to see what's actually happening:
const userResponse = await model.generateContent(message);
console.log(JSON.stringify(userResponse.response, null, 2));
const response = userResponse.response.text();
Check your terminal - you'll see metadata about tokens used, the formatted response, and more. This is just an API response! Nothing magical.
Understanding What Happened
When you sent a message:
- The React frontend sent a POST request to /api/chat
- Your API route called generateContent() with your message
- The Gemini SDK sent your message to Google's servers
- Gemini's model processed your text and generated a response
- The response traveled back through the chain to your browser
A Note on Frameworks and Languages
Here's the thing - the frameworks and languages don't matter. You're a software developer. You could build this in Rust, Python, whatever. The syntax changes, the concepts do not.
I've literally rewritten the same product from Python to TypeScript and back again at different startups. Same exact stuff. So extend things. Break things. Make it weird. That's the whole point.
Key Takeaways
- LLM APIs are simple: text in, text out
- The SDK handles authentication and HTTP requests for you
- Always handle errors - APIs can fail
- Check the docs for model names - they change frequently
- This is just another API. Don't let it feel magical.
Challenge: Extend What You Built
Don't just move on - push yourself a bit. Try one of these:
- Swap providers: Rewrite your API route to use OpenAI or Anthropic instead of Gemini. Check out their docs - OpenAI, Anthropic. You'll see they all work basically the same way.
- Add streaming: Right now you wait for the full response. Can you make it stream token by token? Check Gemini's generateContentStream() method.
- Log everything: Add detailed logging to see tokens used, response time, and model version. This is what you'd need in production.
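If you take on the streaming challenge, here's a starting point for the consuming side. With the SDK you'd get the chunk iterable from model.generateContentStream(message); here the iterable is a parameter (an assumption made so the logic can be sketched and tested without the network):

```typescript
// Each streamed chunk exposes a text() method, mirroring the SDK's shape.
interface TextChunk {
  text(): string;
}

// Accumulate streamed chunks into the full reply, invoking a callback
// per chunk -- e.g. to append text to the UI as tokens arrive.
async function collectStream(
  stream: AsyncIterable<TextChunk>,
  onChunk: (piece: string) => void
): Promise<string> {
  let full = '';
  for await (const chunk of stream) {
    const piece = chunk.text();
    full += piece;
    onChunk(piece);
  }
  return full;
}
```

You'll still need to wire this to a streaming HTTP response from your route; check the Gemini and Next.js docs for the current APIs.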
Pick one and do it before tomorrow. Breaking things is how you learn.
What's Next
Right now, the AI is generic - it has no specific personality or expertise. Ask it about the weather in France and it'll happily tell you. Ask it to solve a linked list problem? Sure! Remember Chipotle's chatbot that was helping people with coding interviews? Yeah, don't be Chipotle.
Tomorrow, we'll fix that with system prompts to keep our advisor in its lane.
See you on Day 2!