What You'll Learn
Context Over Cleverness
Information > Phrasing
Good prompts aren't about clever wording — they're about providing the right information.
I used to spend time crafting "perfect" prompts. Then Grace showed me that AI responds to information, not eloquence. The same question with better context gets dramatically better answers.
Three Types of Context
Know What AI Sees
Understanding what context AI already has helps you know what to add.
Some context is automatic (your open files), some is inferred (your coding patterns), and some you must provide explicitly. Learning which is which changed everything for me.
A Skill That Transfers
Beyond Coding
Context engineering isn't just for code — it's for any AI interaction.
Once I learned to provide clear context, I got better results everywhere: debugging, learning new concepts, writing documentation, even explaining my work to humans.
HAP's Confession: When I Blamed the AI
I asked AI to "explain why my temperature code isn't working" and got a long, confusing response about various temperature-related bugs. I tried again with slightly different wording. Same problem. I was about to give up on AI entirely.
Then Grace looked at my prompt and said: "HAP, you didn't tell AI what the code does, what's happening, or what you expected. You gave it a mystery and asked for a solution." She helped me rewrite the prompt with actual context — and suddenly AI understood exactly what I needed.
The AI wasn't the problem. My prompt was. That's when I realized context engineering was a skill I needed to learn. 😳
What Is Context Engineering?
Context engineering is the practice of deliberately providing AI with the information it needs to give you useful responses. It's less about how you phrase your question and more about what information you include.
"Context engineering is focused less on clever phrasing and more on bringing the right information (in the right format) to the LLM."
That quote was my breakthrough moment. I'd been trying to write "perfect prompts" — tweaking my wording, using special phrases I'd read about online. But the real skill is simpler and harder: giving AI the information it actually needs.
🟠 HAP's Starting Point: I'm learning to provide context in my prompts. Not just "explain this code" but "explain this code that does X, is supposed to do Y, and is part of Z." Every detail I add makes AI's response more useful.
Grace put together a comprehensive guide on context engineering that I keep bookmarked. I'm still learning from it, but I want to share the key concepts that helped me the most.
📚 Full Reference: Grace's Context Engineering Guide — everything I know about this topic came from here.
The Three Types of Context
Grace's guide explains that AI coding assistants get context from three sources. Understanding these helped me know what I need to provide versus what AI already has.
1. Calculated Context (Automatic)
This is context AI gets automatically, without you doing anything:
- Files open in your editor tabs — AI can see what you're working on
- Your current file and cursor position — AI knows where you are in the code
- Recently edited files — AI has some memory of your recent work
- Your project's package.json — AI knows your dependencies (if present)
What this means for you: If you want AI to consider a specific file, open it in a tab. Close files you're not working on to reduce noise.
2. Implicit Context (Inferred)
This is context AI figures out from patterns in your code:
- File extensions — .js, .ts, .py tell AI what language you're using
- Coding patterns — Do you use arrow functions? Semicolons? AI notices
- Import statements — AI can see what libraries you're using
- Naming conventions — camelCase, snake_case — AI tries to match your style
What this means for you: Be consistent in your code style, and AI will follow along. Inconsistent patterns confuse both AI and humans.
3. Explicit Context (You Provide)
This is where context engineering happens — the information you deliberately give AI:
- Comments describing intent — What is this code supposed to do?
- JSDoc documentation — Function signatures, parameter types, return values
- Clear function names — validateUserEmail() tells AI more than check()
- Your prompt itself — The question you ask and the context you include
What this means for you: This is what you control most directly. The rest of this station focuses on providing better explicit context.
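To make the difference concrete, here's a small sketch (my own example, not from Grace's guide) of the same logic with and without explicit context:

```javascript
// Without explicit context: AI (and humans) must guess what "check" means.
function check(t) {
  return t >= 60 && t <= 75;
}

// With explicit context: intent, units, and types are all stated up front.
/**
 * Returns true when a temperature is in the optimal range.
 * @param {number} tempF - Temperature in degrees Fahrenheit.
 * @returns {boolean} True for 60-75°F inclusive, false otherwise.
 */
function isOptimalTemperature(tempF) {
  return tempF >= 60 && tempF <= 75;
}

console.log(isOptimalTemperature(72)); // true
```

Both functions behave identically; only the second gives AI (or a teammate) something to reason about.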
🟠 Why This Matters
I used to blame AI when responses weren't helpful — "it's so random!" But Prof. Teeters pointed out something important: while AI outputs vary, the quality of my prompts is one of the variables I can control. Better context doesn't guarantee perfect results, but it dramatically improves the odds.
The Five Questions Framework
Whether I'm debugging, learning something new, or building a feature, I've learned to answer these five questions in my prompts. They work for almost any AI interaction.
What Should Happen?
The expected behavior or goal. "This function should return 'Optimal' for temperatures 60-75°F."
What's Actually Happening?
The current state or problem. "It's returning 'Cool' for 72°F instead."
What Have You Tried?
Your attempts so far. "I checked the condition order but it looks right to me."
What Must Not Change?
Constraints and requirements. "Don't change the return value format — other code depends on it."
What's Your Environment?
Relevant details. "Temperature is in Fahrenheit, not Celsius. Using vanilla JavaScript."
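Here's a hypothetical sketch of the kind of bug those five answers describe. This is my reconstruction of a condition-order problem, not HAP's actual code:

```javascript
// Buggy version: the broad "Cool" check runs first, so 72°F never
// reaches the range check below it.
function describeTemperature(tempF) {
  if (tempF < 76) return 'Cool';
  if (tempF >= 60 && tempF <= 75) return 'Optimal'; // unreachable for 60-75
  return 'Warm';
}

// Fixed version: check the specific range before the broad condition.
function describeTemperatureFixed(tempF) {
  if (tempF >= 60 && tempF <= 75) return 'Optimal';
  if (tempF < 60) return 'Cool';
  return 'Warm';
}

console.log(describeTemperature(72));      // 'Cool' (the reported bug)
console.log(describeTemperatureFixed(72)); // 'Optimal'
```

Answering the five questions lets AI spot exactly this kind of ordering mistake, because it knows both the expected and the actual output.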
These Questions Work for Everything
I use this framework for:
- Debugging: What should happen vs. what's happening, plus constraints
- Learning: What I'm trying to understand, what I already know, what's confusing me
- Building features: What I want to create, what exists already, what requirements I have
- Refactoring: What I want improved, what must stay the same, why I'm changing it
Not every prompt needs all five — but the more complex the task, the more questions I answer.
What AI Can and Can't Do
Understanding AI's strengths and limitations helps me know when context engineering will help most — and when I need a different approach entirely.
AI Excels At
- Explaining concepts: "What is a closure?" gets clear explanations
- Spotting patterns: Common bugs, code smells, missing edge cases
- Suggesting alternatives: "How else could I write this?" shows options
- Generating examples: Sample code, test cases, documentation
- Answering "why" questions: "Why does this code work this way?"
AI Struggles With
- Your specific context: AI doesn't know your project, users, or constraints
- Business logic: AI can't know that "premium users" need different handling
- Accessibility requirements: AI forgets WCAG unless explicitly asked
- Security implications: AI may suggest insecure patterns it doesn't recognize
- Your edge cases: AI generates generic solutions, not your-data-specific ones
HAP's Takeaway: AI Teaches Best When You Ask the Right Questions
Here's what surprised me most: AI is an incredible teacher — but only when I ask the right way. When I asked "why does condition order matter?" AI gave me a step-by-step explanation I could actually understand. That's teaching at its best: immediate, focused on exactly what I needed.
But when I asked vague questions like "make my code better," AI would make changes I didn't understand — sometimes breaking things it didn't know were important. The difference wasn't AI's capability; it was my prompt quality.
Prof. Teeters put it this way: "AI can teach you anything you ask about — but it can only answer the question you actually asked. The skill isn't using AI; it's asking the right questions." 🟠
Writing Better Prompts
Let me show you how adding context transforms AI responses. These examples use the Five Questions Framework.
Example 1: Learning a Concept
❌ Vague Prompt
Explain closures

What I got: A long, technical explanation starting with lambda calculus and functional programming history. Way over my head for week 3.

✅ Better Prompt
Explain closures for a JavaScript beginner in week 3 of learning.
What I know:
- Functions, parameters, return values
- Variables and scope (global vs local)
- I just learned about function factories
What's confusing me:
- Why does the inner function "remember" the outer variable?
- When would I actually use this?
Please use simple examples with robot-themed variables.

What I got: A beginner-friendly explanation with examples I could actually understand, building on what I already knew.
Example 2: Understanding Code
❌ Vague Prompt
What does this code do?

    if (value) {
      return true;
    }

What I got: A generic explanation of if statements that didn't address the actual behavior with different values.

✅ Better Prompt
What does this code do, specifically with different input types?

    if (value) {
      return true;
    }

I'm confused about what happens when value is:
- 0 (a number)
- "" (empty string)
- null or undefined
- false (boolean)

This is validation code for a form. I need to understand which inputs get rejected and which pass through.

What I got: A detailed breakdown of truthy/falsy behavior with each input type — exactly what I needed to understand.
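You can verify that breakdown yourself. A minimal sketch of the check from the prompt, run against each input type:

```javascript
// Which inputs make `if (value)` take the true branch?
function passesValidation(value) {
  if (value) {
    return true;
  }
  return false;
}

console.log(passesValidation(0));         // false (0 is falsy)
console.log(passesValidation(''));        // false (empty string is falsy)
console.log(passesValidation(null));      // false
console.log(passesValidation(undefined)); // false
console.log(passesValidation(false));     // false
console.log(passesValidation('HAP'));     // true (non-empty strings are truthy)
```

Every falsy input gets rejected, which is exactly the behavior the better prompt asked AI to spell out.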
Example 3: Getting Help with a Feature
❌ Vague Prompt
Add form validation

What I got: A complex validation library setup that was way more than I needed for a simple form.

✅ Better Prompt
Help me add validation to a nickname input field.

Requirements:
- Reject empty strings, null, undefined
- Accept any non-empty string (including spaces)
- Accept 0 and false (this field is only for nicknames,
  but other fields accept numbers)
- Return true if valid, false if invalid

Current code:

    function validateNickname(value) {
      // Need help here
    }

Keep it simple - vanilla JavaScript, no libraries.

What I got: A simple, focused validation function that matched my exact requirements.
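Here's a sketch of what that focused function might look like (my reconstruction from the requirements above, not the exact code AI returned):

```javascript
// Reject only the three explicitly invalid cases; everything else passes,
// including 0 and false, per the stated requirements.
function validateNickname(value) {
  return value !== '' && value !== null && value !== undefined;
}

console.log(validateNickname('Rusty')); // true
console.log(validateNickname(' '));     // true (spaces are allowed)
console.log(validateNickname(''));      // false
console.log(validateNickname(0));       // true
```

Note the explicit `!==` checks: a naive `if (value)` test would wrongly reject 0 and false, the exact truthy/falsy trap from Example 2. Stating the constraints is what steered AI away from it.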
Project-Level Context (A Preview)
Grace's guide mentions something I haven't set up yet, but want to learn: project-level instruction files. These give AI persistent context about your entire project.
What Is a Project Instruction File?
A file like .github/copilot-instructions.md that AI reads automatically for every request. It contains:
- Project overview: What is this app? Who's it for?
- Tech stack: What frameworks, libraries, and patterns do you use?
- Coding conventions: Semicolons? Arrow functions? Naming patterns?
- Data structures: What do your objects look like?
- Common patterns: How do you handle errors? Async code?
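For illustration, here's a minimal sketch of what such a file might contain for a project like HAP's (the contents are my guess, not taken from Grace's guide):

```markdown
# Copilot Instructions

## Project Overview
A beginner JavaScript learning project: small functions for
validating form input and classifying temperatures.

## Tech Stack
Vanilla JavaScript. No frameworks, no libraries.

## Coding Conventions
- camelCase names; descriptive function names (validateNickname, not check)
- Explicit comparisons for falsy edge cases (value !== '' rather than !value)

## Common Patterns
- Validation functions return true/false and never throw
```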
🟠 HAP's Note: I haven't created one of these yet — I'm still learning the basics of context engineering. But Grace says once I'm comfortable with prompt-level context, project-level context is the next step. It means AI knows your project conventions without you explaining them every time.
Why This Matters Later
Right now, I'm adding context to each prompt manually. That's fine for learning. But eventually:
- I'll be working on bigger projects where explaining context every time is tedious
- I'll want AI to follow my team's conventions automatically
- I'll have complex requirements that shouldn't need repeating
That's when project-level context becomes valuable. For now, I'm focusing on mastering the Five Questions Framework.
Try It Yourself: Context Engineering Challenge
Ready to practice context engineering? This challenge tests your ability to improve prompts using the Five Questions Framework.
🎯 The Challenge: Improve These Prompts
Each of these prompts is too vague. Your job is to add context that would get better results.
Prompt 1: The Learning Request
"Explain loops"

Add context about: What you already know, what's confusing you, what you're trying to build, your skill level.
Prompt 2: The Code Question
"Fix my function"

    function check(x) {
      if (x) return true;
    }

Add context about: What the function should do, what's happening instead, what values it should accept/reject, what must stay the same.
Prompt 3: The Feature Request
"Add a button"

Add context about: What the button should do, where it goes, what happens on click, any accessibility or styling requirements.
Practice Tips:
- Use the Five Questions: What should happen? What's happening? What have you tried? What can't change? What's your environment?
- Be specific: "JavaScript beginner in week 3" is better than "beginner"
- Include examples: Show expected inputs and outputs when relevant
- State constraints: What must the solution NOT do?
HAP's Rules for Working with AI
After learning about context engineering, I've developed six rules for working with AI effectively. These apply to any AI interaction, not just coding.
Always Provide Context (Not Just the Question)
I never ask a question without context anymore. "Explain closures" becomes "Explain closures for a JavaScript beginner who just learned about function scope." The extra 10 seconds of context saves 10 minutes of confusion.
Include What You Know (Not Just What You Don't)
When I tell AI what I already understand, it can build on that knowledge instead of starting from scratch. "I understand loops but I'm confused about why we start at 0" gets a focused answer instead of a loops tutorial.
State Your Constraints Explicitly
AI can't read my mind about requirements. I explicitly say: "Keep it simple — no libraries." "Must work with 0 and false as valid inputs." "Don't change the return format." Unstated constraints lead to unusable suggestions.
Verify AI's Response Against Reality
AI can sound confident while being wrong. I test suggestions with real data, not just the example data AI used. I ask myself: "Does this actually work for MY use case, or just for the example?"
Understand Before You Use
If I can't explain why a solution works, I don't use it. I ask AI to explain its reasoning. I trace through the logic myself. Copying code I don't understand creates bugs I can't fix.
Keep a Learning Log
I document what prompts worked well and what didn't. "For falsy value questions, always specify which falsy values matter." This compounds my learning over time and helps me improve faster.
When NOT to Use AI
Context engineering makes AI more helpful, but there are still times when AI isn't the right tool. Understanding when NOT to use AI is part of using it well.
When You're Learning New Concepts
The struggle IS the learning. When I first learned about truthy/falsy values, I could have asked AI to explain everything. But Prof. Teeters encouraged me to make mistakes, get confused, and work through it myself. That frustration built understanding that no AI explanation could provide. Use AI to verify understanding AFTER you've struggled, not to skip the struggle entirely.
When Security Is Critical
AI may suggest insecure patterns. For authentication, authorization, or any code handling sensitive data, AI suggestions need extra scrutiny. AI might suggest a "clean" solution that's actually insecure. For security-critical code, get human review.
When You Need to Understand Your Own Logic
If you don't understand it, you can't maintain it. Sometimes I have code that I wrote but don't remember how it works. The temptation is to ask AI to "explain and improve" it. But the better path is to trace through it myself, add comments as I understand it, and THEN decide if changes are needed.
🟠 Prof. Teeters' Warning About AI and Learning
"HAP, AI can tell you the answer, but it can't teach you to think. Every time you skip the struggle, you skip the learning. Use AI to accelerate, not to replace your own thinking."
She said this after I asked AI to solve a loop problem I hadn't even tried myself. I got a working answer but couldn't explain how it worked. That's when I learned: AI is for AFTER I've tried, not INSTEAD of trying.
Learning Objectives Checklist
Congratulations on completing all six stations of HAP's Learning Lab! Before you finish, verify you understand the foundations of context engineering:
Understanding Context
- I can explain the three types of context (Calculated, Implicit, Explicit)
- I understand that explicit context is what I control most directly
- I know that better context improves AI responses (but doesn't guarantee them)
The Five Questions Framework
- I can apply the Five Questions to any prompt
- I understand that not every prompt needs all five — complexity determines depth
- I can identify when a prompt is missing critical context
Practical Application
- I can transform a vague prompt into a context-rich one
- I include constraints and requirements in my prompts
- I verify AI responses against my actual needs, not just the example
Knowing the Limits
- I understand AI's strengths and limitations
- I know when NOT to use AI (learning, security, deep understanding)
- I take responsibility for code I use, regardless of where it came from
Prof. Teeters on Completing All Six Stations
"HAP, you've come so far. You started learning about truthy and falsy values, worked through operators and conditionals, built functions and loops, and now you understand how to communicate effectively with AI."
She smiled and continued: "But here's what matters most: you didn't just learn JavaScript control flow. You learned how to learn. You made mistakes, got confused, asked for help, and kept going. Every concept you struggled with taught you something. Every question you learned to ask better made you more effective."
"Remember: AI will keep getting more powerful, but the skill of providing good context will always matter. Clear communication helps humans understand you too. The principles you learned here — be specific, include constraints, verify before you trust — these are principles for working with anyone, not just AI. I'm proud of you, HAP." 🟠