🎤 Competition · Awards
Before this: Read Mission Control — Judge Prep first. Also have a notebook in progress before practicing the interview.

Judge Interview Playbook

You have 10–15 minutes. Most teams waste half of that time on a single topic, and half the team never speaks. This guide teaches you to run it as a conversation — and send every member home knowing what they did right.

// Section 01
What Judges Actually Want
Judges are not looking for a presentation. They want proof that students understand their own work.
🏆
One sentence: judges want proof that your team understands why you made each decision — not just what you built.

It Is a Conversation, Not a Presentation

This is the most important idea in this guide. Teams with a memorized speech usually score lower than teams ready to talk. A judge is not your audience — they are a curious engineer asking questions. Respond to what they actually ask, not to what you planned to say.

❌ Presentation Mode
  • Memorized speech from the top
  • One person talks the whole time
  • Describes what the robot does
  • Ignores follow-ups to finish the script
  • Answers every question with more features
✅ Conversation Mode
  • Listens first, then responds
  • Multiple team members contribute
  • Explains why decisions were made
  • Follows where the judge takes it
  • Uses robot and notebook as references

The Three Things Judges Measure the Whole Time

  1. Student ownership — does this student actually understand what they built? Are the words theirs?
  2. Engineering process — can they trace a decision from problem to brainstorm to test to result?
  3. Team collaboration — does everyone contribute? Do hand-offs feel natural?
⚠️
On student ownership: if a mentor coached you on exactly what to say, judges notice. The answers will not match your vocabulary or confidence when a follow-up arrives. Know your own work. Use your own words.

Award-Specific Interview Focus

Excellence — robot, notebook, outreach, and team dynamics together
Design — engineering design process, iteration cycles, notebook depth
Innovate — creative solutions; explain what makes your approach different. The Innovate Award requires a written submission form included in your notebook. You must nominate one specific aspect of your design. Submitting multiple aspects nullifies your consideration. See the Innovate Award Submission Guide below.
Think — programming; explain code decisions in plain language
Build — mechanical construction; engineer leads with build decisions and data
RECF
The Interview Scores the Same Criteria as the Notebook
Judges use the same 8 rubric criteria for the interview that they use for notebooks — but the interview lets them probe deeper. “We chose a four-bar” scores Developing. “We chose a four-bar because we needed constant end-effector angle and our motor budget could not support a wrist joint” scores Expert. Same robot. Different answer. Different score.
🎤 Interview line: “We prepare for judge interviews by mapping every likely question to a specific notebook entry. When a judge asks about our biggest design challenge, we reference the exact EDP cycle where we identified the problem, tested solutions, and iterated — and we cite the page number. Judges can verify everything we say against our notebook in real time. That traceability is what separates an Expert interview from a Developing one.”

Innovate Award — Submission Guide

The Innovate Award recognizes a creative, novel design solution. It is not automatically considered — your team must submit a written form included in your engineering notebook. Here is exactly what that requires and how to do it well.

Critical rule: You may only submit one aspect per event. Submitting multiple aspects automatically nullifies your team’s consideration. Choose one thing and argue it precisely.

The Three Fields on the Submission Form

Field 1 — Brief Description
What is the novel aspect of your design?
Be specific and mechanical. “Our intake” is too vague. “A passive singulator that uses controlled compression to orient game elements without a dedicated motor” is what judges can evaluate. Name the mechanism, describe what it does, and state what makes it non-obvious.
Field 2 — Documentation Location
Page numbers and sections where the development is documented.
Judges flip directly to these pages. If your notebook doesn’t show the development process — brainstorm, decision, build, test — citing those pages hurts rather than helps. Make sure the documentation exists before you submit.
Field 3 — Why It Is Unique
Why is your approach different from other approaches to the same problem?
This is the hardest field and the most important. You need to know what other teams do and explain why yours is different. “We came up with it ourselves” is not an argument. “Most teams use active mechanisms requiring a dedicated motor; ours achieves the same outcome passively, reducing weight and eliminating a stall failure point” is an argument.

Timing and Placement

Timing: The form must be in your notebook at the event. You cannot hand it to a judge verbally or submit it separately. Include it behind the front cover or in a clearly marked appendix section, and reference it in your TOC. The Innovate Award Submission slide is in the appendix of the Spartan Design notebook template (slide 37).

Choosing What to Submit

The strongest submissions are usually not the robot’s primary mechanism — they are the clever solution to a secondary problem that most teams solved poorly or ignored entirely. Ask: what does your robot do that makes other engineers say “how did you do that?” That’s your candidate.

// Section 02
The 8 Rubric Categories in Plain English
Every judge question maps to one of eight areas. Know them, and nothing will surprise you.
📝
Every judge question maps to one of these 8 categories. Know them, and you will recognize what is coming — and who should answer it.

All 8 Categories

  1. Identify the Problem
     Covers: What does the game require? What constraints shaped your design?
     Typical questions: “What problem were you solving?” / “What did you prioritize?”
     Leads: Strategist
  2. Generate Solutions
     Covers: What did you brainstorm? What mechanisms or approaches did you consider?
     Typical questions: “What other designs did you consider?”
     Leads: Engineer
  3. Select the Best Solution
     Covers: How did you choose? What criteria mattered? Decision matrix?
     Typical questions: “Why that approach?” / “How did you decide?”
     Leads: Both (strategist sets up)
  4. Build and Program
     Covers: How did you build it? What did you code? What did implementation teach you?
     Typical questions: “How does the autonomous work?” / “What was hardest to build?”
     Leads: Engineer
  5. Test and Evaluate
     Covers: How did you test it? What did you measure? Results vs targets?
     Typical questions: “How do you know it works?” / “What data did you collect?”
     Leads: All three
  6. Iterate
     Covers: What changed between v1 and now? How many design cycles? What drove each?
     Typical questions: “What changed after your first competition?” / “Walk me through your redesign.”
     Leads: Engineer
  7. STEM Connections
     Covers: What science or math explains your robot? Name the concept and connect it to a mechanism.
     Typical questions: “What physics explains that?” / “Can you explain the math?”
     Leads: Strategist sets up, engineer goes deep
  8. Overall EDP
     Covers: Can you trace a decision from problem to result? Have you repeated the loop multiple times?
     Typical questions: “Walk me through your design process.”
     Leads: Strategist (the overview)
Time math: 10 minutes ÷ 8 categories = 75 seconds per category. Spending 4 minutes on the intake leaves 6 minutes for the other 7 categories. Cover them all.
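The time math above can be sketched as a quick arithmetic check before a practice run:

```python
# Quick sketch of the interview time budget described above.
interview_minutes = 10
categories = 8

# Even split: 600 seconds across 8 categories.
per_category = interview_minutes * 60 / categories  # 75 seconds each

# If 4 minutes go to a single topic, the other 7 categories share 6 minutes.
remaining_per_category = (interview_minutes - 4) * 60 / (categories - 1)

print(per_category, round(remaining_per_category))  # prints: 75.0 51
```

About 51 seconds per remaining category: roughly a third less than the even split, which is why the guide says to cap any one topic.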
// Section 03
Roles, Ownership, and the Hand-Off
How to split speaking time so every member contributes and hand-offs feel natural.
🏆
The hand-off: when a question belongs to a teammate, answer the first sentence and pass it. “We chose a four-bar for constant angle — [Name] can walk you through how we decided.”

Who Owns What

🏎 Driver
  • Autonomous performance and data
  • Match strategy and in-match decisions
  • Driver control tuning choices
  • What you would do differently next match
⚙ Engineer
  • Mechanism design and tradeoffs
  • Build decisions, materials, CAD
  • Test protocols and numbered results
  • Failures and how they were fixed
📊 Strategist
  • Game analysis and problem definition
  • EDP overview — the full loop
  • Notebook organization and decisions
  • STEM connection overview + outreach

Speaking Role Planner

Before every competition, agree on who speaks first for each category. Saves awkward pauses in the actual interview.
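If it helps to make the planner concrete, here is a minimal sketch. The category keys follow Section 02; the assignments are illustrative, not prescriptive:

```python
# Speaking-role planner sketch: who answers FIRST for each rubric category.
# Category keys follow Section 02; assignments here are examples only.
FIRST_SPEAKER = {
    "identify_problem": "strategist",
    "generate_solutions": "engineer",
    "select_solution": "strategist",   # strategist sets up, engineer finishes
    "build_program": "engineer",
    "test_evaluate": "driver",         # data owner starts; all three contribute
    "iterate": "engineer",
    "stem_connections": "strategist",  # sets up, engineer goes deep
    "overall_edp": "strategist",
}

def who_starts(category: str) -> str:
    """Look up who answers the first sentence before the hand-off."""
    return FIRST_SPEAKER[category]
```

Agreeing on this mapping before the event is the whole tool; the lookup just makes the hand-off automatic.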

How to Practice the Hand-Off

  1. Identify which rubric category the question maps to
  2. Answer the first sentence yourself
  3. Pass to the right person by name
  4. The receiver picks up mid-thought — no re-starting
If one person answers every question, judges assume the others do not understand the work — and usually they are right.

When You Do Not Know the Answer

Say so plainly, then either pass to the teammate who knows or explain how you would find out. Intellectual honesty scores higher than a confident wrong answer.

// Section 04
Using the Notebook and Robot as Evidence
Your notebook and robot are physical evidence. Know how to use them as references.
📝
The notebook is not a prop. It is a reference. Know which page has your decision matrix and test data. Use it to back up what you say — not as something to read from.

Using the Notebook

Open it at the start. Hand it to the judge. Then use it actively:

Know your page numbers. “My brainstorming is pages 4–8, test logs are 12–18.” That tells judges the notebook was written and read by the same person.

Using the Robot

The Complete Arc — Strategy to Iteration in One Answer

5 rubric categories in 25 seconds

“The game required scoring blocks from the center reliably — our problem statement. We brainstormed three intake approaches and used a decision matrix to choose rollers based on cycle speed and jam rate. Our first design had a 15% jam rate on tilted elements — page 12. We increased the roller gap by 4 mm, brought the jam rate below 3%, and held that for 8 test sessions.”

That answer covers: problem → brainstorm → selection criterion → test result → specific change → verification. Rubric categories 1, 2, 3, 5, and 6.

Game Analysis Connection

Before competition, know: what specific situation your robot targets, the expected point value, and how your mechanism choice reflects the scoring math. See Game Analysis.

Testing Data Connection

Know your autonomous consistency (n=10, %), your mechanism’s key metric, and one specific change made from data with its before/after result. See Testing, Data & Iteration.
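The consistency number is a simple hit rate. A minimal sketch, using hypothetical test-log data that matches the “9 of 10 runs within 2 inches” example above:

```python
# Sketch: computing the autonomous consistency figure the guide asks you to know.
# run_errors_inches is hypothetical test-log data (n = 10 runs), not real results.
run_errors_inches = [1.2, 0.8, 1.9, 2.4, 1.1, 0.6, 1.7, 1.4, 0.9, 1.3]
TOLERANCE = 2.0  # "within 2 inches" target from the example answer

hits = sum(1 for e in run_errors_inches if e <= TOLERANCE)
consistency = 100 * hits / len(run_errors_inches)

# prints: 9 of 10 runs in tolerance = 90%
print(f"{hits} of {len(run_errors_inches)} runs in tolerance = {consistency:.0f}%")
```

Knowing the n, the tolerance, and the one run that missed is exactly the level of detail a follow-up question will probe.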

// Section 05
Strong vs Weak Answers and Common Mistakes
The formula that separates Expert from Developing, with side-by-side examples.

The Formula: Claim — Evidence — Decision

Every strong answer has all three. Every weak answer is missing at least one.

❌ Weak — description only
“We have a four-bar lift on our robot.”
✅ Strong — claim + evidence + decision
“We chose a four-bar because it maintains constant end-effector angle. We compared it to a direct arm — simpler, but changes angle as it raises, requiring a wrist joint we could not afford in motor budget. The four-bar solved both constraints.”
❌ Weak — vague result
“Our autonomous is really consistent now.”
✅ Strong — result, failure mode, fix, verification
“9 of 10 runs within 2 inches on competition tiles. The one miss was IMU drift from incomplete calibration. Added a 1.5-second wait. Eight clean runs since.”
💡
The test: could a judge learn something from your answer they could not have guessed? If no — you described the robot instead of explaining your thinking.

Common Mistakes

❌ One person answers everything

Judges notice immediately. If the team lead carries the whole interview, judges assume the others do not understand the work.

Fix: assign ownership before competition. Enforce the hand-off in every practice run.
❌ 5 minutes on robot features

Teams describing what the robot does run out of time for process, iteration, and STEM categories.

Fix: 60-second robot overview max. Judges can see the robot. They want your decisions.
❌ Scripts that fall apart on follow-up

A memorized speech survives until the first unexpected follow-up. Then the team freezes.

Fix: prepare topics and evidence, not scripts. Know the material well enough to answer whatever comes.
❌ Claims the notebook cannot support

“We ran 10 tests” when the notebook shows 3. Judges read the notebook at the same time you are talking.

Fix: notebook audit before every competition. If you say it, it must be there.
❌ Mentor-coached answers

When students use vocabulary that clearly is not theirs, judges notice. Follow-up questions expose it immediately.

Fix: know your own work. Use your own words. Judges want to know what you learned.
// Section 06
Mock Interview Tools
Timer, question bank, rubric self-check, speaking planner, and last-5-minutes checklist.

Mock Interview Timer

Set the length and press start. The phase label tracks which category you should be on.


Common Judge Question Bank

One person reads these aloud. Everyone else answers. Rotate who answers each one.

🏆 Worlds Rubric Questions (23–24) — These are the exact questions used at Worlds. Each maps to a rubric score of 0–5. Judges score detail, evidence, and student ownership — not features. Reading from a script lowers your score.

Question Randomizer

Every question below has 8–12 different phrasings. The randomizer picks a different wording each time — so your team practices answering the idea, not memorizing a script.
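The randomizer's behavior can be sketched in a few lines: pick a phrasing at random, never repeating the one just used. The bank below is illustrative, not the official question list:

```python
import random

# Illustrative question bank: each rubric idea has several phrasings.
# These entries are examples, not the official Worlds question bank.
QUESTION_BANK = {
    "iterate": [
        "What changed after your first competition?",
        "Walk me through your redesign.",
        "What did your first version get wrong?",
    ],
    "test_evaluate": [
        "How do you know it works?",
        "What data did you collect?",
        "What did you measure, and against what target?",
    ],
}

def ask(category, _last={}):
    """Return a random phrasing, avoiding the one used last time.

    _last persists between calls (intentional mutable default)."""
    options = [q for q in QUESTION_BANK[category] if q != _last.get(category)]
    choice = random.choice(options)
    _last[category] = choice
    return choice
```

Because the wording changes every time, the team has to answer the idea behind the question rather than recite a prepared line.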

One person asks the question. Everyone else answers without looking at notes.

Rubric Self-Check

Rate your team. 1 = never discussed. 5 = we have data, page references, and can handle any follow-up.

Secondary Interview Prep

If judges call your team back
  • This is a good sign. Secondary interviews usually mean you are in contention.
  • They will go deeper. They are probing for real understanding, not trying to catch you.
  • Bring the notebook again. Same rules apply.
  • Award-specific: Think Award — programmer explains one algorithm in detail. Design Award — engineer traces one full EDP cycle with before/after data.
  • Still a conversation. Listen. Answer the actual question. Pass to the right person.
// Section 07
Competition Day
The pre-interview protocol, what to do during it, and the debrief that improves your next one.
🏆
Most common pre-interview mistake: teams use the 30 minutes before judges arrive to fix the robot. Show up focused. The robot is what it is.

Last 5 Minutes Checklist

During the Interview

Interview Prep Timeline

After the Interview

STEM
Communication as Engineering
In professional engineering, the ability to explain your work to a non-expert is considered as important as the technical work itself. Engineers write reports, present findings, and defend decisions their entire careers. The judge interview is exactly this — practiced in a low-stakes environment where the feedback loop is fast. Teams that learn to do it in VRC, in their own words, from real knowledge, are building a skill they will use every year of their careers.
Related Guides
  • 🏆 Mission Control — Judge Prep Tool
  • 📝 Engineering Notebook
  • 📝 Notebook: Start Here
  • 🎤 Interview Skills Lab
  • 📊 Game Analysis — Push Back
  • 🔬 Testing, Data & Iteration
  • 📅 Season Timeline
A judge asks: "What was the hardest design decision you made this season?" What structure makes the strongest answer?
  • List everything your team found difficult to show the depth of work
  • State the decision, explain why it was hard, describe how you resolved it using your matrix or test data, and what you learned (strongest: claim, evidence, decision)
  • Explain that all decisions were straightforward because your team was well-prepared