Four guides, one system. Where to start on kickoff day, when to write each entry type, what Emerging vs Expert actually means, and what Design Award judges look for across 5–8 minutes of review.
// Section 01
The Notebook Pathway — Where to Start
Four guides, one pathway. Which guide to open on kickoff day, which to open the night before competition, and how they connect.
🎯
Start here if you have never set up a notebook before. This pathway walks you through every resource on this site in the order that makes sense — from kickoff day to pre-competition audit.
If it's kickoff day
Open Getting Started first. Complete the first three entries before doing anything else. The template can wait until Day 2.
If you’re a coach setting up
Open Template Guide first. Build the master, duplicate for each team, then run kickoff. Students start in Getting Started.
If competition is in 2 weeks
Open Mission Control → Notebook Audit tab. Fix the red items. Then open Engineering Notebook → Interview Prep.
If you want Design Award
Open Engineering Notebook and read the rubric table. Your goal is Expert on all six criteria. Then read Judge Interview Playbook.
// Section 02
When to Write What
A season-by-season schedule showing when each entry type gets written and which site resources support each phase.
📅
The notebook is written in parallel with the robot — not after it. This schedule shows when each type of entry gets written and what site resources support each phase.
Kickoff Day
K
First Three Entries
All Team Members
Write: (1) Team roster with roles and ownership statements. (2) Season goals — measurable targets for robot performance and notebook quality. (3) Game analysis — scoring breakdown, priority elements, criteria and constraints.
Build Season
Build log (date, members, what changed, why, before/after photo), CAD drawings, programming log with constants. Orange slides. One entry per change — not one entry per week.
After Each Competition
Re-identify the problem based on competition data. The iteration divider slide signals the new EDP loop. This is what separates Expert notebooks from Proficient ones.
Pre-Competition Audit
Run the notebook audit in Mission Control or the Template Guide. Fix every red item. Every entry needs Written By, Witnessed By, and Date. TOC must be current.
// Section 03
What Emerging vs Expert Actually Means
Expert: Detailed build log with photos. Code shown alongside design intent. Every change has a reason.
5. TEST & EVALUATE
Emerging: Testing mentioned but results not recorded.
Proficient: Tests performed and results noted with some data.
Expert: Original testing. Data tables with n≥5 trials. Benchmark targets set before testing. Conclusions drive next action.
6. ITERATE (EDP CYCLES)
Emerging: Only one design cycle shown across the season.
Proficient: Two or more cycles visible with some continuity.
Expert: Multiple full cycles. Each cycle explicitly linked to data from the previous one. V1 → V2 comparison shows measurable improvement.
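The Expert-level test standard above is just arithmetic: n≥5 trials, a benchmark chosen before testing, and a pooled result compared against it. A minimal sketch of that math, with all trial counts, jam numbers, and the 5% benchmark purely illustrative (not from any real team's data):

```python
# Hypothetical sketch of an Expert-level test entry's arithmetic:
# n >= 5 trials per version, a benchmark set BEFORE testing,
# and a conclusion that drives the next action.
BENCHMARK_JAM_RATE = 0.05  # target chosen before any trial was run

# Each trial: (game pieces fed, jams observed). Five trials per version.
v1_trials = [(20, 3), (20, 2), (20, 4), (20, 3), (20, 3)]
v2_trials = [(20, 1), (20, 0), (20, 1), (20, 0), (20, 1)]

def overall_jam_rate(trials):
    """Pooled jam rate across all trials: total jams / total attempts."""
    attempts = sum(a for a, _ in trials)
    jams = sum(j for _, j in trials)
    return jams / attempts

for label, trials in [("V1", v1_trials), ("V2", v2_trials)]:
    rate = overall_jam_rate(trials)
    verdict = "meets" if rate <= BENCHMARK_JAM_RATE else "misses"
    print(f"{label}: jam rate {rate:.0%} ({verdict} {BENCHMARK_JAM_RATE:.0%} benchmark)")
    # -> V1: jam rate 15% (misses 5% benchmark)
    # -> V2: jam rate 3% (meets 5% benchmark)
```

The point is the shape, not the tool: a table of raw trials, a pre-declared target, and a pooled before/after comparison the next iteration entry can cite.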
🔭
“Fully Developed” = scoring Emerging or higher on the first four criteria. Meeting that threshold is what gets your notebook scored at all. Below it, judges set the notebook aside. Above it, every criterion is ranked and compared against other teams.
RECF
The Rubric Is a Sorting Tool, Not a Test
Judges use the rubric to rank notebooks quantitatively first, then apply qualitative judgment for final award decisions. A notebook that scores Expert on every criterion is not automatically the Design Award winner — but it is guaranteed to be in the final deliberation. A notebook that scores Emerging on most criteria will not make the cut, regardless of how good the robot was.
// Section 04
Student Ownership and EN4
What student-centered means in practice, what mentors can and cannot do, and the RECF EN4 rule on AI-generated content.
⚠️
RECF EN4 is explicit: using AI tools to generate, organize, enhance, or alter notebook content violates the Student-Centered Policy. This includes using AI to draft entry text, improve writing quality, suggest what to write, or fill in placeholder prompts. The template provides structure. Students provide everything else.
What “Student-Centered” Means in Practice
A student-centered notebook has one test: can every team member explain every entry they wrote, in detail, to a judge who asks follow-up questions? If yes, the notebook is student-centered. If no, it is not — regardless of how well-organized it looks.
✅
A student who says “I wrote that entry the night after we tested the intake and the numbers surprised us” owns that entry.
✅
A student who can point to the page, explain the test conditions, and describe what changed in response to the data owns that entry.
❌
A student who reads from the notebook but cannot answer “why did you choose that test protocol” does not own that entry.
❌
Entries all written in one session after the season ends. Judges check version history to verify chronological writing, and backdating is plainly visible.
❌
Writing style that does not match the student’s vocabulary, grade level, or demonstrated knowledge in the interview. Judges notice vocabulary mismatches.
The Originality Check — 3 Questions Before Every Submission
Could I explain this entry to a judge who asks three follow-up questions? If not, rewrite it in your own words until you can.
Does this entry describe something that actually happened, in the order it happened? A good entry reads like a lab notebook, not like a report written after the fact.
Are there at least two different writing styles visible across all entries? On a 3-person team, judges expect three voices. Identical phrasing across all entries is a red flag.
What Mentors Can and Cannot Do
✅ Mentors can
Set up the template structure
Explain what the rubric criteria mean
Ask students questions about their work
Review entries and point out missing elements
Show examples of strong vs weak entries
Set up version history monitoring
❌ Mentors cannot
Write entries or rewrite student text
Tell students exactly what to write
Use AI to draft or improve entries
Fill in decision matrix scores for students
Edit entries after submission
Reconstruct entries retroactively
❌
RECF EN4: “The use of artificial intelligence / large language model (AI/LLM) programs or tools to generate, organize, enhance, or alter Engineering Notebook content or programming code is contrary to the RECF Student-Centered Policy.” This is not a gray area. If AI wrote it, it is a violation.
// Section 05
What Judges Actually Look For
What judges see in 5–8 minutes, the 5 things they skip, and what separates Design Award notebooks from the rest.
🏆
The notebook is judged on the same criteria as the interview. A team that knows their notebook cold — can point to any entry, explain what happened, and describe what they did next — will win more judge interviews than a team with a beautiful notebook they barely remember writing.
How Judges Evaluate Notebooks
Judges typically have 5–8 minutes per notebook. They are not reading every word. They are looking for:
Evidence that the EDP happened — problem defined, options compared, decision documented, built, tested, data recorded
Evidence that it happened more than once — iteration is the single biggest differentiator between Proficient and Expert
Evidence that multiple students contributed — different writing styles, different “Written By” names
Evidence that entries were written in real time — dates that match the season calendar, version history that shows progressive editing
The 5 Things Judges Skip
Decoration without content. Themed slide backgrounds, icons, and custom fonts do not score rubric points. Substance scores. Judges skip visually busy slides that say nothing.
Entries without dates. An undated entry is evidence-free. Judges cannot tell if it was written the day it happened or the week before competition.
Summaries of decisions without showing the process. “We chose a four-bar lift” is not evidence. “We compared three lift designs using a decision matrix — here are the scores and here is why we weighted torque most heavily” is evidence.
Test logs with no data. “We tested the intake and it worked” is not a test log. It is a one-sentence absence of evidence.
One author across 80 slides. If every entry shows the same “Written By” name, judges assume only one person understands the robot. They will probe the others in the interview.
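The decision-matrix evidence described above ("here are the scores and here is why we weighted torque most heavily") is a weighted sum. A minimal sketch of that calculation, where the criteria, weights, candidate lifts, and 1–5 scores are all invented for illustration, not taken from any real team's matrix:

```python
# Hypothetical decision-matrix sketch: three lift designs scored 1-5
# on four criteria, with torque weighted most heavily. All names,
# weights, and scores are illustrative.
WEIGHTS = {"torque": 0.4, "build_time": 0.2, "weight": 0.2, "reach": 0.2}

CANDIDATES = {
    "four-bar lift": {"torque": 4, "build_time": 4, "weight": 3, "reach": 3},
    "scissor lift":  {"torque": 2, "build_time": 2, "weight": 2, "reach": 5},
    "cascade lift":  {"torque": 3, "build_time": 3, "weight": 4, "reach": 4},
}

def weighted_score(scores):
    """Sum of (criterion score x criterion weight) over all criteria."""
    return sum(scores[c] * w for c, w in WEIGHTS.items())

# Rank candidates from highest to lowest weighted score.
ranked = sorted(CANDIDATES.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {weighted_score(scores):.1f}")
    # -> four-bar lift: 3.6
    # -> cascade lift: 3.4
    # -> scissor lift: 2.6
```

A notebook entry would show this as a table plus a sentence justifying the weights; the arithmetic is the same either way, and it is what turns "we chose a four-bar lift" into evidence.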
What Separates Design Award Winners
Across the rubric criteria, Design Award notebooks consistently show:
3+ full EDP cycles with each cycle explicitly referencing data from the previous
Decision matrices for every major choice — drivetrain, primary mechanism, autonomous strategy, rebuild decisions
Test data with before/after comparisons — not just “we tested it” but “V1 jam rate: 15%, V2 jam rate: 3% after roller gap change”
STEM connections that name specific principles — gear reduction, moment of inertia, PID control, Newton’s second law — connected to specific mechanisms
Competition reflections that feed directly into the next EDP cycle
📝
The interview and the notebook tell the same story. When a judge asks “why did you choose that intake design,” the answer should match what is on page 18 of the notebook. Practice the interview with the notebook open. Point to the evidence as you speak.
The squad system generates notebook evidence across the whole season. Every entry type, mapped to an EDP phase.
📝 Documenting the squad system
What to write — and when.
The squad system generates notebook evidence across the whole season — from kickoff day through the final tournament. Here is what to document and where it fits into the existing entry format.
"Your role is what you train to master. Your squad is where you compete right now."
Kickoff day entry
Team roster and role ownership
Record each student's permanent role: Driver, Engineer, or Strategist
Record current squad assignment: Qualifier Squad or League Squad
Note that squad assignments may change during the season
Identify role pairs: who trains with whom across squads
Season-long entries
Paired training and cross-training
Log shared reps between role partners — who practiced, what was drilled
Document cross-training observations: what each student learned outside their primary role
Record growth evidence: skill benchmarks, consistency improvements, reliability flags
Note any squad movement and what readiness factor drove the decision
Before every tournament
Tournament role roll call
Record the roll call date and who participated
Document which competition roles each student is confirmed ready for: drive, pit, scouting, strategy, notebook/interview
Note any contingency assignments if attendance changes
Record evidence of readiness: sessions logged, drills completed, benchmarks met
After every event
Tournament reflection — both squads
Drive team reflection: what the competing squad observed in matches
Scout reflection: what the observing squad recorded about opponents and field conditions
Strategy adjustments: what changed and what data drove the change
Team depth evidence: note when a student filled an unexpected role and how preparation made it possible
What judges can see — evidence checklist
A notebook that documents the squad system well gives judges visible evidence of team depth, student ownership, and real-season development — across multiple students, multiple events, and multiple EDP cycles.
✓ Role and squad assignments documented on kickoff day — with a note that squads can change
✓ Multiple authors visible across entries — different writing styles, different "Written By" names, both squads represented
✓ Paired training logs — specific sessions, specific partners, measurable outcomes
✓ Pre-tournament roll call entries with readiness confirmation for each student
✓ Post-event reflections from both the competing squad and the observing/scouting squad
✓ Scouting contributions documented — what data the scout collected and how it informed the next match or event
✓ Strategy adjustments linked to specific match data — not just "we changed the auton" but why and what the data showed
✓ Any squad movement documented — what changed, what readiness evidence supported the move, how it affected training
The engineering notebook applies the same documentation standard used in regulated industries — aerospace, medical devices, and pharmaceutical manufacturing — where every design decision must be traceable to a date-stamped record written at the time of the decision. In these fields, undocumented or backdated records invalidate the entire engineering process. RECF EN4 mirrors this standard: the notebook must be contemporaneous (written at the time), chronological (entries in date order), and student-authored. These are not stylistic preferences — they are the same requirements that make engineering records legally and professionally defensible.
🎤 Interview line: “We treat our notebook like an engineering lab record — every entry is written the same day as the work, not reconstructed later. In regulated engineering, a decision that isn’t documented at the time it was made doesn’t count as evidence. We follow the same standard because judges can verify timestamps and version history.”
A judge opens your notebook and checks version history. All three kickoff entries were created the same night — two weeks after game reveal. What rubric criterion is most directly affected?
⬛ Grammar and Professionalism — backdated entries look unpolished
⬛ Chronological integrity — notebooks must be written contemporaneously; version history proves backdating, which undermines the entire documentation standard
Notebook entry tip: Identify the Problem + all EDP phases (Green through Red slides). The notebook pathway generates evidence across every EDP phase. Use this guide as a season checklist: every section maps to a specific slide color and rubric criterion. A notebook that follows this pathway consistently — entry by entry, event by event — is the kind judges use as the benchmark for a 1st-place Design Award.