Activity 5.15: Compare Auto Routines
🎯 Goal
By the end of this activity you will have:
- Examined a top team’s autonomous routine structure on GitHub
- Compared their path design, Named Command patterns, and auto composition to your team’s
- Identified strategies that help top teams score more game pieces in 15 seconds
- Documented your findings in a structured comparison table
Step 1: Choose a Top Team to Study
Pick a team known for strong autonomous performance:
| Team | GitHub | Why Study Their Autos |
|---|---|---|
| 6328 Mechanical Advantage | github.com/Mechanical-Advantage | Excellent PathPlanner usage, AdvantageKit logging for auto debugging |
| 254 The Cheesy Poofs | github.com/Team254 | Custom auto framework, highly optimized paths |
| 1678 Citrus Circuits | github.com/frc1678 | Multi-note autos, clean Named Command patterns |
Find their most recent season’s robot code repository and navigate to their autonomous-related files.
Team I’m studying: _______________ Repository URL: _______________
Step 2: Find Their Auto Files
In the top team’s repository, look for:
- PathPlanner files — usually in `src/main/deploy/pathplanner/`:
  - `paths/` — individual path files (`.path`)
  - `autos/` — auto routine files (`.auto`)
- Named Command registrations — search for `NamedCommands.registerCommand` in their `RobotContainer` or a dedicated auto configuration class
- Auto builder setup — search for `AutoBuilder` to see how they configure PathPlanner integration
If the team doesn’t use PathPlanner (some top teams use custom auto frameworks), look for their auto command classes instead.
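To make the registry pattern concrete, here is a simplified, runnable model of what `NamedCommands.registerCommand` does. The real PathPlanner API registers WPILib `Command` objects and requires the robot project's dependencies; this sketch substitutes `Runnable` so it runs standalone, and the command names are hypothetical examples.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of PathPlanner's Named Command registry.
// The real API is NamedCommands.registerCommand(String, Command);
// this sketch uses Runnable so it runs without WPILib.
public class NamedCommandRegistry {
    private static final Map<String, Runnable> commands = new HashMap<>();

    // Register a command under a name; .auto files and event markers
    // reference commands by this string.
    public static void registerCommand(String name, Runnable command) {
        commands.put(name, command);
    }

    // Look up and run a command, as the auto builder would when it
    // reaches an event marker that references this name.
    public static void run(String name) {
        Runnable command = commands.get(name);
        if (command == null) {
            throw new IllegalArgumentException("No Named Command registered: " + name);
        }
        command.run();
    }

    public static void main(String[] args) {
        // Teams typically register these once, early in RobotContainer's constructor,
        // before any autos are built. The names here are made up for illustration.
        registerCommand("deployIntake", () -> System.out.println("intake down"));
        registerCommand("spinUpShooter", () -> System.out.println("shooter spinning"));
        run("deployIntake");
    }
}
```

The key design point to look for in a top team's code: registration happens in one place, before auto construction, and names match the strings in their `.auto` files exactly.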
Document what you find:
| Item | Their Team | Location in Repo |
|---|---|---|
| Number of paths | ||
| Number of auto routines | ||
| Where Named Commands are registered | ||
| Auto builder configuration |
Step 3: Analyze Their Best Auto
Pick their most complex or highest-scoring auto routine and analyze it:
- How many paths does it chain together?
- How many Named Commands does it use?
- Do they use event markers to overlap driving with mechanism actions?
- What are their path constraints? (max velocity, acceleration)
- How do they handle the transition between scoring and picking up?
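When comparing path constraints, it helps to compute what a constraint change actually buys. Below is a sketch of the minimum traversal time for a straight segment under a trapezoidal motion profile (accelerate, cruise, decelerate), which is roughly how PathPlanner's constraints limit a path. The 4 m segment and the 3.0 m/s² acceleration are hypothetical example values, not from any team's code.

```java
public class PathTiming {
    // Minimum time to traverse a straight segment of the given length (m),
    // starting and ending at rest, under max velocity (m/s) and
    // max acceleration (m/s^2) constraints: a trapezoidal profile.
    public static double minTime(double length, double maxVel, double maxAccel) {
        double accelDist = maxVel * maxVel / (2 * maxAccel); // distance to reach maxVel
        if (2 * accelDist >= length) {
            // Triangle profile: the segment is too short to ever reach maxVel.
            return 2 * Math.sqrt(length / maxAccel);
        }
        // Trapezoid: accelerate to maxVel, cruise, decelerate to rest.
        double cruiseDist = length - 2 * accelDist;
        return 2 * (maxVel / maxAccel) + cruiseDist / maxVel;
    }

    public static void main(String[] args) {
        // Hypothetical example: a 4 m segment at two velocity caps, same accel.
        System.out.printf("3.0 m/s cap: %.2f s%n", minTime(4.0, 3.0, 3.0));
        System.out.printf("4.0 m/s cap: %.2f s%n", minTime(4.0, 4.0, 3.0));
    }
}
```

Note what this reveals: on short segments the robot may never reach the higher velocity cap, so raising max velocity alone yields small gains; acceleration (and the confidence to use it, from odometry and testing) often matters more.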
Top teams often:
- Use event markers aggressively — starting the intake before reaching the game piece, aiming the shooter while driving back to score
- Have higher path speeds — 3.5-4.5 m/s with precise constraint zones for slowing down
- Chain 4-6 paths in a single auto for 3-4 note autos
- Use parallel command groups — running multiple mechanisms simultaneously
- Have fallback logic — if a game piece isn’t detected, skip to the next one instead of waiting
- Name everything clearly — descriptive path and command names make the auto readable
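The "use event markers aggressively" point can be quantified with simple arithmetic: overlapping a mechanism action with a drive leg costs `max(drive, action)` instead of `drive + action`. The sketch below compares the two, using hypothetical leg and action durations (2.0 s drives, 0.7 s actions), not measured values from any team.

```java
public class EventMarkerSavings {
    // Total auto time if the robot drives each leg, then stops to run
    // the mechanism action sequentially.
    public static double sequential(double[] driveTimes, double[] actionTimes) {
        double total = 0;
        for (double t : driveTimes) total += t;
        for (double t : actionTimes) total += t;
        return total;
    }

    // Total auto time if each action is started mid-drive via an event marker:
    // each leg costs the longer of the two, not the sum.
    public static double overlapped(double[] driveTimes, double[] actionTimes) {
        double total = 0;
        for (int i = 0; i < driveTimes.length; i++) {
            total += Math.max(driveTimes[i], actionTimes[i]);
        }
        return total;
    }

    public static void main(String[] args) {
        // Hypothetical 3-note auto: three 2.0 s drive legs, each paired with
        // a 0.7 s mechanism action (intake deploy or shooter spin-up).
        double[] drive = {2.0, 2.0, 2.0};
        double[] action = {0.7, 0.7, 0.7};
        System.out.printf("sequential: %.1f s%n", sequential(drive, action)); // 8.1 s
        System.out.printf("overlapped: %.1f s%n", overlapped(drive, action)); // 6.0 s
    }
}
```

In this example, overlap recovers 2.1 seconds, a large fraction of the 15-second autonomous period, which is why event markers appear so heavily in top teams' paths.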
Step 4: Analyze Your Team’s Auto
Now look at your team’s autonomous setup:
- Open your team’s PathPlanner project and list the auto routines
- Open `RobotContainer.java` and find the Named Command registrations
- Pick your team’s best auto routine and analyze it with the same questions
| Item | Your Team’s Value |
|---|---|
| Number of paths in best auto | |
| Number of Named Commands used | |
| Event markers used? | |
| Max path velocity | |
| Total game pieces scored | |
Step 5: Complete the Comparison
🔍 Comparison Exercise: Autonomous Routine Structure
Team to study: (fill in — e.g., Team 6328 Mechanical Advantage)
Repository: (fill in — e.g., https://github.com/Mechanical-Advantage/RobotCode2025)
File/folder to examine: (fill in — e.g., src/main/deploy/pathplanner/autos/ and src/main/java/frc/robot/RobotContainer.java)
Guiding Questions
- How many auto routines does the top team have compared to your team? What does this tell you about their autonomous strategy flexibility?
- How do they structure their Named Commands differently? Do they use more granular commands (one per action) or larger composite commands?
- Do they use event markers to overlap driving with mechanism actions? How much time does this save compared to sequential execution?
- How do their path constraints compare to yours? Are they driving faster, and if so, what enables that (better odometry, vision correction, more testing)?
Document Your Findings
| Aspect | Their Team | Our Team | Why the Difference? |
|---|---|---|---|
| Number of auto routines | |||
| Paths per best auto | |||
| Named Commands count | |||
| Event markers used | |||
| Max path velocity (m/s) | |||
| Game pieces scored (best auto) | |||
| Named Command granularity | |||
| Fallback/error handling | |||
| Path naming convention |
Step 6: Identify Improvements
Based on your comparison, identify specific improvements for your team’s autonomous:
| Priority | Improvement | Effort | Impact |
|---|---|---|---|
| 1 | | Low / Medium / High | Low / Medium / High |
| 2 | | Low / Medium / High | Low / Medium / High |
| 3 | | Low / Medium / High | Low / Medium / High |
Focus on high-impact, low-effort improvements first. For example:
- Adding event markers to existing paths (low effort, saves 0.5-1.0 seconds per auto)
- Increasing path speed slightly after testing (low effort, saves time on every path)
- Adding a second auto routine for a different starting position (medium effort, gives strategy flexibility)
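The "increasing path speed slightly" estimate can be sanity-checked with a quick calculation of cruise-time savings. This sketch ignores the acceleration and deceleration phases (which a small velocity-cap change barely affects) and uses hypothetical numbers: 8 m of total cruise distance across an auto, raised from 3.0 to 3.5 m/s.

```java
public class SpeedBumpSavings {
    // Time saved on the cruise portion of an auto when the max velocity
    // cap rises. Accel/decel phases are ignored; this is only valid for
    // segments long enough that the robot actually reaches the cap.
    public static double cruiseSavings(double cruiseMeters, double oldVel, double newVel) {
        return cruiseMeters / oldVel - cruiseMeters / newVel;
    }

    public static void main(String[] args) {
        // Hypothetical: 8 m of cruise distance, 3.0 -> 3.5 m/s.
        System.out.printf("saved: %.2f s%n", cruiseSavings(8.0, 3.0, 3.5));
    }
}
```

For these numbers the gain is under half a second, smaller than typical event-marker savings, which supports prioritizing markers first on the improvement list.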
Strong answers include:
- A specific difference — e.g., “Team 6328 has 8 different auto routines for different starting positions and alliance partner strategies. We only have 2. Their flexibility lets them adapt to any match situation.”
- A specific improvement — e.g., “Adding event markers to start the intake 0.5 seconds before reaching the game piece. This would save about 0.5 seconds per pickup, which adds up to 1-1.5 seconds over a 3-note auto — potentially enough time for an extra game piece.”
- Testing requirements — e.g., “I’d need to test the event marker timing to make sure the intake is fully deployed before the robot reaches the game piece. If the marker triggers too early, the intake deploys in the wrong position. I’d test this in simulation first, then on the robot with AdvantageScope logging to verify timing.”
Bonus: Present Your Findings
Create a short presentation (3-5 slides or bullet points) for your team:
- Which top team you studied and why
- The key differences in autonomous approach
- Your top 3 recommended improvements
- The testing plan for implementing them
Sharing this analysis helps your whole team make better autonomous decisions.
What’s Next?
Congratulations — you’ve completed Unit 5! You’ve mastered deploying and debugging, learned from the FRC community, explored vendor documentation, understood swerve drive concepts, and built autonomous routines.
When you’re ready for the final challenge, Unit 6: Advanced Engineering starts with Lesson 6.1: AdvantageKit — where you’ll explore advanced logging, simulation, control theory, vision processing, state machines, unit testing, and competition readiness.