Activity 2.5: Trace a Sensor Reading
🎯 Goal
By the end of this unit you will have:
- Traced the Limelight vision camera data from raw reading to pose estimator update
- Understood the “reverse trace” technique — following sensor data from input to where it’s used
- Identified the filtering and rejection logic that protects the robot from bad data
- Documented the sensor trace using the same step-by-step format from Activity 2.4
In Activity 2.4 you traced forward — from a button press to a motor output. This time you’ll trace backward — starting from a sensor (the Limelight camera) and following the data through the code to see where and how it’s used.
Why Trace a Sensor?
Button traces answer: “What happens when the driver does something?” Sensor traces answer a different question: “Where does this data come from, and what does the robot do with it?”
Our robot has two Limelight vision cameras (front and back) that see AprilTags on the field. The cameras estimate where the robot is on the field, and the code uses that estimate to gently correct the drivetrain’s position over time. This is called vision pose estimation — the cameras help the robot know where it is.
Let’s trace how that data flows through the code.
The Assignment
Trace how a Limelight camera reading flows from robotPeriodic() through LimelightHelpers to the drivetrain's pose estimator.
Open these files in your IDE — you'll be jumping between them:
- Robot.java
- LimelightHelpers.java
- CommandSwerveDrivetrain.java
Step 1: Find the Entry Point — robotPeriodic()
Every sensor trace starts with the same question: when does this code run?
Open Robot.java and find the robotPeriodic() method:

```java
@Override
public void robotPeriodic() {
    CommandScheduler.getInstance().run();
    updateVisionPoseEstimate();
    clampPoseToField();
    m_robotContainer.updateCalibrationTelemetry();
}
```

This method runs every 20ms regardless of robot mode (disabled, auto, teleop). That means vision data is being processed 50 times per second, all the time. The key call for our trace is updateVisionPoseEstimate().
Step 2: Read the Robot’s Heading
Scroll down in Robot.java to find the updateVisionPoseEstimate() method. The first thing it does is read the robot’s current state:
```java
double robotYawDeg = m_robotContainer.drivetrain.getState().Pose
    .getRotation().getDegrees();
double yawRateDps = m_robotContainer.drivetrain.getPigeon2()
    .getAngularVelocityZWorld().getValueAsDouble();

var speeds = m_robotContainer.drivetrain.getState().Speeds;
double translationalSpeed = Math.hypot(
    speeds.vxMetersPerSecond, speeds.vyMetersPerSecond);
```

The code grabs three things from the drivetrain:
- robotYawDeg — which direction the robot is facing (from the Pigeon gyro)
- yawRateDps — how fast the robot is spinning (degrees per second)
- translationalSpeed — how fast the robot is driving (meters per second)
These values are used to decide whether to trust the camera data at all.
Step 3: Rejection Checks — When to Ignore Vision
Before even asking the cameras for data, the code checks if conditions are good enough to trust vision:
```java
boolean spinning = Math.abs(yawRateDps) > 150;
boolean drivingFast = translationalSpeed > 3.0;
```

- Spinning > 150°/s → Reject. The camera needs an accurate heading to work, and fast spins make the heading unreliable.
- Driving > 3 m/s → Reject. Camera blur and processing lag mean the estimate is for where the robot was, not where it is.
Step 4: Send Heading to the Cameras
Even if the code plans to reject the result, it always sends the current heading to both Limelights:
```java
LimelightHelpers.SetRobotOrientation("limelight-front",
    robotYawDeg, yawRateDps, 0, 0, 0, 0);
LimelightHelpers.SetRobotOrientation("limelight-back",
    robotYawDeg, yawRateDps, 0, 0, 0, 0);
```

This is where Robot.java talks to LimelightHelpers. The SetRobotOrientation() method writes the robot's heading to NetworkTables so the Limelight's MegaTag2 algorithm can use it. MegaTag2 trusts the Pigeon gyro for heading and only corrects X/Y position.
Step 5: Get the Pose Estimate from Each Camera
Next, the code asks each camera for its best guess of where the robot is:
```java
LimelightHelpers.PoseEstimate frontEst =
    LimelightHelpers.getBotPoseEstimate_wpiBlue_MegaTag2("limelight-front");
LimelightHelpers.PoseEstimate backEst =
    LimelightHelpers.getBotPoseEstimate_wpiBlue_MegaTag2("limelight-back");
```

getBotPoseEstimate_wpiBlue_MegaTag2() is a static method in LimelightHelpers that reads the camera's pose estimate from NetworkTables. It returns a PoseEstimate object containing:
- pose — the estimated X/Y position on the field (as a Pose2d)
- timestampSeconds — when the image was captured
- tagCount — how many AprilTags the camera saw
- avgTagDist — average distance to the visible tags (in meters)
Step 6: Fuse the Estimate into the Drivetrain
If conditions are good (not spinning, not driving too fast, vision enabled), the code feeds each camera’s estimate into the drivetrain:
```java
if (!spinning && !drivingFast && m_robotContainer.isVisionEnabled()) {
    if (frontEst != null) fuseCameraEstimate(frontEst, translationalSpeed);
    if (backEst != null) fuseCameraEstimate(backEst, translationalSpeed);
}
```

The fuseCameraEstimate() method (also in Robot.java) does additional filtering before accepting the data:
- No tags visible → skip (nothing to fuse)
- Single tag farther than 3m → skip (too noisy with elevator flex)
- Pose is off the field → skip (obviously wrong)
- Pose is more than 3m from current position → skip (prevents teleporting)
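To make the filtering logic concrete, here is a minimal sketch of checks like these as a standalone method. This is an illustration, not the team's actual code: the method name shouldFuse, the helper parameters, and the field dimensions are all assumptions.

```java
// Illustrative sketch of vision-estimate rejection checks (not the actual
// fuseCameraEstimate() implementation). Field dimensions are assumptions.
public class VisionFilter {
    static final double FIELD_LENGTH_M = 17.55; // assumed field length
    static final double FIELD_WIDTH_M = 8.05;   // assumed field width

    /** Returns true if the estimate passes all checks and should be fused. */
    static boolean shouldFuse(int tagCount, double avgTagDistM,
                              double poseX, double poseY,
                              double currentX, double currentY) {
        if (tagCount == 0) return false;                      // nothing to fuse
        if (tagCount == 1 && avgTagDistM > 3.0) return false; // single far tag: too noisy
        boolean onField = poseX >= 0 && poseX <= FIELD_LENGTH_M
                       && poseY >= 0 && poseY <= FIELD_WIDTH_M;
        if (!onField) return false;                           // obviously wrong
        double jump = Math.hypot(poseX - currentX, poseY - currentY);
        if (jump > 3.0) return false;                         // prevents teleporting
        return true;
    }
}
```

Notice that each check is cheap and ordered from "no data" to "implausible data", so the method bails out early on the common cases.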
If the estimate passes all checks, the code calculates a standard deviation (how much to trust it) based on tag count, distance, and robot speed. More tags + closer + slower = more trust.
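One way to picture "more tags + closer + slower = more trust" is a heuristic where each factor scales the standard deviation. This sketch is hypothetical — the baseline value and the exact scaling are assumptions, not the robot's real formula:

```java
// Hypothetical trust heuristic (not the actual calculation). A smaller
// standard deviation means the Kalman filter trusts vision more.
public class VisionTrust {
    static double xyStdDev(int tagCount, double avgTagDistM, double speedMps) {
        double base = 0.5;                     // meters; assumed baseline uncertainty
        double distFactor = avgTagDistM / 2.0; // farther tags -> less trust
        double speedFactor = 1.0 + speedMps;   // faster driving -> less trust
        return base * distFactor * speedFactor / tagCount; // more tags -> more trust
    }
}
```

Whatever the real coefficients are, the shape is the same: distance and speed inflate the uncertainty, and tag count shrinks it.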
Finally, it calls two methods on the drivetrain:
```java
m_robotContainer.drivetrain.setVisionMeasurementStdDevs(
    VecBuilder.fill(xyStdDev, xyStdDev, 9999999));
m_robotContainer.drivetrain.addVisionMeasurement(
    estimate.pose, estimate.timestampSeconds);
```

The 9999999 for rotation means: never trust vision for heading — the Pigeon gyro is always the authority on which way the robot faces. Vision only corrects X/Y position.

These methods live in CommandSwerveDrivetrain.java.
Step 7: See the Complete Trace
Here’s the full sensor trace from camera reading to pose update. Step 9 below lays out the same trace as a table you can copy.
Step 8: Verify Your Understanding
Answer each question from memory, then check yourself against the answers:

- How often does the vision code run? Every 20ms (50 times per second) — it’s called from robotPeriodic(), which runs in every robot mode.
- When does the code reject vision entirely? Spinning faster than 150°/s or driving faster than 3 m/s. Either condition means the camera data is too unreliable to use.
- Why send the heading to the cameras first? MegaTag2 needs the robot’s heading as input. It uses the Pigeon gyro heading to calculate X/Y position. Without an accurate heading, the pose estimate would be wrong.
- What does the 9999999 rotation standard deviation mean? Never trust vision for heading (rotation). The Pigeon gyro is always the authority on which direction the robot faces. Vision only corrects X/Y position.
- Where does a validated estimate finally go? Into the drivetrain’s Kalman filter — specifically CommandSwerveDrivetrain.addVisionMeasurement(). The filter blends vision with wheel odometry to gently correct position drift.
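That blending step can be illustrated with a toy one-dimensional example. This is not WPILib's actual filter math, just a sketch of the core idea: each source is weighted by the inverse of its variance (standard deviation squared), so a huge standard deviation makes a measurement effectively invisible.

```java
// Toy 1-D illustration of variance-weighted fusion (not WPILib's real
// Kalman filter). Each source's weight is 1 / stdDev^2.
public class PoseBlend {
    static double fuse(double odomX, double odomStd,
                       double visionX, double visionStd) {
        double wOdom = 1.0 / (odomStd * odomStd);
        double wVision = 1.0 / (visionStd * visionStd);
        // Weighted average: the more certain source pulls harder.
        return (odomX * wOdom + visionX * wVision) / (wOdom + wVision);
    }
}
```

With a rotation standard deviation of 9999999, the vision weight is vanishingly small, which is exactly why vision never moves the heading.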
Step 9: Document Your Trace
Add this sensor trace to your Trace Worksheet reference sheet. Use the same table format from Activity 2.4:
| Step | File | Location | What Happens |
|---|---|---|---|
| 1 | Robot.java | robotPeriodic() | Calls updateVisionPoseEstimate() every 20ms |
| 2 | Robot.java | updateVisionPoseEstimate() | Reads heading, spin rate, and speed from drivetrain |
| 3 | Robot.java | Rejection checks | Rejects all vision if spinning > 150°/s or driving > 3 m/s |
| 4 | LimelightHelpers.java | SetRobotOrientation() | Sends Pigeon heading to both Limelights via NetworkTables |
| 5 | LimelightHelpers.java | getBotPoseEstimate_wpiBlue_MegaTag2() | Reads MegaTag2 pose estimate from each camera |
| 6 | Robot.java | fuseCameraEstimate() | Filters bad estimates, calculates trust level (std dev) |
| 7 | CommandSwerveDrivetrain.java | addVisionMeasurement() | Feeds validated estimate into Kalman filter |
What’s Next?
You’ve completed your first sensor trace — from a Limelight camera image all the way through filtering and validation to the drivetrain’s pose estimator. You’ve seen how the code protects itself from bad data with multiple layers of rejection checks.
In Activity 2.6: Autonomous, you’ll see how the corrected pose estimate is used during autonomous routines to follow pre-planned paths accurately.