Activity 2.5: Trace a Sensor Reading

🎯 Goal

By the end of this unit you will have:

  • Traced the Limelight vision camera data from raw reading to pose estimator update
  • Understood the “reverse trace” technique — following sensor data from input to where it’s used
  • Identified the filtering and rejection logic that protects the robot from bad data
  • Documented the sensor trace using the same step-by-step format from Activity 2.4

In Activity 2.4 you traced forward — from a button press to a motor output. This time you’ll trace backward — starting from a sensor (the Limelight camera) and following the data through the code to see where and how it’s used.


Why Trace a Sensor?

Button traces answer: “What happens when the driver does something?” Sensor traces answer a different question: “Where does this data come from, and what does the robot do with it?”

Our robot has two Limelight vision cameras (front and back) that see AprilTags on the field. The cameras estimate where the robot is on the field, and the code uses that estimate to gently correct the drivetrain’s position over time. This is called vision pose estimation — the cameras help the robot know where it is.

Let’s trace how that data flows through the code.


The Assignment

Trace how a Limelight camera reading flows from robotPeriodic() through LimelightHelpers to the drivetrain’s pose estimator.

Open these files in your IDE — you’ll be jumping between them:

  • Robot.java
  • LimelightHelpers.java
  • CommandSwerveDrivetrain.java

Step 1: Find the Entry Point — robotPeriodic()

Every sensor trace starts with the same question: when does this code run?

Open Robot.java and find the robotPeriodic() method (around line 79):

Robot.java — robotPeriodic()
@Override
public void robotPeriodic() {
    CommandScheduler.getInstance().run();
    updateVisionPoseEstimate();
    clampPoseToField();
    m_robotContainer.updateCalibrationTelemetry();
}

This method runs every 20ms regardless of robot mode (disabled, auto, teleop). That means vision data is being processed 50 times per second, all the time. The key call for our trace is updateVisionPoseEstimate().


Step 2: Read the Robot’s Heading

Scroll down in Robot.java to find the updateVisionPoseEstimate() method. The first thing it does is read the robot’s current state:

Robot.java — updateVisionPoseEstimate() start
double robotYawDeg = m_robotContainer.drivetrain.getState().Pose
        .getRotation().getDegrees();
double yawRateDps = m_robotContainer.drivetrain.getPigeon2()
        .getAngularVelocityZWorld().getValueAsDouble();
var speeds = m_robotContainer.drivetrain.getState().Speeds;
double translationalSpeed = Math.hypot(
        speeds.vxMetersPerSecond, speeds.vyMetersPerSecond);

The code grabs three things from the drivetrain:

  1. robotYawDeg — which direction the robot is facing (from the Pigeon gyro)
  2. yawRateDps — how fast the robot is spinning (degrees per second)
  3. translationalSpeed — how fast the robot is driving (meters per second)

These values are used to decide whether to trust the camera data at all.
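Note how translationalSpeed is computed: Math.hypot combines the X and Y velocity components into one overall speed. A quick standalone check with made-up values:

```java
public class SpeedSketch {
    public static void main(String[] args) {
        // A robot moving 1.5 m/s forward and 2.0 m/s sideways travels at
        // sqrt(1.5^2 + 2.0^2) = 2.5 m/s overall.
        double speed = Math.hypot(1.5, 2.0);
        System.out.println(speed); // prints 2.5
    }
}
```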


Step 3: Rejection Checks — When to Ignore Vision

Before even asking the cameras for data, the code checks if conditions are good enough to trust vision:

Robot.java — rejection checks
boolean spinning = Math.abs(yawRateDps) > 150;
boolean drivingFast = translationalSpeed > 3.0;

  • Spinning > 150°/s → Reject. The camera needs accurate heading to work, and fast spins make the heading unreliable.
  • Driving > 3 m/s → Reject. Camera blur and processing lag mean the estimate is for where the robot was, not where it is.
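These two checks can be captured in a small standalone method. This is a sketch using the thresholds shown above; the class and method names are hypothetical, not names from the robot code:

```java
// Sketch of the rejection gate described above. shouldRejectVision() is a
// hypothetical name; the thresholds mirror the checks in Robot.java.
public class VisionGateSketch {
    static final double MAX_YAW_RATE_DPS = 150.0; // spin limit from the check above
    static final double MAX_SPEED_MPS = 3.0;      // speed limit from the check above

    static boolean shouldRejectVision(double yawRateDps, double translationalSpeedMps) {
        boolean spinning = Math.abs(yawRateDps) > MAX_YAW_RATE_DPS;
        boolean drivingFast = translationalSpeedMps > MAX_SPEED_MPS;
        return spinning || drivingFast;
    }

    public static void main(String[] args) {
        System.out.println(shouldRejectVision(200.0, 1.0)); // spinning too fast -> true
        System.out.println(shouldRejectVision(50.0, 1.0));  // calm driving -> false
    }
}
```

Notice the Math.abs(): the robot can spin in either direction, so the sign of the yaw rate doesn't matter.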

Step 4: Send Heading to the Cameras

Even if the code plans to reject the result, it always sends the current heading to both Limelights:

Robot.java — sending heading to cameras
LimelightHelpers.SetRobotOrientation("limelight-front",
        robotYawDeg, yawRateDps, 0, 0, 0, 0);
LimelightHelpers.SetRobotOrientation("limelight-back",
        robotYawDeg, yawRateDps, 0, 0, 0, 0);

This is where Robot.java talks to LimelightHelpers.java. The SetRobotOrientation() method writes the robot’s heading to NetworkTables so the Limelight’s MegaTag2 algorithm can use it. MegaTag2 trusts the Pigeon gyro for heading and only corrects X/Y position.


Step 5: Get the Pose Estimate from Each Camera

Next, the code asks each camera for its best guess of where the robot is:

Robot.java — getting pose estimates
LimelightHelpers.PoseEstimate frontEst =
        LimelightHelpers.getBotPoseEstimate_wpiBlue_MegaTag2("limelight-front");
LimelightHelpers.PoseEstimate backEst =
        LimelightHelpers.getBotPoseEstimate_wpiBlue_MegaTag2("limelight-back");

getBotPoseEstimate_wpiBlue_MegaTag2() is a static method in LimelightHelpers that reads the camera’s pose estimate from NetworkTables. It returns a PoseEstimate object containing:

  • pose — the estimated X/Y position on the field (as a Pose2d)
  • timestampSeconds — when the image was captured
  • tagCount — how many AprilTags the camera saw
  • avgTagDist — average distance to the visible tags (in meters)

Step 6: Fuse the Estimate into the Drivetrain

If conditions are good (not spinning, not driving too fast, vision enabled), the code feeds each camera’s estimate into the drivetrain:

Robot.java — fusing vision data
if (!spinning && !drivingFast && m_robotContainer.isVisionEnabled()) {
    if (frontEst != null) fuseCameraEstimate(frontEst, translationalSpeed);
    if (backEst != null) fuseCameraEstimate(backEst, translationalSpeed);
}

The fuseCameraEstimate() method (also in Robot.java) does additional filtering before accepting the data:

  1. No tags visible → skip (nothing to fuse)
  2. Single tag farther than 3m → skip (too noisy with elevator flex)
  3. Pose is off the field → skip (obviously wrong)
  4. Pose is more than 3m from current position → skip (prevents teleporting)
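The four filters above can be sketched as one standalone check. This is a hypothetical reconstruction: the class and method names, the field dimensions, and the exact bounds are assumptions for illustration, not the team's actual code.

```java
// Hypothetical sketch of the fuseCameraEstimate() filters described above.
// Field dimensions and bounds are illustrative assumptions.
public class VisionFilterSketch {
    static final double FIELD_LENGTH_M = 17.5; // assumed field length
    static final double FIELD_WIDTH_M = 8.0;   // assumed field width

    static boolean acceptEstimate(int tagCount, double avgTagDistM,
                                  double estX, double estY,
                                  double curX, double curY) {
        if (tagCount == 0) return false;                      // 1. no tags visible
        if (tagCount == 1 && avgTagDistM > 3.0) return false; // 2. single tag too far
        boolean onField = estX >= 0 && estX <= FIELD_LENGTH_M
                && estY >= 0 && estY <= FIELD_WIDTH_M;
        if (!onField) return false;                           // 3. pose off the field
        double jumpM = Math.hypot(estX - curX, estY - curY);
        return jumpM <= 3.0;                                  // 4. no teleporting
    }
}
```

Each filter is cheap to evaluate and ordered from "no data" to "suspicious data," so the method can bail out early.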

If the estimate passes all checks, the code calculates a standard deviation (how much to trust it) based on tag count, distance, and robot speed. More tags + closer + slower = more trust.
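The activity doesn't show the real formula, but the relationship can be sketched like this. Everything here is a hypothetical model that only encodes "more tags + closer + slower = more trust" (a smaller standard deviation means more trust):

```java
// Hypothetical trust model; the real fuseCameraEstimate() formula may differ.
// Smaller returned std dev = the Kalman filter trusts the estimate more.
public class VisionTrustSketch {
    static double xyStdDevMeters(int tagCount, double avgTagDistM, double speedMps) {
        double base = 0.5;                                       // assumed baseline uncertainty (m)
        double distPenalty = (avgTagDistM * avgTagDistM) / 30.0; // farther tags -> noisier
        double speedPenalty = 0.2 * speedMps;                    // blur/latency penalty
        return (base + distPenalty + speedPenalty) / tagCount;   // more tags -> more trust
    }

    public static void main(String[] args) {
        // Two close tags while nearly stopped -> small std dev (high trust)
        System.out.println(xyStdDevMeters(2, 1.5, 0.2));
        // One far tag while driving fast -> large std dev (low trust)
        System.out.println(xyStdDevMeters(1, 4.0, 2.5));
    }
}
```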

Finally, it calls two methods on the drivetrain:

Robot.java — feeding the Kalman filter
m_robotContainer.drivetrain.setVisionMeasurementStdDevs(
        VecBuilder.fill(xyStdDev, xyStdDev, 9999999));
m_robotContainer.drivetrain.addVisionMeasurement(
        estimate.pose, estimate.timestampSeconds);

The 9999999 for rotation means: never trust vision for heading — the Pigeon gyro is always the authority on which way the robot faces. Vision only corrects X/Y position.

These methods live in CommandSwerveDrivetrain.java, which inherits them from CTRE’s swerve drivetrain base class. The drivetrain uses a Kalman filter to blend the vision estimate with wheel odometry — gently nudging the position over time rather than making sudden jumps.
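A one-dimensional toy example shows the intuition behind that blending. In a Kalman-style fusion, each estimate is weighted by the inverse of its variance, so a noisier vision estimate (larger std dev) nudges the pose less. This is a teaching sketch, not CTRE's actual implementation:

```java
// 1D intuition for Kalman-style fusion: weight each source by 1/variance.
// This is a simplified teaching sketch, not the CTRE swerve implementation.
public class KalmanBlendSketch {
    static double fuse(double odometryX, double odomStdDev,
                       double visionX, double visionStdDev) {
        double wOdom = 1.0 / (odomStdDev * odomStdDev);     // inverse variance weight
        double wVision = 1.0 / (visionStdDev * visionStdDev);
        return (odometryX * wOdom + visionX * wVision) / (wOdom + wVision);
    }

    public static void main(String[] args) {
        // Odometry says x = 5.0 m (std 0.1); vision says x = 5.4 m (std 0.3).
        // Vision is 3x noisier, so the fused result barely moves: ~5.04 m.
        System.out.println(fuse(5.0, 0.1, 5.4, 0.3));
    }
}
```

A huge std dev like 9999999 makes the corresponding weight effectively zero, which is exactly how the code tells the filter to ignore vision heading.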


Step 7: See the Complete Trace

Here’s the full sensor trace from camera reading to pose update:

Limelight Vision Trace: Camera → Pose Estimator

🟢 Start: the Limelight camera captures an image of AprilTags on the field.

  1. Robot.java — robotPeriodic() — runs every 20ms
     The periodic loop calls updateVisionPoseEstimate(). This runs 50 times per second in every robot mode.

  2. Robot.java — updateVisionPoseEstimate() — read robot state
     Reads the robot's current heading (yaw) from the Pigeon gyro, plus spin rate and driving speed from the drivetrain. These values determine whether to trust vision.

  3. Robot.java — updateVisionPoseEstimate() — rejection checks
     Checks if the robot is spinning > 150°/s or driving > 3 m/s. If either is true, all vision data is rejected for this cycle.

  4. LimelightHelpers.java — SetRobotOrientation() — send heading to cameras
     Writes the Pigeon heading and yaw rate to NetworkTables so both Limelights can use it for MegaTag2 pose calculation.

  5. LimelightHelpers.java — getBotPoseEstimate_wpiBlue_MegaTag2() — read pose estimate
     Reads each camera's MegaTag2 pose estimate from NetworkTables. Returns a PoseEstimate with the robot's estimated field position, timestamp, tag count, and average tag distance.

  6. Robot.java — fuseCameraEstimate() — filter and validate
     Applies additional filters: rejects if no tags are visible, a single tag is > 3m away, the pose is off-field, or the pose is > 3m from the current position. Calculates the trust level (standard deviation) based on tag count, distance, and speed.

  7. CommandSwerveDrivetrain.java — setVisionMeasurementStdDevs() + addVisionMeasurement()
     Feeds the validated pose estimate into the drivetrain's Kalman filter. The filter blends vision with wheel odometry to gently correct position drift over time. Rotation std dev is set to 9999999 — vision never overrides the Pigeon heading.

🔵 End: the drivetrain's estimated field position is gently corrected by vision data, improving accuracy for autonomous paths and turret aiming.

Step 8: Verify Your Understanding

Checkpoint: Sensor Trace Verification
Answer these questions from memory, then check your work against the answers below:

  1. How often does updateVisionPoseEstimate() run?
  2. Name two conditions that cause vision data to be rejected entirely.
  3. Why does the code send the Pigeon heading to the Limelights before reading their estimates?
  4. What does a standard deviation of 9999999 for rotation mean?
  5. What is the final destination of the vision data?

Answers:
  1. Every 20ms (50 times per second) — it’s called from robotPeriodic(), which runs in every robot mode.

  2. Spinning faster than 150°/s and driving faster than 3 m/s. Either condition means the camera data is too unreliable to use.

  3. MegaTag2 needs the robot’s heading as input. It uses the Pigeon gyro heading to calculate X/Y position. Without accurate heading, the pose estimate would be wrong.

  4. Never trust vision for heading (rotation). The Pigeon gyro is always the authority on which direction the robot faces. Vision only corrects X/Y position.

  5. The drivetrain’s Kalman filter — specifically CommandSwerveDrivetrain.addVisionMeasurement(). The filter blends vision with wheel odometry to gently correct position drift.


Step 9: Document Your Trace

Add this sensor trace to your Trace Worksheet reference sheet. Use the same table format from Activity 2.4:

Step | File | Location | What Happens
-----|------|----------|-------------
1 | Robot.java | robotPeriodic() | Calls updateVisionPoseEstimate() every 20ms
2 | Robot.java | updateVisionPoseEstimate() | Reads heading, spin rate, and speed from drivetrain
3 | Robot.java | Rejection checks | Rejects all vision if spinning > 150°/s or driving > 3 m/s
4 | LimelightHelpers.java | SetRobotOrientation() | Sends Pigeon heading to both Limelights via NetworkTables
5 | LimelightHelpers.java | getBotPoseEstimate_wpiBlue_MegaTag2() | Reads MegaTag2 pose estimate from each camera
6 | Robot.java | fuseCameraEstimate() | Filters bad estimates, calculates trust level (std dev)
7 | CommandSwerveDrivetrain.java | addVisionMeasurement() | Feeds validated estimate into Kalman filter

What’s Next?

You’ve completed your first sensor trace — from a Limelight camera image all the way through filtering and validation to the drivetrain’s pose estimator. You’ve seen how the code protects itself from bad data with multiple layers of rejection checks.

In Activity 2.6: Autonomous, you’ll see how the corrected pose estimate is used during autonomous routines to follow pre-planned paths accurately.