135 Consul - FRC Programming Framework
Table of Contents
1 | Welcome to Consul
- Purpose of the Framework
- Design Philosophy (IO-centric)
- Multi-Branch Workflow
2 | Core Architecture
- Core Features (Base Branch)
- Directory Structure
- IO Architecture
3 | Development Tools
- Software Ecosystem
- Simulation and Control Tools
4 | Hardware Integration
- Multi-Vendor Support
- Motor Constant Container
- Self-Checking Hardware System
5 | Feature Branches
- Branch Structure and Workflow
- Directory Isolation
- Merge Conflict Management
6 | Advanced Systems
- Custom AI Integration (Orange Pi 5)
- Vision Simulation
- Physics Engine Integration
7 | Programming Guide
- Creating IO Interfaces
- Subsystem Development
- Command Patterns
8 | Using Consul
- Drive Commands
- Autonomous Programming
- Tuning and Characterization
- Competition Workflow
9 | Appendix
- Planned Features
- Credits & References
Welcome to Consul
Travel Back to Table of Contents
Purpose of the Framework
135 Consul is a modular, template-based FRC programming framework designed to minimize integration time when combining multiple subsystems. The framework's core goal is to enable teams to:
- Rapid Prototyping: Get mechanisms working in simulation before hardware exists
- Parallel Development: Multiple programmers work on separate subsystems without merge conflicts
- Hardware Flexibility: Switch between CTRE and REV vendors without rewriting subsystem logic
- Reliable Debugging: Replay match logs with modified code to diagnose issues without re-running matches
- Competition Readiness: Reduce on-robot tuning time through accurate simulation and live-tunable constants
When implemented correctly, the only file requiring manual merge resolution when combining branches is RobotContainer.java. This dramatically reduces the integration phase from weeks to days.
Design Philosophy (IO-centric)
135 Consul uses an Input/Output (IO) abstraction layer inspired by AdvantageKit's logging architecture. This design separates three concerns:
1. Hardware Interface (IO Layer)
- Reads sensor values and writes actuator commands
- Vendor-specific implementations (CTRE, REV, Simulation)
- Contains zero business logic - only hardware interaction
- All methods return void, except getSelfCheckingHardware()
2. Data Transport (IOInputs)
- Annotated with @AutoLog for automatic AdvantageKit logging
- Stores timestamped sensor readings (position, velocity, current, temperature)
- Enables replay: logs become the "ground truth" instead of live hardware
3. Control Logic (Subsystem Layer)
- Contains all decision-making: state machines, PID controllers, safety checks
- Calls IO methods but never directly accesses hardware
- Vendor-agnostic: same code works with any IO implementation
Key Benefit: You can replay a match log, swap in different control code, and see exactly how the robot would have behaved—without touching the physical robot.
Multi-Branch Workflow
135 Consul uses a feature branch strategy where each mechanism lives on its own Git branch:
Branch Structure:
main (135-consul)
├── drive (base drivetrain - do not modify)
├── intake-branch
├── shooter-branch
├── climber-branch
└── vision-branch
Workflow Steps
- Fork for Your Season: Create a fork of 135-consul for your game (e.g., 2025-reefscape)
- Merge Feature Branches: Merge only the mechanism branches you need into drive:
git checkout drive
git merge intake-branch
git merge shooter-branch
- Isolate Constants: Each branch stores its constants in utils/[branchName]/Constants.java
- Good: utils/intake/IntakeConstants.java
- Bad: Adding intake constants to the global Constants.java
- Expected Merge Conflicts: Only two files should conflict:
- RobotContainer.java - Subsystem instantiation and command binding
- Robot.java - Rare, only for mode-specific initialization
Directory Isolation Example
utils/
├── drive/ # Core framework constants
│ └── DriveConstants.java
├── intake/ # Intake branch constants
│ └── IntakeConstants.java
└── shooter/ # Shooter branch constants
└── ShooterConstants.java
Why This Works: By keeping each branch's code in its own subdirectory, Git can auto-merge most changes. Two programmers can simultaneously develop an intake and a shooter without ever touching the same files.
Anti-Pattern to Avoid
Don't add all mechanism constants to the global Constants.java
Don't modify drive/* files unless absolutely necessary
Don't create commands that control multiple subsystems from different branches
Core Architecture
Travel Back to Table of Contents
Core Features (Base Branch)
Drivetrain Support
- Swerve: Full support with 250Hz odometry threads, momentum-based setpoint calculation to prevent skidding, skid detection using FRC 1690's algorithm, and complete simulation using DCMotorSims
- Mecanum: 4-motor holonomic drivetrain with full pathfinding support and simulated physics
- Tank: Differential drive with DifferentialDriveSim for accurate simulation
All drivetrains support:
- Pathplanner autonomous execution
- Choreo trajectory optimization
- Real-time pathfinding with obstacle avoidance
- Drive-to-pose commands with custom velocity/acceleration profiles
- Slow/fast mode switching
- Full simulation with battery voltage modeling
Multi-Vendor Support
- CTRE Motors:
- Talon FX & Kraken motor controllers
- Latency compensation for improved control
- FOC (Field-Oriented Control) modes
- Integrated with Phoenix 6
- REV Motors:
- Spark Max & Spark Flex controllers
- NEO & NEO Vortex motors
- Breakout board encoder support
- Encoders:
- CTRE CANcoders
- REV Through-Bore/Absolute encoders
- Thrifty Absolute Encoders
- DutyCycle encoders
- Custom conversion factors for mechanism-specific units
Custom AI Integration (Orange Pi 5)
- Multi-Input, Multi-Output Model: Gradient Boosting Regressor supporting any number of inputs and outputs
- Anti-Overfitting: Automatic test/train data split using Central Limit Theorem (n≥30 samples)
- Near-Zero Latency: ~0.3ms response time via USB/Network Tables communication
- No AI Knowledge Required: Simple JSON-based API for data exchange between RoboRIO and Orange Pi
- Simulation Support: Full PyDriverStation integration for testing without hardware
- Use Cases: Double-jointed arm kinematics, trajectory optimization, sensor fusion
Motor Constant Container
Safety wrapper for System Identification constants that:
- Validates all SysID values (Ks, Kv, Ka, Kp, Kd) before accepting them
- Prevents robot crashes from incorrect characterization
- Centralizes all mechanism constants in one location
- Provides type-safe getters for feedforward/feedback values
- Throws descriptive errors for invalid configurations
Simulation with Vision + Physics
- AdvantageKit Integration: Full replay capability for log-based debugging
- Physics Engine: dyn4j-based collision detection between robot, field elements, and game pieces
- Vision Simulation:
- 4 PhotonVision cameras accessible via localhost streams
- Limelight simulation with accurate tX/tY/tV calculations
- AprilTag detection with trust-based filtering
- Mechanism Simulation: State-space models for arms, elevators, and flywheels with DCMotorSim
- Battery Simulation: Realistic brownout conditions and voltage sag
Directory Structure
src/main/java/frc/robot/
├── commands/
│ ├── drive/ # Core drivetrain commands (DO NOT MODIFY)
│ └── [branchName]/ # Branch-specific commands
│ └── ExampleCommand.java
├── subsystems/
│ ├── drive/ # Core drivetrain subsystems (DO NOT MODIFY)
│ └── [branchName]/ # Branch-specific subsystems
│ └── ExampleSubsystem.java
├── utils/
│ ├── drive/ # Core utilities (DO NOT MODIFY)
│ └── [branchName]/ # Branch-specific constants
│ └── Constants.java
├── Constants.java # Global constants
├── DataHandler.java # USB logging & Orange Pi communication
├── Main.java
├── Robot.java # Main robot class (merge conflicts expected)
└── RobotContainer.java # Subsystem/command binding (merge conflicts expected)
Key Principles:
- Each feature branch creates its own subdirectories under commands/, subsystems/, and utils/
- The drive/* directories contain core framework code and should rarely be modified
- Merge conflicts should only occur in Robot.java and RobotContainer.java when combining branches
- Branch-specific constants go in utils/[branchName]/Constants.java, not the global Constants file
IO Architecture
IO Implementation Philosophy
The IO (Input/Output) layer enables:
- Hardware Abstraction: Same subsystem code works with CTRE, REV, or simulated hardware
- Replay Capability: AdvantageKit can replay matches with different code by separating inputs from logic
- Multi-Vendor Support: Switch motor controllers without rewriting subsystem logic
- Simulation: Test code without physical hardware
Three-Layer Structure:
- Interface (ExampleIO.java): Defines methods all implementations must have
- Implementations (ExampleIOSpark.java, ExampleIOTalon.java, ExampleIOSim.java): Hardware-specific code
- Subsystem (ExampleS.java): Business logic that calls IO methods
Creating IO Interfaces
Step 1: Define the Interface
public interface ExampleIO {
@AutoLog // AdvantageKit annotation
public static class ExampleIOInputs {
public double positionRotations = 0.0;
public double velocityRPM = 0.0;
public double appliedVolts = 0.0;
public double currentAmps = 0.0;
public double tempCelsius = 0.0;
}
// ALL methods must return void (except getSelfCheckingHardware)
public default void updateInputs(ExampleIOInputs inputs) {}
public default void setVoltage(double volts) {}
public default void setPosition(double rotations) {}
public default SelfChecking[] getSelfCheckingHardware() {
return new SelfChecking[0];
}
}
Step 2: Implement for Each Vendor
public class ExampleIOSpark implements ExampleIO {
private final CANSparkMax motor;
private final RelativeEncoder encoder;
public ExampleIOSpark(int canID) {
motor = new CANSparkMax(canID, MotorType.kBrushless);
encoder = motor.getEncoder();
}
@Override
public void updateInputs(ExampleIOInputs inputs) {
inputs.positionRotations = encoder.getPosition();
inputs.velocityRPM = encoder.getVelocity();
inputs.appliedVolts = motor.getAppliedOutput() * motor.getBusVoltage();
inputs.currentAmps = motor.getOutputCurrent();
inputs.tempCelsius = motor.getMotorTemperature();
}
@Override
public void setVoltage(double volts) {
motor.setVoltage(volts);
}
}
Step 3: Use in Subsystem
public class ExampleS extends SubsystemChecker {
private final ExampleIO io;
private final ExampleIOInputsAutoLogged inputs = new ExampleIOInputsAutoLogged();
public ExampleS(ExampleIO io) {
this.io = io;
}
@Override
public void periodic() {
io.updateInputs(inputs); // Read hardware
Logger.processInputs("ExampleS", inputs); // Log for replay
// Business logic here - NO hardware calls!
}
public void setSpeed(double rpm) {
// Convert RPM to voltage using feedforward, then:
io.setVoltage(calculatedVolts);
}
}
Development Tools
Travel Back to Table of Contents
Software Ecosystem
AdvantageKit & AdvantageScope
AdvantageKit is a logging framework that enables deterministic replay of robot code. Every sensor input is recorded to a log file, allowing programmers to:
- Replay matches with modified code to test fixes without re-running the robot
- Compare "before" and "after" behavior side-by-side (e.g., vision tuning improvements)
- Debug issues that only occur during competition by examining exact sensor states
AdvantageScope is the visualization tool for AdvantageKit logs. It provides:
- 3D field visualization with odometry tracking
- Real-time and historical plotting of any logged value
- Mechanism state viewers (swerve modules, arms, elevators)
- Video synchronization with log data for match review
135 Consul Integration: All subsystems use the @AutoLog annotation on IOInputs classes, automatically generating logging code. The IO architecture ensures all hardware interactions are captured for replay.
Elastic Development Dashboard
Elastic is a customizable alternative to Shuffleboard/Glass, offering:
- Lower Latency: Faster NetworkTables updates with reduced overhead
- Custom Widgets: Swerve module visualizations, field diagrams, custom telemetry displays
- Modern UI: Cleaner interface with better organization and theming
- Dynamic PID Tuning: LoggableTunableNumbers allow live-tuning of PID/feedforward values without redeploying code
135 Consul Usage:
- View real-time drivetrain odometry on an interactive field widget
- Monitor subsystem health through custom status indicators
- Adjust control parameters during testing via tunable number widgets
- Access system identification test selector for running SysID routines
Touchboard Driver Dashboard
Touchboard is the driver-facing dashboard used during matches, providing:
- Clean, competition-focused interface
- Game piece status indicators
- Robot state visualization (intake position, shooter ready, etc.)
- Auto routine selection
- Camera streams for driver assistance
Key Difference from Elastic: Touchboard is optimized for drivers, not programmers. It shows only essential match information without debug telemetry.
PathPlanner & Choreo
PathPlanner
PathPlanner is the autonomous execution engine that:
- Follows pre-generated trajectories from Choreo
- Provides on-the-fly pathfinding to any field position
- Supports event markers for triggering commands mid-path (intake, shoot, etc.)
- Handles holonomic drive constraints (acceleration, velocity limits)
Choreo
Choreo is a trajectory optimization tool that generates time-optimal paths by:
- Calculating physically feasible paths that respect robot dynamics
- Eliminating pauses between waypoints (unlike basic PathPlanner generation)
- Optimizing velocity profiles through corners
- Exporting trajectories for PathPlanner execution
135 Consul Workflow:
- Design path waypoints in Choreo
- Set velocity direction at endpoints for smooth transitions between commands
- Export trajectory to PathPlanner
- Add Named Commands at event markers (e.g., StartIntake, PrepareShooter)
- Execute autonomously with branching logic for reactive paths
Pro Tip: Remove "stop points" at path ends when handing off to another drive command (e.g., SmartIntake) to save 0.2-0.3 seconds per transition.
135 Squire
(New Post Coming Soon!)
135 Squire is the pit display and diagnostics system, providing:
Pre-Match Checklist
- Systematic verification of all robot systems before matches
- Tracks completion status across multiple matches
Subsystem Testing
- Runs automated system check commands on all mechanisms
- Reports hardware faults detected via the self-checking API
- Displays motor temperatures and CAN utilization
Match Analytics
- Live event viewer showing team schedule and match results
- Performance metrics tracking across the competition
- Integration with Blue Alliance API for real-time updates
Hardware Monitoring
- PDH channel amperage visualization
- Battery voltage trending
- Code runtime analysis (flags loops exceeding cycle time)
Critical Feature: Test Mode system checks verify that every subsystem functions correctly before queueing for a match, catching issues like loose connectors, inverted motors, or configuration errors.
PyDriverStation/Data Handler
PyDriverStation
Python-based utility for interacting with Orange Pi coprocessors, providing:
- AI Model Management: Upload/update Gradient Boosting models
- Training Interface: Collect data and train multi-input/output regression models
- Simulation Server: Runs Orange Pi logic on development machines
- Status Monitoring: Verify Pi connection and model health
DataHandler
Java class on the RoboRIO that manages:
- USB Logging: Writes telemetry to flash drive (deprecated in favor of AdvantageKit)
- Orange Pi Communication: JSON-based request/response protocol via NetworkTables
- Selective Logging: Choose which data goes to USB vs. NetworkTables vs. ignored
Communication Pattern:
// RoboRIO sends inputs to Pi
DataHandler.log(armAngle, "armAngle"); // Key provided = sent to Pi
// RoboRIO receives outputs from Pi
JsonObject aiOutputs = DataHandler.getOrangePiData();
double voltage = aiOutputs.get("shoulderVoltage").getAsDouble();
Use Case: Double-jointed arm inverse kinematics - send shoulder/elbow angles, receive optimal motor voltages for desired end-effector position.
Southmoon
(New Post Coming Soon!)
We're working on improving our documentation process; it's the first year we've done any true "frameworks" or projects requiring heavy documentation since we moved to OA. We'll update with more direct documentation as the modules supporting Consul, including Southmoon, finish being programmed and tested.
Simulation and Control Tools
Interactive Field Simulation
135 Consul uses dyn4j, a Java-based 2D physics engine, to simulate realistic robot-field interactions:
Simulated Physics:
- Collision Detection: Robot body interacts with field obstacles, walls, and defense robots
- Game Piece Dynamics: Notes/cones/cubes have accurate mass, friction, and momentum
- Defense Bot Simulation: Configurable opposing robot that follows scripted paths during auto
- Intake Simulation: Game pieces transition from field positions to robot manipulator
Workflow:
- Define field obstacles in WorldPhysics.java (update annually for the new game)
- Set robot collision bounds and center of mass
- Configure game piece spawn locations and physics properties
- Use SimGamePiece.intake(pieceIndex) to simulate game piece collection
Tuning Max Translational Acceleration: Adjust LoggableTunableNumber for max accel until the PhysicsField/Robot and RobotState/EstimatedPose overlays align during simulated square orbits. This provides more accurate values than manual testing.
Dynamic & Automatic PID Tuning
Dynamic Tuning (Live Adjustment)
LoggableTunableNumbers enable real-time parameter adjustments without code redeployment:
LoggableTunableNumber kP = new LoggableTunableNumber("Flywheel/kP", 0.5);
@Override
public void execute() {
if (kP.hasChanged(hashCode())) {
controller.setP(kP.get());
}
}
Workflow:
- Deploy code with initial PID values
- Open Elastic dashboard and locate tunable number widget
- Adjust values using keyboard input and press Enter
- Changes apply immediately without disabling the robot
- Once tuned, update hardcoded values in Constants file
Automatic SysID Testing
System Identification Test Selector automates characterization across all mechanisms:
Setup:
- Create a SysIdRoutine for each mechanism with ramp rate, hold voltage, and timeout
- Add the mechanism to the test queue enum in Robot.java
- Use test controller to select queued mechanism via Elastic
Execution Process (per mechanism):
- CTRE: Press the Start button on test controller
- Hold Y (quasistatic forward)
- Hold B (quasistatic reverse)
- Hold A (dynamic forward)
- Hold X (dynamic reverse)
- CTRE: Press Select button
- Reboot robot before testing next mechanism
Log Export:
- REV: Extract the .wpilog → Open in AdvantageScope → Export to a new .wpilog
- CTRE: Use the Phoenix Tuner X Log Extractor → Convert to AdvantageScope format
Analysis in SysID Tool:
- Import the .wpilog and drag test data to the appropriate slots
- Map velocity/position/voltage to the correct NetworkTables paths
- Select the motor type gain preset
- Copy Ks, Kv, Ka, Kp, Kd into MotorConstantContainer
Static & Feed Forward Characterization
Feedforward Model Components
- Ks (Static Friction): Voltage to overcome friction at rest
- Kv (Velocity): Voltage per unit velocity (V/(rad/s) or V/(m/s))
- Ka (Acceleration): Voltage per unit acceleration
- Kg (Gravity): Voltage to hold position against gravity (arms/elevators)
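Together these form the standard feedforward model: V = Ks·sgn(v) + Kg·cos(θ) + Kv·v + Ka·a. A minimal sketch using WPILib's ArmFeedforward (gain values and variable names are illustrative, not from a real characterization):
// Ks, Kg, Kv, Ka - the order WPILib's ArmFeedforward constructor expects
ArmFeedforward armFF = new ArmFeedforward(0.15, 0.40, 1.90, 0.05);
// Feedforward voltage for the profiled setpoint; the subsystem adds PID feedback on top
double ffVolts = armFF.calculate(setpointAngleRad, setpointVelRadPerSec);
io.setVoltage(ffVolts + pidOutputVolts);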
Characterization Methods
Method 1: System Identification (Requires Physical Robot)
- Run SysID routine as described above
- Provides empirical Ks, Kv, Ka values
- Most accurate for actual hardware
Method 2: Recalc (CAD-Based Estimation)
- Input mechanism properties from CAD: mass, gear ratio, moment of inertia
- Generates theoretical Kv, Ka, Kg values
- Use when hardware isn't available yet
- Still requires SysID for Ks (static friction varies by build quality)
Tuning Priority:
- Kg First: Adjust until mechanism holds position without drifting
- Kv & Ka: Usually Recalc values work well; fine-tune if needed
- Ks Last: Increase until motion starts smoothly without overshoot
CTRE-Specific Notes:
- Ks is in amps, not volts (when using FOC modes)
- Kp is in amps per rotation of error
Drive to AprilTag, Pose, Line Commands
Drive to Pose
Holonomic pathfinding command that navigates to any field coordinate during teleop or auto:
Features:
- Combines custom pathfinding with 6328's drive-to-pose and 3015's obstacle avoidance
- Accepts a Pose2d target with desired end rotation
- Dynamically recalculates the path if the target moves or obstacles are detected
- Works with swerve and mecanum drivetrains
Usage:
new DriveToPose(drivetrainS, new Pose2d(5.0, 3.0, Rotation2d.fromDegrees(180)))Drive to AprilTag
Extends drive-to-pose with vision-based targeting:
- Uses Limelight pipeline 0 for AprilTag detection
- Calculates 3D pose of tag from tX/tY/distance
- Drives to specified offset from tag (e.g., 1m away, facing tag)
- Maps speeds to avoid sudden stops at endpoint
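A rough sketch of that geometry step (camera mounting constants and variable names here are illustrative, not framework fields):
// Floor distance to the tag from the vertical angle tY, using known camera height and pitch
double distanceMeters = (tagHeightMeters - cameraHeightMeters)
    / Math.tan(Units.degreesToRadians(cameraPitchDegrees + tY));
// Robot-relative translation to the tag; Limelight tX is positive to the right, so negate for CCW-positive math
Translation2d tagTranslation = new Translation2d(distanceMeters, Rotation2d.fromDegrees(-tX));
// The command then applies the desired offset (e.g., 1 m back from the tag) and hands the
// resulting Pose2d to the drive-to-pose logic above.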
Aim to Pose/AprilTag
Non-overriding rotation command that aims at target while preserving driver translation control:
new AimToPose(drivetrainS, targetPose).alongWith(new DrivetrainC(...))
Use Cases:
- Aim at the speaker while driving
- Track moving game pieces during approach
- Works in auto with a RaceGroup to maintain aim during path following
Implementation: Calculates the rotation to the target using GeomUtil.getRotationToTarget(currentPose, targetPose), applies rotation PID while allowing X/Y joystick input.
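A sketch of what that calculation might look like (the framework's GeomUtil implementation may differ):
// Field-relative angle from the robot to the target
public static Rotation2d getRotationToTarget(Pose2d currentPose, Pose2d targetPose) {
    Translation2d delta = targetPose.getTranslation().minus(currentPose.getTranslation());
    return delta.getAngle();
}
// In execute(): rotation PID on the heading error while driver translation passes through, e.g.
// double omega = thetaController.calculate(currentPose.getRotation().getRadians(),
//     getRotationToTarget(currentPose, targetPose).getRadians());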
Orchestra (CTRE)
CTRE Motor Music System that plays MIDI files through Talon motor PWM:
Setup:
- Convert the MIDI file using Phoenix Tuner X Music Chirp Generator
- Place the .chrp file in the deploy/songs/ directory
- All Talon motors are automatically registered via getSelfCheckingHardware()
Usage:
new OrchestraCommand("songName.chrp").schedule();
Behavior: The command takes exclusive control of all Talon motors until the song completes. The robot must be disabled or in Test Mode for safety.
Branching Autos
Reactive autonomous that makes decisions based on match state and opponent behavior:
Architecture
Commands.race(
botAborter(), // Monitors abort conditions
mainPath.andThen(nextCommand)
)
.andThen(new BranchAuto(conditionalPath, fallbackCommand))
Bot Aborter Logic
Checks if opponent will reach game piece first:
- Calculate distance from both robots to target game piece
- Compare velocities and estimated arrival times
- Abort current path if opponent arrives >0.75s before us
- Prevents dangerous collisions and wasted cycle time
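A minimal sketch of that comparison (pose and speed values are illustrative placeholders, not framework fields):
// Straight-line time estimates; real logic would use remaining path length instead of raw distance
double ourEta = ourPose.getTranslation().getDistance(gamePieceTranslation)
    / Math.max(ourSpeedMetersPerSec, 0.1);
double theirEta = opponentPose.getTranslation().getDistance(gamePieceTranslation)
    / Math.max(opponentSpeedMetersPerSec, 0.1);
// Abort the race if the opponent is projected to arrive more than 0.75 s before us
boolean shouldAbort = (ourEta - theirEta) > 0.75;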
Branch Auto Execution
- If previous command completed: Execute conditional path
- If previous command aborted: Execute fallback (e.g., go to different game piece)
Example: Auto starts racing opponent to center note. If opponent wins, branch to wing note instead of following original path.
Choreo Integration: Use "Waypoint Velocity Direction" to aim velocity toward next action when removing stop points between commands.
Radius Characterization
Wheelbase Radius Calibration for swerve/mecanum drivetrains:
Purpose: Correct for actual turning radius vs. theoretical CAD values due to:
- Wheel scrub/friction
- Module alignment tolerance
- Carpet interaction
Method:
- Command robot to rotate 10 full revolutions at constant angular velocity
- Measure actual rotations completed via gyro
- Calculate the correction factor: actualRadius = theoreticalRadius * (measuredRotations / 10.0)
- Update DriveConstants.WHEELBASE_RADIUS
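In code the correction is just that ratio applied to the CAD value (variable names illustrative):
double commandedRotations = 10.0;
double measuredRotations = Math.abs(gyro.getAngle()) / 360.0; // yaw accumulated during the test
double actualRadius = theoreticalRadiusMeters * (measuredRotations / commandedRotations);
// Write actualRadius into DriveConstants.WHEELBASE_RADIUS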
Impact: Improves accuracy of field-relative driving and autonomous path following.
Battery Percentage Estimation
BatterySim models voltage sag under load to predict brownout conditions:
Simulation Features:
- Tracks cumulative battery drain from all motors
- Calculates voltage drop based on total current draw and internal resistance
- Triggers brownout warnings when voltage falls below threshold
- Resets battery state when re-enabled (simulates battery swap)
Usage: Automatically integrated into all drivetrain simulations. Monitor BatterySim/Voltage in AdvantageScope to see realistic voltage behavior during high-current operations (acceleration, climbing, shooter spinup).
Tuning: Adjust battery internal resistance in Constants.java to match observed voltage sag from match logs.
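Under the hood this is WPILib's BatterySim; a minimal sketch of the simulationPeriodic() hookup (current-draw variables are illustrative):
// Sum the current draw of every simulated mechanism, then feed the sagged voltage back to the sim
double loadedVolts = BatterySim.calculateDefaultBatteryLoadedVoltage(
    driveCurrentAmps, shooterCurrentAmps, intakeCurrentAmps);
RoboRioSim.setVInVoltage(loadedVolts);
// RobotController.getBatteryVoltage() now reflects the sag in simulation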
Hardware Integration
Travel Back to Table of Contents
SubsystemChecker Overhaul
Features
SubsystemChecker is the base class that all subsystems must extend (instead of SubsystemBase). It provides:
Pit Display Integration
- systemCheckCommand(): Returns an automated test routine for each subsystem
- getTemps(): Provides motor temperature telemetry for overheating detection
- getOrchestraDevices(): Registers CTRE motors for music playback
Hardware Fault Detection
- Automatic registration of motors, encoders, and sensors for health monitoring
- Real-time fault reporting to 135 Squire pit display
- Tracks disconnections, voltage issues, and communication errors
Test Mode Execution
When robot is enabled in Test Mode, pit display can trigger:
- Sequential automated checks of all subsystems
- Physical verification (motors spin, arms move, etc.)
- Pass/fail reporting with specific error descriptions
Critical Pattern:
public class ExampleS extends SubsystemChecker { // NOT SubsystemBase
public ExampleS() {
registerSelfCheckHardware(); // MUST call in constructor
}
@Override
public Command systemCheckCommand() {
return Commands.sequence(
// Test movements/operations
);
}
@Override
public SelfChecking[] getSelfCheckingHardware() {
return new SelfChecking[] {
new SelfChecking(motor1),
new SelfChecking(motor2),
new SelfChecking(encoder)
};
}
}
Supported Hardware
Self-checking API supports:
Motor Controllers
- CTRE Talon FX / Kraken (via Phoenix 6)
- REV Spark Max / Spark Flex
- Detects: CAN errors, brownout resets, firmware issues
Encoders
- CTRE CANcoder
- REV Through-Bore Encoder (via breakout board)
- Thrifty Absolute Encoder
- DutyCycle Encoder (any)
- Detects: Disconnections, invalid readings, out-of-range values
Sensors
- Pigeon 2.0 / NavX Gyros
- Limelight
- PhotonVision cameras (via Orange Pi)
- Detects: Lost connection, invalid data, calibration issues
Pneumatics
- REV Pneumatic Hub
- CTRE Pneumatic Control Module
- Detects: Low pressure, solenoid faults
Tools
135 Squire Pit Display
Pre-Match Workflow:
- Connect to robot via NetworkTables
- Enable the robot in Test Mode (critical - fault checking does not run while the robot is disabled)
- Click "Run All Checks" in Systems tab
- Monitor physical robot during automated tests
- Review pass/fail results on pit display
Displays:
- Green checkmarks for passing subsystems
- Red X with specific error message for failures
- Motor temperatures with warning thresholds
- CAN bus utilization percentage
- Code runtime (should stay below middle line)
Elastic Dashboard Integration
- Live fault monitoring during practice
- Temperature trending over time
- Self-check hardware status indicators
Safety Features
Motor Constant Containers
Purpose: Prevent robot crashes from invalid System Identification values.
Validation Checks
public MotorConstantContainer(double ks, double kv, double ka, double kp, double kd) {
if (ks < 0 || ks > 12) throw new IllegalArgumentException("Ks out of range");
if (kv < 0) throw new IllegalArgumentException("Kv must be positive");
if (ka < 0) throw new IllegalArgumentException("Ka must be positive");
// kp and kd can be zero for pure feedforward control
}
Catches:
- Negative voltage constants (physically impossible)
- Ks > battery voltage (would always saturate)
- Swapped Kv/Ka units (m/s vs rad/s confusion)
- Missing decimal points (10 instead of 0.10)
Usage:
public static final MotorConstantContainer FLYWHEEL =
new MotorConstantContainer(
0.12, // Ks [V]
0.0023, // Kv [V/(rad/s)]
0.00015, // Ka [V/(rad/s²)]
0.5, // Kp
0.0 // Kd
);
Crash Prevention Checks
SysID Value Bounds
Motor constant containers reject values that would cause:
- Voltage saturation: Output always at ±12V
- Integral windup: Unbounded accumulation in PID
- Oscillation: Negative damping from sign errors
Unit Conversion Verification
Each drivetrain wrapper enforces correct units:
public REVModuleConstantContainer(
double wheelDiameterMeters, // Must be in meters
double driveGearRatio, // Motor rotations per wheel rotation
double turnGearRatio
) {
if (wheelDiameterMeters > 1.0) {
throw new IllegalArgumentException("Wheel diameter likely in inches, not meters");
}
}
Mechanism Limit Enforcement
State-space models require physical bounds:
- Elevator min/max height (prevents cable unwinding)
- Arm min/max angles (prevents frame collision)
- Flywheel max RPM (prevents wheel disintegration)
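A minimal sketch of enforcing those bounds before a goal reaches the control loop (constant names illustrative):
// Clamp the requested height so the profile can never drive past the hard stops
double safeHeight = MathUtil.clamp(requestedHeightMeters,
    ElevatorConstants.MIN_HEIGHT_METERS, ElevatorConstants.MAX_HEIGHT_METERS);
goalState = new TrapezoidProfile.State(safeHeight, 0.0);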
Self-Checking Hardware
Automatic Fault Monitoring
When registered hardware fails, 135 Squire displays:
- Device Name: "Front Left Drive Motor"
- Fault Type: "CAN Timeout", "Over Temperature", "Brownout Reset"
- Timestamp: When fault first occurred
- Persistence: Red until fault clears
Implementation Pattern
@Override
public SelfChecking[] getSelfCheckingHardware() {
return new SelfChecking[] {
new SelfChecking(leftMotor),
new SelfChecking(rightMotor),
new SelfChecking(armEncoder),
new SelfChecking(gyro)
};
}
Fault Sources Detected
- Sticky Faults: Persisted from previous boot (firmware crash, brownout)
- Live Faults: Current communication loss, sensor disconnection
- Temperature: Motor exceeds 80°C warning threshold
- Voltage: Bus drops below brownout threshold (6.8V)
Test Mode Requirement: Fault checking adds 10+ms to cycle time, so it's disabled unless robot is in Test Mode to maintain 20ms loop timing during matches.
Subsystem Faults
Pit Display Error Reporting
Example Fault Messages:
- Intake: "Motor 20 CAN Timeout - Check wiring"
- Shooter: "Encoder not returning valid position"
- Drivetrain: "Front Right module over temperature (85°C)"
- Vision: "Limelight connection lost"
System Check Command Failures
If systemCheckCommand() detects issues:
- Encoder velocity doesn't match commanded speed → "Encoder inverted or disconnected"
- Motor current remains at zero → "Motor not spinning - check breaker"
- Position doesn't change → "Mechanism stuck or brake engaged"
Manual Verification Steps
After automated checks, physically verify:
- All mechanisms returned to safe positions
- No unusual sounds (grinding, clicking)
- Cable connections secure
- Pneumatic pressure above 60 PSI
System Identification
Running SysId Tests
Setup Process
1. Configure Test Routine (in Subsystem):
private final SysIdRoutine sysId = new SysIdRoutine(
new SysIdRoutine.Config(
Volts.of(1).per(Second), // Ramp rate
Volts.of(7), // Hold voltage
Seconds.of(10) // Timeout
),
new SysIdRoutine.Mechanism(
(voltage) -> io.setVoltage(voltage.in(Volts)),
null, // Log callback (already handled by AdvantageKit)
this
)
);
2. Select Mechanism (via Elastic):
- View the QUEUED TEST value in NetworkTables
- Use the Test Controller (last USB port in the Driver Station) to cycle through mechanisms
3. Execute Test Sequence:
| Button | Test Type | Description |
|---|---|---|
| Start (CTRE only) | Begin logging | Starts recording data |
| Y | Quasistatic Forward | Slow ramp forward |
| B | Quasistatic Reverse | Slow ramp reverse |
| A | Dynamic Forward | Fast acceleration forward |
| X | Dynamic Reverse | Fast acceleration reverse |
| Select (CTRE only) | End logging | Stops recording |
4. Post-Test:
- Disable robot
- REBOOT RoboRIO before next mechanism test
- Verify USB stick has new log file
Critical Requirements
- USB stick in top RoboRIO port
- Battery above 12.5V (prevents brownout during dynamic tests)
- Mechanism at safe starting position
- Clear space for full range of motion
- Do NOT run multiple tests without rebooting
Exporting Logs (REV vs CTRE)
REV Motor Workflow
1. Extract Base Log:
- Robot OFF, USB stick in computer
- Open AdvantageScope
- File → Open → Select the test .wpilog
- Do NOT use "Open Multiple"
2. Export for SysID:
- File → Export Data
- Format: WPILib Data Log (.wpilog)
- Fields to export:
- NT:/URCL/<DeviceName>/MotorPositionRotations
- NT:/URCL/<DeviceName>/MotorVelocityRPM
- NT:/URCL/<DeviceName>/AppliedOutputVoltage
- Save with a descriptive name (e.g., intake_sysid.wpilog)
CTRE Motor Workflow
1. Extract with Phoenix Tuner X:
- Robot ON, USB stick in RoboRIO
- Open Phoenix Tuner X
- Menu (☰) → Tools → Log Extractor
- Select log file from RoboRIO USB
- Output Type: AdvantageScope (wpilog)
- Convert and save
2. Voltage Path:
- CTRE logs use AppliedOutput (not AppliedOutputVoltage)
- Multiply by battery voltage in the SysID tool if needed
Analyzing Results
Import to SysID Tool
1. Launch:
- VS Code: Ctrl+Shift+P → "Start Tool" → "SysID"
2. Load Data:
- Open the exported .wpilog file
- Drag test segments to the appropriate slots:
- Quasistatic Forward → quasistatic-forward
- Quasistatic Backward → quasistatic-backward
- Dynamic Forward → dynamic-forward
- Dynamic Backward → dynamic-backward
3. Map Signal Paths:
- Position: NT:/URCL/<Device>/MotorPositionRotations
- Velocity: NT:/URCL/<Device>/MotorVelocityRPM
- Voltage: NT:/URCL/<Device>/AppliedOutputVoltage (REV) or AppliedOutput (CTRE)
4. Configure Units:
- Set to mechanism units (rotations or meters, depending on conversion factor applied in IO)
- Check if the gear ratio is already applied (setPositionConversionFactor)
- If YES: Leave the unit ratio as 1:1
- If NO: Set gear ratio multiplier
Interpreting Results
Gain Preset: Select motor type
- NEO / NEO Vortex → NEO
- Falcon 500 / Kraken → Falcon500 or Kraken
R² Values (Coefficient of Determination):
- Velocity R²: Should be > 0.95 (excellent model fit)
- Acceleration R²: Should be > 0.3 (acceptable - acceleration is noisy)
- Low R² indicates: Poor test execution, mechanical slop, or incorrect units
RMSE (Root Mean Square Error):
- Shows average prediction error in your chosen units
- Lower is better, but acceptable threshold depends on mechanism precision needs
Generated Constants:
- Ks: Static friction overcome voltage
- Kv: Velocity feedforward (V per unit/s)
- Ka: Acceleration feedforward (V per unit/s²)
- Kp, Kd: Initial PID gains (often need tuning)
Validation Steps
- Copy constants into MotorConstantContainer (triggers validation)
- Test in simulation - the mechanism should reach setpoints smoothly
- Test on robot - verify no oscillation or steady-state error
- Fine-tune Ks if the mechanism struggles to start moving
Moment of Inertia (MOI)
Purpose: Calculate feedforward constants before the robot is built using CAD data.
When to Use MOI vs SysID
| Method | Pros | Cons | Best For |
|---|---|---|---|
| SysID | Empirically accurate, accounts for friction | Requires built robot | Final tuning |
| MOI (Recalc) | Works from CAD, no hardware needed | Approximation, ignores real friction | Early development, simulation |
Obtaining MOI from CAD
Request from design team:
- Moment of inertia (kg·m²) for rotating mechanisms
- Mass (kg) for linear mechanisms
- Gear ratio from motor to output
- Theoretical max speed/angle
Using MOI in Code
Flywheel Example:
LinearSystem<N1, N1, N1> plant = LinearSystemId.createFlywheelSystem(
DCMotor.getNEO(1), // Motor model
momentOfInertiaKgMetersSquared, // From CAD
gearRatio // Motor rotations per wheel rotation
);
Elevator Example:
LinearSystem<N2, N1, N1> plant = LinearSystemId.createElevatorSystem(
DCMotor.getNEO(2), // Two NEOs
massKg, // Carriage + mechanism mass
drumRadiusMeters, // Spool radius
gearRatio
);
Single-Jointed Arm Example:
LinearSystem<N2, N1, N1> plant = LinearSystemId.createSingleJointedArmSystem(
DCMotor.getNEO(1),
momentOfInertiaKgMetersSquared,
gearRatio
);
MOI vs SysID Accuracy
- Kv, Ka from MOI: Usually within 10-20% of SysID values
- Kg from MOI: Accurate if CAD mass is correct
- Ks from MOI: Cannot be calculated - always requires SysID (depends on bearing friction, cable tension, etc.)
Recommended Workflow:
- Use MOI to get simulation working during build season
- Run SysID once mechanism is assembled
- Update only Ks initially (keep MOI-based Kv/Ka)
- Fine-tune Kv/Ka only if mechanism undershoots/overshoots
Feature Branches
Travel Back to Table of Contents
State Space Branch
Flywheel Systems
Purpose: Control high-speed rotating mechanisms (shooters, intakes with rollers) using velocity-based state-space models.
Implementation
// Create plant model
LinearSystem<N1, N1, N1> plant = LinearSystemId.createFlywheelSystem(
DCMotor.getNEO(1),
0.001, // MOI in kg·m²
gearRatio
);
// Create Kalman filter (observer)
KalmanFilter<N1, N1, N1> observer = new KalmanFilter<>(
Nat.N1(), Nat.N1(),
plant,
VecBuilder.fill(3.0), // Model tolerance (rad/s)
VecBuilder.fill(0.01), // Encoder noise (rad/s)
0.02 // dt = 20ms
);
// Create LQR controller
LinearQuadraticRegulator<N1, N1, N1> controller =
new LinearQuadraticRegulator<>(
plant,
VecBuilder.fill(8.0), // Qelms - velocity error tolerance
VecBuilder.fill(12.0), // Relms - voltage use penalty
0.02
);
// Combine into system loop
LinearSystemLoop<N1, N1, N1> loop = new LinearSystemLoop<>(
plant, controller, observer, 12.0, 0.02
);
Periodic Cycle
@Override
public void periodic() {
io.updateInputs(inputs);
Logger.processInputs("Flywheel", inputs);
loop.correct(VecBuilder.fill(inputs.velocityRadPerSec));
loop.predict(0.02);
double voltage = loop.getU(0); // Get control output
io.setVoltage(MathUtil.clamp(voltage, -12, 12));
}
Commands
- Set RPM: flywheelS.setRPM(3500) - reach the target velocity
- Joystick Control: Map stick input to an RPM range
- Returns to zero when stick released (not position-holding)
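A sketch of that joystick mapping (deadband and max-RPM constants illustrative):
// Deadbanded stick input scaled to the flywheel's RPM range; releasing the stick commands 0 RPM
double stick = MathUtil.applyDeadband(-controller.getLeftY(), 0.1);
flywheelS.setRPM(stick * FlywheelConstants.MAX_RPM);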
Elevator Systems
Purpose: Linear position control (cascade lifts, telescoping arms, vertical lifts) with gravity compensation.
State-Space Model
Uses 2-state system: position and velocity
LinearSystem<N2, N1, N1> plant = LinearSystemId.createElevatorSystem(
DCMotor.getNEO(2),
carriageMassKg,
drumRadiusMeters,
gearRatio
);
Trapezoid Profile Constraints
Limits acceleration/velocity for smooth motion:
TrapezoidProfile.Constraints constraints = new TrapezoidProfile.Constraints(
maxVelocityMetersPerSec,
maxAccelMetersPerSecSquared
);
TrapezoidProfile profile = new TrapezoidProfile(constraints);
TrapezoidProfile.State goalState = new TrapezoidProfile.State(targetHeightMeters, 0);
Mechanism2d Visualization
Mechanism2d mech = new Mechanism2d(3, 3); // Canvas size
MechanismRoot2d root = mech.getRoot("elevator", 1.5, 0.2); // Origin
MechanismLigament2d elevator = root.append(
new MechanismLigament2d("lift", inputs.positionMeters, 90) // 90° = up
);
Logger.recordOutput("ElevatorMech", mech);Periodic Execution
@Override
public void periodic() {
io.updateInputs(inputs);
Logger.processInputs("Elevator", inputs);
// Update profile toward goal
profiledReference = profile.calculate(0.02, profiledReference, goalState);
loop.setNextR(profiledReference.position, profiledReference.velocity);
loop.correct(VecBuilder.fill(inputs.positionMeters, inputs.velocityMetersPerSec));
loop.predict(0.02);
io.setVoltage(loop.getU(0));
elevator.setLength(inputs.positionMeters); // Update visualization
}
Commands
- Move to Position: elevatorS.setGoal(1.5) - go to the height with profiled motion
- Joystick Control: Slew between min/max positions while holding the setpoint
- Holds position when stopped (gravity compensation via Kg)
Single Jointed Arms
Purpose: Rotational position control (intake arms, single-stage arms) with gravity compensation.
Key Differences from Elevator
- Uses createSingleJointedArmSystem() instead of the elevator system
- Angle constraints are in radians, not meters
- Gravity torque varies with cos(angle) - handled automatically by the model
State-Space Creation
LinearSystem<N2, N1, N1> plant = LinearSystemId.createSingleJointedArmSystem(
DCMotor.getNEO(1),
momentOfInertiaKgMetersSquared,
gearRatio
);
Visualization
MechanismLigament2d arm = root.append(
new MechanismLigament2d(
"arm",
armLengthMeters,
Units.radiansToDegrees(inputs.positionRadians) // Convert to degrees for display
)
);
Commands
- Deploy/Retract: Preset angle positions (e.g., stowed = 0°, deployed = 90°)
- Joystick Control: Slew between min/max angles
- Gravity handling: Automatically applies correct voltage to hold any angle
Double Jointed Arms
Purpose: Complex two-joint systems (shoulder + elbow) requiring inverse kinematics - uses Orange Pi AI.
Why Orange Pi is Required
Standard state-space cannot solve inverse kinematics for coupled joints:
- Desired end-effector position (x, y) has infinite solutions for joint angles
- Orange Pi runs continuous-time dynamics simulation with constraints
- Returns optimal shoulder/elbow voltages for desired trajectory
Communication Architecture
RoboRIO sends:
DataHandler.log(shoulderAngleRad, "shoulderAngle");
DataHandler.log(elbowAngleRad, "elbowAngle");
DataHandler.log(targetX, "targetX");
DataHandler.log(targetY, "targetY");
Orange Pi returns:
JsonObject outputs = DataHandler.getOrangePiData();
double shoulderVolts = outputs.get("shoulderVoltage").getAsDouble();
double elbowVolts = outputs.get("elbowVoltage").getAsDouble();
Orange Pi Configuration (ServerSide.py)
Must set physical constants at top of file:
- Link lengths (shoulder, elbow segment lengths)
- Masses and moments of inertia
- Gear ratios
- Joint angle limits
Java Subsystem Structure
@Override
public void periodic() {
io.updateInputs(inputs);
Logger.processInputs("DoubleArm", inputs);
// Set motor voltages from Orange Pi
io.setShoulderVoltage(shoulderVolts);
io.setElbowVoltage(elbowVolts);
// Update visualization
if (Constants.currentMode == Mode.REAL) {
// Use real encoder positions
shoulder.setAngle(Units.radiansToDegrees(inputs.shoulderPositionRad));
elbow.setAngle(Units.radiansToDegrees(inputs.elbowPositionRad));
} else {
// Use Orange Pi predicted positions in sim
shoulder.setAngle(Units.radiansToDegrees(predictedShoulderRad));
elbow.setAngle(Units.radiansToDegrees(predictedElbowRad));
}
}
Constructor Requirement
public DoubleArmS() {
// MUST send heartbeat to Orange Pi
Notifier piHeartbeat = new Notifier(() -> {
DataHandler.log(true, "armAlive");
});
piHeartbeat.startPeriodic(1.0); // Every 1 second
}
Commands
- Position-only control (no joystick) - too complex for manual control
- Set target end-effector coordinates: doubleArmS.setTarget(0.5, 1.2)
- The Orange Pi calculates the optimal joint trajectory
Warning: The math behind this is 10+ slides of continuous dynamics - treat it as a black box unless you want to dive into Lagrangian mechanics.
LQR & Kalman Filtering Options
Linear Quadratic Regulator (LQR)
Purpose: Optimal feedback controller that minimizes control effort and error.
Tuning Parameters:
- Qelms (Q matrix): How much you care about state error
- Higher Q = More aggressive tracking, larger voltage swings
- Lower Q = Gentler control, accepts more error
- Units: Squared state units (e.g., rad²/s² for flywheel)
- Relms (R matrix): How much you penalize control effort
- Higher R = Less voltage used, slower response
- Lower R = More voltage used, faster response
- Units: Squared voltage (V²)
Default Values Work Well:
VecBuilder.fill(8.0), // Qelms - moderate error penalty
VecBuilder.fill(12.0) // Relms - penalize full battery voltage
When to Tune:
- Increase Q if mechanism is too sluggish
- Decrease Q if mechanism oscillates
- Increase R if motors are getting hot or drawing excessive current
Kalman Filter (Observer)
Purpose: Estimates true system state from noisy measurements.
Tuning Parameters:
- Model Tolerance: How much you trust your physics model
- Higher = "My model might be wrong, trust sensors more"
- Lower = "My model is accurate, filter sensor noise heavily"
- Measurement Noise: Expected sensor error
- Higher = "My encoder is noisy, smooth readings aggressively"
- Lower = "My encoder is accurate, trust raw readings"
Typical Values:
VecBuilder.fill(3.0), // Model: within 3 rad/s of predicted
VecBuilder.fill(0.01) // Encoder: ±0.01 rad/s noise
Benefits:
- Smooth velocity estimates from position encoders
- Predict next state even if sensor reading is delayed
- Reject outliers (single bad reading doesn't corrupt state)
Latency Compensation Options
Problem: Motor controllers report encoder data with delay (REV Spark Max = 19.5ms). Controller makes decisions on old data, causing oscillation.
Solution: Predict future state to compensate for delay.
Implementation
WPILib handles this automatically:
LinearQuadraticRegulator<N1, N1, N1> controller =
new LinearQuadraticRegulator<>(
plant,
VecBuilder.fill(8.0),
VecBuilder.fill(12.0),
0.02, // Normal dt
0.0195 // Latency compensation (19.5ms for Spark Max)
);
Formula: K_compensated = K * (A - BK)^(delay/dt)
- Adjusts gains to "look ahead" by the delay amount
- Stabilizes control loop that would otherwise oscillate
When to Use
- Always for REV motors (measured ~19.5ms delay)
- CTRE motors: Lower latency (~5ms), compensation optional
- Simulation: Set to 0.0 (no real hardware delay)
Verification: If system oscillates at low gains but math says it should be stable, latency compensation is likely needed.
CTRE State Space Branch
Differences from standard State Space:
Latency Compensation Built-In
CTRE Phoenix 6 has lower sensor latency (~5ms vs REV's 19.5ms) but still benefits from compensation.
Units
- Positions in rotations (not radians) by default
- Must set conversion factors on motor controller if using different units
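For example, Phoenix 6 can fold the gear ratio into the feedback config so the motor reports mechanism rotations directly (a sketch; the ratio value is illustrative):
TalonFXConfiguration cfg = new TalonFXConfiguration();
cfg.Feedback.SensorToMechanismRatio = 25.0; // rotor rotations per mechanism rotation
motor.getConfigurator().apply(cfg);
// getPosition()/getVelocity() now report mechanism rotations and rotations per second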
Integration with CTRE Features
- Works with FOC (Field-Oriented Control) for quieter operation
- Compatible with Motion Magic for comparison
- Can use TorqueCurrentFOC for direct torque control
Most mechanisms are vendor-agnostic - same state-space math works for both. Only IO layer changes.
Vision Branch (PhotonVision + Limelight)
AprilTag Positioning
AprilTags are fiducial markers (like QR codes) placed at known field locations. Robots use them to:
- Determine absolute field position
- Correct odometry drift
- Align to scoring locations
135 Consul Vision Stack:
- PhotonVision: Multi-camera AprilTag detection for pose estimation
- Limelight: Game piece detection and targeting (pipeline 0 for tags, other pipelines for AI)
PhotonVision AprilTag Multi-Camera Fusion
Architecture: 4 simultaneous cameras fusing into single pose estimate.
Camera Setup
PhotonCamera[] cameras = {
new PhotonCamera("Front"),
new PhotonCamera("Back"),
new PhotonCamera("Left"),
new PhotonCamera("Right")
};
PhotonPoseEstimator[] estimators = new PhotonPoseEstimator[4];
for (int i = 0; i < 4; i++) {
estimators[i] = new PhotonPoseEstimator(
aprilTagFieldLayout,
PoseStrategy.MULTI_TAG_PNP_ON_COPROCESSOR,
cameras[i],
robotToCameraTransforms[i] // Camera mounting position/angle
);
}
Pose Fusion Loop
for (PhotonPoseEstimator estimator : estimators) {
Optional<EstimatedRobotPose> result = estimator.update();
if (result.isPresent()) {
EstimatedRobotPose estimate = result.get();
// Calculate trust factors
double avgDistance = getAverageTagDistance(estimate);
double ambiguity = getAverageAmbiguity(estimate);
int numTags = estimate.targetsUsed.size();
// Filter bad estimates
if (shouldAcceptVisionEstimate(avgDistance, ambiguity, numTags, velocity)) {
poseEstimator.addVisionMeasurement(
estimate.estimatedPose.toPose2d(),
estimate.timestampSeconds,
calculateStdDevs(avgDistance, numTags)
);
increaseTagTrust(estimate.targetsUsed); // Reward good tags
} else {
decreaseTagTrust(estimate.targetsUsed); // Punish bad tags
}
}
}
Trust-Based Filtering
Each AprilTag maintains a trust score (0-100%):
- Increases when pose matches the odometry-predicted position
- Decreases when pose is inconsistent (jumped position, high ambiguity)
- Low-trust tags are weighted less or ignored entirely
Rejection Criteria:
- Tag distance > 5 meters (poor accuracy at range)
- Ambiguity > 0.3 (multiple possible pose solutions)
- Pose correction > 0.5 meters while moving fast (likely false detection)
- Only 1 tag visible and robot velocity > 2 m/s (single-tag unreliable at speed)
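A sketch of how those criteria might combine into the shouldAcceptVisionEstimate() check used in the fusion loop above (thresholds mirror the list; the pose-jump check is omitted, and the real framework logic may differ):
private boolean shouldAcceptVisionEstimate(double avgDistance, double ambiguity,
        int numTags, double robotSpeedMetersPerSec) {
    if (avgDistance > 5.0) return false;   // tags are inaccurate at long range
    if (ambiguity > 0.3) return false;     // multiple plausible pose solutions
    if (numTags < 2 && robotSpeedMetersPerSec > 2.0) return false; // single tag unreliable at speed
    return true;
}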
Limelight AI Detection
Configuration:
- Pipeline 0: AprilTag detection (rarely used - PhotonVision handles this)
- Pipelines 1+: Neural network detection via Google Coral coprocessor
Google Coral Integration
$60 USB accelerator plugged into Limelight:
- Custom TensorFlow Lite models
- Detects game piece types and states ("cone upright" vs "cone fallen")
- Returns: tX (horizontal angle), tY (vertical angle), tV (target valid), tA (area)
Drive to AI Target Command
@Override
public void execute() {
if (limelight.getTV()) { // Target visible
double tX = limelight.getTX();
double tY = limelight.getTY();
// Calculate 3D position of game piece
Pose3d gamePiece = calculate3DPoseFromLimelight(tX, tY, cameraHeight, cameraPitch);
// PID to target with speed mapping
double distance = currentPose.getTranslation().getDistance(gamePiece.getTranslation());
double speed = MathUtil.interpolate(maxSpeed, 0, distance, desiredDistance);
drivetrainS.drive(speed, rotationPID.calculate(tX, 0), true);
}
}Simulation: Calculates tX/tY geometrically from known game piece positions - fully functional in sim.
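One way the simulated tX/tV can be derived from known poses (a sketch with illustrative names, ignoring the vertical axis):
// Bearing from the robot to the simulated game piece
Translation2d toPiece = gamePieceTranslation.minus(robotPose.getTranslation());
Rotation2d bearing = toPiece.getAngle().minus(robotPose.getRotation());
double simTX = -bearing.getDegrees(); // Limelight tX is positive to the right
boolean simTV = Math.abs(simTX) < HORIZONTAL_FOV_DEG / 2.0
    && toPiece.getNorm() < MAX_DETECTION_RANGE_METERS;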
PhotonVision Simulation
Setup:
VisionSystemSim visionSim = new VisionSystemSim("main");
visionSim.addAprilTags(aprilTagFieldLayout);
for (PhotonCamera camera : cameras) {
PhotonCameraSim cameraSim = new PhotonCameraSim(camera, cameraProperties);
visionSim.addCamera(cameraSim, robotToCameraTransform);
}
Simulation Cycle
@Override
public void simulationPeriodic() {
visionSim.update(currentRobotPose); // Update based on simulated robot position
}
Viewing Streams:
- localhost:1190/stream.mjpg (Front camera)
- localhost:1192/stream.mjpg (Back camera)
- localhost:1194/stream.mjpg (Left camera)
- localhost:1196/stream.mjpg (Right camera)
Add these URLs to Shuffleboard/Elastic for live viewing.
Note: Wireframe rendering adds 10+ms to cycle time - use only when visual feedback is needed.
Pose Estimation & Filtering
SwerveDrivePoseEstimator
Fuses wheel odometry with vision measurements:
SwerveDrivePoseEstimator poseEstimator = new SwerveDrivePoseEstimator(
kinematics,
gyroRotation,
modulePositions,
initialPose,
stateStdDevs, // Trust in odometry (x, y, theta)
visionStdDevs // Trust in vision (x, y, theta) - updated per measurement
);
Dynamic Standard Deviations
Vision trust varies based on measurement quality:
private Matrix<N3, N1> calculateStdDevs(double avgDistance, int numTags) {
double xyStdDev = 0.01 * Math.pow(avgDistance, 2) / numTags; // Worse at distance
double thetaStdDev = 0.01 * Math.pow(avgDistance, 2) / numTags;
return VecBuilder.fill(xyStdDev, xyStdDev, thetaStdDev);
}
Effect: Close multi-tag readings heavily influence pose, distant single-tag readings are nearly ignored.
LED Branch
Addressable RGB LED control with simulation support.
Hardware
- REV Blinkin or WS2812B LED strips
- Controlled via PWM from RoboRIO
- Number of LEDs configurable in constants
Pre-made Commands
LEDConstantColorC
new LEDConstantColorC(ledS, Color.kRed);
Solid color across the entire strip.
LEDRainbowC
new LEDRainbowC(ledS, 0.05); // Speed parameter
Cycles through the rainbow spectrum.
LEDBreathingC
new LEDBreathingC(ledS, Color.kBlue, 1.0); // Color and period (seconds)
Fades brightness in/out smoothly.
LEDSineWaveC
new LEDSineWaveC(ledS, Color.kGreen, 0.1, 0.03); // Color, wavelength, speed
Traveling sine wave pattern.
LEDGifC
new LEDGifC(ledS, "celebration", 0.05); // Folder name, frame interval
Displays an image sequence from the USB stick:
- Split GIF into individual frame images
- Upload frames to /media/sda1/leds/celebration/ on the RoboRIO
- Images are loaded at boot (survives USB removal during a match)
- Loops through frames at specified interval
Simulation
All patterns render in Sim GUI - verify LED logic before deploying to the robot.
Binding to Robot State
Trigger noteDetected = new Trigger(() -> intakeS.hasNote());
noteDetected.whileTrue(new LEDBreathingC(ledS, Color.kGreen, 0.5));
Advanced Systems
Travel Back to Table of Contents
Orange Pi 5 AI System
Architecture Overview
The Orange Pi 5 serves as a dedicated AI coprocessor running Python-based machine learning models for complex computations that exceed RoboRIO capabilities.
System Components
Hardware:
- Orange Pi 5 (8-core ARM CPU)
- Connected via Ethernet to robot network
- Assigned static IP: 10.1.35.2
- Runs Armbian Linux with Python 3.11.0
Software Stack:
- ServerSide.py: Main AI server running as systemd service
- Gradient Boosting Regressor: Scikit-learn model for multi-input/output prediction
- JSON Communication: NetworkTables bridge via DataHandler on RoboRIO
Use Cases:
- Double-jointed arm inverse kinematics
- Trajectory optimization beyond PathPlanner capabilities
- Sensor fusion requiring non-linear regression
- Any computation requiring >30 data point training sets
Python Environment Setup
Initial Imaging (New Orange Pi)
Requirements: Ethernet switch with RoboRIO, Orange Pi, laptop, and internet connection.
1. SSH Connection:
ssh [email protected]
# Password: raspberry
2. System Updates:
sudo apt update
sudo apt upgrade -y
sudo apt install -y build-essential libssl-dev zlib1g-dev \
libncurses5-dev libncursesw5-dev libreadline-dev \
libsqlite3-dev libgdbm-dev libdb5.3-dev libbz2-dev \
libexpat1-dev liblzma-dev tk-dev libffi-dev \
libgdbm-compat-dev wget git
3. Install Python 3.11.0:
cd /usr/src
sudo wget https://www.python.org/ftp/python/3.11.0/Python-3.11.0.tgz
sudo tar xzf Python-3.11.0.tgz
cd Python-3.11.0
sudo ./configure --enable-optimizations
sudo make altinstall
python3.11 --version # Verify installation
4. Clone PyDriverStation:
cd /home/pi
# Ensure PyDriverStation repo is public during clone
git clone https://github.com/Team135BlackKnights/PyDriverStation.git
cd PyDriverStation/OrangePi
sudo /usr/local/bin/python3.11 -m pip install -r requirements.txt
Auto-Update Configuration
Create Systemd Service
1. Create Service File:
sudo touch /etc/systemd/system/ai-server.service
sudo nano /etc/systemd/system/ai-server.service
2. Service Configuration:
[Unit]
Description=Orange Pi AI Server
After=network.target
[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi/PyDriverStation/OrangePi
ExecStart=/usr/local/bin/python3.11 ServerSide.py
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
3. Enable Service:
sudo chmod 644 /etc/systemd/system/ai-server.service
sudo systemctl daemon-reload
sudo systemctl enable ai-server.service
sudo systemctl start ai-server.service
sudo systemctl status ai-server.service # Should show green "active (running)"
4. Verify Connection: Open AdvantageScope → RealOutputs/OrangePi/PiConnection should be true.
Configure SSH Key Authentication (for auto-deployment)
On RoboRIO:
ssh roboRIO-135-frc.local # User: lvuser
chmod +x deploy/OrangePi/PushOrangePiCode.sh
ssh-keygen -t rsa -b 4096
# Press Enter for all prompts (no password, default location)
cd /home/lvuser/.ssh
cat id_rsa.pub # Copy this entire outputOn Orange Pi:
ssh [email protected]
nano ~/.ssh/authorized_keys
# Paste the id_rsa.pub content at the end of the file
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
Test Connection:
# From RoboRIO
ssh [email protected]
# Should connect WITHOUT password prompt
Model Training & Deployment
Data Collection
On Robot (Java):
// Log training data to USB
DataHandler.log(shoulderAngle, "shoulderAngle");
DataHandler.log(elbowAngle, "elbowAngle");
DataHandler.log(endEffectorX, "endEffectorX");
DataHandler.log(endEffectorY, "endEffectorY");
DataHandler.log(shoulderVoltage); // No key = USB only
DataHandler.log(elbowVoltage); // No key = USB only
Data Requirements:
- Minimum n=30 samples (Central Limit Theorem)
- Cover full mechanism range of motion
- Include edge cases (max extension, folded position, etc.)
Training Process (ServerSide.py)
1. Load Data:
import pandas as pd
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputRegressor
# Read from USB log files
data = pd.read_csv('/path/to/training_data.csv')
# Split inputs and outputs
X = data[['shoulderAngle', 'elbowAngle', 'targetX', 'targetY']]
y = data[['shoulderVoltage', 'elbowVoltage']]
2. Train Model:
# Split 80% train, 20% test
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Train Gradient Boosting model (wrapped for the two-output target;
# GradientBoostingRegressor itself only fits a single output column)
model = MultiOutputRegressor(GradientBoostingRegressor(
    n_estimators=100,
    learning_rate=0.1,
    max_depth=3,
    random_state=42
))
model.fit(X_train, y_train)
# Evaluate
score = model.score(X_test, y_test)
print(f"Model R² Score: {score}") # Should be >0.93. Save Model:
import joblib
joblib.dump(model, 'double_arm_model.pkl')
Deployment
Set Constants in ServerSide.py:
# Physical constants from CAD
SHOULDER_LENGTH = 0.5 # meters
ELBOW_LENGTH = 0.4 # meters
SHOULDER_MASS = 2.0 # kg
ELBOW_MASS = 1.5 # kg
SHOULDER_MOI = 0.05 # kg·m²
ELBOW_MOI = 0.03 # kg·m²
SHOULDER_GEAR_RATIO = 100.0
ELBOW_GEAR_RATIO = 80.0
# Joint limits
SHOULDER_MIN_RAD = 0.0
SHOULDER_MAX_RAD = 3.14
ELBOW_MIN_RAD = -1.57
ELBOW_MAX_RAD = 1.57
Auto-Deploy via RoboRIO: Set in Constants.java:
public static final boolean PUSH_ORANGE_PI_CODE = true;
On deploy, PushOrangePiCode.sh automatically:
- SCPs ServerSide.py to the Orange Pi
- Restarts ai-server.service
- Verifies service health
Communication Protocol (RIO ↔ Pi)
RoboRIO Sends (DataHandler.java)
Request Pattern:
// Send inputs with keys (goes to NetworkTables → Orange Pi)
DataHandler.log(shoulderAngleRad, "shoulderAngle");
DataHandler.log(elbowAngleRad, "elbowAngle");
DataHandler.log(targetX, "targetX");
DataHandler.log(targetY, "targetY");
// Mark that we're requesting outputs
DataHandler.log(true, "requestVoltages");
Receive Pattern:
JsonObject piOutputs = DataHandler.getOrangePiData();
if (piOutputs.has("outputs")) {
// Pi has sent response
JsonObject outputs = piOutputs.getAsJsonObject("outputs");
double shoulderVolts = outputs.get("shoulderVoltage").getAsDouble();
double elbowVolts = outputs.get("elbowVoltage").getAsDouble();
io.setShoulderVoltage(shoulderVolts);
io.setElbowVoltage(elbowVolts);
// Remove request flag (already received response)
DataHandler.removeKey("requestVoltages");
}
Orange Pi Receives (ServerSide.py)
Polling Loop:
import json
import time

import joblib
import numpy as np
from networktables import NetworkTables

NetworkTables.initialize(server='10.1.35.5') # RoboRIO IP
sd = NetworkTables.getTable('SmartDashboard')
model = joblib.load('double_arm_model.pkl') # trained model from the previous step

while True:
    # Check for request
    if sd.getBoolean('requestVoltages', False):
        # Read inputs
        shoulder_angle = sd.getNumber('shoulderAngle', 0.0)
        elbow_angle = sd.getNumber('elbowAngle', 0.0)
        target_x = sd.getNumber('targetX', 0.0)
        target_y = sd.getNumber('targetY', 0.0)
        # Run model
        inputs = np.array([[shoulder_angle, elbow_angle, target_x, target_y]])
        voltages = model.predict(inputs)[0]
        # Send outputs
        outputs = {
            'shoulderVoltage': float(voltages[0]),
            'elbowVoltage': float(voltages[1])
        }
        sd.putString('outputs', json.dumps(outputs))
    time.sleep(0.0003) # ~0.3ms latency
Heartbeat Pattern
RoboRIO Constructor:
Notifier piHeartbeat = new Notifier(() -> {
DataHandler.log(System.currentTimeMillis(), "rioAlive");
});
piHeartbeat.startPeriodic(1.0);
Orange Pi Monitoring:
last_heartbeat = sd.getNumber('rioAlive', 0)
current_time = time.time() * 1000
if current_time - last_heartbeat > 2000: # 2 second timeout
    print("WARNING: RoboRIO connection lost")
    # Send zero voltages or safe defaults
Manual Updates & Debugging
Deploy Code Manually
From Any Computer:
scp /path/to/ServerSide.py [email protected]:PyDriverStation/OrangePi/ServerSide.py
ssh [email protected]
sudo systemctl restart ai-server.service
sudo systemctl status ai-server.service
View Live Logs
SSH to Orange Pi:
ssh [email protected]
sudo journalctl -u ai-server.service -f # Follow live logs
View Recent Logs:
sudo journalctl -u ai-server.service --since "5 minutes ago"
Clear Old Logs:
sudo journalctl --rotate
sudo journalctl --vacuum-time=1s -u ai-server.service
Common Issues
"ModuleNotFoundError":
cd /home/pi/PyDriverStation/OrangePi
sudo /usr/local/bin/python3.11 -m pip install <missing_package>
sudo systemctl restart ai-server.service
"Connection Refused":
- Check that the RoboRIO can ping the Orange Pi: ping 10.1.35.2
- Verify the service is running: sudo systemctl status ai-server.service
- Check the firewall: sudo ufw status (should be inactive)
Model Not Updating:
- Verify the .pkl file is in the correct directory
- Check file permissions: ls -la double_arm_model.pkl
- Ensure the model loaded in ServerSide.py without errors
Mac Mini
Purpose: Alternative/supplementary AI coprocessor with more computational power.
Advantages Over Orange Pi
- 10x Processing Power: M1/M2 chips for neural network inference
- More RAM: 8-16GB vs Orange Pi's 4-8GB
- macOS Development: Easier Python debugging with native tools
- CoreML Integration: Hardware-accelerated inference on Apple Silicon
Setup (TODO)
- Install Python 3.11
- Clone PyDriverStation
- Configure as systemd-equivalent (launchd on macOS)
- Set static IP on the robot network
- Deploy models via SSH
Potential Use Cases:
- Vision processing (YOLOv8, custom CNNs)
- Real-time trajectory optimization during auto
- Multi-robot coordination (if rules allow networked bots)
Physics Simulation
Dyn4j Integration
Dyn4j is a Java 2D physics engine providing collision detection, rigid body dynamics, and constraint solving.
Why Physics Simulation?
Benefits:
- Test Autonomous Without Field: Simulate defense bots, game piece interactions
- Tune Drivetrain Constants: Find max acceleration by matching sim to reality
- Validate Safety: Ensure robot won't tip, collide with field elements
- Replay Matches: See what would happen if robot took different path
Core Concepts
Bodies: Rigid objects with mass, position, and velocity
Fixtures: Collision shapes attached to bodies (rectangles, circles, polygons)
Joints: Constraints between bodies (hinges, sliders)
World: Container managing all physics objects and stepping the simulation forward
World Physics
Field Setup (WorldPhysics.java)
Annual Update Required - field obstacles change each game.
Example Structure:
public class WorldPhysics {
private final World world;
private final Body robot;
private final List<Body> gamePieces = new ArrayList<>();
private final Body defenseBot;
public WorldPhysics() {
world = new World();
world.setGravity(World.ZERO_GRAVITY); // Top-down view
// Add field boundaries
addWall(0, 0, 16.54, 0.1); // Bottom wall
addWall(0, 8.21, 16.54, 0.1); // Top wall
addWall(0, 0, 0.1, 8.21); // Left wall
addWall(16.54, 0, 0.1, 8.21); // Right wall
// Add field obstacles (update per game)
addObstacle(5.0, 4.0, 1.0, 0.5); // Example stage
}
private void addWall(double x, double y, double width, double height) {
Body wall = new Body();
wall.addFixture(Geometry.createRectangle(width, height));
wall.translate(x, y);
wall.setMass(MassType.INFINITE); // Immovable
world.addBody(wall);
}
}
Robot Body Configuration
Critical Parameters:
// Robot dimensions (from CAD)
private static final double ROBOT_WIDTH = 0.762; // 30 inches
private static final double ROBOT_LENGTH = 0.762; // 30 inches
private static final double ROBOT_MASS = 45.0; // kg with battery
// Center of mass offset (if not geometric center)
private static final Transform COM_OFFSET = new Transform();
static {
    COM_OFFSET.translate(0.05, 0.0); // 5cm forward
}
// Collision shape
Body robot = new Body();
robot.addFixture(Geometry.createRectangle(ROBOT_WIDTH, ROBOT_LENGTH));
robot.setMass(new Mass(COM_OFFSET.getTranslation(), ROBOT_MASS,
calculateMOI(ROBOT_MASS, ROBOT_WIDTH, ROBOT_LENGTH)));
robot.getTransform().setTranslation(startPose.getTranslation().getX(),
    startPose.getTranslation().getY());
Simulation Step
In Robot Periodic:
@Override
public void simulationPeriodic() {
// Apply drivetrain forces
ChassisSpeeds speeds = drivetrainS.getChassisSpeeds();
Vector2 force = new Vector2(speeds.vxMetersPerSecond * ROBOT_MASS * 50,
speeds.vyMetersPerSecond * ROBOT_MASS * 50);
robot.applyForce(force);
// Apply rotation torque
double torque = speeds.omegaRadiansPerSecond * ROBOT_MOI * 10;
robot.applyTorque(torque);
// Step physics (20ms)
world.step(1, 0.02);
// Update odometry from physics
Transform tf = robot.getTransform();
Pose2d physicsPose = new Pose2d(
tf.getTranslationX(),
tf.getTranslationY(),
new Rotation2d(tf.getRotationAngle())
);
Logger.recordOutput("PhysicsField/Robot", physicsPose);
}
Game Piece Simulation
Game Piece Properties
Example: Note:
public class SimGamePiece {
private final Body body;
private static final double NOTE_RADIUS = 0.18; // meters
private static final double NOTE_MASS = 0.235; // kg
public SimGamePiece(double x, double y) {
body = new Body();
body.addFixture(Geometry.createCircle(NOTE_RADIUS));
body.setMass(MassType.NORMAL);
body.translate(x, y);
// Friction/restitution
body.getFixture(0).setFriction(0.5);
body.getFixture(0).setRestitution(0.3); // Slight bounce
world.addBody(body);
}
}
Intake Simulation
Transition Game Piece to Robot:
public void intake(int pieceIndex) {
if (pieceIndex < 0 || pieceIndex >= gamePieces.size()) return;
Body piece = gamePieces.get(pieceIndex).body;
// Remove from world physics
world.removeBody(piece);
// Mark as possessed
currentGamePiece = pieceIndex;
Logger.recordOutput("Robot/HasGamePiece", true);
}
Outtake Simulation
Launch Game Piece:
public void shoot(double velocityMetersPerSec, double angleRadians) {
if (currentGamePiece < 0) return;
Body piece = gamePieces.get(currentGamePiece).body;
// Re-add to world at robot position
Pose2d robotPose = getRobotPose();
piece.getTransform().setTranslation(robotPose.getX(), robotPose.getY());
world.addBody(piece);
// Apply launch velocity
double vx = velocityMetersPerSec * Math.cos(robotPose.getRotation().getRadians() + angleRadians);
double vy = velocityMetersPerSec * Math.sin(robotPose.getRotation().getRadians() + angleRadians);
piece.setLinearVelocity(vx, vy);
currentGamePiece = -1;
}
Defense Bot Simulation
Scripted Opponent Path
Configuration:
public class SimDefenseBot {
private final Body body;
private final List<Pose2d> waypoints;
private int currentWaypoint = 0;
private double startTime;
public SimDefenseBot(Pose2d startPose, List<Pose2d> path) {
body = new Body();
body.addFixture(Geometry.createRectangle(0.762, 0.762));
body.setMass(MassType.NORMAL);
body.translate(startPose.getX(), startPose.getY());
world.addBody(body);
waypoints = path;
startTime = Timer.getFPGATimestamp();
}
}
Path Following
In Simulation Periodic:
public void update() {
double elapsed = Timer.getFPGATimestamp() - startTime;
// Find current target waypoint based on time
if (elapsed > 15.0) return; // Auto ends at 15s
double progress = elapsed / 15.0; // Normalize to [0, 1]
int targetIndex = (int)(progress * waypoints.size());
targetIndex = Math.min(targetIndex, waypoints.size() - 1);
Pose2d target = waypoints.get(targetIndex);
// Apply force toward target
Transform tf = body.getTransform();
double dx = target.getX() - tf.getTranslationX();
double dy = target.getY() - tf.getTranslationY();
Vector2 force = new Vector2(dx * 50, dy * 50); // Proportional control
body.applyForce(force);
}
Accessing Opponent Pose
In Commands:
Pose2d opponentPose = RobotContainer.opposingBotPose;
// Calculate if opponent will reach game piece first
double ourDistance = currentPose.getTranslation().getDistance(gamePiece);
double theirDistance = opponentPose.getTranslation().getDistance(gamePiece);
double ourETA = ourDistance / currentVelocity;
double theirETA = theirDistance / opponentVelocity;
if (theirETA + 0.75 < ourETA) {
// Abort - opponent will get there first
cancel();
}
Finding Max Translational Acceleration
Purpose: Empirically determine drivetrain acceleration limit for PathPlanner/Choreo.
Procedure
1. Enable Physics Simulation: Ensure WorldPhysics is active and robot body configured with correct mass/MOI.
2. Create Test Auto:
// Square orbit without collisions
PathPlannerPath squarePath = PathPlanner.loadPath("AccelTest",
new PathConstraints(4.0, 3.0, 2 * Math.PI, 4 * Math.PI)); // Initial accel guess: 3 m/s²
3. Run in Simulation:
- Execute the square path repeatedly
- Observe PhysicsField/Robot vs RobotState/EstimatedPose in AdvantageScope
4. Tune Max Accel:
LoggableTunableNumber maxAccel = new LoggableTunableNumber("Drive/MaxAccel", 3.0);
- Too Low: Physics robot lags behind odometry (wheels slip in real life but not sim)
- Too High: Physics robot leads odometry (aggressive acceleration sim can't match)
- Correct: The physics and odometry chassis polygons stay overlapped (outlines sitting on top of each other), even during turns
5. Update Constants:
public static final double MAX_TRANSLATIONAL_ACCEL = 3.2; // m/s² (tuned value)
Why This Works
Physics simulation models the following (a rough ceiling estimate is sketched after this list):
- Wheel friction limits (coefficient of friction × normal force)
- Center of mass effects on weight distribution
- Rotational inertia resisting direction changes
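As a sanity check on the tuned number, the friction-limited ceiling on translational acceleration is roughly μ·g, independent of robot mass. A minimal sketch, assuming a tread-on-carpet friction coefficient (the value below is an assumption, not a framework constant):
// Rough friction-limited acceleration estimate
double wheelCOF = 1.1;                          // assumed tread-on-carpet coefficient of friction
double frictionLimitedAccel = wheelCOF * 9.81;  // ≈ 10.8 m/s² theoretical ceiling
// Tuned values (e.g. ~3.2 m/s²) land well below this because of battery sag,
// current limits, and weight transfer during hard acceleration.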
Programming Guide
Travel Back to Table of Contents
Commands Architecture
Superstructure The commands layer sits above subsystems and orchestrates robot behavior. All commands follow AdvantageKit's IO implementation pattern to support simulation and replay. Commands should contain logic and decision-making, while subsystems handle hardware interaction through their IO layers.
Trigger Bindings Commands are bound to controller inputs through RobotContainer.java. The framework uses a controller-based binding system where buttons/axes trigger specific commands. Test controllers are used separately for System Identification routines. All bindings are configured in one central location to avoid merge conflicts across branches.
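As a concrete illustration, a binding in RobotContainer.java might look like the sketch below; the controller port, subsystem fields, and command names are placeholders, not framework identifiers:
private final CommandXboxController driver = new CommandXboxController(0);

private void configureBindings() {
    // Hold the right bumper to run the intake (IntakeC and intakeS are placeholders)
    driver.rightBumper().whileTrue(new IntakeC(intakeS));
    // Hold A to aim at the speaker; releasing the button ends the command
    driver.a().whileTrue(new AimToAprilTagC(drivetrainS, speakerPose));
}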
Command Groups Commands combine into Sequential, Parallel, and Race groups. Race groups are mandatory for branching autonomous - they run multiple commands simultaneously and end when the first completes. This enables the bot aborter pattern where autonomous paths can be interrupted based on field conditions.
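For example, a branching autonomous step could pair a path-following command with the bot aborter in a race group; the command names here are placeholders:
// Whichever command finishes first ends the race, letting the aborter cut the path short
Command grabCenterNote = Commands.race(
    driveToCenterNoteC,            // placeholder path-following command
    new BotAborterC(drivetrainS)   // ends the group early if the opponent will arrive first
);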
Specialized Commands
DrivetrainC Universal drive command handling all drivetrain types (Swerve, Mecanum, Tank) for both CTRE and REV. Automatically sets Y movement to zero for tank drives. Supports field-relative and robot-relative control. Squares joystick inputs for finer control at low speeds. Special handling for CTRE Swerve due to vendor-specific requirements.
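The input shaping described above amounts to squaring each axis while preserving its sign, and zeroing Y for tank drives; a minimal sketch with placeholder variable names:
// Squaring compresses small stick deflections for finer low-speed control
double xSpeed = Math.copySign(xInput * xInput, xInput);
double ySpeed = isTankDrive ? 0.0 : Math.copySign(yInput * yInput, yInput); // tank drives get no Y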
Self-Check Commands Every subsystem extends SubsystemChecker (not SubsystemBase) and implements systemCheckCommand(). These run in Test mode through the Pit Display. Each drivetrain type has specific test sequences - Swerve tests X/Y translation and rotation, Mecanum tests rotation then translation, Tank tests rotation then forward movement. Tests validate motor outputs against expected values and report faults.
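A systemCheckCommand() for a simple roller-style subsystem might follow the shape below; the fault-reporting helper comes from SubsystemChecker, so the addFault call and thresholds are illustrative assumptions, not the framework's exact API:
@Override
public Command systemCheckCommand() {
    return Commands.sequence(
        Commands.runOnce(() -> io.setVoltage(6.0), this),  // spin the roller
        Commands.waitSeconds(1.0),                          // let it reach speed
        Commands.runOnce(() -> {
            if (inputs.velocityRPM < 1000) {
                addFault("Roller below expected speed", false, true); // illustrative fault helper
            }
            io.setVoltage(0.0);
        }, this));
}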
Bot Aborter Uses Limelight for game piece detection. Calculates distance to target game pieces and compares robot velocity against opposing robot velocity. Aborts autonomous paths if the opposing robot will reach the game piece 0.75 seconds before your robot. Prevents collisions while optimizing game piece collection. Works with simulation by calculating angular offsets from camera to simulated objects.
Drive to AI Target Creates 3D pose estimation of detected notes from Limelight data. Uses proportional control (P-only PID) to drive toward the target. Slows down as the distance decreases to avoid overshooting and knocking game pieces away. Only works with holonomic drivetrains.
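The slow-down behavior amounts to P-only control on distance, roughly as sketched below; the gain and field names are assumptions:
double kP = 1.5;                                        // assumed proportional gain
double distance = currentPose.getTranslation()
    .getDistance(notePose.getTranslation());            // meters to the detected note
double approachSpeed = Math.min(kP * distance, maxApproachSpeed); // tapers off as the note gets close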
Aim to AprilTag Continuously rotates robot to face a specified field position without overriding translational movement. Works during both teleop and autonomous. Must be ended explicitly, typically using a race group with a timeout or other completion condition.
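One way to end it, as described, is a race group with a timeout; the command name below is a placeholder:
// Aim for at most two seconds, then hand rotation control back to the driver
aimToSpeakerC.raceWith(Commands.waitSeconds(2.0)).schedule();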
Loggable Tuned Numbers Enable dynamic PID tuning without code redeployment. Created with a key string, initialized in static enable. Commands check hashCode() in execute() to detect changes. When values update through the Elastic dashboard, commands automatically apply new parameters. Only functional within Command classes. Used primarily for PID constants but applicable to any runtime-adjustable parameter.
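The hashCode() change-detection pattern looks roughly like this; LoggableTunedNumber's exact constructor and getter are framework-specific, so treat them as assumed here:
private final LoggableTunedNumber shooterKP = new LoggableTunedNumber("Shooter/kP", 0.05);
private int lastHash;

@Override
public void initialize() {
    lastHash = shooterKP.hashCode();
}

@Override
public void execute() {
    if (shooterKP.hashCode() != lastHash) {    // value was edited in Elastic
        lastHash = shooterKP.hashCode();
        shooterS.setP(shooterKP.get());        // apply the new gain without redeploying
    }
}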
Match State Tracking Custom enums track robot mode (Sim/Real/Replay) via currentMode and game state (Auto/Teleop/Endgame) via currentMatchState. Commands query these states to modify behavior based on the match phase. Enables different command behavior during autonomous versus teleop without separate implementations.
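Commands can branch on these enums roughly as follows; the enum and field names mirror the text, but their exact declarations live in the framework:
if (Constants.currentMatchState == MatchState.ENDGAME) {
    // e.g. allow the climber to draw more current once endgame starts
    climberS.setBoostMode(true);
} else if (Constants.currentMode == Mode.REPLAY) {
    // skip hardware-only safety checks while replaying a log
}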
Sim Game Piece Physics-based game piece simulation using dyn4j engine. Game pieces spawn at field locations and interact with robot collision geometry. Commands call updateClosestPieceToManipulator() to find nearest piece, then intake(pieceIndex) to simulate collection. Index 0 reserved for currently held piece. Integrates with World Physics for realistic interactions during simulation and replay.
Command Templates State space commands follow consistent patterns - initialize() sets up controller bindings from SpaceConstants.Controls, then schedules subsystem methods. Flywheel commands use setRPM with joystick scaling. Elevator/Arm commands use moveElevator/deployIntake with createState() to convert physical units (meters/radians) into goal positions. Double jointed arms cannot accept joystick inputs due to multi-axis complexity - position-only control required.
Using Consul
Travel Back to Table of Contents
Workflow & Best Practices
Branch Strategy Fork the 135 Consul repository annually for each game season. Create feature branches off your fork, NOT the main 135-consul repository. Merge needed drive features (Swerve, Mecanum, Tank) into your year's fork. Multiple programmers work on separate feature branches to avoid conflicts. Only RobotContainer.java should have merge errors when combining branches. Never merge year-specific code back into main 135-consul.
Directory Conventions Constants for each subsystem/command belong in the utils folder OF that subsystem/command, not globally. This prevents merge conflicts when multiple programmers work simultaneously. Follow the established structure: Subsystems contain IO implementations, commands reference subsystems through DrivetrainS interface, utilities stay local to their component. Deploy folder holds Orange Pi code and shell scripts.
Merge Conflict Management Proper directory conventions mean only RobotContainer.java creates conflicts - simple "yes, include both subsystems" resolutions. If conflicts appear elsewhere, constants weren't properly isolated to utils folders. Review the subsystem structure before merging. Use GitHub's automatic testing - code must pass simulation tests before merging unless you're a repository owner (Grant, Nish, Marsh).
Naming Conventions Subsystems end with "S" (DrivetrainS, IntakeS). Commands end with "C" (DrivetrainC). IO interfaces define abstract methods, IO implementations add vendor suffix (IOSparkMax, IOTalonFX, IOSim). Use descriptive names for LoggableTunedNumbers keys. MotorConstantContainers follow subsystem names. Test controller port must be LAST in Driver Station.
Code Formatting Use concise variable names (i, j for indices, e for events, el for elements) to maximize content within context limits. All .sh shell scripts must use LF line endings, NOT CRLF - check bottom right of VSCode. No logic in IO implementations - only input/output updates for replay compatibility. Red-boxed functions in documentation are mandatory implementations even if left empty for simulation.
Getting Started
Initial Setup Install WPILib, AdvantageScope, Elastic, Phoenix Tuner X (for CTRE), and REV Hardware Client (for REV). Install PyCharm for Orange Pi development with Python 3.11.0. Configure Driver Station with test controller on last port. Clone 135 Consul and create annual fork. USB must plug into RoboRIO top port for System Identification.
Forking & Merging Branches Fork 135-consul for the year (e.g., 2025-Season). Merge required drivetrain branches into your fork - only include needed drive types to reduce complexity. Create feature branches for subsystems (intake-branch, shooter-branch). Keep constants in feature branch utils folders. Test in simulation before merging to fork main branch.
Tuning Constants Use LoggableTunedNumbers for runtime tuning. Open Elastic, drag out Tunable Number widgets, adjust with keyboard and press Enter - updates apply immediately without disabling robot. Record final values and hardcode into constants files. For System Identification, follow test sequence: USB in top port, run tests via test controller (Start for CTRE, hold Y/B/A/X, Select to finish), reboot between ALL tests.
Simulation Environment Open WPILib (Ctrl+Shift+P), type "Simulate Robot Code", select "Use Real DriverStation". Launch AdvantageScope, Elastic, and FRC Driver Station. For Orange Pi testing, run ServerSide.py in PyCharm with IP set to localhost. Sim GUI optional, only needed for LED visualization. All subsystems work accurately including battery draw and brownouts. Connect all tools simultaneously - they coexist without conflict.
Pit Display Usage
Installation & Configuration Download from the provided link, extract, and run the executable. Click settings (bottom left), enter Twitch channel (team channel until Worlds), team number, and Blue Alliance event key. The setup persists between matches. The display shows real-time telemetry when connected to the robot network.
Systems Monitor Green lines represent PDH channel amperage. Code runtime must stay below the middle line (target <20ms, warning >10ms). Battery voltage should exceed the middle line. CAN utilization tracked continuously. Motor temperatures displayed per subsystem via getTemps() override. All data updates live during matches.
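A getTemps() override that feeds the display might look like the sketch below; the return type and input field names are assumptions about SubsystemChecker's expectations:
@Override
public double[] getTemps() {
    // report every motor temperature this subsystem owns, in Celsius
    return new double[] { inputs.leaderTempCelsius, inputs.followerTempCelsius };
}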
Running Diagnostics: Enable the robot in Test mode, then click "Run All Checks." Physically monitor the robot during automated tests. Display reports on which subsystems failed and specific error types. If Test mode is not enabled, motor temperature and fault detection are disabled (saves 10+ms runtime). Each subsystem's systemCheckCommand executes sequentially with validation.
Checklist Management Systems tab contains a pre-match checklist. Click through items as completed. It verifies battery connection, code deployment, camera functionality, pneumatic pressure, and controller pairing. It is customizable for team-specific procedures and tracks completion status across competition days.
Troubleshooting
Common Issues: Automatic testing failures: Code didn't pass simulation - fix reported errors unless you're the repository owner. Merge conflicts outside RobotContainer: Constants not isolated to utils folders. Runtime exceeding 20ms: Fault checking is running during match instead of Test mode only. Artifacts failing: localStorage/sessionStorage used (not supported - use React state or JavaScript variables). Shell scripts not executing: CRLF line endings instead of LF.
Orange Pi Debugging Check RealOutputs/OrangePi/PiConnection in AdvantageScope for connection status. View live console at RealOutputs/OrangePi/PiConsole for errors. SSH to [email protected] (password: raspberry). Check service status: sudo systemctl status ai-server.service. View recent logs: sudo journalctl -u ai-server.service --since "5 minutes ago". Navigate through logs with line numbers (type 9999, press Enter to skip to end). Verify Python 3.11.0: python3.11 --version.
Log Analysis AdvantageScope replays any .wpilog file for debugging. File > Open for single log (never Open Multiple). Export data for SysId: specify fields, save as new .wpilog. For CTRE: use Log Extractor in Phoenix Tuner X (Tools menu), convert to AdvantageScope format. REV: export directly from AdvantageScope. Replay shows exact robot behavior with updated code - validates fixes without robot access.
Performance Monitoring Code runtime appears in Pit Display - must stay under 20ms. AdvantageKit logger overhead minimal when properly implemented. USB logging deprecated but supported - primarily for Orange Pi communication now. Check DataHandler.updateHandlerState for communication bottlenecks. Fault checking adds 10+ms if running outside Test mode. BatterySim in simulation catches brownout conditions before deployment. Odometry threads run at 250Hz for FastSwerve - ensure IO updates don't block.
Appendix
Travel Back to Table of Contents
Future Development
Known Limitations: State Space elevator/arm simulations cannot use the simulateGravity flag due to implementation architecture. CTRE variants have latency compensation but remain untested beyond Swerve; REV variants are fully tested but suffer from encoder latency. Double jointed arm requires System Identification (no MOI shortcut), though simulation values work. THREE.CapsuleGeometry is unavailable (introduced in r142, framework uses r128) - use CylinderGeometry, SphereGeometry, or custom geometries instead. USB data logging is largely deprecated, now used primarily for Orange Pi communication; write speeds are insufficient for high-frequency logging.
Contribution Guidelines: Fork annually; never merge year-specific code to main 135-consul. Keep constants in subsystem utils folders. Test in simulation before pull requests. All code must pass automatic testing unless you are a repository owner. Document new subsystems following the IO implementation pattern - an interface with @AutoLog inputs and separate implementations for Sim/CTRE/REV. Include systemCheckCommand and getOrchestraDevices overrides. Add self-checking hardware registration in the constructor. Update tribal knowledge documentation for major features.
Credits & References
FRC Teams
- 254 (The Cheesy Poofs): Setpoint Generator with acceleration limiting
- 418 (Purple Haze): 2nd Order Kinematics
- 1678 (Citrus Circuits): Idea for vision filtering
- 1690 (Orbit): Skid Detection Algorithm for swerve momentum management
- 2183 (TigerBots): Touchboard
- 3015 (Ranger Robotics): PathPlanner, Pit Display event panel
- 5516 (Shenzhen Robotics Alliance): Simulation physics
- 6328 (Mechanical Advantage): AdvantageKit logging framework, AdvantageScope visualization, 250Hz Advanced Skeleton Swerve, URCL motor wrapper, DriveToPose
Third-Party Libraries
- AdvantageKit: Logging and replay framework enabling deterministic debugging
- AdvantageScope: Log visualization and analysis tool with 3D field view
- Elastic: Customizable dashboard replacing Shuffleboard, lower latency
- PathPlanner: Trajectory generation and autonomous execution
- Choreo: Optimal path creation for Mecanum/Swerve (interop with PathPlanner)
- PhotonVision: AprilTag detection and pose estimation with multi-camera fusion
- Limelight: Game piece detection via Google Coral AI, AprilTag capability
- dyn4j: 2D physics engine for simulation, collision, and game piece interaction
- Three.js (r128): 3D rendering for visual artifacts
- Papaparse: CSV processing for data analysis
- GradientBoostingRegressor: AI model for Orange Pi regression tasks
Documentation Links First-Party: 135 Consul Wiki, Pit Display Wiki, PyDriverStation Wiki, Orange Pi API Code, Orange Pi Setup Guide, Tribal Knowledge (Wiring, Controls/Software, Programming)
Third-Party: AdvantageScope (WPILib, Wiki, Chief Delphi), AdvantageKit Wiki, Elastic (Wiki, Chief Delphi), PathPlanner Full Documentation, PhotonVision Full Documentation, Simulation (WPILib), State Space (WPILib, Debugging), System Identification (WPILib, Test Analysis), Kalman Filters (WPILib), FRC Programming Manual, Choreo (Full Documentation, Chief Delphi, PathPlanner Integration)