Powered by FIRST FRC Events API

Methodology

How AmproStats computes team ratings. Each column on the leaderboard measures something different. This page explains what each metric is, how it's calculated, and why it matters.

What is EPA?

EPA (Expected Points Added) estimates how many points a single team contributes to their alliance in a match. Since FRC matches are 3v3, the raw alliance score doesn't tell you which robot did what. EPA solves this by modeling each team's individual contribution using an Exponentially Weighted Moving Average (EWMA).

After each match, the model compares the predicted alliance score to the actual score. The difference (the "error") is attributed back to the individual teams and used to update their ratings. Over many matches, each team's EPA converges toward their true contribution level.
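The update step described above can be sketched in a few lines. This is a minimal illustration, not the production model: ALPHA, the team labels, and the starting ratings are made-up values chosen only to show the mechanics of the equal-split update.

```python
ALPHA = 0.2  # illustrative learning rate: how fast ratings react to new evidence

def update_epa(epa, alliance, actual_score):
    """Attribute one match's prediction error back to each alliance member."""
    predicted = sum(epa[t] for t in alliance)   # predicted alliance score
    error = actual_score - predicted            # over- or under-performance
    for t in alliance:
        # standard EPA: every member absorbs an equal 1/3 of the error
        epa[t] += ALPHA * error / len(alliance)
    return epa

epa = {"A": 30.0, "B": 25.0, "C": 20.0}
update_epa(epa, ["A", "B", "C"], 90.0)  # predicted 75, so error is +15
```

After the call, each rating moves up by the same amount (0.2 × 15 / 3 = 1.0), regardless of which robot actually produced the extra points. That equal split is exactly the problem discussed next.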

Important: The values shown in all tooltips (e.g., "365.2" for a specific match) are EPA attribution values — the model's estimate of that team's individual contribution. They are not the raw alliance score.

The Equal Split Problem

Standard EPA (used by Statbotics) splits alliance prediction error equally among all 3 teams (1/3 each). This means if a weak team is paired with two elite robots and their alliance scores 500 points (100 more than predicted), the weak team gets credited with +33 points they didn't actually earn.
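Using the numbers from the example above, the arithmetic looks like this. The `strengths` values are invented for illustration; a proportional split along these lines is the idea behind the Weighted column described later.

```python
# An alliance scores 500 against a 400-point prediction.
predicted, actual = 400, 500
error = actual - predicted              # +100 points of over-performance

# Equal split: the weak team is credited a full third of the error.
equal_credit = error / 3                # ~33.3 points, earned or not

# Proportional split: credit scales with each team's estimated strength
# (strength numbers below are made up for illustration).
strengths = {"elite_1": 80, "elite_2": 80, "weak": 10}
total = sum(strengths.values())
proportional = {t: error * s / total for t, s in strengths.items()}
# the weak team now receives roughly 5.9 points instead of 33.3
```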

This is the core problem AmproStats was built to address. Our leaderboard provides multiple alternative metrics that each handle this differently, so you can compare and draw your own conclusions.

Leaderboard Columns

The leaderboard has 9 data columns organized into 4 groups.

Cumulative

EPA

The standard cumulative EPA (equivalent to Statbotics). Tracks each team across the entire season using EWMA. Carries forward a regressed prior from previous years so returning teams don't start from zero.

Scope: All events, all years (cross-season priors)

Error attribution: Equal 1/3 split among alliance members

Strength: Most stable, long-term view of team trajectory

Weakness: Weak teams on strong alliances get inflated

Best Event

These columns compute Isolated EPA — EPA computed from scratch at each event with no prior history. All teams start at an equal baseline. The value shown is from the team's single best event.

All Matches

Isolated EPA computed on all matches (quals + elims) at the team's best event. Includes playoffs, which can inflate scores for teams "carried" by elite alliance partners.

Quals

Isolated EPA computed only on qualification matches. In quals, alliance partners are randomly assigned, so this removes the playoff carry bias entirely. The cleanest single-event signal.

Season

These columns pool all matches from every event the team attended during the season and compute the metric on the combined set. This is not an average of per-event values — every individual match feeds into a single global EWMA pass, so events with more matches carry proportionally more weight.

All Matches

Global EWMA across every match (quals + elims) from every event. Weights recent matches more heavily, so a team that improved over the season will show their current level.

Quals

Global EWMA across all qualification matches from every event. Combines the anti-elim-bias of quals-only with the stability of a full-season sample. Arguably the most balanced "true level" metric.

Peak

Pools all qual matches across the season, sorts them by EPA contribution, drops the bottom 10%, then replays the EWMA on the remaining 90%. Removes mechanical failures and disconnects without being as aggressive as Best 10.

Example: A team with 24 qual matches would drop their worst 2-3, keeping the top 21-22. The tooltip shows exactly which matches were dropped.
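The Peak computation can be sketched as below. The EWMA constants, the starting rating, and the exact rounding of the 10% cutoff are illustrative assumptions, not AmproStats' actual parameters.

```python
def ewma(values, alpha=0.2, start=0.0):
    """Replay an exponentially weighted moving average over match values."""
    rating = start
    for v in values:
        rating += alpha * (v - rating)   # recent matches weigh more heavily
    return rating

def peak_rating(matches, drop_frac=0.10):
    """matches: chronological list of per-match EPA attribution values."""
    n_drop = int(len(matches) * drop_frac)
    worst = sorted(matches)[:n_drop]     # bottom 10% by attribution value
    kept = list(matches)
    for v in worst:
        kept.remove(v)                   # drop each worst value once,
    return ewma(kept)                    # preserving chronological order
```

With 24 matches and `int` truncation this drops exactly 2; a rounding-up rule would drop 3, which is why the text above says 2-3.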

Best 10 Matches

Takes all matches (quals + elims) across the entire season, ranks them by EPA contribution, and replays the EWMA on only the top 10. Shows the team's absolute ceiling — what they look like when firing on all cylinders. All 10 may come from a single event if that's where they peaked.

Very optimistic by design. Use this as a ceiling estimate, not a realistic expectation.

Worst 10 Matches

Takes all matches across the season, ranks by EPA contribution, and replays the EWMA on only the bottom 10. Shows the team's floor — what happens when things go wrong. A high Worst 10 value means even this team's bad matches are still productive.

Great for identifying reliable alliance partners. A team with a high floor is less risky than one with a high ceiling but low floor.
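Best 10 and Worst 10 are the same operation with the sort direction flipped, roughly as sketched here (again, the EWMA constants are illustrative, and the real implementation may break ties differently):

```python
def ewma(values, alpha=0.2, start=0.0):
    """Replay an exponentially weighted moving average over match values."""
    rating = start
    for v in values:
        rating += alpha * (v - rating)
    return rating

def extreme_10(matches, best=True):
    """matches: chronological (match_index, epa_attribution) pairs."""
    picked = sorted(matches, key=lambda m: m[1], reverse=best)[:10]
    picked.sort(key=lambda m: m[0])      # restore chronological order
    return ewma([attr for _, attr in picked])
```

Because the ten matches are replayed in their original order, a team whose best matches came late in the season still shows that improvement in the Best 10 value.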

Weighted

Weighted EPA

Instead of splitting alliance error equally (1/3 each), this method uses OPR (Offensive Power Rating) from least-squares regression to determine each team's proportional contribution. If Team A has an OPR of 80 and Teams B and C have OPRs of 30 each, Team A gets ~57% of the error attribution instead of 33%.

Stage 1: Solve least-squares OPR at each event (A*x = b with Tikhonov regularization)

Stage 2: Use OPR values as weights for EWMA error attribution (floored at 0.5)

Strength: Directly addresses the equal-split problem. Strong teams get fair credit.

Weakness: OPR solve needs 5-6+ matches per team to be reliable. Noisy early in events.
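The two stages above can be sketched as follows. The matrix setup and the regularization strength `lam` are illustrative assumptions; only the overall shape (Tikhonov-regularized OPR solve, then OPR-proportional error splitting with a 0.5 floor) comes from the description above.

```python
import numpy as np

def solve_opr(alliances, scores, n_teams, lam=1.0):
    """Stage 1: alliances is a list of team-index triples; scores are
    alliance totals. Solves the regularized least-squares OPR system."""
    A = np.zeros((len(alliances), n_teams))
    for row, teams in enumerate(alliances):
        A[row, list(teams)] = 1.0        # each robot appears once per match
    b = np.asarray(scores, dtype=float)
    # Tikhonov-regularized normal equations: (A^T A + lam*I) x = A^T b
    return np.linalg.solve(A.T @ A + lam * np.eye(n_teams), A.T @ b)

def weighted_split(error, oprs):
    """Stage 2: attribute alliance error proportionally to OPR,
    with each weight floored at 0.5."""
    w = np.maximum(np.asarray(oprs, dtype=float), 0.5)
    return error * w / w.sum()
```

With OPRs of 80, 30, and 30, `weighted_split` hands the strong team 80/140 of the error, matching the ~57% figure above. The 0.5 floor keeps a team with a near-zero (or negative) OPR from being assigned no credit at all.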

How Season Columns Are Computed

The Season columns do not average per-event ratings. Instead, they pool all match-level EPA attributions from every event into a single list, then run the computation once on the full set.

This matters because averaging per-event would weight each event equally, not each match. A team that played 12 quals at Event A and 6 quals at Event B would have Event B count the same as Event A despite having half the data.

```python
# Wrong (old approach): averaging per-event ratings weights events, not matches
season_avg = mean(event_A_rating, event_B_rating)

# Correct (current approach): pool every match, then run one global EWMA pass
all_matches = pool(event_A_matches + event_B_matches)
season_rating = EWMA(all_matches)
```

Best Event vs Season: What's the Difference?

Best Event shows a team's peak — their single strongest showing at one competition. This can be a fluke (weak field, lucky schedule, one great day).

Season shows consistency — how a team performs across their entire body of work. A team ranked #5 in Best Event but #30 in Season is volatile. A team ranked #30 in Best Event but #5 in Season is exceptionally consistent but may lack a high ceiling.

When these two disagree significantly on a team, that disagreement itself is the most valuable signal on the leaderboard.

Why Separate Quals and All Matches?

In qualification matches, alliance partners are randomly assigned. If a team scores well, it's a reasonable signal that they contributed.

In elimination matches, alliance captains hand-pick the best available partners. A mid-tier team selected onto an elite alliance will play 3-7 elim matches alongside powerhouses. Because EPA splits error among all 3 members, that mid-tier team absorbs credit for points the elite robots scored. Their EPA gets artificially inflated.

However, some teams genuinely perform differently in elims — they may have robots that excel with coordination (e.g., a feeder bot that's mediocre alone but incredible with a good scorer). Removing elims throws away that signal. That's why we show both.

Reading the Tooltips

Hovering over any cell on the leaderboard shows a detailed tooltip. The numbers shown are EPA attribution values — the model's estimate of that team's individual contribution in each match. They are not raw alliance scores.

For example, if a tooltip shows "sf1 Sacramento Event 365.2", that means the model estimated this team personally contributed ~365 points in that semifinal match. The actual alliance score may have been higher (the other two robots contributed too) or lower (fouls subtracted).

Best Event tooltips: Show all events attended with per-event values. The best event is highlighted in blue.

Season tooltips (All/Quals): Show events attended as context, with the note that the final value is a global EWMA across all pooled matches.

Peak tooltip: Shows total qual matches, number dropped (bottom 10%), and lists the specific dropped matches with their values.

Best 10 / Worst 10 tooltips: List the specific 10 matches selected, with match number, event name, and EPA contribution value.