JPA-Rankings (JPAR) can be viewed here and have been updated as of November 23, 2023.

The JPA-Ranking (JPAR) is designed to compare relative puzzler speeds. A puzzle's difficulty is calculated for each speed puzzling event based on the average completion time. If a puzzler is faster than average, their ranking will be below 1; if they are slower than average, their ranking will be above 1. As a puzzler competes in more events, their ranking is adjusted based on how their completion times compare to other puzzlers' times and JPARs. Here's how it works:

  • For an event without previously ranked competitors, each person's time is divided by the event's average completion time, and the result is their JPAR.

    • Example: If Puzzler 1 finishes in 50 minutes, but the average time is 60 minutes, their JPAR is 0.833 (50 ÷ 60), indicating they're about 16.7% faster than average.

  • For an event with previously ranked competitors, each person’s completion time is divided by their JPAR (if they have one) to establish an Expected Event Average.

    • Example: If Puzzler 1 finishes in 50 minutes and has a 0.833 JPAR, their Expected Event Average is 50 ÷ 0.833 ≈ 60 minutes; that is, if they performed exactly as expected, the event average would be 60 minutes.

  • All competitors’ Expected Event Averages are then averaged together to create a Final Expected Event Average.

    • Example: If Puzzler 2 finishes in 54 minutes but has a 1.2 JPAR, their Expected Event Average would be only 45 minutes (54 ÷ 1.2). Based on Puzzler 1 (from above) and Puzzler 2, the Final Expected Event Average would be 52 minutes 30 seconds, the average of 60 and 45 minutes.

  • From there, all competitors' times are divided by the Final Expected Event Average, which generates an Event JPAR for each person. If a competitor was previously unranked, this Event JPAR becomes their JPAR. If the competitor was previously ranked, their Event JPAR is averaged with their current JPAR to create a new adjusted ranking (see the code sketch after this list).

    • Example: The Event JPAR for Puzzler 2 is 1.029 (54 ÷ 52.5). Averaged with their current JPAR of 1.2, their new ranking is about 1.114. Their faster-than-usual time is reflected as a lower JPAR.
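
For concreteness, here is a minimal Python sketch of the calculation the list above describes, run against the worked numbers for Puzzler 1 and Puzzler 2. The function and variable names are our own illustration (nothing here is published JPA code), and the final step uses a plain 50/50 average rather than the recency weighting mentioned in the note below.

```python
from statistics import mean

def event_jpar(times, jpars):
    """Return updated JPARs after one event.

    times : dict of puzzler -> completion time in minutes
    jpars : dict of puzzler -> existing JPAR (unranked puzzlers are absent)

    Names here are illustrative only, not published JPA code.
    """
    ranked = [p for p in times if p in jpars]

    if not ranked:
        # No previously ranked competitors: divide each time by the raw average.
        avg = mean(times.values())
        return {p: t / avg for p, t in times.items()}

    # Each ranked competitor's time implies an Expected Event Average;
    # averaging those gives the Final Expected Event Average.
    final_expected_avg = mean(times[p] / jpars[p] for p in ranked)

    new_jpars = {}
    for p, t in times.items():
        e = t / final_expected_avg  # this person's Event JPAR
        # Previously ranked puzzlers: average the Event JPAR with the
        # existing JPAR (a plain 50/50 average, matching the worked example;
        # the note below says v2 actually weights recent results more).
        new_jpars[p] = (e + jpars[p]) / 2 if p in jpars else e
    return new_jpars

# Worked example: Puzzler 1 (50 min, JPAR 0.833), Puzzler 2 (54 min, JPAR 1.2).
# Final Expected Event Average = (50/0.833 + 54/1.2) / 2 ≈ 52.5 minutes;
# Puzzler 2's Event JPAR = 54 / 52.5 ≈ 1.029, averaged with 1.2 -> ≈ 1.114.
print(event_jpar({"Puzzler 1": 50, "Puzzler 2": 54},
                 {"Puzzler 1": 0.833, "Puzzler 2": 1.2}))
```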

Note: This is JPAR v2. The previous version of JPAR was designed when most of our results came from online competitions, where many members took part in most events, providing a stable comparison across events. JPAR had to be redesigned because in-person competitions have very few participants in common. In simple terms, we determine an expected average time for each event using participants' existing JPAR scores, which is in turn used to generate a new JPAR score for every participant. v2 also incorporates recency weighting, so newer contest results count for much more.
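
The exact recency weights are not published here, so purely as an illustration of the idea, the sketch below recomputes a puzzler's ranking as a weighted average of their Event JPARs, discounting each older result by a constant factor. The decay value of 0.5 is an assumed parameter for demonstration, not a JPA figure.

```python
def recency_weighted_jpar(event_jpars, decay=0.5):
    """Weighted average of a puzzler's Event JPARs, oldest first.

    decay is an assumed per-event discount (not a published JPA parameter):
    the newest result has weight 1.0, the one before it decay, the one
    before that decay**2, and so on.
    """
    n = len(event_jpars)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * j for w, j in zip(weights, event_jpars)) / sum(weights)

# The newest result (1.029) dominates the older ones under this scheme.
print(recency_weighted_jpar([1.25, 1.2, 1.029]))  # ≈ 1.109
```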