Comparison · May 13, 2026 · 10 min read

FSRS-5 vs SM-2: A Technical Comparison of Spaced Repetition Algorithms

SM-2 or FSRS-5? A full technical comparison: memory model, ease hell, personalization, and the Expertium 700-million-review benchmark. FSRS-5 cuts daily reviews by roughly 25% at identical retention.

1. SM-2 vs FSRS-5: the verdict in 30 seconds

SM-2 (Wozniak, 1987) dominated spaced repetition for 35 years. FSRS-5 (Jarrett Ye and contributors, 2022-2024) has rendered it obsolete on nearly every measurable criterion.

SM-2 (SuperMemo, 1987)

Static rule-based algorithm, global parameters, fixed ease factor.

  • Global e-factor (no dynamic per-card model)
  • Intervals calculated by fixed rules
  • Ease hell: irreversible e-factor floor at 1.3
  • No automatic personalization
  • Retention accuracy ±15% on benchmarks

FSRS-5 (2022-2024)

Differentiable memory model, three state variables per card, trained on your review history.

  • Stability + Difficulty + Retrievability per card
  • Loss function minimized on your actual review history
  • Zero ease hell: stability recalculated dynamically
  • Personalized optimization after 1,000+ reviews
  • Retention accuracy ±5% on benchmarks

Verdict: FSRS-5 reduces daily reviews by ~25% for identical 90% retention. If you are still using SM-2 today, you are doing more reviews than necessary.

2. How SM-2 works (and why it breaks down)

Piotr Wozniak published SM-2 in 1987 when personal computers had 640 KB of RAM. The algorithm needed to be lightweight, computable by hand, and functional without detailed history.

The principle is simple: each card has an e-factor (ease factor), initialized at 2.5. After each review, you rate your response from 0 to 5. The next interval is calculated as follows:

  • Review 1 → 1 day
  • Review 2 → 6 days
  • Review n → interval(n-1) × e-factor

The e-factor updates based on the rating: any grade below 4 lowers it, and a grade below 3 also counts as a lapse that resets the interval. Once the e-factor hits 1.3 (the floor), the card reappears every few days forever, no matter how many times you review it correctly.
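The rules above fit in a dozen lines. Here is a minimal Python sketch of one SM-2 update step; the function and variable names are our own, not taken from any particular implementation:

```python
def sm2_update(ef, reps, interval, q):
    """One SM-2 step: q is the 0-5 grade, ef the ease factor.

    Returns the updated (ease factor, repetition count, interval in days).
    A minimal sketch of Wozniak's 1987 rules.
    """
    if q < 3:
        # Failed recall: repetition count resets, interval starts over.
        reps, interval = 0, 1
    else:
        reps += 1
        if reps == 1:
            interval = 1
        elif reps == 2:
            interval = 6
        else:
            interval = round(interval * ef)
    # Ease-factor update, clamped at the 1.3 floor that causes "ease hell".
    ef = max(1.3, ef + (0.1 - (5 - q) * (0.08 + (5 - q) * 0.02)))
    return ef, reps, interval
```

Running a card at e-factor 1.3 through this function shows the trap: even a string of correct answers never lifts it off the floor, because the update for middling grades is always negative or zero.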

This is ease hell, a term coined by the Anki community for the phenomenon where a "difficult" deck accumulates hundreds of cards stuck at e-factor 1.3, saturating review sessions without ever progressing toward long intervals.

SM-2 has another structural flaw: its parameters are global. The algorithm applies the same rules to all your cards, regardless of your personal memory profile. A medical student reviewing 500 cards per day and a high school student reviewing 20 receive the same calculated intervals from the same formulas.

The 3 key memory variables (DSR model)

FSRS is built on the DSR model (Difficulty-Stability-Retrievability), formalized by Averell and Heathcote (2011) and implemented by Jarrett Ye (2022):

  • Stability (S): how long your memory stays above the retrieval threshold. A stable card allows longer intervals.
  • Difficulty (D): an intrinsic property of the card, independent of when you review it. Changes slowly over time.
  • Retrievability (R): the probability of remembering the card right now, based on time elapsed since last review and S.
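Retrievability has a closed form in FSRS-4.5/FSRS-5: a power forgetting curve calibrated so that recall probability is exactly 90% when elapsed time equals stability. A minimal sketch (variable names are illustrative):

```python
DECAY = -0.5
FACTOR = 19 / 81  # chosen so that R(t=S) is exactly 0.9

def retrievability(t, s):
    """Probability of recalling a card t days after the last review,
    given its current stability s, per the FSRS power forgetting curve."""
    return (1 + FACTOR * t / s) ** DECAY
```

At t = 0 the probability is 1, at t = S it is 0.9 by construction, and it decays as a power law rather than a pure exponential, which matches observed review data better over long horizons.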

SM-2 does not implement any of these three concepts explicitly. The e-factor is a rough approximation of D, and intervals are an approximation of S. R does not exist in SM-2.

3. Why FSRS-5 outperforms SM-2: 4 technical reasons

1. Memory model grounded in science

FSRS-5 implements memory decay equations based on the DSR model. Each card has its own Stability, Difficulty, and Retrievability calculated precisely. SM-2 uses a single e-factor per card with no rigorous theoretical basis.

2. Personalization from your review history

FSRS-5 optimizes its 19 internal parameters on your personal review history via gradient descent. The algorithm literally learns how your memory works. SM-2 uses the same constants for everyone, as defined by Wozniak in 1987.
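To illustrate the principle (not the actual FSRS optimizer, which fits its global weights jointly across all cards), here is a toy sketch that fits a single stability value to one card's review history by gradient descent on the log-loss, using the FSRS power forgetting curve:

```python
import math

def log_loss(s, reviews):
    """Mean negative log-likelihood of observed outcomes under stability s.

    reviews: list of (elapsed_days, remembered) pairs, remembered being 0 or 1.
    """
    total = 0.0
    for t, y in reviews:
        r = (1 + (19 / 81) * t / s) ** -0.5  # FSRS power forgetting curve
        r = min(max(r, 1e-6), 1 - 1e-6)      # clamp to avoid log(0)
        total += -(y * math.log(r) + (1 - y) * math.log(1 - r))
    return total / len(reviews)

def fit_stability(reviews, steps=500, lr=0.2):
    """Toy gradient descent on log-stability (so s stays positive)."""
    log_s, eps = 0.0, 1e-4
    for _ in range(steps):
        # Central-difference numerical gradient of the loss w.r.t. log_s.
        g = (log_loss(math.exp(log_s + eps), reviews)
             - log_loss(math.exp(log_s - eps), reviews)) / (2 * eps)
        log_s -= lr * g
    return math.exp(log_s)
```

A history of successful recalls at long intervals pushes the fitted stability up; repeated lapses pull it down. The real optimizer applies the same idea to the full parameter vector instead of a single card's state.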

3. Zero ease hell

In FSRS-5, Stability is recalculated at each review based on your actual result. A difficult card does not get stuck indefinitely: if you learn it correctly, its Stability increases and its interval grows. The SM-2 e-factor floor does not exist.
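The published FSRS stability-growth equation for a successful review has the general multiplicative shape sketched below. The constants a, b, c are illustrative placeholders standing in for learned FSRS weights, not the trained FSRS-5 defaults:

```python
import math

def next_stability(s, d, r, a=1.5, b=0.1, c=1.0):
    """Stability after a successful review, in the general FSRS form.

    s: current stability, d: difficulty (1-10), r: retrievability at
    review time. a, b, c stand in for learned weights and are
    illustrative placeholders, not the trained FSRS-5 values.
    """
    growth = math.exp(a) * (11 - d) * s ** (-b) * (math.exp(c * (1 - r)) - 1)
    return s * (1 + growth)
```

Note the structural contrast with SM-2: the growth factor is always positive for a success, so even a high-difficulty card keeps making progress, and reviewing at lower retrievability (longer gaps) yields a bigger stability jump, which is the spacing effect expressed as an equation.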

4. Loss function optimization

FSRS-5 minimizes a loss function that measures the gap between predicted retrievability and the observed outcome (remembered / forgotten). This is a standard machine learning approach. SM-2 adjusts its parameters with if/then rules defined manually nearly four decades ago.

Point 4 deserves elaboration. SM-2 is fundamentally an expert rule system: after a grade q, the e-factor changes by −(0.8 − 0.28 × q + 0.02 × q²), which works out to +0.1 for a perfect 5, no change for a 4, and an ever larger drop for lower grades. These rules were calibrated by Wozniak on his own personal data.

FSRS-5 takes a different approach: it defines a differentiable memory model, initializes default parameters pre-trained on millions of anonymous reviews, then fine-tunes them on your personal data. This is why FSRS improves over time while SM-2 stagnates.

4. The Expertium benchmark: 700 million reviews

In 2023-2024, Expertium published a comparative analysis of spaced repetition algorithms on a dataset of 700 million anonymized Anki reviews. This is the most rigorous public benchmark available to date.

Metrics used:

  • Log-loss: how accurately the algorithm predicts whether you will remember or forget a card
  • RMSE (Root Mean Square Error) on predicted vs observed retention
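Both metrics are simple to compute from predicted recall probabilities and observed 0/1 outcomes. A sketch (the benchmark's exact calibration-binned RMSE variant is not reproduced here):

```python
import math

def eval_metrics(preds, outcomes):
    """Log-loss and RMSE between predicted recall probabilities (0-1)
    and observed outcomes (1 = remembered, 0 = forgotten)."""
    n = len(preds)
    ll = -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
              for p, y in zip(preds, outcomes)) / n
    rmse = math.sqrt(sum((p - y) ** 2 for p, y in zip(preds, outcomes)) / n)
    return ll, rmse
```

A scheduler that predicts confidently and correctly (e.g. 0.99 for a remembered card) scores near zero on both metrics; one that hedges at 0.5 everywhere scores log(2) ≈ 0.693 log-loss, which is why the gap between 0.354 and 0.291 below is meaningful.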

Key benchmark results:

| Algorithm | Log-loss | Retention RMSE |
|---|---|---|
| SM-2 | 0.354 | 16.2% |
| FSRS-4.5 | 0.298 | 6.1% |
| FSRS-5 | 0.291 | 5.3% |

What this means in practice: to maintain a 90% retention target, FSRS-5 schedules the right reviews at the right time with ±5.3% accuracy. SM-2 deviates by ±16.2%. This imprecision translates to reviews that are too frequent on easy cards (wasted time) and forgotten cards where the interval was underestimated.
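Scheduling "the right reviews at the right time" amounts to inverting the forgetting curve: given a card's stability and a target retention, solve for the elapsed time at which predicted recall drops to that target. A sketch, assuming the FSRS power curve from earlier (names are illustrative):

```python
DECAY = -0.5
FACTOR = 19 / 81

def next_interval(stability, target_retention=0.9):
    """Days until predicted recall decays to the target retention,
    by inverting R = (1 + FACTOR * t / S) ** DECAY for t."""
    return (stability / FACTOR) * (target_retention ** (1 / DECAY) - 1)
```

At the default 90% target the interval simply equals the stability; lowering the target to 80% stretches intervals (fewer reviews, more forgetting), raising it compresses them. SM-2 exposes no such dial because it never models recall probability at all.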

The Expertium author also calculated the impact on review volume: with optimized FSRS-5, the average user sees ~25% fewer daily reviews for identical 90% retention. On a 2,000-card deck with 100 daily reviews under SM-2, that is ~25 reviews saved every day — roughly 15 minutes.

What Anki officially says

Since version 23.10, Anki recommends FSRS as the default algorithm and states in its official documentation: "FSRS can improve memory efficiency, allowing you to remember more with fewer reviews." Migration is automatic for new profiles.

This app saves me an enormous amount of time on reviews and makes them concrete and active. The flashcards it creates are concise without missing anything.

tinitoumasun, App Store FR · 5★ · January 2026 (translated)

5. Which algorithm to choose for your use case?

The answer is almost always FSRS-5. Here are the few real exceptions:

Stick with SM-2 if:

  • You use legacy software (SuperMemo < v16, or very old Anki plugins) that does not support FSRS
  • You have several years of SM-2 review history you absolutely do not want to disturb — in that case, migrate gradually

Switch to FSRS-5 if:

  • You are starting a new deck (no reason to begin with SM-2 in 2026)
  • You are suffering from ease hell (cards stuck at e-factor 1.3 that never progress)
  • Your review sessions run longer than expected without perceptible progress
  • You want intervals adapted to your personal memory profile

Diane AI removes the need to choose: FSRS-5 is active by default, with no configuration required, and parameter optimization runs automatically in the background as you review.

To go deeper on the theoretical foundations, check the spaced repetition page, the dedicated FSRS method page, or the Anki alternatives comparison if you are still deciding between tools.

And if you want to understand why active retrieval alongside FSRS is what actually makes the difference, read the active recall guide.


Review smarter with FSRS-5

Diane AI uses FSRS-5 by default. Create your flashcards in seconds and let the algorithm optimize your reviews automatically.

Try Diane AI for free