The Goal of Review: One Edge, One Leak, One Change
A good review does not produce insights. It produces constraints.
If your review ends with ten notes and zero changes, your journal is entertainment. The job of a weekly review is to answer three questions:
- What is working that I should repeat (one edge)?
- What is costing me that I should prevent (one leak)?
- What is the smallest enforceable change I will run next week (one constraint)?
This framing matters because trading data is infinite. If you do not force a small output, you will drown in charts, opinions, and hindsight narratives.
Use R (risk units) so you can compare apples to apples.
Dollars are emotional and account-size dependent. R is process. If you risk $200 on one trade and $50 on another, R normalizes the result. Your weekly review should be able to say: "This setup is +0.30R average" or "Late entries are -0.40R average." Those sentences are actionable.
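The R normalization described above can be sketched in a few lines. The `pnl` and `risk` field names are illustrative, not part of any fixed journal schema:

```python
# Sketch: normalize trade results into R multiples so weeks are comparable.
# Field names (pnl, risk) are illustrative, not a fixed schema.

def r_multiple(pnl: float, risk: float) -> float:
    """Result in risk units: +1.0 means you made exactly what you risked."""
    if risk <= 0:
        raise ValueError("risk must be positive")
    return pnl / risk

trades = [
    {"pnl": 120.0, "risk": 200.0},  # +0.6R
    {"pnl": -50.0, "risk": 50.0},   # -1.0R
]
avg_r = sum(r_multiple(t["pnl"], t["risk"]) for t in trades) / len(trades)
print(round(avg_r, 2))  # -0.2
```

Once every trade is stored as an R multiple, "this setup is +0.30R average" is a one-line aggregation instead of a mental adjustment for size.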
Your review should be boring on purpose.
A great review feels repetitive because it is a loop. You run it weekly, you ship one change, you measure, and you iterate. If the loop is exciting, it is probably too complex to keep.
Key Points
- Weekly review output should be one edge, one leak, one constraint.
- Measure in R to separate execution from account size.
- Boring and repeatable beats deep and fragile.
The Review Stack: Daily Debrief, Weekly Review, Monthly Reset
You need two cadences: capture and correction.
Capture is daily (or per session). Correction is weekly. Monthly is optional, but powerful when you want to adjust your whole system.
1) Daily debrief (2 to 5 minutes)
The daily debrief is not analysis. It is hygiene. You are closing loops so data stays clean.
- Log missing fields (entry, stop, size, setup label)
- Tag state (calm, FOMO, tilt, fatigue)
- Mark rule breaks (late entry, moved stop, oversize)
- Write one sentence: "What would I do differently if I replayed this session?"
2) Weekly review (45 to 90 minutes)
Weekly is where behavior becomes visible without noise. You are looking for repeatable patterns, not stories.
- Compute core metrics in R
- Segment by setup and by state
- Identify one edge to repeat and one leak to cut
- Choose one constraint for next week
3) Monthly reset (30 to 60 minutes, optional)
Monthly is where you simplify. Remove tags you are not using. Update your playbook. Tighten guardrails if you are leaking. Expand only if you have stable execution.
If you only do one thing: do the weekly review. If weekly exists, daily becomes easier because it has a purpose.
Key Points
- Daily is hygiene; weekly is correction.
- Weekly review is where behavior patterns show up.
- Monthly is for simplification and system updates.
Prep: Make the Dataset Reviewable (Fast)
A dataset is reviewable when you can sort it without thinking.
Before you run numbers, make sure the week is clean enough to compare. You do not need perfect detail, but you do need consistent fields.
Minimum fields (non-negotiable)
- Date/time
- Symbol/instrument
- Direction
- Entry and exit
- Planned stop (or invalidation level)
- Size (or notional)
- Risk per trade (in dollars or percent)
- Setup label
- State tag
- Rule break tags (if any)
If you are missing one field, do not add five more. Fix the missing field first. A journal dies from schema creep.
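A tiny completeness check makes "fix the missing field first" mechanical. The field names below mirror the list above but are illustrative; adapt them to your own journal columns:

```python
# Sketch: a completeness check over the minimum field set. Field names
# are illustrative; adapt them to your own journal columns.

REQUIRED_FIELDS = {
    "datetime", "symbol", "direction", "entry", "exit",
    "planned_stop", "size", "risk", "setup", "state",
}

def missing_fields(trade: dict) -> set:
    """Return the required fields a trade row is missing."""
    return REQUIRED_FIELDS - trade.keys()

row = {"datetime": "2024-05-03 09:42", "symbol": "BTCUSD", "direction": "long"}
print(sorted(missing_fields(row)))
```

Run it over the week's rows before computing anything; a row with gaps gets fixed, not analyzed.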
Normalize your tags once per week.
If you have three versions of the same tag ("late entry", "late-entry", "chase"), your review will lie to you. Pick one naming convention and stick to it.
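Tag normalization can be a one-time alias map plus a fallback rule. The aliases here are examples; build yours from the variants that actually appear in your journal:

```python
# Sketch: collapse tag variants into one canonical name so segmentation
# doesn't split the same behavior across labels. The alias map is an
# example; build yours from the variants in your own journal.

TAG_ALIASES = {
    "late entry": "late_entry",
    "chase": "late_entry",
    "moved stop": "moved_stop",
    "stop moved": "moved_stop",
}

def normalize_tag(tag: str) -> str:
    """Lowercase, strip, unify separators, then apply the alias map."""
    t = tag.strip().lower().replace("-", " ").replace("_", " ")
    return TAG_ALIASES.get(t, t.replace(" ", "_"))

print(normalize_tag("late-entry"))  # late_entry
print(normalize_tag("Chase"))       # late_entry
```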
Separate planned vs actual behavior.
Many traders only log what happened. Review requires intent. If you planned a 1R stop and moved it, that is the point. If you planned to take only A setups and took C setups, that is the point.
Your goal is to make review fast enough that you cannot avoid it.
Key Points
- Reviewability is consistency, not detail.
- Keep fields minimal and stable.
- Normalize tags weekly so segmentation works.
Metrics That Matter (and the Ones That Waste Time)
Compute a small set of metrics you can act on.
The purpose of metrics is not to impress you. It is to reveal what to repeat and what to prevent.
Core metrics (weekly)
- Trades taken (count)
- Win rate (but never alone)
- Average win and average loss (in R)
- Expectancy (in R)
- Largest win/loss (in R)
- Max drawdown (weekly) and biggest red day/session
Expectancy is the anchor: (winRate * avgWinR) - ((1 - winRate) * avgLossR), where avgLossR is the average loss expressed as a positive number. If expectancy is positive and execution quality is stable, you have something to scale slowly. If expectancy is negative, you either have a strategy issue or an execution leak.
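As a sketch, the anchor formula computed from a week's R multiples (the sample numbers are illustrative, and the average loss is taken as a positive magnitude):

```python
# Sketch: expectancy in R from a list of per-trade R multiples.
# (winRate * avgWinR) - ((1 - winRate) * avgLossR), avgLossR positive.

def expectancy(r_results: list[float]) -> float:
    """Expected R per trade for a set of results."""
    if not r_results:
        return 0.0
    wins = [r for r in r_results if r > 0]
    losses = [r for r in r_results if r <= 0]
    win_rate = len(wins) / len(r_results)
    avg_win = sum(wins) / len(wins) if wins else 0.0
    avg_loss = abs(sum(losses) / len(losses)) if losses else 0.0
    return win_rate * avg_win - (1 - win_rate) * avg_loss

week = [2.0, -1.0, 0.5, -1.0, 1.5]
print(round(expectancy(week), 2))  # 0.4
```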
Useful secondary metrics (if you have clean data)
- MFE/MAE (max favorable/adverse excursion)
- Slippage and fees as a percent of gross
- Time in trade (short vs long holds)
- R distribution (how often you hit +2R, -1R, etc.)
Avoid vanity metrics
Avoid metrics that invite storytelling without decisions: top coins, "best day" screenshots, and any ratio you cannot explain in a sentence. If you cannot answer "what do I change next week because of this number?" it does not belong in the weekly review.
Key Points
- Expectancy in R is more useful than PnL in dollars.
- Win rate is meaningless without average win/loss.
- Keep weekly metrics small and decision-linked.
Segmentation: Review by Setup and State (Not by Coin)
Segmentation is where the signal appears.
A flat week-level summary hides everything. You need to slice the data into buckets that map to decisions you can control.
Segment by setup label
Your setup labels should be simple and stable. If you have 30 setups, you have none. Start with 3 to 7. In review, rank setups by expectancy in R and by trade count. A setup with +1.2R average on 3 trades is not an edge yet. A setup with +0.25R on 60 trades might be.
Segment by state tag
State tags (tilt, FOMO, fatigue, calm) are often the fastest leak detector. Many traders discover their strategy is fine and their state is the problem. If fatigue trades are negative expectancy, your fix is a fatigue constraint, not a new indicator.
Segment by session block
Time blocks reveal fatigue, boredom trades, and late-session drift. If your last hour is consistently negative, the best edge you can build might be a hard stop time.
Segment by rule breaks
This is the most uncomfortable slice and the most valuable. Sort trades tagged "moved stop" or "oversize". The goal is not shame. The goal is to measure cost so you can justify a constraint.
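All four slices above can be produced by one small grouping helper. The trade rows, labels, and field names here are illustrative:

```python
# Sketch: bucket trades by any field (setup, state, session block, rule
# break tag) and report average R with sample counts. Rows are illustrative.
from collections import defaultdict

def segment(trades: list[dict], key: str) -> dict:
    """Return {label: (avg_R, count)} for the given field."""
    buckets = defaultdict(list)
    for t in trades:
        buckets[t[key]].append(t["r"])
    return {k: (sum(v) / len(v), len(v)) for k, v in buckets.items()}

trades = [
    {"setup": "breakout", "state": "calm", "r": 0.8},
    {"setup": "breakout", "state": "fomo", "r": -1.0},
    {"setup": "pullback", "state": "calm", "r": 0.3},
    {"setup": "pullback", "state": "fatigue", "r": -1.2},
]
by_setup = segment(trades, "setup")
by_state = segment(trades, "state")
for label, (avg, n) in sorted(by_state.items()):
    print(label, round(avg, 2), n)
```

The same call works for every slice, which keeps the review fast: change the `key`, not the tooling.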
Key Points
- Segment by setup and by state to find real signal.
- Keep setup labels small and stable.
- Rule-break segmentation is the fastest way to find leaks.
Finding Edges and Leaks Without Overfitting
An edge is a repeatable pattern with enough samples.
Look for: positive expectancy, stable stop behavior, and execution quality that does not collapse under stress.
A quick edge checklist:
- 20+ samples (more is better)
- Positive expectancy in R
- Losses are normal and controlled (not blow-ups)
- You can describe the setup in one sentence
- You can define invalidation (where you're wrong) before entry
A leak is a repeatable behavior that drains expectancy.
Leaks often look like small decisions that compound: late entries, re-entry speed after loss, stop drift, and size creep. The journal makes them measurable.
A quick leak checklist:
- The behavior is observable (not a mood)
- It clusters in a condition (time block, state tag)
- It costs R on average
- It happens often enough to matter
Do not overfit a single week.
Weekly review is a cadence, not a verdict. If you see a candidate edge or leak, validate it across the last 4 to 8 weeks if possible. If the pattern persists, you ship a constraint. If it does not, you treat it as noise.
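One way to operationalize "validate across weeks" is a sign-persistence check on weekly average R. The rule and the 4-week threshold below are an assumption, not a standard statistical test; they only filter out one-week noise:

```python
# Sketch: a sign-persistence check for a candidate edge or leak across
# recent weeks. The 4-week threshold is an assumption, not a standard
# statistical test.

def persists(weekly_avg_r: list[float], min_weeks: int = 4) -> bool:
    """True if weekly average R kept the same sign in at least
    min_weeks of the weeks supplied."""
    if len(weekly_avg_r) < min_weeks:
        return False
    positive = sum(1 for r in weekly_avg_r if r > 0)
    negative = len(weekly_avg_r) - positive
    return positive >= min_weeks or negative >= min_weeks

print(persists([0.3, 0.1, 0.4, 0.2]))    # True: positive four weeks running
print(persists([0.3, -0.2, 0.1, -0.4]))  # False: mixed, treat as noise
```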
Key Points
- Edges require samples and stable execution.
- Leaks are observable behaviors tied to conditions.
- Validate across multiple weeks before making big changes.
Turn Review Into Action: Constraints That Prevent Damage
Constraints are rules you can actually follow under stress.
Most traders try to fix leaks with motivation: "I'll be more disciplined." Motivation fails on the exact day you need it. Constraints create a hard boundary.
Examples that work
- Cooldown after any rule break (10 to 30 minutes)
- Trade cap (e.g. 4 trades max)
- Time cap (hard stop time)
- Half-size after a moved stop
- No re-entry without a new level or new evidence
- Journal gate: no new trade until the last trade is logged
Choose constraints based on the leak you measured.
If the leak is late entries, a cooldown does not fix it. You need an entry window rule or a reduced-size rule on late entries. If the leak is revenge re-entry, you need a cooldown gate and a cap.
Measure the constraint by prevention, not feelings.
A constraint is good if it prevents trades you should not take. That can feel frustrating in the moment. In review, you should be able to say: "This rule prevented 3 impulse trades" or "This rule reduced moved-stop frequency by half."
Key Points
- Constraints beat motivation under stress.
- Match constraint to the measured leak.
- Judge a constraint by what it prevents.
A Weekly Review Template (Copy/Paste)
60-minute weekly review template
0 to 10 minutes: dataset hygiene
- fill missing fields
- normalize tags
- mark rule breaks

10 to 25 minutes: metrics snapshot
- trades count
- win rate
- avg win/loss in R
- expectancy in R
- largest win/loss
- max drawdown

25 to 45 minutes: segmentation
- top 3 setups by expectancy (with sample counts)
- worst 2 tags/states by expectancy
- worst time block (if applicable)
- rule break cost (avg R, frequency)

45 to 60 minutes: decisions
- One edge to repeat next week (write it)
- One leak to cut next week (write it)
- One constraint for next week (write it)
- One measurement to check next week (what will prove it worked?)
If you cannot complete the review in 60 minutes, remove a metric or remove a tag. Speed is a feature.
Key Points
- Timebox the review to force useful output.
- End with decisions you can run next week.
- If it's too slow, simplify the schema.
Common Review Mistakes (and Fixes)
Mistake: reviewing outcomes instead of execution.
Fix: grade trades as in-plan vs rule drift before looking at PnL. A winning rule-break is a negative signal.
Mistake: too many tags.
Fix: collapse to 3 to 7 tags you will use forever. If you cannot remember a tag in the moment, it does not exist.
Mistake: changing five things at once.
Fix: ship one constraint per week. Multiple changes create chaos and hide what actually worked.
Mistake: ignoring sample size.
Fix: treat low-sample "edges" as hypotheses. Validate across weeks before scaling risk.
Mistake: review that does not touch risk.
Fix: always check position sizing consistency and stop behavior. Most damage is risk drift, not strategy.
Key Points
- Execution-first review prevents outcome bias.
- Small tag sets and one constraint per week win long-term.
- Sample size and risk consistency matter more than clever analytics.