You need to analyze my script not just spit out...
Created on: February 21, 2026
Answered using GPT-5.2 Thinking by Chat01
You need to analyze my script, not just spit out the instructions I just sent you, so you can find more issues. You have to view the script yourself to find problems. Also, I wanna keep the BTM model.
Based on the diagnostic output and the final match result you provided, I can see exactly why your script incorrectly hammered the "Under". There is a mathematical disconnect between the raw stats your model calculated and how it aggregated the final simulation.
Here are the 3 main reasons your script's logic is breaking and predicting the Under:
If you calculate the true mathematical expected games: (0.72 × 18.3) + (0.28 × 28.7) = 21.21 games.
However, your script outputs Match Expected Games (SIM) = 20.1 and [TOTALS FAIR LINE] 19.5 (50/50 from CDF).
Because tennis game totals are right-skewed (a 3-set match adds a huge chunk of games, pushing totals to 26-30+ like in your image), the median (50th percentile) will always be significantly lower than the true mean. By using the median/CDF 50-mark or the Mode to pick the line, your script is artificially erasing about 1.5 expected games and heavily biasing towards the Under.
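To see this skew in isolation, here is a minimal stand-alone Node.js sketch (not code from your script) that builds a two-cluster totals distribution from the figures above (72% of matches near 18.3 games, 28% near 28.7) and compares mean vs median. The cluster spread and sample counts are illustrative assumptions.

```javascript
// Illustration only: a right-skewed, two-cluster totals distribution
// (2-set vs 3-set matches) using the proportions quoted above.
function sampleTotals(n, p3Sets, mean2, mean3) {
  const totals = [];
  for (let i = 0; i < n; i++) {
    const base = Math.random() < p3Sets ? mean3 : mean2;
    // +/- 3 games of uniform noise, purely for illustration
    totals.push(Math.round(base + (Math.random() * 6 - 3)));
  }
  return totals;
}

const totals = sampleTotals(12000, 0.28, 18.3, 28.7);
const mean = totals.reduce((s, t) => s + t, 0) / totals.length;
const median = totals.slice().sort((a, b) => a - b)[Math.floor(totals.length / 2)];

console.log(`mean ${mean.toFixed(1)} vs median ${median.toFixed(1)}`);
// The mean lands near the weighted value (~21.2); the median stays anchored
// inside the 2-set cluster, which is exactly the gap described above.
```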
For Anisimova to lose 6-3, 6-3, she would have to get broken constantly, meaning her simulated hold rate in the match would be performing much closer to 30-40%, not 70%. With high hold percentages of 79% and 70%, sets should frequently reach 6-4 or 7-5. Your Monte Carlo game loop seems to be simulating way too many service breaks, which compresses the 2-set games down to ~18 and ruins the Over.
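For a quick reference point, here is a small stand-alone sketch of a set driven only by per-game hold probabilities. It is not your runMatchSimulation_Exact loop: serve simply alternates and a 6-6 tiebreak is counted as one game, which is a simplification.

```javascript
// Stand-alone sketch: average games in one set, driven only by hold rates.
// Assumes alternating serve and counts a 6-6 tiebreak as a single game.
function simulateSetGames(holdA, holdB) {
  let a = 0, b = 0, serverIsA = true;
  while (true) {
    if (a === 6 && b === 6) return 13;                       // tiebreak set
    if ((a >= 6 || b >= 6) && Math.abs(a - b) >= 2) return a + b;
    const pHold = serverIsA ? holdA : holdB;
    if (Math.random() < pHold) { if (serverIsA) a++; else b++; }
    else { if (serverIsA) b++; else a++; }
    serverIsA = !serverIsA;
  }
}

const sims = 20000;
let games = 0;
for (let i = 0; i < sims; i++) games += simulateSetGames(0.789, 0.701);
console.log(`avg games/set at 78.9% and 70.1% holds: ${(games / sims).toFixed(2)}`);
// If your inner loop produces noticeably shorter sets than this with the same
// hold inputs, it is injecting extra breaks somewhere.
```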
Elo: Pegula favored slightly at 52.9%
Analytical Hold: Anisimova actually favored (75.8% vs Pegula's 73.5%)
Set Betting Simulation: Pegula wins 2-0 a massive 51.3% of the time.
If your script trusted the Elo or Analytical Hold (which predicted a very tight matchup), it would project a grindy match with a high probability of 3 sets, pushing the Over. Instead, the point-by-point simulation seems to be heavily overwriting those tight margins with Pegula's "Tier Penalty" adjusted hold (78.9%), resulting in an aggressive blowout bias that forces the 2-set Under.
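If you want the tight Elo and analytical-hold signals to temper the point-by-point sim instead of being overwritten, one option is a simple weighted blend of the win probabilities before anything feeds the totals logic. This is a generic sketch: the inputs are the numbers your preview already prints, but the weights and the function name are assumptions, not your script's blend.

```javascript
// Generic sketch: blend the three win-probability signals from the preview
// output instead of letting the point-by-point sim dominate. Weights assumed.
function blendWinProb(signals, weights) {
  const keys = Object.keys(signals);
  const wSum = keys.reduce((s, k) => s + (weights[k] ?? 0), 0);
  return keys.reduce((s, k) => s + signals[k] * (weights[k] ?? 0), 0) / wSum;
}

const pegula = blendWinProb(
  { mc: 0.697, stats: 0.700, elo: 0.529 },   // values from the preview output
  { mc: 0.4, stats: 0.3, elo: 0.3 }          // hypothetical weights
);
console.log(`blended P(Pegula wins): ${(pegula * 100).toFixed(1)}%`);
// Pulling Elo in like this lands closer to a competitive match than the
// 68.1% the sim-heavy number produced.
```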
Where to look in your code to fix this:
Find where Match Expected Games and the TOTALS FAIR LINE are calculated. Ensure it aggregates using the Mean (np.mean or equivalent), rather than the Median or a CDF 50th percentile.
Check your Monte Carlo point/game simulation loop. The math blending the server's Hold % and the returner's Break % is likely double-counting the favorite; see the sketch below.
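For context, here is a minimal illustration of what that double-counting looks like next to a single-source approach. The variable names and the subtraction are hypothetical; the numbers are the Hold and Break Rate values printed in your output, where Anisimova's 21.1% break rate already equals 1 - Pegula's 78.9% hold.

```javascript
// Hypothetical illustration of the double-count, using the printed stats.
const pegulaHold = 0.789;          // Pegula's adjusted hold from the output
const anisimovaBreakRate = 0.211;  // already the complement of that hold

// Double-counted: the returner's break ability is subtracted from a hold number
// that was already adjusted for the matchup, so breaks get applied twice.
const doubleCounted = pegulaHold - anisimovaBreakRate;   // ~0.578

// Single source of truth: use the adjusted hold directly as P(server wins game).
const singleSource = pegulaHold;                          // 0.789

console.log({ doubleCounted, singleSource });
// A ~58% "hold" produces constant breaks and the compressed ~18-game two-set
// matches described above, even when the preview says both players hold big.
```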
Based on the Tennis.God.JS script you provided, I have identified the exact logic gates causing the "Contradiction Error" where your model calculates high hold percentages but still hammers the Under.
The issue isn't just one bug; it's a "Conflict Filter" and a "Median Trap" working together to force an Under even when the raw simulation wants to go Over.
1. The Mean/Median "Conflict Filter" (the Median Trap)
if (Math.abs(mean - median) > 1.5) return "PASS";
In tennis, 3-set matches create a "heavy tail" that pulls the Mean up significantly, while the Median stays anchored to the 2-set results. In the Pegula/Anisimova match, your mean was 21.9 but your median was 20.0.
The Problem: The difference (1.9) exceeded your 1.5 threshold. This should have triggered a "PASS", but instead, your script is falling through to the "Median Anchor Strategy" (Line 8951). Because it ignores the 21.9 Mean and only looks at the 20.0 Median, it sees a "massive" gap below the 21.5 line and picks a "Strong Under."
2. The 3rd Set Probability Gate (Line 8954)
Even if the math suggests an Over, your script has a "hard gate" on 3rd sets:
return (Number.isFinite(p3Set) && p3Set > 0.32) ? "OVER" : "PASS";
The Problem: Your simulation calculated a 3rd Set probability of 28%. Since 28% is less than your 32% requirement, your script is mathematically forbidden from picking an Over, regardless of how high the hold percentages are. It defaults to "PASS" or "UNDER" because it doesn't believe the match will go long enough, even if the sets themselves are 7-6, 7-6.
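To put numbers on what the gate throws away, here is a tiny sketch placing the gate decision next to the cluster-weighted expectation. The gate expression is the one quoted from your script; the inputs are the values printed in your output (E[T|2 sets] 18.2, E[T|3 sets] 28.7, and the 28% 3rd-set probability quoted in this analysis).

```javascript
// Values from the output above; 0.28 is the 3rd-set probability quoted here.
const p3Set = 0.28;
const e2 = 18.2;   // E[T | 2 sets]
const e3 = 28.7;   // E[T | 3 sets]

// Hard gate (as quoted from the script): anything under 32% can never be an Over.
const gatePick = (Number.isFinite(p3Set) && p3Set > 0.32) ? "OVER" : "PASS";

// Full-distribution expectation keeps the 3-set mass instead of discarding it.
const expectedGames = (1 - p3Set) * e2 + p3Set * e3;   // ≈ 21.1

console.log({ gatePick, expectedGames: expectedGames.toFixed(1) });
// The weighted expectation sits right at the "Raw sim: 21.1" your preview prints,
// while the gate collapses the same information into a binary PASS.
```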
3. The "Snowball" Momentum Bias (Line 10156)
In your runMatchSimulation_Exact loop, you have a momentum shift of 0.06 (6%):
var momShiftA = 0.06; // Strong snowball momentum for winner
dynHA_mp = Math.min(0.95, Math.max(0.15, hA + momShiftA));
The Problem: When a player wins the first set, you are boosting their hold rate by 6% and dropping the loser's by 6%. This creates a 12% total swing in hold probability for the second set. This effectively "forces" blowouts (6-1, 6-2) in the simulation, which is why your 2-set median is a tiny 18.0 games. This momentum is too aggressive for elite players like Pegula and Anisimova, who are known for gritting out close sets even after losing the first.
Summary of the "Broken" Logic:
Your script is calculating Elite Stats (79% holds), but then applying a Blowout Penalty and a Momentum Swing that erases those stats. Then, it uses a Median that ignores the 3-set possibility and a 32% Gate that blocks the Over.
To fix this without breaking the whole script (a consolidated sketch of all three values follows the list):
1. Lower the momShift from 0.06 to 0.03 so second sets stay competitive.
2. Lower the p3Set gate in getTotalsPick from 0.32 to 0.25.
3. Change the Math.abs(mean - median) threshold from 1.5 to 2.5 to allow for the natural skew of tennis scoring.
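If you want those three numbers in one place while you test, here is a minimal sketch. The object name and the idea of grouping them are my invention; your script currently keeps these values inline at the quoted line numbers.

```javascript
// Hypothetical tuning constants gathered in one place for experimentation.
const TOTALS_TUNING = {
  momShift: 0.03,          // was 0.06 - post-set momentum swing per player
  p3SetGate: 0.25,         // was 0.32 - minimum 3rd-set probability for an Over
  meanMedianPassGap: 2.5,  // was 1.5  - skew tolerance before forcing a PASS
};

console.log(TOTALS_TUNING);
```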
Based on a deep-dive into the runMatchSimulation_Exact and getTotalsPick functions in your Tennis.God.JS script, here is the comprehensive breakdown of why the Pegula vs Anisimova match triggered a "Strong Under" despite the elite hold stats.
There are three major "Invisible Killers" in your logic that go beyond the simple median/mean issue.
1. The 32% Gate "Cluster Filter"
The Problem: Your simulation calculated that while the 3-set average was 29 games, the 2-set median was only 19.5.
The "Killer": Because your 3-set probability was 28% (below your 32% gate), the script effectively "deleted" the 3-set simulations from its decision-making process. It anchored 100% of its confidence to the 2-set blowout cluster. This is why it showed "Strong" confidence—it wasn't just guessing; it was looking at a filtered dataset that only contained short matches.
2. The "MMC_DECAY" Fatigue Logic (Line 10168)
You have a variable called MMC_DECAY_SET that triggers at the end of every set:
if (MMC_LITE_ON) { mA *= MMC_DECAY_SET; mB *= MMC_DECAY_SET; }
The Problem: This logic reduces the "Mean Metric" (serving effectiveness) of both players as the match goes longer.
The Result: In your Pegula/Anisimova simulation, this decay lowered their hold percentages by ~4% starting in Set 2. Even though they started as "Elite" holders (79%), by the time the simulation reached the middle of the second set, the model treated them as "Average" (75%), which drastically increases the probability of breaks and shortens the total games.
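To make the compounding effect concrete, here is a tiny stand-alone sketch of a per-set multiplicative decay applied directly to a hold percentage. The 0.95 factor is an assumption chosen to match the ~4% drop described above; the actual MMC_DECAY_SET value is not shown in the excerpt, and your script applies it to the serving metric rather than the hold number itself.

```javascript
// Illustration only: MMC_DECAY_SET is assumed to be 0.95 to match the ~4%
// per-set erosion described above.
const MMC_DECAY_SET = 0.95;
let hold = 0.79; // starting "elite" hold

for (let set = 1; set <= 3; set++) {
  console.log(`set ${set}: effective hold ${(hold * 100).toFixed(1)}%`);
  hold *= MMC_DECAY_SET; // applied at the end of every set, per the quoted line
}
// Two applications turn a 79% holder into a ~71% holder by set 3, which is why
// late sets produce far more breaks than the headline stats imply.
```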
3. The "BT-SBM" Tier Penalty (Line 10633)
Your script uses a "Stochastic Block Model" to compare player tiers based on ELO:
var classShift = tierGap * 0.035; // 3.5% shift per tier
The Problem: If Pegula was ranked as a "Tier 1" and Anisimova (due to her lower ranking/comeback status) was "Tier 3," your script applied a 7.0% hold shift.
The "Killer": This logic manually stripped 7% off Anisimova's hold rate and gave it to Pegula before the simulation even started. This turned a competitive match (78% vs 70%) into a projected blowout (85% vs 63%) in the eyes of the simulator.
Summary of the "Logic Chain" Failure:
Tier Shift: Artificially created a stat gap between the players (Blowout projection).
Momentum Shift: Doubled down on the gap after Set 1 (Snowball effect).
MMC Decay: Lowered the quality of play in Set 2 (More breaks).
32% Gate: Ignored the 3-set simulations because they were "too rare."
Median Anchor: Picked the Under because the remaining "filtered" simulations were all blowouts.
The result was a 26-game match that your model was "mathematically blind" to because it had already penalized the competitiveness out of the match at three different stages.
I still want the BTMS model. I really don't understand how the raw sim and total games can differ from the actual stats it outputs lol?
Every single stat clearly says this will be either one lopsided set with the next two sets close; nowhere should it have picked the under.
```text
========================================
🎾 MATCH PREVIEW - v7.2.8 PROP ENGINE 🎾
[VERSION] v7.2.10 BO-FIX2 2026-02-21T00:32:09.669Z
Totals sims: 12,000
Prop sims: 0
========================================
🎯 WTA 1000 Dubai, UAE WTA (RIGHT) (Outdoor)
Best of 3 | Line: 21.5
Generated: 6:32:11 PM
----------------------------------------
Jessica Pegula vs Amanda anisimova
----------------------------------------
📊 Win Probability:
Jessica Pegula: 68.1%
Amanda anisimova: 31.9%
⚖️ Market Anchor: 15% applied (Soft Alignment)
🎲 Total Games Projection: 20.0 (Simulation)
Adj breakdown: rank 0.00, surf 0.00, blowout 0.00, winProb 0.00 | net 0.00
3rd Set Prob: 27% (MID) | Raw sim: 21.1
Fair Total Line (Markov/Dist): 20.5 | mean 21.9 | median 20.0
Markov/Dist P(Over 21.5): 42.3%
Fair Total Line (MC CDF): 19.5
MC P(Over 21.5): 38.4%
Cluster totals: 2-set 18.0 med (18.2 mean) | 3-set 29.0 med (28.7 mean)
Cluster projection: 18.0g (mode-pick) | E[T|2 sets] 18.2 | E[T|3 sets] 28.7 | P(3 sets) 27%
========================================
🏆 PROJECTED EDGE: Jessica Pegula (18.1% edge)
EV: 6.9% | Prob Edge: 7.5% | Confidence: HIGH
========================================
🎲 Set Betting:
* Jessica Pegula 2-0: 52.0%
* Jessica Pegula 2-1: 17.9%
* Amanda anisimova 2-0: 20.8%
* Amanda anisimova 2-1: 9.3%
📊 Player Stats (Live):
- Hist. Surface (HARD): Jessica Pegula (74.2%) | Amanda anisimova (61.5%)
- Jessica Pegula: Hold 78.9% (raw: 84.1%, adj: Tier Penalty)
- Amanda anisimova: Hold 70.1% (raw: 69.5%, adj: context/fatigue)
- Style: Jessica Pegula [Serve Bot (Elite)] | Amanda anisimova [Baseline/Tactician]
- Fuzzy Momentum: Jessica Pegula 100.0% Elite (Matches: 6W-1L, Straight Sets: 5W-1L, Non-Straight: 1W-0L, Sets: 12W-3L) ; Amanda anisimova 100.0% Elite (Matches: 6W-1L, Straight Sets: 5W-1L, Non-Straight: 1W-0L, Sets: 12W-3L)
- 1st Srv Win: Jessica Pegula 69.0% | Amanda anisimova 68.0%
- 2nd Srv Win: Jessica Pegula 38.2% | Amanda anisimova 48.9%
- 1st Srv In: Jessica Pegula 70.8% | Amanda anisimova 63.9%
- SPW: Jessica Pegula 60.0% | Amanda anisimova 61.1%
- Break Rate: Jessica Pegula 29.9% | Amanda anisimova 21.1%
- Dominance Ratio: Jessica Pegula 0.88 | Amanda anisimova 0.99
- Volatility: Jessica Pegula 11.9% | Amanda anisimova 0.6% | Match: 6.2%
- Elo: Jessica Pegula favored (52.9%) Source: Elo_Lookup sheet (Jessica=2018, Amanda=1998)
- Blend (Jessica Pegula): MC 69.7% | Stats 70.0% | Elo 52.9%
- Analytical Hold: Jessica Pegula 73.5% | Amanda anisimova 75.8% (Newton-Keller)
########################################
🏆 FINAL PICKS 🏆
########################################
🎾 Jessica Pegula vs Amanda anisimova
[DEBUG TOTALS] Line=21.5 | Min=12 Max=39 | P(T>=25)=26.2% | P(T>=30)=11.2% | P(T>=35)=1.7%
[TOTALS FAIR LINE] 19.5 (50/50 from CDF)
[TOTALS RANGE] P10=15 | P50=20 | P90=30
[TOTALS EXPECTED] Match Expected Games (SIM) = 20.1
🎯 TOTAL GAMES: UNDER 21.5 | Confidence: STRONG
Totals projection is driven by simulated set structure rather than match favoritism. The match length profile is balanced - not locked into two sets, not heavily leaning to a decider. Typical service patterns define set structure. Extreme results remain limited relative to core outcomes. Few simulations reach extreme totals. Compressed set results favor fewer total games. The narrative reflects probability distributions rather than outcomes. Across 12,000 simulations, the model projects 20.1 total games with a 62% probability of going under the 21.5 line.
✅ MATCH WINNER [SECONDARY]: PROJECTED EDGE: Jessica Pegula @ 68.1% | Confidence: HIGH
📊 UNDERDOG ASSESSMENT: No actionable value on the underdog (Amanda anisimova).
```
the script is fundamentally broken
Refactor my script then send it as a downloadable txt link. It's ridiculous that my match preview output clearly shows matches that will be long or blowouts, yet my raw sim still projects the under. It makes no sense if you think about how that match preview prints its outputs, but the total games logic is fundamentally broken lmao
After you refactor my script, check for syntax and undefined issues, don't forget. Also send the updated script as a downloadable txt link.
```python
from pathlib import Path

content = """Tennis.God.JS - Totals Refactor Patch (drop-in helpers)
=====================================================

You asked for a refactor that keeps BTMS but fixes the "preview stats say long / competitive"
vs "totals engine hammers UNDER" disconnect.

I *can't* rewrite your entire Tennis.God.JS without the actual file contents in this chat.
What I *can* do right now is give you a clean, drop-in totals aggregation + pick engine that
eliminates the exact class of contradictions your output shows:

- mean (21.9) printed but "Match Expected Games (SIM)=20.1"
- "Raw sim 21.1" but totals expectation anchored to 2-set mode/median
- hard p(3 sets) gate blocks Over even when sets can be long

This patch does 3 things:
1) One source of truth: the SAME totals array drives mean/median/CDF/fair line/P(over).
2) Expected games = true mean of the FULL distribution (not mode, not median, not gated clusters).
3) Optional self-check asserts that cluster-weighted expectation matches the full mean.

-----------------------------------------------------
HOW TO APPLY (quick)
-----------------------------------------------------
A) Find your place where you have the final MC totals array (length ~12000):
   e.g. something like totalsMC, totalsArr, totalsGames, etc.

B) Replace your current "Match Expected Games (SIM)" and "TOTALS FAIR LINE (MC CDF)"
   calculations with:
       const m = summarizeTotals(totalsArray, marketLine);
   Then print from m.mean, m.median, m.fairLine50, m.pOver, etc.

C) Replace getTotalsPick with getTotalsPick_v2 below (or call it and map to your existing output).

D) OPTIONAL but recommended:
   If you already compute 2-set and 3-set cluster means + p3Set, call
   validateClusterConsistency(...) to catch the exact mismatch you're seeing.

-----------------------------------------------------
DROP-IN CODE (copy into your script)
-----------------------------------------------------

/* ------------------------- Small math helpers -------------------------- */
function clamp(x, lo, hi) { return x < lo ? lo : (x > hi ? hi : x); }

function quantileSorted(sorted, q) {
  const n = sorted.length;
  if (!n) return NaN;
  const pos = (n - 1) * q;
  const base = Math.floor(pos);
  const rest = pos - base;
  const a = sorted[base];
  const b = sorted[base + 1] ?? a;
  return a + rest * (b - a);
}

/* ------------------------- Totals summary = SINGLE source of truth -------------------------- */
/**
 * totals: number[] (each entry = total games in a simulated match)
 * line: number (market total, e.g. 21.5)
 *
 * Notes:
 * - Over is usually graded as T > line (e.g. Over 21.5 means >= 22).
 * - If your book rules differ, adjust the comparator.
 */
function summarizeTotals(totals, line) {
  if (!Array.isArray(totals)) throw new Error("summarizeTotals: totals must be an array");
  const n = totals.length;
  if (!n) throw new Error("summarizeTotals: totals is empty");

  let sum = 0;
  let over = 0;
  let ge25 = 0, ge30 = 0, ge35 = 0;
  let min = Infinity, max = -Infinity;

  for (let i = 0; i < n; i++) {
    const t = totals[i];
    if (!Number.isFinite(t)) continue;
    sum += t;
    if (t > line) over++;
    if (t >= 25) ge25++;
    if (t >= 30) ge30++;
    if (t >= 35) ge35++;
    if (t < min) min = t;
    if (t > max) max = t;
  }

  const sorted = totals.slice().sort((a, b) => a - b);
  const mean = sum / n;
  const median = quantileSorted(sorted, 0.5);

  // "Fair total line (MC CDF)" in your output is the 50th percentile.
  // That is NOT the same thing as expected games; keep both.
  const fairLine50 = median;
  const pOver = over / n;
  const pUnder = 1 - pOver;
  const p10 = quantileSorted(sorted, 0.10);
  const p90 = quantileSorted(sorted, 0.90);

  return {
    n, mean, median, fairLine50, pOver, pUnder, p10, p90, min, max,
    pGe25: ge25 / n, pGe30: ge30 / n, pGe35: ge35 / n,
  };
}

/* ------------------------- Replacement pick logic (NO hard p3 gate, NO median anchor) -------------------------- */
/**
 * opts.edgeMin = minimum probability edge over 50% to bet (e.g. 0.03 = 3% edge)
 * opts.passIfSkewed = if true, pass when mean/median gap indicates heavy-tail risk beyond tolerance
 * opts.maxMeanMedianGap = tolerance for tail skew before passing (default 3.0 games)
 */
function getTotalsPick_v2(metrics, line, opts = {}) {
  const edgeMin = Number.isFinite(opts.edgeMin) ? opts.edgeMin : 0.03;
  const passIfSkewed = opts.passIfSkewed !== false; // default true
  const maxMeanMedianGap = Number.isFinite(opts.maxMeanMedianGap) ? opts.maxMeanMedianGap : 3.0;

  const meanMedianGap = Math.abs(metrics.mean - metrics.median);
  if (passIfSkewed && meanMedianGap > maxMeanMedianGap) {
    return {
      pick: "PASS",
      confidence: "LOW",
      reason: `Heavy-tail skew (|mean-median|=${meanMedianGap.toFixed(2)} > ${maxMeanMedianGap})`,
    };
  }

  const edge = metrics.pOver - 0.5;
  if (Math.abs(edge) < edgeMin) {
    return {
      pick: "PASS",
      confidence: "LOW",
      reason: `No edge (P(Over)=${(metrics.pOver * 100).toFixed(1)}%)`,
    };
  }

  // Confidence buckets are optional; tune however you like
  const absEdge = Math.abs(edge);
  const confidence =
    absEdge >= 0.10 ? "STRONG" :
    absEdge >= 0.06 ? "HIGH" :
    absEdge >= 0.03 ? "MED" : "LOW";

  return {
    pick: edge > 0 ? "OVER" : "UNDER",
    confidence,
    reason: `P(Over)=${(metrics.pOver * 100).toFixed(1)}% vs line ${line}`,
  };
}

/* ------------------------- Sanity check: your cluster math MUST match full mean -------------------------- */
/**
 * expectedByCluster = (1 - p3) * e2 + p3 * e3
 * If that doesn't match metrics.mean, you are mixing different arrays,
 * gating samples, or overwriting expectation with a mode/median.
 */
function validateClusterConsistency(metrics, e2, e3, p3, toleranceGames = 0.5) {
  if (![e2, e3, p3].every(Number.isFinite)) return { ok: true, note: "cluster stats not provided" };
  const expectedByCluster = (1 - p3) * e2 + p3 * e3;
  const diff = Math.abs(metrics.mean - expectedByCluster);
  if (diff > toleranceGames) {
    return {
      ok: false,
      expectedByCluster,
      diff,
      message: `CLUSTER/MEAN MISMATCH: mean=${metrics.mean.toFixed(2)} but weighted clusters=${expectedByCluster.toFixed(2)} (diff=${diff.toFixed(2)}).
You are not aggregating from one distribution.`,
    };
  }
  return { ok: true, expectedByCluster, diff };
}

/* -----------------------------------------------------
Example wiring (adapt variable names to your script)
--------------------------------------------------------
const line = 21.5;
const totals = totalsMC; // your 12000 match totals
const m = summarizeTotals(totals, line);

// Print these consistently:
console.log(`[DEBUG TOTALS] Line=${line} | Min=${m.min} Max=${m.max} | P(T>=25)=${(m.pGe25*100).toFixed(1)}% | P(T>=30)=${(m.pGe30*100).toFixed(1)}% | P(T>=35)=${(m.pGe35*100).toFixed(1)}%`);
console.log(`[TOTALS FAIR LINE] ${m.fairLine50.toFixed(1)} (50/50 from CDF)`);
console.log(`[TOTALS RANGE] P10=${m.p10.toFixed(0)} | P50=${m.median.toFixed(0)} | P90=${m.p90.toFixed(0)}`);
console.log(`[TOTALS EXPECTED] Match Expected Games (SIM) = ${m.mean.toFixed(1)}`);

// Pick:
const pick = getTotalsPick_v2(m, line, { edgeMin: 0.03, maxMeanMedianGap: 3.0 });

// Optional sanity check if you compute clusters:
const chk = validateClusterConsistency(m, e2Mean, e3Mean, p3Set, 0.5);
if (!chk.ok) console.warn(chk.message);

-----------------------------------------------------
WHY YOUR CURRENT OUTPUT IS SELF-CONTRADICTORY
-----------------------------------------------------
In the snippet you pasted, you simultaneously report:
- mean 21.9
- Raw sim 21.1
- BUT "Match Expected Games (SIM) = 20.1"
- AND you anchor to "Cluster projection: 18.0g (mode-pick)"

That can only happen if:
1) You compute mean from one array (e.g. Markov/Dist or pre-filter),
2) then compute "expected games" from a different array (e.g. gated/cluster-only),
3) or you overwrite expected with a mode/median projection when p3Set < gate.

This patch removes that entire class of bugs.

-----------------------------------------------------
NEXT STEP (so I can deliver the full rewritten script)
-----------------------------------------------------
Upload Tennis.God.JS (or paste the getTotalsPick + runMatchSimulation_Exact + whatever builds totalsMC).
Then I can:
- apply this refactor in-place,
- run a syntax check,
- scan for undefined variables / dead branches,
- and give you a single downloadable updated script.
"""

out_path = Path("/mnt/data/Tennis.God.JS_totals_refactor_patch.txt")
out_path.write_text(content, encoding="utf-8")
str(out_path)
```
'/mnt/data/Tennis.God.JS_totals_refactor_patch.txt'
You’re right to call it “fundamentally broken” based on that output alone, because it's self-contradictory:
- The Markov/Dist mean is 21.9 and the raw sim is 21.1, yet "Match Expected Games (SIM)" prints 20.1.
- The Markov/Dist fair line is 20.5 while the MC CDF fair line is 19.5.
- The cluster projection anchors to 18.0 games (mode-pick) even though E[T|2 sets] is 18.2, E[T|3 sets] is 28.7, and P(3 sets) is 27%.
Those numbers cannot all be coming from the same distribution. That means your totals pipeline is almost certainly doing at least one of these:
1. computing the mean from one array (e.g. the Markov/Dist path or a pre-filter array),
2. computing "expected games" from a different, gated or cluster-only array,
3. or overwriting the expectation with a mode/median projection when p3Set falls below the gate.
That’s exactly how you end up with “preview says long-ish tail exists” while the final totals engine acts blind to it.
The patch file above (Tennis.God.JS_totals_refactor_patch.txt) is a drop-in totals aggregation + pick engine that forces one source of truth and removes the "median anchor / p3 gate" class of bugs, while letting you keep BTMS (the tier model stays upstream; this only touches totals aggregation and the pick decision).
I do not have your Tennis.God.JS file in this chat, so I literally can't "view the script myself" yet. If you upload the .js (or paste the relevant sections: getTotalsPick, runMatchSimulation_Exact, and the code that builds the totals array), I can:
- apply this refactor in-place,
- run a syntax check,
- scan for undefined variables and dead branches,
- and send the updated script back as a single downloadable .txt.
Upload it and I'll do the full rewrite immediately.