Medium: Experiment · Role: Researcher & Engineer

Trading Signals

2025 · Live

A long-running research project I keep coming back to: building two options strategies (momentum on the long side, cash-secured puts for bear regimes, plus an intraday overlay), backtesting them honestly across years of market data, and shipping the result as a dashboard I actually trust. This is a research artifact, not investment advice, and no live returns are claimed here.

Single-trade inspector view. A faint indigo band marks an entry window on a candlestick chart, with entry and exit arrows. A side panel shows label rows with their values redacted. Below the chart, an abstract histogram strip.
Single-trade inspector. Numbers are redacted in the portfolio version; in the codebase they are columns in the trade log.
Architecture diagram: market data feeds two parallel strategy engines (momentum/CSP and intraday overlay), each producing scanned signals routed to a backtest harness and a position monitor, all read by a Streamlit dashboard.
Two strategies, one data spine, one dashboard. Backtest and live monitor share the signal scanner code path.

Overview

The question: can a disciplined retail account survive on a small set of mechanical rules that backtest without curve-fitting? Conditional yes. Momentum works in trending tape and gets shredded in chop. Cash-secured puts cushion the chop and earn premium in fear. The intraday overlay is a smaller, separate bet on a specific time-of-day window with stricter entries. The whole thing is a research artifact, not financial advice. The codebase is structured accordingly.
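The regime split described above, momentum in trends and cash-secured puts in chop, could be sketched as a simple gate. This is an illustrative sketch, not the project's actual rules: the trend measure, lookback, and threshold are all assumptions.

```python
# Hypothetical regime gate: route to the momentum engine in trending tape,
# to the cash-secured-put (CSP) engine otherwise. Thresholds are made up
# for illustration, not taken from the project.

def realized_trend(closes: list[float], lookback: int = 20) -> float:
    """Fraction of up-days over the lookback window, in [0, 1]."""
    window = closes[-lookback:]
    ups = sum(1 for a, b in zip(window, window[1:]) if b > a)
    return ups / max(len(window) - 1, 1)

def pick_engine(closes: list[float], trend_floor: float = 0.6) -> str:
    """Momentum when the tape is trending, CSP when it is chopping."""
    return "momentum" if realized_trend(closes) >= trend_floor else "csp"
```

The point of a mechanical gate like this is falsifiability: the regime call is a number in the trade log, not a discretionary judgment after the fact.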

Approach

Two engines share a signal scanner, so the backtest harness and live monitor run the same code path. Market data lands in Parquet shards nightly; signals scan against those shards; the backtest replays the log against historical option chains. The dashboard is Streamlit because the audience is one person. Every metric maps to a column in the trade log and nothing is computed live, so the analysis stays auditable. The hard call: refusing to add a recency filter after a string of losses. It would have boosted the headline return and made the strategy unfalsifiable.
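The shared-code-path idea can be sketched as one scan function consumed by both the backtest replay and the live monitor. The bar schema, field names, and threshold below are assumptions for illustration; the real scanner operates on Parquet shards and option chains.

```python
# Sketch of a single scanner shared by backtest and live monitor, so the
# two can never drift apart. Data shape and threshold are hypothetical.

from typing import Iterable

def scan_signals(bars: Iterable[dict], threshold: float = 0.02) -> list[dict]:
    """Emit a signal for each bar whose open-to-close return clears threshold."""
    signals = []
    for bar in bars:
        ret = (bar["close"] - bar["open"]) / bar["open"]
        if ret >= threshold:
            signals.append({"symbol": bar["symbol"], "date": bar["date"], "ret": ret})
    return signals

def backtest_replay(historical_bars: Iterable[dict]) -> list[dict]:
    # Replays the archive through the exact function the monitor uses.
    return scan_signals(historical_bars)

def live_monitor(todays_bars: Iterable[dict]) -> list[dict]:
    # Same function, same thresholds: one code path, no drift.
    return scan_signals(todays_bars)
```

Sharing the function, rather than duplicating the logic, is what makes the backtest a test of the live system rather than of a lookalike.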

Outcome

The system runs. Signals scan nightly, the dashboard renders, the trade log accumulates. Performance figures stay out of the portfolio on purpose: backtest results aren't live returns, the numbers aren't third-party verified, and the bar for publishing publicly is higher than the bar for using them privately. Happy to walk through the methodology and failure modes in person.

Reflections

The dashboard had to refuse to lie. Any visualization that smoothed a real loss out of view, or any metric that quietly capped its worst case, was a future debugging tax. The other lesson: about seventy percent of the commit history is plumbing. The strategy logic is small and stable. The plumbing is where the project lives.
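A concrete example of a metric that cannot quietly cap its worst case is a max drawdown computed straight over the equity curve, with no floor and no smoothing. This is an illustrative sketch, not the dashboard's actual code.

```python
# Illustrative "honest" metric: worst peak-to-trough decline, computed
# directly from the equity series. No cap, no smoothing window that
# could average a real loss out of view.

def max_drawdown(equity: list[float]) -> float:
    """Worst peak-to-trough decline as a fraction of the peak."""
    peak, worst = equity[0], 0.0
    for x in equity:
        peak = max(peak, x)
        worst = max(worst, (peak - x) / peak)
    return worst
```

Because the number is a pure function of logged values, a bad stretch shows up at full size, which is the property that makes it cheap to audit and expensive to fool.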