Random + Probability Time-Series
Overview
The MCH TimeSeries module provides tools to simulate stochastic systems and probability-driven processes. It supports three main modeling patterns:
- Geometric Brownian Motion
- Martingales
- Markov Chains
These examples mirror the stories and models from Decision Superhero Vol. 2.
Geometric Brownian Motion (GBM)
Now we move from the general setup to a concrete continuous price process.
GBM is commonly used to simulate continuous price paths such as stock prices or any variable that evolves with compounding random variation.
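The compounding mechanic behind these functions can be sketched in a few lines of plain Julia. This is a hypothetical helper (gbm_path is not part of MCHammer), assuming each period's return is drawn from a normal distribution:

```julia
using Random

# Hypothetical sketch of a multiplicative random walk (GBM-style):
# each period the value compounds by a random return ~ Normal(μ, σ).
function gbm_path(last_value, μ, σ, periods; rng=Random.default_rng())
    path = Vector{Float64}(undef, periods)
    v = float(last_value)
    for t in 1:periods
        v *= 1 + μ + σ * randn(rng)   # arithmetic return applied multiplicatively
        path[t] = v
    end
    return path
end

# With σ = 0 the walk reduces to deterministic 5% compounding.
path = gbm_path(100_000, 0.05, 0.0, 12)
```

Setting σ to zero is a useful sanity check: the stochastic walk collapses to plain compound growth.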
MCHammer.GBMMfit — Function

GBMMfit(HistoricalData, PeriodsToForecast; rng="none")

GBMMfit takes a vector of historical data, calculates the log returns, and uses their mean and standard deviation to project a random walk. It then uses the last data point in the set as the starting point for the new forecast.
HistoricalData: Vector containing historical data
PeriodsToForecast: integer >1
rng is fully random when set to "none". To seed the results, pass an rng, e.g. MersenneTwister(1234).
using MCHammer, Random, Distributions
rng = MersenneTwister(1)
Random.seed!(1)
historical = rand(Normal(10,2.5),1000)
GBMMfit(historical, 12; rng=rng)

MCHammer.GBMM — Function

GBMM(LastValue, ReturnsMean, ReturnsStd, PeriodsToForecast; rng::Any="none")

GBMM produces a random walk starting at the last data point and requires a mean and standard deviation to be provided.
LastValue: The most recent data point on which to base your random walk.
ReturnsMean and ReturnsStd : Historical Mean and Standard Deviation of Returns
PeriodsToForecast is an integer >1
rng is fully random when set to "none". To seed the results, pass an rng, e.g. MersenneTwister(1234).
using MCHammer, Random, Distributions
rng = MersenneTwister(1)
Random.seed!(1)
GBMM(100000, 0.05, 0.05, 12; rng=rng)

MCHammer.GBMA_d — Function

GBMA_d(price_0, t, rf, exp_vol; rng="none")

GBMA_d forecasts the stock price at a given day in the future, using a multiplicative Geometric Brownian Motion in the Black-Scholes approach.
**price_0**: Stock Price at period 0
**t**: Number of days out to forecast
**rf**: Risk-free rate (typically the yield on the country's long-term, e.g. 25- or 30-year, government bond)
**exp_vol**: Expected volatility, expressed as annual volatility
**rng**: Used to seed the results so the forecast is reproducible. Set to "none" by default (fully random), but any of Julia's rngs can be passed to seed the results, e.g. *GBMA_d(price_0, t, rf, exp_vol; rng=MersenneTwister(1))*

using MCHammer, Random
rng = MersenneTwister(1)
Random.seed!(1)
GBMA_d(100, 504, 0.03, 0.3; rng=rng)

MCHammer.GBMM_Sim — Function

GBMM_Sim(LastValue, ReturnsMean, ReturnsStd, PeriodsToForecast; trials=1)

GBMM_Sim produces multiple random walks using the last data point and requires a mean and standard deviation to be provided.
LastValue: The most recent data point on which to base your random walk.
ReturnsMean and ReturnsStd : Historical Mean and Standard Deviation of Returns
PeriodsToForecast is an integer >1
trials: number of random walks to generate; the results are returned as a simulated time-series array.
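The multi-trial behavior can be sketched the same way in plain Julia (sim_walks is a hypothetical stand-in, not MCHammer's implementation; each column is one simulated walk):

```julia
using Random, Statistics

# Hypothetical sketch of GBMM_Sim-style output: a periods × trials array
# where each column is an independent multiplicative random walk.
function sim_walks(last_value, μ, σ, periods, trials; rng=Random.default_rng())
    out = Array{Float64}(undef, periods, trials)
    for j in 1:trials
        v = float(last_value)
        for t in 1:periods
            v *= 1 + μ + σ * randn(rng)
            out[t, j] = v
        end
    end
    return out
end

walks = sim_walks(100_000, 0.05, 0.05, 12, 1000)
```

With 1000 trials the average first-period value should sit close to 100,000 × 1.05.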
Example: Profit Random Walk
Now we combine several GBM paths into a simple business story.
using Dates, MCHammer, Random, Distributions
Random.seed!(1)
ts_trials = []
for i in 1:1000
    Monthly_Sales = GBMM(100000, 0.05, 0.05, 12)
    Monthly_Expenses = GBMM(50000, 0.03, 0.02, 12)
    MonthlyCOGS = Monthly_Sales .* 0.3
    MonthlyProfit = Monthly_Sales - Monthly_Expenses - MonthlyCOGS
    push!(ts_trials, MonthlyProfit)
end
dr = collect(Date(2025,1,01):Dates.Month(1):Date(2025,12,31))
trend_chrt(ts_trials, dr)

Martingales
Now we switch to processes that are fair in expectation.
A martingale describes a process where the expected value of the next step equals the current value. This creates a fair game in expectation, with no drift up or down.
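The doubling rule at the heart of the gambling examples below is easy to express directly. Here is a sketch (double_down is a hypothetical stand-in, not MCHammer's marty):

```julia
using Random

# Hypothetical sketch of the classic doubling strategy:
# after each loss the next wager doubles; after a win it resets.
function double_down(wager, games; p_win=0.5, cash=wager, rng=Random.default_rng())
    balance = float(cash)
    bet = float(wager)
    path = Float64[]
    for _ in 1:games
        if rand(rng) < p_win
            balance += bet
            bet = wager        # win: back to the base wager
        else
            balance -= bet
            bet *= 2           # loss: double to recoup
        end
        push!(path, balance)
    end
    return path
end

# Degenerate check: a gambler who always wins gains the base wager each game.
path = double_down(50, 10; p_win=1.0, cash=400)
```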
MCHammer.marty — Function

marty(Wager, GamesPlayed; GameWinProb = 0.5, CashInHand = Wager)

In probability theory, a martingale is a sequence of random variables (i.e., a stochastic process) for which, at a particular time, the conditional expectation of the next value in the sequence, given all prior values, is equal to the present value. (Wikipedia)
The marty function simulates a martingale betting strategy: every time a wager is lost, the next bet doubles the wagered amount to recoup the previous loss. The resulting vector is the gambler's cash balance at each point in the martingale process.

GameWinProb is the estimated probability of winning. CashInHand is the starting balance for the martingale. At times, this parameter can make the difference between surviving the process and going home broke.
Example: Simple Gambler
Here is the most compact form of the martingale idea in practice.
marty(50, 10)

10-element Vector{Any}:
0
100
150
100
0
100
50
-50
50
0

Example: Adjusting for Known Odds
Now we introduce biased odds and extra cash to see how the behavior changes.
println(marty(50, 10; GameWinProb=0.45, CashInHand=400))

Any[450, 500, 550, 500, 400, 500, 550, 500, 600, 550]

Comparing Different Win Probabilities
Next we compare different win probabilities side by side.
using Plots
Exp11 = scatter(marty(5, 100, GameWinProb = 0.25, CashInHand = 400), title="GameWinProb = 0.25", label="Bets")
Exp12 = scatter(marty(5, 100, GameWinProb = 0.33, CashInHand = 400), title="GameWinProb = 0.33", label="Bets")
Exp13 = scatter(marty(5, 100, GameWinProb = 0.5, CashInHand = 400), title="GameWinProb = 0.5", label="Bets")
Exp14 = scatter(marty(5, 100, GameWinProb = 0.55, CashInHand = 400), title="GameWinProb = 0.55", label="Bets")
combined = Plots.plot(Exp11, Exp12, Exp13, Exp14, layout=(2,2), legend=:topleft)

Markov Chain Modeling Functions
Now we move to systems that evolve step by step between discrete states.
A Markov chain models a system moving step by step between states where the probability of the next state depends only on the current one.
What you can predict:
- Long-run equilibrium
- State distribution at each step
- Growth or decline of states
- Speed of convergence
- Absorbing outcomes
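All of these quantities come from repeated multiplication of a state distribution by the transition matrix. A minimal sketch with a hypothetical two-state chain:

```julia
# Hypothetical 2-state chain (rows sum to 1).
T = [0.9 0.1;
     0.5 0.5]

start = [1.0 0.0]          # begin fully in state 1 (row vector)
step1 = start * T          # state distribution after one step
longrun = start * T^100    # repeated multiplication ≈ long-run equilibrium
```

For this chain the long-run distribution is [5/6, 1/6] regardless of the starting state.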
Core Markov Functions
MCHammer.cmatrix — Function

cmatrix(dimensions::Int64)

cmatrix produces an N x N matrix of 0s where the last row is all 1s.

3×3 Array{Float64,2}:
 0.0  0.0  0.0
 0.0  0.0  0.0
 1.0  1.0  1.0
MCHammer.markov_solve — Function

markov_solve(T; state_names=nothing)

Solve a Markov chain analytically. The function automatically detects whether the chain is ergodic (no absorbing states) or absorbing.
Arguments
T: Square transition matrix (rows sum to 1).
state_names: Optional vector of state names. When provided, a DataFrame is returned; otherwise raw arrays are returned.
Behavior
Ergodic case
Computes the long-run steady-state distribution using the augmented matrix method. Returns a DataFrame with:

State | Probability | Type="Ergodic"

Absorbing case

Reorders the matrix into the canonical form [Q R; 0 I], then computes:

- Fundamental matrix: N = (I - Q)⁻¹
- Absorption probabilities: B = N * R
- Expected steps to absorption: t = N * 1

Returns a single DataFrame:

State | <one column per absorbing state> | ExpectedSteps | Type

with transient states first and absorbing states last.
Returns
- DataFrame with labeled results (if state_names provided)
- Raw arrays otherwise
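The ergodic branch's steady state can be reproduced with standard linear algebra. This is a sketch of the underlying math on a hypothetical chain, not markov_solve's actual source:

```julia
using LinearAlgebra

T = [0.9 0.1;              # hypothetical ergodic 2-state chain
     0.5 0.5]

# The steady state solves πT = π with Σπ = 1: stack (T' - I) with a row
# of ones (the "augmented matrix method") and solve the resulting system.
A = vcat(T' - I, ones(1, 2))
b = [0.0, 0.0, 1.0]
p_ss = A \ b               # ≈ [5/6, 1/6] for this chain
```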
MCHammer.markov_ts — Function

markov_ts(T, start_vec; trials=1, state_names=nothing)

Simulate the evolution of a Markov chain over a given number of time steps.
Arguments
T: n×n transition matrix (rows sum to 1).
start_vec: length-n probability vector for the starting state.
trials: number of transitions to simulate (default = 1).
state_names: optional vector of state names.
Behavior
Computes:
dist₁ = start_vec * T
dist₂ = dist₁ * T
...
distₖ = distₖ₋₁ * T

and stores each distᵢ as a column.
Returns
If state_names === nothing: an n×trials matrix of Float64.
If state_names is provided: a DataFrame with columns State | Type | 1 | 2 | ... | trials, where:
- Type = "Ergodic" if the chain has no absorbing states
- Type = "Transient" for non-absorbing states in an absorbing chain
- Type = "Absorbing" for absorbing states in an absorbing chain
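The column-by-column storage described above can be sketched directly (a toy re-implementation of the documented behavior, not the package's source):

```julia
# Sketch: evolve a start distribution and keep each step as a column,
# mirroring the documented behavior of markov_ts.
function ts_matrix(T, start_vec, trials)
    out = zeros(length(start_vec), trials)
    dist = start_vec               # 1×n row vector
    for k in 1:trials
        dist = dist * T            # distₖ = distₖ₋₁ * T
        out[:, k] = vec(dist)
    end
    return out
end

T = [0.9 0.1;
     0.5 0.5]                      # hypothetical 2-state chain
out = ts_matrix(T, [0.5 0.5], 4)   # 2×4 matrix, one column per step
```

Each column is a probability distribution, so every column sums to 1.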
MCHammer.markov_ts_plot — Function

markov_ts_plot(ms; title="State Probabilities Over Time", xlabel="Time Step", ylabel="Probability", states=nothing)

Plot the evolution of Markov chain state probabilities over time.
ms must be the DataFrame returned by markov_ts, with columns:
State | Type | 1 | 2 | … | trials

The function:

- automatically detects numeric step columns (e.g. "1", "2", …, "10"),
- reshapes the data into a long, tidy format,
- converts step labels to integer time indices,
- plots one line per State.
Use the states keyword to restrict the plot to a subset of state names. Keyword arguments allow customization of the plot title and axis labels.
MCHammer.markov_state_graph — Function

markov_state_graph(T, ms; state_names, title, size)

Draw a state transition graph for a Markov chain.

T: transition matrix (used as edge weights)
ms: DataFrame returned by markov_ts (State, Type, 1, 2, ..., n)
The function:
- takes the last numeric column in ms as the node weights (the final forecast step),
- uses state_names as node labels,
- renders a directed graph with node size proportional to the final-period probability.
This works for both ergodic and absorbing chains. In absorbing chains, transient states naturally shrink as probability mass moves into the absorbing states.
Example A — Soft Drink Brand Switching (Ergodic Chain)
Now we see how these tools behave in a real consumer switching problem.
People switch cola brands all the time, and it feels random when you look at just one shopper, but across millions of decisions the switching pattern becomes predictable. A Markov chain captures those movements by tracking the chance that a Classic drinker moves to Zero, or a DietCola fan slides to CherryCola, or a CaffeineFree loyalist quietly disappears. Run those transitions forward and you can see where the market is heading, which brands gain ground, which ones fade and what the long run looks like. In short, if you want to understand how consumers drift between choices over time, a Markov chain gives you a simple way to model the flow and forecast the future.
Why this matters: Brand switching looks chaotic, but the long term trend is predictable. This model shows who wins and how fast.
A1. Analytic Solve
We start by understanding the long-run behavior directly from the transition matrix.
using MCHammer, DataFrames
drink_names = ["CherryCola", "DietCola", "CaffeineFree", "Classic", "Zero"]
brand_share = [0.10, 0.25, 0.05, 0.35, 0.25]
drink_preferences = [
0.60 0.03 0.15 0.20 0.02;
0.02 0.40 0.30 0.20 0.08;
0.15 0.25 0.30 0.25 0.05;
0.15 0.02 0.10 0.70 0.03;
0.15 0.30 0.05 0.05 0.45
]
markov_solve(drink_preferences; state_names = drink_names)

| Row | State | Probability | Type |
|---|---|---|---|
| String | Float64 | String | |
| 1 | CherryCola | 0.242577 | Ergodic |
| 2 | DietCola | 0.127557 | Ergodic |
| 3 | CaffeineFree | 0.16803 | Ergodic |
| 4 | Classic | 0.397503 | Ergodic |
| 5 | Zero | 0.0643322 | Ergodic |
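As a sanity check (a sketch, not part of the package workflow), the same steady state falls out of brute-force iteration: multiply the starting shares by the transition matrix many times and the distribution stops moving.

```julia
# Brute-force cross-check of the analytic steady state.
drink_preferences = [
    0.60 0.03 0.15 0.20 0.02;
    0.02 0.40 0.30 0.20 0.08;
    0.15 0.25 0.30 0.25 0.05;
    0.15 0.02 0.10 0.70 0.03;
    0.15 0.30 0.05 0.05 0.45
]
brand_share = [0.10 0.25 0.05 0.35 0.25]   # starting shares as a row vector

longrun = brand_share * drink_preferences^200
# longrun ≈ [0.2426  0.1276  0.1680  0.3975  0.0643], matching markov_solve
```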
A2. Time-Series Forecast
Next we push the chain forward in time to see how shares evolve.
using MCHammer, Plots
drink_periods = 10
ms_drink = markov_ts(drink_preferences, brand_share;
trials = drink_periods,
state_names = drink_names)

| Row | State | Type | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| String | String | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | |
| 1 | CherryCola | Ergodic | 0.1 | 0.1625 | 0.19745 | 0.216356 | 0.227091 | 0.233383 | 0.237122 | 0.239352 | 0.240679 | 0.241465 | 0.241929 |
| 2 | DietCola | Ergodic | 0.25 | 0.1975 | 0.17305 | 0.155916 | 0.144678 | 0.137692 | 0.133483 | 0.130995 | 0.129541 | 0.128699 | 0.128212 |
| 3 | CaffeineFree | Ergodic | 0.05 | 0.1525 | 0.17075 | 0.17347 | 0.172522 | 0.171087 | 0.169947 | 0.169183 | 0.168707 | 0.168422 | 0.168255 |
| 4 | Classic | Ergodic | 0.35 | 0.34 | 0.3555 | 0.3708 | 0.381555 | 0.38828 | 0.392261 | 0.39455 | 0.395846 | 0.396574 | 0.396982 |
| 5 | Zero | Ergodic | 0.25 | 0.1475 | 0.10325 | 0.083458 | 0.074154 | 0.0695581 | 0.0671869 | 0.0659204 | 0.0652264 | 0.0648395 | 0.0646213 |
A2.1 Plot
Now we convert the forecast into a simple time-series chart.
plt_drink_ts = markov_ts_plot(
ms_drink;
title = "Brand Share Over Time",
ylabel = "Market Share",
xlabel = "Years Out"
)
plt_drink_ts

A3. State Graph
Finally we switch from time-series to structure and visualize the full chain.
plt_drink_state = markov_state_graph(
drink_preferences, ms_drink;
state_names = drink_names,
title = "Brand Switching\n(Node Size = Year 10 Share)",
size = (700, 700)
)
plt_drink_state

Example B — Marital Status Over Time (6-State Ergodic)
Now we move from brand choices to life events that evolve more slowly.
The time series plot for the marital example shows how each relationship category rises or falls over the years. Instead of only seeing the final state, the chart reveals the path the population takes to get there. You can watch Single shrink or grow, Married stabilize or decline, Divorced expand at a steady pace and Widowed increase slowly as the population ages. Each line tracks one state across every forecast step, making it easy to see momentum, turning points and long term trends. It is the unfolding story of the system, not just the ending.
B1. Time-Series Forecast
We start by forecasting how the distribution across marital states changes over time.
using MCHammer, Plots
marital_names = ["Single", "CommonLaw", "Married",
"Separated", "Divorced", "Widowed"]
initial_marital = [0.291, 0.126, 0.443, 0.024, 0.062, 0.054]
marital_T = [
0.65 0.15 0.12 0.03 0.01 0.04;
0.00 0.50 0.33 0.08 0.04 0.05;
0.00 0.00 0.13 0.40 0.42 0.05;
0.00 0.20 0.09 0.02 0.65 0.04;
0.00 0.15 0.10 0.00 0.70 0.05;
0.00 0.25 0.25 0.00 0.00 0.50
]
ms_marital = markov_ts(marital_T, initial_marital;
trials = 25,
state_names = marital_names)

| Row | State | Type | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| String | String | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | |
| 1 | Single | Ergodic | 0.291 | 0.18915 | 0.122948 | 0.0799159 | 0.0519453 | 0.0337645 | 0.0219469 | 0.0142655 | 0.00927256 | 0.00602717 | 0.00391766 | 0.00254648 | 0.00165521 | 0.00107589 | 0.000699326 | 0.000454562 | 0.000295465 | 0.000192053 | 0.000124834 | 8.11422e-5 | 5.27424e-5 | 3.42826e-5 | 2.22837e-5 | 1.44844e-5 | 9.41485e-6 | 6.11965e-6 |
| 2 | CommonLaw | Ergodic | 0.126 | 0.13425 | 0.190534 | 0.206432 | 0.21031 | 0.211607 | 0.212143 | 0.212383 | 0.212501 | 0.212565 | 0.212603 | 0.212625 | 0.21264 | 0.212649 | 0.212654 | 0.212658 | 0.212661 | 0.212662 | 0.212663 | 0.212664 | 0.212664 | 0.212665 | 0.212665 | 0.212665 | 0.212665 | 0.212665 |
| 3 | Married | Ergodic | 0.443 | 0.15595 | 0.148047 | 0.16162 | 0.165596 | 0.166397 | 0.16656 | 0.166575 | 0.166553 | 0.166526 | 0.166504 | 0.166489 | 0.166478 | 0.166471 | 0.166466 | 0.166463 | 0.166461 | 0.16646 | 0.166459 | 0.166458 | 0.166458 | 0.166458 | 0.166458 | 0.166458 | 0.166458 | 0.166457 |
| 4 | Separated | Ergodic | 0.024 | 0.19649 | 0.0827243 | 0.0798043 | 0.0851562 | 0.0863246 | 0.0862269 | 0.0859784 | 0.0857682 | 0.0856147 | 0.0855087 | 0.0854377 | 0.0853908 | 0.08536 | 0.0853398 | 0.0853267 | 0.0853181 | 0.0853126 | 0.0853089 | 0.0853066 | 0.0853051 | 0.0853041 | 0.0853034 | 0.085303 | 0.0853027 | 0.0853025 |
| 5 | Divorced | Ergodic | 0.062 | 0.25301 | 0.377586 | 0.389111 | 0.401188 | 0.414665 | 0.425065 | 0.432254 | 0.437063 | 0.440238 | 0.44232 | 0.44368 | 0.444566 | 0.445143 | 0.445519 | 0.445763 | 0.445922 | 0.446025 | 0.446092 | 0.446136 | 0.446164 | 0.446183 | 0.446195 | 0.446203 | 0.446208 | 0.446211 |
| 6 | Widowed | Ergodic | 0.054 | 0.07115 | 0.0781611 | 0.0831158 | 0.0858049 | 0.0872412 | 0.0880576 | 0.0885442 | 0.0888425 | 0.0890287 | 0.0891465 | 0.0892217 | 0.0892699 | 0.089301 | 0.0893211 | 0.0893341 | 0.0893425 | 0.089348 | 0.0893516 | 0.0893539 | 0.0893554 | 0.0893563 | 0.089357 | 0.0893574 | 0.0893576 | 0.0893578 |
B1.1 Plot
Next we plot how each marital state moves over the forecast horizon.
plt_marital_ts = markov_ts_plot(
ms_marital;
title = "Marital Status Over Time",
ylabel = "Proportion of Population",
xlabel = "Years Out"
)
plt_marital_ts

B2. State Graph
Finally we look at the structure of transitions and the long-run distribution.
plt_marital_state = markov_state_graph(
marital_T, ms_marital;
state_names = marital_names,
title = "Marital Status Transitions\n(Node Size = Year 25 Share)",
size = (700, 700)
)
plt_marital_state

Example C — Accounts Receivable & Default (Absorbing Chain)
Now we close with a financial example that has clear end states.
Receivables slide through aging buckets just like shoppers drift between cola brands, and even though each invoice feels like a one-off headache, the overall flow settles into a predictable pattern. A Markov chain captures how money moves from 30 days to 60, then 90, then either paid or default. Once you know those transition probabilities, you can jump straight to the end of the story. The model tells you the final state without tracking every month: how much cash will eventually be collected, how much will be lost and how long it takes for the portfolio to clear. If you want a simple shortcut to the endgame of your AR pipeline, a Markov chain gives you the final answer.
C1. Analytic Absorbing Solution
We begin by solving the absorbing chain to understand paid versus default outcomes.
using MCHammer, Plots
ar_names = ["30days", "30_60days", "60_90days",
"90daysPlus", "default", "paid"]
ar_current = [0.50, 0.30, 0.15, 0.05, 0.0, 0.0]
ar_rates = [
0.50 0.10 0.10 0.05 0.05 0.20;
0.00 0.30 0.15 0.25 0.075 0.225;
0.00 0.00 0.25 0.25 0.04 0.46;
0.00 0.00 0.00 0.30 0.20 0.50;
0.00 0.00 0.00 0.00 1.00 0.00;
0.00 0.00 0.00 0.00 0.00 1.00
]
ar_solve = markov_solve(ar_rates; state_names = ar_names)

| Row | State | default | paid | ExpectedSteps | Type |
|---|---|---|---|---|---|
| String | Float64 | Float64 | Float64 | String | |
| 1 | 30days | 0.20649 | 0.79351 | 2.97007 | Transient |
| 2 | 30_60days | 0.24102 | 0.75898 | 2.32653 | Transient |
| 3 | 60_90days | 0.148571 | 0.851429 | 1.80952 | Transient |
| 4 | 90daysPlus | 0.285714 | 0.714286 | 1.42857 | Transient |
| 5 | default | 1.0 | 0.0 | 0.0 | Absorbing |
| 6 | paid | 0.0 | 1.0 | 0.0 | Absorbing |
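These results follow from the canonical absorbing-chain computation. Here is a sketch using the standard formulas with plain LinearAlgebra (not markov_solve's internals):

```julia
using LinearAlgebra

ar_rates = [
    0.50 0.10 0.10 0.05 0.05 0.20;
    0.00 0.30 0.15 0.25 0.075 0.225;
    0.00 0.00 0.25 0.25 0.04 0.46;
    0.00 0.00 0.00 0.30 0.20 0.50;
    0.00 0.00 0.00 0.00 1.00 0.00;
    0.00 0.00 0.00 0.00 0.00 1.00
]

Q = ar_rates[1:4, 1:4]     # transient → transient block
R = ar_rates[1:4, 5:6]     # transient → absorbing block (default, paid)
N = inv(I - Q)             # fundamental matrix
B = N * R                  # absorption probabilities per starting bucket
t = N * ones(4)            # expected steps to absorption
```

Row 1 of B reproduces the 30days line of the table: about 20.6% of fresh receivables eventually default, after roughly 2.97 periods on average.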
C2. Time-Series Forecast
Next we track how the portfolio migrates between buckets over time.
ms_ar = markov_ts(ar_rates, ar_current;
trials = 12,
state_names = ar_names)

| Row | State | Type | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| String | String | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | Float64 | |
| 1 | 30days | Transient | 0.5 | 0.25 | 0.125 | 0.0625 | 0.03125 | 0.015625 | 0.0078125 | 0.00390625 | 0.00195312 | 0.000976562 | 0.000488281 | 0.000244141 | 0.00012207 |
| 2 | 30_60days | Transient | 0.3 | 0.14 | 0.067 | 0.0326 | 0.01603 | 0.007934 | 0.0039427 | 0.00196406 | 0.000979843 | 0.000489265 | 0.000244436 | 0.000122159 | 6.10617e-5 |
| 3 | 60_90days | Transient | 0.15 | 0.1325 | 0.079125 | 0.0423313 | 0.0217228 | 0.0109602 | 0.00549265 | 0.00274582 | 0.00137169 | 0.000685211 | 0.000342349 | 0.000171081 | 8.55081e-5 |
| 4 | 90daysPlus | Transient | 0.05 | 0.1525 | 0.126375 | 0.0806938 | 0.0460659 | 0.0248205 | 0.0129509 | 0.00663475 | 0.00336321 | 0.0016945 | 0.000850798 | 0.000426349 | 0.000213422 |
| 5 | default | Absorbing | 0.0 | 0.0635 | 0.1223 | 0.162015 | 0.185417 | 0.198264 | 0.205043 | 0.208539 | 0.210318 | 0.211217 | 0.211669 | 0.211895 | 0.212009 |
| 6 | paid | Absorbing | 0.0 | 0.2615 | 0.4802 | 0.61986 | 0.699514 | 0.742396 | 0.764759 | 0.77621 | 0.782014 | 0.784938 | 0.786405 | 0.787141 | 0.787509 |
C2.1 Plot
Now we plot how much of the portfolio sits in each bucket at each period.
plt_ar_ts = markov_ts_plot(
ms_ar;
title = "Payment Status Over Time",
ylabel = "Proportion of Receivables",
xlabel = "Months Out"
)
plt_ar_ts

C3. State Graph
Finally we summarize the structure and final distribution in a single picture.
plt_ar_state = markov_state_graph(
ar_rates, ms_ar;
state_names = ar_names,
title = "Receivables Transitions\n(Node Size = Month 12 Share)",
size = (700, 700)
)
plt_ar_state

Sources & References
- Eric Torkia, Decision Superhero Vol. 2, Chapter 10: Beyond Monte Carlo — When Uncertainty Has Memory, Technics Publishing, 2026
- Available on Amazon: https://a.co/d/4YlJFzY. Volumes 2 and 3 to be released in Fall 2025 and Summer 2026.
- Eric Torkia, Decision Superhero Vol. 3, Chapter 5