Random + Probability Time-Series

Overview

The MCHammer TimeSeries module provides tools to simulate stochastic systems and probability-driven processes. It supports three main modeling patterns:

  • Geometric Brownian Motion
  • Martingales
  • Markov Chains

These examples mirror the stories and models from Decision Superhero Vol. 2.


Geometric Brownian Motion (GBM)

Now we move from the general setup to a concrete continuous price process.

GBM is commonly used to simulate continuous price paths such as stock prices or any variable that evolves with compounding random variation.
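To make the compounding mechanics concrete, here is a minimal plain-Julia sketch of a multiplicative random walk. This is a simplified stand-in for illustration, not MCHammer's actual implementation; gbm_walk is a hypothetical helper name.

```julia
using Random

# One multiplicative random walk: each period compounds a random return
# drawn from Normal(μ, σ). A simplified stand-in for what GBMM-style
# functions produce; gbm_walk is a hypothetical helper, not MCHammer API.
function gbm_walk(last_value, μ, σ, periods; rng = Random.default_rng())
    path = Vector{Float64}(undef, periods)
    current = float(last_value)
    for t in 1:periods
        current *= 1 + (μ + σ * randn(rng))   # compound one random return
        path[t] = current
    end
    return path
end

gbm_walk(100_000, 0.05, 0.05, 12; rng = MersenneTwister(1))
```

Seeding with MersenneTwister(1) makes the walk reproducible, mirroring the rng keyword on the MCHammer functions below.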

MCHammer.GBMMfit · Function
GBMMfit(HistoricalData, PeriodsToForecast; rng="none")

GBMMfit uses a vector of historical data to calculate the log returns, then uses their mean and standard deviation to project a random walk. It uses the last data point in the set as the starting point for the new forecast.

HistoricalData: Vector containing historical data

PeriodsToForecast: integer >1

rng: fully random when set to "none". You can pass a seeded RNG to make the results reproducible, e.g. MersenneTwister(1234)

using MCHammer, Random, Distributions
rng = MersenneTwister(1)
Random.seed!(1)

historical = rand(Normal(10,2.5),1000)
GBMMfit(historical, 12; rng=rng)
MCHammer.GBMM · Function
GBMM(LastValue, ReturnsMean, ReturnsStd, PeriodsToForecast; rng::Any="none")

GBMM produces a random walk using the last data point and requires a mean and standard deviation to be provided.

LastValue: The most recent data point on which to base your random walk.

ReturnsMean and ReturnsStd : Historical Mean and Standard Deviation of Returns

PeriodsToForecast is an integer >1

rng: fully random when set to "none". You can pass a seeded RNG to make the results reproducible, e.g. MersenneTwister(1234)

using MCHammer, Random, Distributions
rng = MersenneTwister(1)
Random.seed!(1)
GBMM(100000, 0.05,0.05,12, rng=rng)
MCHammer.GBMA_d · Function
GBMA_d(price_0, t, rf, exp_vol; rng="none")

GBMA_d forecasts the stock price at a given day in the future. It uses a multiplicative Geometric Brownian Motion following the Black-Scholes approach.

**price_0**: Stock Price at period 0
**t**: Number of days out to forecast
**rf**: Risk free rate (usually the yield on the country's 25- or 30-year bond)
**exp_vol**: Expected annual volatility
**rng**: Seeds the results to make the forecast reproducible. Set to none by default, which means fully random, but any of Julia's RNGs can be used to seed the results, e.g. *GBMA_d(price_0, t, rf, exp_vol; rng=MersenneTwister(1))*
using MCHammer, Random
rng = MersenneTwister(1)
Random.seed!(1)
GBMA_d(100, 504,0.03,.3, rng=rng)
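For intuition, the closed-form draw behind this style of forecast can be sketched in plain Julia. The exact formula and the 252-trading-day convention are assumptions for illustration, not taken from the MCHammer source; gbm_terminal_price is a hypothetical helper.

```julia
using Random

# Risk-neutral GBM terminal price after t trading days (assumed formula,
# not necessarily MCHammer's internals):
#   S_t = S_0 * exp((rf - σ²/2)·τ + σ·√τ·Z),  τ = t/252,  Z ~ N(0,1)
function gbm_terminal_price(price_0, t_days, rf, vol; rng = Random.default_rng())
    τ = t_days / 252    # convert trading days to years (assumed convention)
    z = randn(rng)      # one standard normal draw
    return price_0 * exp((rf - vol^2 / 2) * τ + vol * sqrt(τ) * z)
end

gbm_terminal_price(100, 504, 0.03, 0.3; rng = MersenneTwister(1))
```

With vol = 0 the draw collapses to deterministic growth at the risk-free rate, which makes a quick sanity check.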
MCHammer.GBMM_Sim · Function
GBMM_Sim(LastValue, ReturnsMean, ReturnsStd, PeriodsToForecast; trials=1)

GBMM_Sim produces multiple random walks using the last data point and requires a mean and standard deviation to be provided.

LastValue: The most recent data point on which to base your random walk.

ReturnsMean and ReturnsStd : Historical Mean and Standard Deviation of Returns

PeriodsToForecast is an integer >1

trials: number of random walks to simulate; the result is an array of simulated time series.


Example: Profit Random Walk

Now we combine several GBM paths into a simple business story.

using Dates, MCHammer, Random, Distributions
Random.seed!(1)

ts_trials = []

for i = 1:1000
     Monthly_Sales = GBMM(100000, 0.05,0.05,12)
     Monthly_Expenses = GBMM(50000, 0.03,0.02,12)
     MonthlyCOGS = Monthly_Sales .* 0.3
     MonthlyProfit = Monthly_Sales - Monthly_Expenses - MonthlyCOGS
     push!(ts_trials, MonthlyProfit)
end

dr = collect(Date(2025,1,01):Dates.Month(1):Date(2025,12,31))
trend_chrt(ts_trials,dr)

Martingales

Now we switch to processes that are fair in expectation.

A martingale describes a process where the expected value of the next step equals the current value. This creates a fair game in expectation, with no drift up or down.

MCHammer.marty · Function
marty(Wager, GamesPlayed; GameWinProb = 0.5, CashInHand = Wager)

In probability theory, a martingale is a sequence of random variables (i.e., a stochastic process) for which, at a particular time, the conditional expectation of the next value in the sequence, given all prior values, is equal to the present value. (Wikipedia)

The marty function simulates a Martingale betting process: every time a wager is lost, the next bet doubles the wagered amount to recoup the previous loss. The resulting vector is the balance of cash the gambler has in hand at any given point in the Martingale process.

GameWinProb is the estimated probability of winning. CashInHand is the starting balance for the Martingale. At times, this parameter can make the difference between surviving the process and going home broke.
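Because marty draws random outcomes, the doubling rule is easiest to see with the randomness stripped out. Here is a deterministic plain-Julia sketch driven by a fixed win/loss sequence; double_or_reset is a hypothetical helper, not the MCHammer implementation.

```julia
# Doubling ("martingale") betting rule: after every loss the next wager
# doubles to recoup the loss; after a win the wager resets to the base amount.
# `outcomes` stands in for the random coin so the mechanics are transparent.
function double_or_reset(wager, outcomes; cash = wager)
    balance = float(cash)
    bet = float(wager)
    history = Float64[]
    for won in outcomes
        balance += won ? bet : -bet
        bet = won ? float(wager) : 2bet   # reset after a win, double after a loss
        push!(history, balance)
    end
    return history
end

double_or_reset(50, [false, true, true])   # → [0.0, 100.0, 150.0]
```

Losing the first 50 drops the balance to 0; the doubled bet of 100 wins the loss back; the reset bet of 50 then wins again.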


Example: Simple Gambler

Here is the most compact form of the martingale idea in practice.

marty(50,10)
10-element Vector{Any}:
   0
 100
 150
 100
   0
 100
  50
 -50
  50
   0

Example: Adjusting for Known Odds

Now we introduce biased odds and extra cash to see how the behavior changes.

println(marty(50,10; GameWinProb=0.45, CashInHand=400))
Any[450, 500, 550, 500, 400, 500, 550, 500, 600, 550]

Comparing Different Win Probabilities

Next we compare different win probabilities side by side.

using Plots

Exp11 = scatter(marty(5, 100, GameWinProb = 0.25, CashInHand = 400), title="GameWinProb = 0.25", label="Bets")
Exp12 = scatter(marty(5, 100, GameWinProb = 0.33, CashInHand = 400), title="GameWinProb = 0.33", label="Bets")
Exp13 = scatter(marty(5, 100, GameWinProb = 0.5,  CashInHand = 400), title="GameWinProb = 0.5",  label="Bets")
Exp14 = scatter(marty(5, 100, GameWinProb = 0.55, CashInHand = 400), title="GameWinProb = 0.55", label="Bets")

combined = Plots.plot(Exp11, Exp12, Exp13, Exp14, layout=(2,2), legend=:topleft)

Markov Chain Modeling Functions

Now we move to systems that evolve step by step between discrete states.

A Markov chain models a system moving step by step between states where the probability of the next state depends only on the current one.

What you can predict:

  1. Long-run equilibrium
  2. State distribution at each step
  3. Growth or decline of states
  4. Speed of convergence
  5. Absorbing outcomes
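All five of these questions reduce to one mechanic: multiplying a distribution by the transition matrix. A minimal plain-Julia sketch with a toy two-state matrix (not taken from the examples below):

```julia
using LinearAlgebra

# One step of a Markov chain: next distribution = current distribution * T.
# Iterating long enough converges to the long-run equilibrium for an
# ergodic chain. The 2×2 matrix here is a toy example.
T = [0.9 0.1;     # state 1: 90% stay, 10% move to state 2
     0.2 0.8]     # state 2: 20% move back, 80% stay

function propagate(T, dist, steps)
    for _ in 1:steps
        dist = vec(dist' * T)   # row-vector times T, as in start_vec * T
    end
    return dist
end

propagate(T, [1.0, 0.0], 200)   # converges to the steady state [2/3, 1/3]
```

The equilibrium [2/3, 1/3] follows from balance: in the long run, the 10% of state 1's mass flowing out must equal the 20% of state 2's mass flowing back.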

Core Markov Functions

MCHammer.cmatrix · Function
cmatrix(dimensions::Int64)

cmatrix produces an N × N matrix of 0s whose last row is all 1s.

3×3 Array{Float64,2}:
 0.0  0.0  0.0
 0.0  0.0  0.0
 1.0  1.0  1.0

MCHammer.markov_solve · Function
markov_solve(T; state_names=nothing)

Solve a Markov chain analytically. The function automatically detects whether the chain is ergodic (no absorbing states) or absorbing.

Arguments

  • T :: Square transition matrix (rows sum to 1).
  • state_names :: Optional vector of state names. When provided, a DataFrame is returned; otherwise raw arrays are returned.

Behavior

Ergodic case

Computes the long-run steady-state distribution using the augmented matrix method. Returns a DataFrame with:

State | Probability | Type="Ergodic"

Absorbing case

Reorders the matrix into the canonical form [Q R; 0 I], computes:

  • Fundamental matrix: N = (I - Q)⁻¹
  • Absorption probabilities: B = N * R
  • Expected steps to absorption: t = N * 1

Returns a single DataFrame:

State | <one column per absorbing state> | ExpectedSteps | Type

with transient states first and absorbing states last.

Returns

  • DataFrame with labeled results (if state_names provided)
  • Raw arrays otherwise
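The three absorbing-chain quantities above are small enough to check by hand on a toy chain with a single transient state that stays put with probability 0.5 and is absorbed into two states with probabilities 0.2 and 0.3 (toy numbers, not from MCHammer):

```julia
using LinearAlgebra

Q = fill(0.5, 1, 1)     # transient → transient block (1×1)
R = [0.2 0.3]           # transient → absorbing block (1×2)

N = inv(I - Q)          # fundamental matrix: expected visits per transient state
B = N * R               # absorption probabilities
t = N * ones(1)         # expected steps to absorption

# N = [2.0], B = [0.4 0.6], t = [2.0]: on average two visits before
# absorption, a 40%/60% split between the two absorbing states, and
# two expected steps.
```

Hand check: the chance of ever being absorbed into the first state is 0.2 / (1 - 0.5) = 0.4, matching B.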
MCHammer.markov_ts · Function
markov_ts(T, start_vec; trials=1, state_names=nothing)

Simulate the evolution of a Markov chain over a given number of time steps.

Arguments

  • T :: n×n transition matrix (rows sum to 1).
  • start_vec :: length-n probability vector for the starting state.
  • trials :: number of transitions to simulate (default = 1).
  • state_names :: optional vector of state names.

Behavior

Computes:

dist₁ = start_vec * T
dist₂ = dist₁    * T
...
distₖ = distₖ₋₁  * T

and stores each distᵢ as a column.

Returns

  • If state_names === nothing: An n×trials matrix of Float64.

  • If state_names are provided: A DataFrame with columns:

      State | Type | 1 | 2 | ... | trials

    where:

    • Type = "Ergodic" if the chain has no absorbing states
    • Type = "Transient" for non-absorbing states in an absorbing chain
    • Type = "Absorbing" for absorbing states in an absorbing chain
MCHammer.markov_ts_plot · Function
markov_ts_plot(ms; title="State Probabilities Over Time",
                  xlabel="Time Step",
                  ylabel="Probability",
                  states=nothing)

Plot the evolution of Markov chain state probabilities over time.

ms must be the DataFrame returned by markov_ts, with columns:

State | Type | 1 | 2 | … | trials

The function:

  • automatically detects numeric step columns (e.g. "1", "2", …, "10"),
  • reshapes the data into a long, tidy format,
  • converts step labels to integer time indices,
  • plots one line per State.

Use the states keyword to restrict the plot to a subset of state names. Keyword arguments allow customization of the plot title and axis labels.

MCHammer.markov_state_graph · Function
markov_state_graph(T, ms; state_names, title, size)

Draw a state transition graph for a Markov chain.

  • T :: transition matrix (used as edge weights)
  • ms :: DataFrame returned by markov_ts (State, Type, 1, 2, ..., n)

The function:

  • takes the last numeric column in ms as the node weights (the final forecast step),
  • uses state_names as node labels,
  • and renders a directed graph with node size proportional to the final-period probability.

This works for both ergodic and absorbing chains. In absorbing chains, transient states naturally shrink as probability mass moves into the absorbing states.


Example A — Soft Drink Brand Switching (Ergodic Chain)


Now we see how these tools behave in a real consumer switching problem.

People switch cola brands all the time, and it feels random when you look at just one shopper, but across millions of decisions the switching pattern becomes predictable. A Markov chain captures those movements by tracking the chance that a Classic drinker moves to Zero, or a DietCola fan slides to CherryCola, or a CaffeineFree loyalist quietly disappears. Run those transitions forward and you can see where the market is heading, which brands gain ground, which ones fade and what the long run looks like. In short, if you want to understand how consumers drift between choices over time, a Markov chain gives you a simple way to model the flow and forecast the future.

Why this matters: Brand switching looks chaotic, but the long term trend is predictable. This model shows who wins and how fast.


A1. Analytic Solve

We start by understanding the long-run behavior directly from the transition matrix.

using MCHammer, DataFrames

drink_names  = ["CherryCola", "DietCola", "CaffeineFree", "Classic", "Zero"]
brand_share  = [0.10, 0.25, 0.05, 0.35, 0.25]

drink_preferences = [
    0.60  0.03  0.15  0.20  0.02;
    0.02  0.40  0.30  0.20  0.08;
    0.15  0.25  0.30  0.25  0.05;
    0.15  0.02  0.10  0.70  0.03;
    0.15  0.30  0.05  0.05  0.45
]

markov_solve(drink_preferences; state_names = drink_names)
5×3 DataFrame
 Row │ State         Probability  Type
     │ String        Float64      String
─────┼──────────────────────────────────────
   1 │ CherryCola    0.242577     Ergodic
   2 │ DietCola      0.127557     Ergodic
   3 │ CaffeineFree  0.16803      Ergodic
   4 │ Classic       0.397503     Ergodic
   5 │ Zero          0.0643322    Ergodic
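A quick cross-check using nothing but the transition matrix: pushing any starting share vector through drink_preferences enough times converges to the same steady state markov_solve reports. This is plain Julia with no MCHammer required; propagate is a hypothetical helper.

```julia
using LinearAlgebra

drink_preferences = [
    0.60  0.03  0.15  0.20  0.02;
    0.02  0.40  0.30  0.20  0.08;
    0.15  0.25  0.30  0.25  0.05;
    0.15  0.02  0.10  0.70  0.03;
    0.15  0.30  0.05  0.05  0.45
]

# Power iteration: repeatedly apply dist * T until the shares stop moving.
function propagate(T, dist, steps)
    for _ in 1:steps
        dist = vec(dist' * T)
    end
    return dist
end

steady = propagate(drink_preferences, fill(0.2, 5), 500)
# steady ≈ [0.242577, 0.127557, 0.16803, 0.397503, 0.0643322],
# matching the markov_solve output above.
```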

A2. Time-Series Forecast

Next we push the chain forward in time to see how shares evolve.

using MCHammer, Plots

drink_periods = 10

ms_drink = markov_ts(drink_preferences, brand_share;
                     trials      = drink_periods,
                     state_names = drink_names)
5×13 DataFrame (columns: State, Type, then time steps 0–10)
   1 │ CherryCola    Ergodic  0.1    0.1625  0.19745  0.216356  0.227091  0.233383   0.237122   0.239352   0.240679   0.241465   0.241929
   2 │ DietCola      Ergodic  0.25   0.1975  0.17305  0.155916  0.144678  0.137692   0.133483   0.130995   0.129541   0.128699   0.128212
   3 │ CaffeineFree  Ergodic  0.05   0.1525  0.17075  0.17347   0.172522  0.171087   0.169947   0.169183   0.168707   0.168422   0.168255
   4 │ Classic       Ergodic  0.35   0.34    0.3555   0.3708    0.381555  0.38828    0.392261   0.39455    0.395846   0.396574   0.396982
   5 │ Zero          Ergodic  0.25   0.1475  0.10325  0.083458  0.074154  0.0695581  0.0671869  0.0659204  0.0652264  0.0648395  0.0646213

A2.1 Plot

Now we convert the forecast into a simple time-series chart.

plt_drink_ts = markov_ts_plot(
    ms_drink;
    title  = "Brand Share Over Time",
    ylabel = "Market Share",
    xlabel = "Years Out"
)
plt_drink_ts

A3. State Graph

Finally we switch from time-series to structure and visualize the full chain.

plt_drink_state = markov_state_graph(
    drink_preferences, ms_drink;
    state_names = drink_names,
    title       = "Brand Switching\n(Node Size = Year 10 Share)",
    size        = (700, 700)
)
plt_drink_state

Example B — Marital Status Over Time (6-State Ergodic)


Now we move from brand choices to life events that evolve more slowly.

The time series plot for the marital example shows how each relationship category rises or falls over the years. Instead of only seeing the final state, the chart reveals the path the population takes to get there. You can watch Single shrink or grow, Married stabilize or decline, Divorced expand at a steady pace and Widowed increase slowly as the population ages. Each line tracks one state across every forecast step, making it easy to see momentum, turning points and long term trends. It is the unfolding story of the system, not just the ending.


B1. Time-Series Forecast

We start by forecasting how the distribution across marital states changes over time.

using MCHammer, Plots

marital_names = ["Single", "CommonLaw", "Married",
                 "Separated", "Divorced", "Widowed"]

initial_marital = [0.291, 0.126, 0.443, 0.024, 0.062, 0.054]

marital_T = [
    0.65  0.15  0.12  0.03  0.01  0.04;
    0.00  0.50  0.33  0.08  0.04  0.05;
    0.00  0.00  0.13  0.40  0.42  0.05;
    0.00  0.20  0.09  0.02  0.65  0.04;
    0.00  0.15  0.10  0.00  0.70  0.05;
    0.00  0.25  0.25  0.00  0.00  0.50
]

ms_marital = markov_ts(marital_T, initial_marital;
                       trials      = 25,
                       state_names = marital_names)
6×28 DataFrame (columns: State, Type, then time steps 0–25)
   1 │ Single     Ergodic  0.291  0.18915  0.122948  0.0799159  0.0519453  0.0337645  0.0219469  0.0142655  0.00927256  0.00602717  0.00391766  0.00254648  0.00165521  0.00107589  0.000699326  0.000454562  0.000295465  0.000192053  0.000124834  8.11422e-5  5.27424e-5  3.42826e-5  2.22837e-5  1.44844e-5  9.41485e-6  6.11965e-6
   2 │ CommonLaw  Ergodic  0.126  0.13425  0.190534  0.206432  0.21031  0.211607  0.212143  0.212383  0.212501  0.212565  0.212603  0.212625  0.21264  0.212649  0.212654  0.212658  0.212661  0.212662  0.212663  0.212664  0.212664  0.212665  0.212665  0.212665  0.212665  0.212665
   3 │ Married    Ergodic  0.443  0.15595  0.148047  0.16162  0.165596  0.166397  0.16656  0.166575  0.166553  0.166526  0.166504  0.166489  0.166478  0.166471  0.166466  0.166463  0.166461  0.16646  0.166459  0.166458  0.166458  0.166458  0.166458  0.166458  0.166458  0.166457
   4 │ Separated  Ergodic  0.024  0.19649  0.0827243  0.0798043  0.0851562  0.0863246  0.0862269  0.0859784  0.0857682  0.0856147  0.0855087  0.0854377  0.0853908  0.08536  0.0853398  0.0853267  0.0853181  0.0853126  0.0853089  0.0853066  0.0853051  0.0853041  0.0853034  0.085303  0.0853027  0.0853025
   5 │ Divorced   Ergodic  0.062  0.25301  0.377586  0.389111  0.401188  0.414665  0.425065  0.432254  0.437063  0.440238  0.44232  0.44368  0.444566  0.445143  0.445519  0.445763  0.445922  0.446025  0.446092  0.446136  0.446164  0.446183  0.446195  0.446203  0.446208  0.446211
   6 │ Widowed    Ergodic  0.054  0.07115  0.0781611  0.0831158  0.0858049  0.0872412  0.0880576  0.0885442  0.0888425  0.0890287  0.0891465  0.0892217  0.0892699  0.089301  0.0893211  0.0893341  0.0893425  0.089348  0.0893516  0.0893539  0.0893554  0.0893563  0.089357  0.0893574  0.0893576  0.0893578

B1.1 Plot

Next we plot how each marital state moves over the forecast horizon.

plt_marital_ts = markov_ts_plot(
    ms_marital;
    title  = "Marital Status Over Time",
    ylabel = "Proportion of Population",
    xlabel = "Years Out"
)
plt_marital_ts

B2. State Graph

Finally we look at the structure of transitions and the long-run distribution.

plt_marital_state = markov_state_graph(
    marital_T, ms_marital;
    state_names = marital_names,
    title       = "Marital Status Transitions\n(Node Size = Year 25 Share)",
    size        = (700, 700)
)
plt_marital_state

Example C — Accounts Receivable & Default (Absorbing Chain)


Now we close with a financial example that has clear end states.

Receivables slide through aging buckets just like shoppers drift between cola brands, and even though each invoice feels like a one-off headache, the overall flow settles into a predictable pattern. A Markov chain captures how money moves from 30 days to 60, then 90, then either paid or default. Once you know those transition probabilities, you can jump straight to the end of the story. The model tells you the final state without tracking every month: how much cash will eventually be collected, how much will be lost and how long it takes for the portfolio to clear. If you want a simple shortcut to the endgame of your AR pipeline, a Markov chain gives you the final answer.


C1. Analytic Absorbing Solution

We begin by solving the absorbing chain to understand paid versus default outcomes.

using MCHammer, Plots

ar_names   = ["30days", "30_60days", "60_90days",
              "90daysPlus", "default", "paid"]

ar_current = [0.50, 0.30, 0.15, 0.05, 0.0, 0.0]

ar_rates = [
    0.50  0.10  0.10  0.05  0.05  0.20;
    0.00  0.30  0.15  0.25  0.075 0.225;
    0.00  0.00  0.25  0.25  0.04  0.46;
    0.00  0.00  0.00  0.30  0.20  0.50;
    0.00  0.00  0.00  0.00  1.00  0.00;
    0.00  0.00  0.00  0.00  0.00  1.00
]

ar_solve = markov_solve(ar_rates; state_names = ar_names)
6×5 DataFrame
 Row │ State       default   paid      ExpectedSteps  Type
     │ String      Float64   Float64   Float64        String
─────┼──────────────────────────────────────────────────────
   1 │ 30days      0.20649   0.79351   2.97007        Transient
   2 │ 30_60days   0.24102   0.75898   2.32653        Transient
   3 │ 60_90days   0.148571  0.851429  1.80952        Transient
   4 │ 90daysPlus  0.285714  0.714286  1.42857        Transient
   5 │ default     1.0       0.0       0.0            Absorbing
   6 │ paid        0.0       1.0       0.0            Absorbing
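These numbers can be reproduced by hand from the canonical-form blocks of ar_rates: Q is the 4×4 transient block and R the transient-to-{default, paid} block. A plain-Julia cross-check, no MCHammer required:

```julia
using LinearAlgebra

Q = [0.50 0.10 0.10 0.05;      # transient → transient (aging buckets)
     0.00 0.30 0.15 0.25;
     0.00 0.00 0.25 0.25;
     0.00 0.00 0.00 0.30]
R = [0.05  0.20;               # transient → {default, paid}
     0.075 0.225;
     0.04  0.46;
     0.20  0.50]

N = inv(I - Q)      # fundamental matrix
B = N * R           # column 1: P(default); column 2: P(paid)
t = N * ones(4)     # expected months to absorption per starting bucket

# B[1, :] ≈ [0.20649, 0.79351] and t[1] ≈ 2.97007, matching the table above.
```

For the simplest row, 90daysPlus: P(default) = 0.20 / (1 - 0.30) ≈ 0.285714 and expected steps = 1 / 0.70 ≈ 1.42857, exactly as reported.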

C2. Time-Series Forecast

Next we track how the portfolio migrates between buckets over time.

ms_ar = markov_ts(ar_rates, ar_current;
                  trials      = 12,
                  state_names = ar_names)
6×15 DataFrame (columns: State, Type, then time steps 0–12)
   1 │ 30days      Transient  0.5   0.25    0.125     0.0625     0.03125    0.015625   0.0078125   0.00390625  0.00195312   0.000976562  0.000488281  0.000244141  0.00012207
   2 │ 30_60days   Transient  0.3   0.14    0.067     0.0326     0.01603    0.007934   0.0039427   0.00196406  0.000979843  0.000489265  0.000244436  0.000122159  6.10617e-5
   3 │ 60_90days   Transient  0.15  0.1325  0.079125  0.0423313  0.0217228  0.0109602  0.00549265  0.00274582  0.00137169   0.000685211  0.000342349  0.000171081  8.55081e-5
   4 │ 90daysPlus  Transient  0.05  0.1525  0.126375  0.0806938  0.0460659  0.0248205  0.0129509   0.00663475  0.00336321   0.0016945    0.000850798  0.000426349  0.000213422
   5 │ default     Absorbing  0.0   0.0635  0.1223    0.162015   0.185417   0.198264   0.205043    0.208539    0.210318     0.211217     0.211669     0.211895     0.212009
   6 │ paid        Absorbing  0.0   0.2615  0.4802    0.61986    0.699514   0.742396   0.764759    0.77621     0.782014     0.784938     0.786405     0.787141     0.787509

C2.1 Plot

Now we plot how much of the portfolio sits in each bucket at each period.

plt_ar_ts = markov_ts_plot(
    ms_ar;
    title  = "Payment Status Over Time",
    ylabel = "Proportion of Receivables",
    xlabel = "Months Out"
)
plt_ar_ts

C3. State Graph

Finally we summarize the structure and final distribution in a single picture.

plt_ar_state = markov_state_graph(
    ar_rates, ms_ar;
    state_names = ar_names,
    title       = "Receivables Transitions\n(Node Size = Month 12 Share)",
    size        = (700, 700)
)
plt_ar_state

Sources & References

  • Eric Torkia, Decision Superhero Vol. 2, Chapter 10: Beyond Monte Carlo — When Uncertainty HAS Memory, Technics Publishing, 2026
  • Available on Amazon: https://a.co/d/4YlJFzY. Volumes 2 and 3 to be released in Fall 2025 and Summer 2026.
  • Eric Torkia, Decision Superhero Vol. 3, Chapter 5