Structural Estimation via NFXP: A Step-by-Step Guide to the Rust (1987) Bus Engine Model
Econometrics Series 1
This post walks through the Rust (1987) bus engine replacement problem and explains, step by step, how the Nested Fixed-Point (NFXP) algorithm estimates structural parameters. This time, I've attached Julia code that solves the problem. The goal is to build intuition for why each step is necessary and how the pieces connect.
Julia code
Companion script: rust_nfxp.jl. Requires Optim, ForwardDiff, and Distributions. Run in Julia: include("rust_nfxp.jl").
For more detailed step-by-step guidance, I've attached a PDF version here.
Download PDF (NFXP guidance)

Introduction: Structural vs. Reduced-Form
In empirical economics, there are broadly two approaches: reduced-form estimation and structural estimation.
Reduced-form econometrics usually revolves around causal inference, often with quasi-experimental designs. It aims to estimate how a treatment or policy $X$ causally affects an outcome $Y$. Think of the growing difference-in-differences (DiD) literature, instrumental variables (IV), regression discontinuity designs (RDD), and so on. Reduced-form work does not explicitly model agents’ optimization problems or equilibrium conditions. Instead, it focuses on cleanly identifying causal effects: “If $X$ changes, what happens to $Y$?”—with as little bias as possible, and with less reliance on heavy economic theory for behavior. In a fisheries context, a reduced-form question might be: “Did installing marine protected areas (MPAs) improve fish harvest in this region, comparing before and after?”
Structural econometrics goes further by explicitly modeling agents’ decision-making. It starts with a behavioral model grounded in economic theory and then estimates the model’s parameters from data. The question is “Why do people make these choices?” and the approach builds a theoretical structure around it.
The workflow looks like this. First, you build a model from economic theory—e.g. consumers maximize utility, or fishers maximize expected profit when choosing where and how much to harvest. Second, you derive equilibrium or first-order conditions. Third, you estimate the “deep parameters” in those conditions: preference parameters, technology parameters, cost parameters, and so on.
I’ll be honest: I didn’t really get “deep parameters” in my first year of the PhD. One of my econometrics professors, Aaron Smith—now an endowed professor at UC Berkeley—made it click in office hours. He said something like: “Hey Kyumin, think of parameters that have specific names from economic theory, like elasticities—parameters tied to theory-based behavior.” That helped a lot. Structural econometrics is often used in industrial organization (IO)–style problems with very explicit decision-making. In practice, random utility models (RUM) implemented via logit or mixed logit are common. A structural question in fisheries would be: “If we increased or decreased the size of MPAs, what would the effect of that size change be on harvest?”
There are many great guides and textbooks for reduced-form methods—e.g. Causal Inference: The Mixtape by Scott Cunningham and Mostly Harmless Econometrics by Angrist and Pischke. Practical, example-driven guides for structural methods seem rarer. So I decided to write about one of the core topics: the Rust (1987) bus engine model and the NFXP algorithm.
My First Structural Econometrics Journey: Dynamic Discrete Choice
My first deep dive into structural modeling was in a dynamic optimization course. For the midterm, I had to learn dynamic discrete choice models and, at the same time, the computational algorithms to solve them—in particular, the Nested Fixed-Point (NFXP) method.
The midterm was Harold Zurcher’s bus engine replacement problem—a classic dynamic discrete choice model in structural econometrics. The problem is whether or not to replace a bus engine: the optimal decision problem faced by Harold Zurcher, the real-world superintendent responsible for engine replacement. (John Rust later used Zurcher’s replacement records to estimate his structural model.)
I really struggled building this model in Julia at the time. Even understanding the concepts and the algorithm felt intimidating. I recently looked at the midterm code I wrote about four years ago, and now I feel I can explain what’s going on to others. So I’m writing this as a guide for anyone encountering this class of models for the first time. The “deep structural parameters” we estimate are the replacement cost ($RC$) and the marginal engine maintenance cost ($\theta_1$). I’ll spell those out below.
The Problem: What Are We Trying to Do?
Harold Zurcher is a bus maintenance superintendent. Every period, he looks at a bus engine’s accumulated mileage and makes a binary decision:
- $d = 0$: Keep the current engine (pay maintenance costs that increase with mileage)
- $d = 1$: Replace the engine (pay a large one-time cost; mileage resets to zero)
We observe Zurcher’s decisions across many buses and many time periods. Our goal is to recover the cost parameters that rationalize his behavior:
- $RC$: replacement cost (how expensive is a new engine?)
- $\theta_1$: marginal maintenance cost (how much does mileage hurt?)
Why “structural”? A reduced-form approach would simply regress replacement on mileage. Structural estimation goes further: we model Zurcher as a rational, forward-looking agent who solves a dynamic optimization problem. That lets us ask counterfactuals like “What would happen if engine prices doubled?”
The Model
State and Actions
The state variable is mileage, discretized into a finite grid of bins. The action is binary: $d \in \{0, 1\}$.
Flow Utility
Each period, Zurcher receives utility (really, negative cost):

$$u(x, d) = \begin{cases} -\theta_1 x & \text{if } d = 0 \\ -RC & \text{if } d = 1 \end{cases}$$
Transition Probabilities
After the action, mileage changes stochastically. If Zurcher keeps ($d = 0$), mileage increases by a random increment of $j$ bins with probability $p_j$. If he replaces ($d = 1$), mileage resets to 0 and then increases by the same random increment. This is captured by a transition matrix (built in make_transition_matrix()).
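To make the transition structure concrete, here is a minimal NumPy sketch (the companion make_transition_matrix() is in Julia; the function name, bin count, and three-point increment distribution below are illustrative assumptions, not the script's actual values):

```python
import numpy as np

def make_transition_matrices(p, N):
    """N x N transition matrices for keep (d=0) and replace (d=1).

    p[j] is the probability that mileage moves up j bins in one period
    (an assumed three-point increment distribution, Rust-style).
    """
    P_keep = np.zeros((N, N))
    for x in range(N):
        for j, pj in enumerate(p):
            # mass that would leave the grid piles up in the last bin
            P_keep[x, min(x + j, N - 1)] += pj
    # replacement resets mileage to 0 before the increment is drawn,
    # so every row of the replace matrix equals the keep-row of state 0
    P_replace = np.tile(P_keep[0], (N, 1))
    return P_keep, P_replace

P0, P1 = make_transition_matrices([0.3, 0.5, 0.2], N=10)
```

Each row is a probability distribution over next-period states, so every row sums to one.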
Two kinds of probability
(1) $p(x' \mid x, d)$: Transition probability—how the environment evolves. We estimate it in Step 1 by counting frequencies.
(2) $P(d \mid x)$: Choice probability—how Zurcher behaves. This is what the model predicts and what we match to data in Step 2. Don’t confuse them: $p$ is an input; $P$ is an output.
The Error Term: Why Gumbel?
Zurcher’s utility includes an unobserved shock $\epsilon_d$:

$$\tilde{u}(x, d, \epsilon) = u(x, d) + \epsilon_d$$

where $\epsilon_0$ and $\epsilon_1$ are i.i.d. Type I Extreme Value (Gumbel). Rust (1987) chose this distribution because of a convenient analytical result (McFadden, 1978): the expectation of the max over the shocks has a closed form. If we write choice-specific values as $v_d$, then

$$\mathbb{E}\left[\max_d \,(v_d + \epsilon_d)\right] = \log\left(\sum_d e^{v_d}\right) + \gamma$$

where $\gamma \approx 0.5772$ is Euler’s constant. So the Gumbel assumption turns the $\max$ operator into a smooth log-sum-exp, which keeps the Bellman equation differentiable and makes the choice probabilities logit.
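The closed form is easy to verify numerically. The Python sketch below (for illustration; the companion code is Julia) compares a Monte Carlo estimate of $\mathbb{E}[\max_d(v_d + \epsilon_d)]$ under Gumbel draws with the log-sum-exp formula, for arbitrary values $v_d$:

```python
import numpy as np

gamma = 0.5772156649            # Euler's constant
v = np.array([1.0, -0.5])       # arbitrary choice-specific values

# Monte Carlo: E[max_d (v_d + eps_d)] with i.i.d. standard Gumbel shocks
rng = np.random.default_rng(0)
eps = rng.gumbel(size=(1_000_000, 2))
mc = (v + eps).max(axis=1).mean()

# Closed form: log-sum-exp plus Euler's constant
closed = np.log(np.exp(v).sum()) + gamma

print(mc, closed)  # the two agree to a couple of decimals
```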
The Bellman Equation
Zurcher is forward-looking. Define the expected value function $EV(x)$ as the expected continuation value of landing at state $x$ after the transition, before the next shocks are realized:

$$EV(x) = \mathbb{E}_\epsilon\left[\max_{d \in \{0,1\}} \left\{ u(x, d) + \epsilon_d + \beta \sum_{x'} p(x' \mid x, d)\, EV(x') \right\}\right]$$
Applying the Gumbel result, the expectation over $\epsilon$ is

$$\mathbb{E}_\epsilon\left[\max_d \,\{ v(x, d) + \epsilon_d \}\right] = \log\left( \sum_d e^{v(x, d)} \right) + \gamma, \qquad v(x, d) = u(x, d) + \beta \sum_{x'} p(x' \mid x, d)\, EV(x')$$
Substituting back gives the Bellman fixed-point equation:

$$EV(x) = \log\left( e^{\,-\theta_1 x + \beta \sum_{x'} p(x' \mid x, 0)\, EV(x')} + e^{\,-RC + \beta \sum_{x'} p(x' \mid 0)\, EV(x')} \right) + \gamma$$
$EV$ appears on both sides—solving it is the job of the inner loop.
Where did $\epsilon$ go? The shocks were integrated out by taking the expectation. The Gumbel assumption gives a closed-form log-sum-exp instead of numerical simulation.
Choice Probabilities
Given $EV$, the probability that Zurcher replaces is logit:

$$P(1 \mid x) = \frac{e^{v(x, 1)}}{e^{v(x, 0)} + e^{v(x, 1)}}$$
The choice-specific values $v(x, 0)$ and $v(x, 1)$ have the explicit form (flow utility plus discounted expected continuation value):

$$v(x, 0) = -\theta_1 x + \beta \sum_{x'} p(x' \mid x, 0)\, EV(x'), \qquad v(x, 1) = -RC + \beta \sum_{x'} p(x' \mid 0)\, EV(x')$$

Here $EV(x')$ is the expected value at the next state (integrated over the Gumbel shocks), and $p(x' \mid x, d)$ is the transition probability from the current state to the next state given the action (so $p(x' \mid 0)$ is the transition from state 0 after replacement). These choice-specific values are what we match to the data when we form the likelihood.
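Given a solved $EV$ vector, the choice probabilities are one matrix-vector product per action plus a logit. A Python sketch using the notation above (function and variable names are illustrative, not the companion script's API):

```python
import numpy as np

def choice_probs(EV, P_keep, P_replace, RC, theta1, beta):
    """Replacement probability P(1|x) for every mileage state x.

    v(x,0) = -theta1*x + beta * sum_x' p(x'|x,0) EV(x')
    v(x,1) = -RC       + beta * sum_x' p(x'|0)   EV(x')
    """
    x = np.arange(len(EV))
    v0 = -theta1 * x + beta * P_keep @ EV
    v1 = -RC + beta * P_replace @ EV
    return 1.0 / (1.0 + np.exp(v0 - v1))  # logistic form of the binary logit

# with EV = 0 only flow utilities matter, so P(1|x) rises with mileage
N = 5
P = np.full((N, N), 1.0 / N)  # placeholder stochastic matrices
pr = choice_probs(np.zeros(N), P, P, RC=4.0, theta1=1.0, beta=0.95)
```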
Why Is the Bellman Equation Necessary?
In a static discrete choice model, you would just plug flow utilities into the logit formula:

$$P(d \mid x) = \frac{e^{u(x, d)}}{\sum_{d'} e^{u(x, d')}}$$

No Bellman needed—get choice probabilities, form the likelihood, done. But Zurcher is forward-looking. At mileage 30, deciding whether to replace depends on what happens at 40, 50, 60, and so on. The value of keeping today depends on the value of keeping tomorrow, and so on. That infinite recursion is captured by the Bellman equation. The Bellman is a computational bottleneck between the parameters and the likelihood: we need to solve it to evaluate “how good is this $(RC, \theta_1)$?”
The NFXP Algorithm
NFXP (Rust, 1987) estimates $(RC, \theta_1)$ by maximum likelihood, with the Bellman equation solved as an intermediate step. The name “Nested Fixed-Point” means: fixed-point iteration (solving the Bellman) is nested inside the optimization loop.
Overview
Step 1 (pre-estimation): Estimate the transition probabilities $p_j$ from the data by frequency counting.
Step 2 (NFXP): Fix $\hat{p}$ and solve:

$$\max_{RC,\, \theta_1} \; \sum_{i, t} \log P(d_{it} \mid x_{it};\, RC, \theta_1)$$
Each likelihood evaluation requires solving the Bellman equation.
Step 1: Transition Probability Estimation
Look at all periods where Zurcher did not replace ($d = 0$), and count how much mileage increased. The frequency estimator is

$$\hat{p}_j = \frac{\#\{\text{keep-periods with } x_{t+1} - x_t = j\}}{\#\{\text{keep-periods}\}}$$

This works under Rust’s conditional independence assumption: the transition probabilities are independent of the unobserved shocks $\epsilon$. In the code this is estimate_transition().
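A minimal Python sketch of the frequency estimator (the companion estimate_transition() is in Julia; the toy single-bus panel below is made up for illustration):

```python
import numpy as np

def estimate_transition(x, d, n_increments=3):
    """Frequency estimator for the mileage-increment probabilities p_j.

    x: mileage state per period; d: decision per period (0 keep, 1 replace).
    Only keep-periods identify the increment: after d=0 the increment is
    x[t+1] - x[t]; after d=1 mileage resets, so those periods are dropped.
    """
    dx = x[1:] - x[:-1]
    keep = d[:-1] == 0
    counts = np.bincount(dx[keep], minlength=n_increments)
    return counts / counts.sum()

# toy panel: five keep-transitions with increments 1,2,1,0,1, one replacement
x = np.array([0, 1, 3, 4, 4, 5, 0, 1])
d = np.array([0, 0, 0, 0, 0, 1, 0, 0])
p_hat = estimate_transition(x, d)
print(p_hat)  # [1/6, 4/6, 1/6]
```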
Step 2: The Nested Loops
Outer loop (optimizer): An optimizer (e.g. Nelder-Mead) searches over $(RC, \theta_1)$ to maximize the log-likelihood. At each candidate:
Inner loop (Bellman solver): Given a candidate $(RC, \theta_1)$, solve the Bellman equation by contraction mapping iteration:
- Initialize $EV^{(0)} = 0$ (vector of zeros, one entry per state).
- Compute $EV^{(k+1)}$ from $EV^{(k)}$ using the Bellman equation.
- Check convergence: $\| EV^{(k+1)} - EV^{(k)} \|_\infty < \text{tol}$.
- If converged, return $EV^{(k+1)}$; otherwise go back to the second step.
This is solve_EV() in the code.
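The steps above can be sketched in Python as follows (an illustrative port of the Julia solve_EV(); the bin count, increment distribution, and parameter values are assumptions for the demo):

```python
import numpy as np

GAMMA = 0.5772156649  # Euler's constant

def solve_EV(RC, theta1, beta, P_keep, P_replace, tol=1e-10, max_iter=10_000):
    """Contraction iteration for the fixed point of
    EV(x) = log( exp(v(x,0)) + exp(v(x,1)) ) + gamma."""
    x = np.arange(P_keep.shape[0])
    EV = np.zeros(len(x))
    for _ in range(max_iter):
        v0 = -theta1 * x + beta * P_keep @ EV    # keep
        v1 = -RC + beta * P_replace @ EV         # replace
        EV_new = np.logaddexp(v0, v1) + GAMMA    # smooth max (log-sum-exp)
        if np.max(np.abs(EV_new - EV)) < tol:    # sup-norm stopping rule
            return EV_new
        EV = EV_new
    raise RuntimeError("Bellman iteration did not converge")

# assumed transition structure and parameters, for illustration
N, p = 10, [0.3, 0.5, 0.2]
P0 = np.zeros((N, N))
for xi in range(N):
    for j, pj in enumerate(p):
        P0[xi, min(xi + j, N - 1)] += pj
P1 = np.tile(P0[0], (N, 1))
EV = solve_EV(RC=5.0, theta1=0.5, beta=0.95, P_keep=P0, P_replace=P1)
```

The solved $EV$ is decreasing in mileage: higher mileage means higher maintenance costs ahead.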
The inner loop always converges. The Bellman operator is a contraction (because $\beta < 1$), so we get a fixed point for whatever $(RC, \theta_1)$ we plug in. Convergence of the inner loop does not mean we found the right parameters—only “given these parameters, this is what rational behavior looks like.” Finding the right parameters is the outer loop’s job.
From $EV$ to likelihood: Compute choice probabilities from $EV$, then evaluate the log-likelihood:

$$\ell(RC, \theta_1) = \sum_{i, t} \log P(d_{it} \mid x_{it};\, RC, \theta_1)$$

The optimizer receives this scalar and updates $(RC, \theta_1)$. This is neg_log_likelihood().
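Putting the two loops together: the sketch below simulates a small panel at known parameters, then runs Nelder-Mead over $(RC, \theta_1)$, solving the Bellman inside every likelihood evaluation. Everything here (SciPy's Nelder-Mead standing in for Optim, the data sizes, the parameter values) is illustrative Python, not the companion Julia script:

```python
import numpy as np
from scipy.optimize import minimize

GAMMA = 0.5772156649

def solve_v(theta, beta, P0, P1, tol=1e-9):
    """Inner loop: choice-specific values at the Bellman fixed point."""
    RC, theta1 = theta
    x = np.arange(P0.shape[0])
    EV = np.zeros(len(x))
    for _ in range(10_000):
        v0 = -theta1 * x + beta * P0 @ EV
        v1 = -RC + beta * P1 @ EV
        EV_new = np.logaddexp(v0, v1) + GAMMA
        if np.max(np.abs(EV_new - EV)) < tol:
            break
        EV = EV_new
    return v0, v1

def neg_log_likelihood(theta, xs, ds, beta, P0, P1):
    """Solve the Bellman, then score the observed choices with the logit."""
    v0, v1 = solve_v(theta, beta, P0, P1)
    p1 = 1.0 / (1.0 + np.exp(v0[xs] - v1[xs]))   # P(replace | x)
    p = np.where(ds == 1, p1, 1.0 - p1)
    return -np.sum(np.log(p))

# transition matrices and a panel simulated at "true" parameters (5.0, 0.5)
N, beta = 10, 0.95
inc = np.array([0.3, 0.5, 0.2])
P0 = np.zeros((N, N))
for xi in range(N):
    for j, pj in enumerate(inc):
        P0[xi, min(xi + j, N - 1)] += pj
P1 = np.tile(P0[0], (N, 1))

rng = np.random.default_rng(1)
v0, v1 = solve_v((5.0, 0.5), beta, P0, P1)
p_rep = 1.0 / (1.0 + np.exp(v0 - v1))
xs, ds, x = [], [], 0
for _ in range(10_000):
    d = int(rng.random() < p_rep[x])
    xs.append(x); ds.append(d)
    x = min((0 if d else x) + rng.choice(len(inc), p=inc), N - 1)
xs, ds = np.array(xs), np.array(ds)

# outer loop: Nelder-Mead over (RC, theta1)
res = minimize(neg_log_likelihood, x0=np.array([3.0, 0.3]),
               args=(xs, ds, beta, P0, P1), method="Nelder-Mead")
print(res.x)  # estimated (RC, theta1)
```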
Why solve the Bellman every time? When $(RC, \theta_1)$ changes, the whole value function changes. A higher $\theta_1$ makes maintenance more expensive at every mileage, which changes the optimal replacement threshold and the future value of every state. The old $EV$ is no longer valid—we solve it from scratch.
Inference: Standard Errors
After finding $(\widehat{RC}, \hat{\theta}_1)$, we need standard errors. Under standard conditions, the MLE $\hat{\theta}$ is asymptotically normal:

$$\sqrt{n}\,(\hat{\theta} - \theta_0) \xrightarrow{d} N\!\left(0,\; I(\theta_0)^{-1}\right)$$

where $I(\theta_0)$ is the information matrix. In practice we estimate the variance as the inverse Hessian of the negative log-likelihood at $\hat{\theta}$:

$$\widehat{\mathrm{Var}}(\hat{\theta}) = H^{-1}, \qquad H = \left.\frac{\partial^2 (-\ell)}{\partial \theta\, \partial \theta'}\right|_{\theta = \hat{\theta}}$$

Standard errors are $SE_j = \sqrt{[H^{-1}]_{jj}}$. For testing we use the $t$-statistic $t_j = \hat{\theta}_j / SE_j$.
In the code we use ForwardDiff.hessian() to compute $H$ via automatic differentiation (compute_se(), print_inference()).
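In Python one would typically fall back on a finite-difference Hessian instead of automatic differentiation. The sketch below checks the $SE_j = \sqrt{[H^{-1}]_{jj}}$ recipe on a toy likelihood with a known answer ($n$ i.i.d. $N(\mu, 1)$ draws, where the Hessian is exactly $n$, so $SE(\hat{\mu}) = 1/\sqrt{n}$):

```python
import numpy as np

def numerical_hessian(f, theta, h=1e-5):
    """Central finite-difference Hessian (the Julia script instead gets H
    exactly from ForwardDiff.hessian via automatic differentiation)."""
    k = len(theta)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei = np.zeros(k); ei[i] = h
            ej = np.zeros(k); ej[j] = h
            H[i, j] = (f(theta + ei + ej) - f(theta + ei - ej)
                       - f(theta - ei + ej) + f(theta - ei - ej)) / (4 * h * h)
    return H

# sanity check: n i.i.d. N(mu, 1) draws -> H = n, so SE(mu_hat) = 1/sqrt(n)
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=400)
nll = lambda th: 0.5 * np.sum((data - th[0]) ** 2)  # neg log-lik up to a constant
mu_hat = np.array([data.mean()])
se = np.sqrt(np.linalg.inv(numerical_hessian(nll, mu_hat))[0, 0])
print(se)  # 1/sqrt(400) = 0.05
```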
Optimal Policy
Once we have $(\widehat{RC}, \hat{\theta}_1)$, the replacement probability $P(1 \mid x)$ at each mileage state describes Zurcher’s optimal behavior. With Gumbel errors the policy is probabilistic rather than a sharp threshold. In practice you typically see:
- At low mileage: $P(1 \mid x) \approx 0$ (maintenance is cheap)
- Around some threshold: $P(1 \mid x)$ rises steeply
- At high mileage: $P(1 \mid x) \approx 1$ (maintenance is so costly that replacement is almost certain)
The threshold depends on the ratio $RC / \theta_1$: a higher $RC$ (expensive replacement) delays replacement; a higher $\theta_1$ (expensive maintenance) accelerates it.
NFXP vs. MPEC
NFXP solves the Bellman to completion at every parameter guess. An alternative is MPEC (Su and Judd, 2012), which reformulates the problem as

$$\max_{RC,\, \theta_1,\, EV} \; \sum_{i, t} \log P(d_{it} \mid x_{it};\, EV) \quad \text{subject to} \quad EV = T(EV;\, RC, \theta_1)$$

where $T$ is the Bellman operator. Instead of an inner loop that solves the Bellman, MPEC treats $EV$ as a decision variable and the Bellman equation as a constraint. NFXP has a clear inner loop (full convergence each time); MPEC often avoids that and can be faster for large state spaces, but the coding is more involved (constrained optimizer, gradients, sparsity). The key insight of MPEC is that intermediate iterations don’t need an exactly solved Bellman—only the final solution must satisfy both optimality of $(RC, \theta_1)$ and the Bellman equation.
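The constraint view can be illustrated in miniature: instead of iterating the contraction, hand the system $T(EV) - EV = 0$ to a Newton-type root finder, the way MPEC hands it to a constrained optimizer. (A full MPEC would also make $(RC, \theta_1)$ variables and maximize the likelihood; this Python sketch holds them fixed, and all model values are illustrative.)

```python
import numpy as np
from scipy.optimize import root

GAMMA = 0.5772156649
N, beta, RC, theta1 = 10, 0.95, 5.0, 0.5   # parameters held fixed here
inc = [0.3, 0.5, 0.2]
P0 = np.zeros((N, N))
for xi in range(N):
    for j, pj in enumerate(inc):
        P0[xi, min(xi + j, N - 1)] += pj
P1 = np.tile(P0[0], (N, 1))
x = np.arange(N)

def T(EV):
    """The Bellman operator of the text."""
    v0 = -theta1 * x + beta * P0 @ EV
    v1 = -RC + beta * P1 @ EV
    return np.logaddexp(v0, v1) + GAMMA

# MPEC-style: EV is an unknown vector, T(EV) - EV = 0 is a constraint
sol = root(lambda EV: T(EV) - EV, np.zeros(N))

# NFXP-style: iterate the contraction to the same fixed point
EV = np.zeros(N)
for _ in range(3000):
    EV = T(EV)
print(np.max(np.abs(sol.x - EV)))  # essentially zero
```

Both routes reach the same fixed point; they differ in how the work is organized, which is exactly the NFXP-vs-MPEC distinction.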
References
- Angrist, J. and Pischke, J.-S. (2009). Mostly Harmless Econometrics: An Empiricist’s Companion. Princeton University Press.
- Aguirregabiria, V. and Mira, P. (2010). “Dynamic Discrete Choice Structural Models: A Survey.” Journal of Econometrics, 156(1), 38–67.
- Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press.
- McFadden, D. (1978). “Modelling the Choice of Residential Location.” In Spatial Interaction Theory and Residential Location, ed. A. Karlqvist et al.
- Rust, J. (1987). “Optimal Replacement of GMC Bus Engines: An Empirical Model of Harold Zurcher.” Econometrica, 55(5), 999–1033.
- Su, C.-L. and Judd, K. (2012). “Constrained Optimization Approaches to Estimation of Structural Models.” Econometrica, 80(5), 2213–2230.