Numerical Analysis

Monte Carlo Method

"Evaluating uncertainty" and "global optimization", unlocked by random numbers (dice).

1. Introduction: From Determinism to Probability

Numerical methods such as differential-equation solvers, Newton's method, and the simplex method always derive a single correct answer (a deterministic solution) once the input is fixed. In the real world, however, unpredictable noise (variance) is unavoidable in physical phenomena and electronic components.

How does a "theoretically correct design" hold up against real-world noise? How do we solve equations "too complex for derivative calculations"? Enter the Monte Carlo method: brute-force thousands or tens of thousands of simulations driven by random numbers, and derive approximate solutions through the power of probability and statistics.
This article explains the practical applications of this powerful method from two perspectives: Engineering (Yield Evaluation) and Mathematics (Global Optimization).

2. Practice I: Component Tolerance & Yield Simulation

Consider a circuit that divides an input voltage $V_{in} = 10\text{V}$ with two resistors $R_1, R_2$ (nominal value $1000\Omega$ each). The ideal output from the voltage-divider formula is exactly $5.0\text{V}$, but real resistors have a manufacturing tolerance of about $\pm 5\%$.

Assuming the acceptable output specification is $4.85\text{V} \sim 5.15\text{V}$, let's use the Monte Carlo method to simulate how many defective units this component variation produces. We generate resistors virtually with random numbers and test (calculate) 20 products.
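A minimal sketch of this virtual production test in Python. The uniform tolerance distribution, the fixed seed, and the variable names are my assumptions; the article specifies only the nominal values, the tolerance, and the 20-unit batch.

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed so the run is reproducible

V_IN = 10.0                     # input voltage [V]
R_NOM = 1000.0                  # nominal resistance [ohm]
TOL = 0.05                      # +/-5% manufacturing tolerance
N = 20                          # number of virtual products to test
SPEC_LO, SPEC_HI = 4.85, 5.15   # acceptable output range [V]

# "Roll the dice": draw each resistor uniformly within its tolerance band
r1 = rng.uniform(R_NOM * (1 - TOL), R_NOM * (1 + TOL), N)
r2 = rng.uniform(R_NOM * (1 - TOL), R_NOM * (1 + TOL), N)

# "Substitute into the equation": voltage-divider output of each unit
v_out = V_IN * r2 / (r1 + r2)

# "Count the results": how many units meet the specification
passed = (v_out >= SPEC_LO) & (v_out <= SPEC_HI)
yield_rate = passed.mean()

print(f"yield: {passed.sum()}/{N} = {yield_rate:.0%}")
```

With a different seed (a different production run) the yield changes, which is exactly the uncertainty the simulation is meant to expose.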

Visualizing Uncertainty:
Looking at the graph, the results scatter around the ideal $5.0\text{V}$ (green dashed line), and some points mercilessly break through the red limit lines.
The strength of the Monte Carlo method is that it evaluates design robustness (yield) directly, through the simple loop of "roll the dice, substitute into the equations, count the results", with no complex probability-density integrals required.

3. Practice II: Solving Nonlinear Equations via Global Optimization

Next, let's use the Monte Carlo method as a "mathematical solver." We will find the solution (intersection) of the following system of two nonlinear equations.

$$ f_1(x, y) = x^2 + y^2 - 4 = 0 \quad \text{(circle of radius 2)} $$

$$ f_2(x, y) = x - y - 1 = 0 \quad \text{(straight line)} $$

Newton's method is powerful, but with a poor initial guess it can diverge or land on the wrong root, the root-finding analogue of getting trapped in a false valley (local minimum). So we abandon derivative calculations entirely and recast the problem as minimizing the objective function $E(x,y)$:

$$ E(x, y) = (x^2 + y^2 - 4)^2 + (x - y - 1)^2 $$

We indiscriminately scatter 2,000 random points (darts) across the space and brute-force search for the location where $E(x,y)$ is closest to zero. The graph below visualizes this search process.
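The dart-throwing search can be sketched in a few lines. The $[-3, 3]^2$ search box and the seed are my assumptions; the 2,000-sample count comes from the article.

```python
import numpy as np

def E(x, y):
    """Objective: sum of squared residuals of both equations; zero exactly at a solution."""
    return (x**2 + y**2 - 4)**2 + (x - y - 1)**2

rng = np.random.default_rng(0)
pts = rng.uniform(-3.0, 3.0, size=(2000, 2))  # 2,000 darts in the box [-3, 3]^2

errors = E(pts[:, 0], pts[:, 1])   # evaluate E at every dart at once
best = pts[np.argmin(errors)]      # the dart where E is closest to zero

print(f"best point: ({best[0]:.3f}, {best[1]:.3f}), E = {errors.min():.4f}")
```

No derivatives anywhere: one vectorized evaluation of $E$ and an `argmin` replace the entire calculus machinery, at the cost of only approximate accuracy.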

Dr. WataWata Insight: The Aesthetics of Hybrid Methods

Looking at the graph above, from the gray points hammered into the space, the excellent points that nearly satisfy the equations (orange) are picked out, and the best approximate solution (large red circle) brilliantly lands on the intersection of the circle and the line.

Chasing high-precision solutions with the Monte Carlo method alone causes an explosion in computation. But if we first make a global guess with random numbers, then hand the approximate solution to Newton's method as its initial value, even ill-conditioned equations can be struck at their true root in an instant. The hybrid of muddy random numbers and refined derivatives: this is the ultimate algorithm design in numerical computation.
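A sketch of that hybrid for this specific system, assuming the same search box and seed as before (the iteration cap and tolerance are also my choices): Monte Carlo supplies the rough guess, then plain Newton's method with the analytic Jacobian polishes it to machine precision.

```python
import numpy as np

def F(p):
    """Residual vector of the system f1 = f2 = 0."""
    x, y = p
    return np.array([x**2 + y**2 - 4, x - y - 1])

def J(p):
    """Analytic Jacobian of F."""
    x, y = p
    return np.array([[2 * x, 2 * y],
                     [1.0,  -1.0]])

# Stage 1: global Monte Carlo guess (muddy but derivative-free)
rng = np.random.default_rng(0)
pts = rng.uniform(-3.0, 3.0, size=(2000, 2))
errs = (pts[:, 0]**2 + pts[:, 1]**2 - 4)**2 + (pts[:, 0] - pts[:, 1] - 1)**2
p = pts[np.argmin(errs)]

# Stage 2: local Newton refinement (refined, quadratically convergent)
for _ in range(20):
    p = p - np.linalg.solve(J(p), F(p))
    if np.linalg.norm(F(p)) < 1e-12:
        break

print(f"refined solution: ({p[0]:.6f}, {p[1]:.6f})")
```

Starting Newton from the Monte Carlo winner means it begins inside the basin of one of the two true intersections, so the divergence risk of a blind initial guess is gone.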