# The Lognormal Bridge

The lognormal process is the default assumption used to model the distribution of asset prices in most asset pricing models, and for the distribution of most of the underlying risk factors in finance. Although a lognormal process precludes negative values, this fact did not seem to present any practical concern because such values were assumed to be out of range. After all, stock prices, FX rates, real estate values, commodities prices, and almost every asset class one can think of couldn’t have a negative price, and interest rates would always be positive (obviously!).

That all changed after the financial crisis of 2008, when a number of benchmark rates in many countries turned negative and stayed there. More recently we saw oil prices drop below zero for a short time, as supply far exceeded demand, storage capacity was nonexistent and holders of expiring futures contracts did not want to take delivery of barrels of oil and thus were willing to pay to get out of those contracts.

Even when interest rates are positive but near zero, the assumption of a lognormal price distribution can produce astronomical volatilities that severely skew a number of risk measurements, rendering them unusable.

Moreover, even if market conditions revert to “normalcy”, meaning benchmark interest rates return to positive territory and stay there, the negative rate values will persist in the historical time series that are needed to compute HVAR (Historical VAR), and to calculate volatilities and correlations. So the problem cannot be ignored, now or in the future.

This note discusses the use of a “lognormal bridge” to estimate interest rate statistics and to calculate HVAR. It can also be applied in the case of negative asset prices to produce an option pricing model that seamlessly transitions from a lognormal process to a normal process as prices fall, an application to be discussed in a separate note.

**Low/Negative Interest Rates**

When interest rates are positive but near zero, small movements can produce enormous lognormal volatility calculations; a two basis point move is inconsequential when the level of interest rates is 5.0%, but when rates are only 0.05%, a 2 bp change in one day is a 40% movement and so quite significant. Suppose that rates are now safely above zero, say at 3.0%. When an HVAR calculation comes across that day in the past, it will apply a 40% shock to the current level of 3.0%, for a change of 120 basis points. Clearly, the HVAR calculation will jump, especially if the historical rates had remained at that low level for a few weeks! Chances are that every date from that period would produce values that end up in the tail of the HVAR P&L distribution.

But in a 3% rate environment, we know that a two basis point move is probably not significant. So how should we deal with this scenario?
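The arithmetic above can be checked in a few lines (a minimal sketch using the illustrative rate levels from the text; the function name is mine):

```python
import math

def lognormal_shock(r_then, r_prev, r_now):
    """Apply the historical lognormal return of one day to today's rate level."""
    log_ret = math.log(r_then / r_prev)          # historical log-return
    return r_now * math.exp(log_ret) - r_now     # resulting change in today's rate

# A 2 bp move at a 5.00% rate level, applied to today's 3% rate:
print(lognormal_shock(0.0502, 0.0500, 0.03))     # ≈ 0.00012, a harmless 1.2 bp shock

# The same 2 bp move at a 0.05% level is a 40% return, so applied to 3% it gives:
print(lognormal_shock(0.0007, 0.0005, 0.03))     # 0.012, a 120 bp shock
```

The same two-basis-point historical move thus produces either a negligible or an enormous simulated shock, depending only on the rate level at which it occurred.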

**Ad hoc methods**

**#1 Ignore the points you don’t like.** One approach is to simply ignore such dates. So, in your HVAR simulations, low and negative rates are treated as “holidays”. That is, if rates drop below some threshold – say, 0.10% – those dates are ignored. That might be acceptable if interest rates made only a brief sojourn into low/negative territory, but we know that certain benchmark rates (for example, on Japanese and Swiss Treasuries) have spent long stretches below zero. In this case, ignoring them is impractical.

**#2 Apply a linear shift.** This method shifts interest rates upward by an amount that is sufficient to ensure the shifted value is safely above zero. Then, all risk calculations such as volatility and historical shocks are computed using the shifted rates.

However, this approach has a number of drawbacks:

- Since all of the rates in the historical series are shifted by this same amount, you will be underestimating the risk of ordinary, nonnegative interest rate moves. This can be easily seen by comparing the two return calculations

  *(Rₜ − Rₜ₋₁) / Rₜ₋₁*  versus  *((Rₜ + R₀) − (Rₜ₋₁ + R₀)) / (Rₜ₋₁ + R₀) = (Rₜ − Rₜ₋₁) / (Rₜ₋₁ + R₀)*,

  where *R₀* is the fixed amount of the shift. *R₀* drops out of the numerator, but not the denominator, so every shifted return, and hence the estimated volatility, is smaller than its unshifted counterpart.

- Also, if rates decline further and the fixed shift *R₀* needs to be increased, what value should be used? It’s clearly arbitrary, and arbitrary has no business in risk management.
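The damping effect of the shift can be seen numerically (a minimal sketch; the shift of 3% is an arbitrary illustrative value, as is the function name):

```python
import math

def log_return(r_curr, r_prev, shift=0.0):
    """Lognormal return, optionally computed on shifted rates."""
    return math.log((r_curr + shift) / (r_prev + shift))

# A 25 bp move at a 2% rate level, with and without a fixed shift R0 = 3%
print(log_return(0.0225, 0.0200))        # ≈ 0.118: an 11.8% return
print(log_return(0.0225, 0.0200, 0.03))  # ≈ 0.049: the shift cuts the return by more than half
```

The same market move registers as less than half the risk once the shift is applied, which is exactly the understatement described above.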

**#3 Switch to a Normal Process.** Of course, if we assume a normal process, any difficulty arising from low/negative rates vanishes. However, you are then stuck with that assumption of normality even for higher rates, which will understate risk. For a global portfolio, it is easy to envision a situation where some of the interest rate exposures in the portfolio can be safely modeled by assuming lognormality, while others are better handled using a normal process. This approach is more justifiable than the other two in that there is a *bona fide* mathematical model and logic underpinning it; however, it may be more intrusive than one would like because you’re *locked in*: “Now that rates are well above zero, I don’t want to keep using a normal model for this currency.”

**The Lognormal Bridge**

This new method is closest to the *Switch to a Normal Process* approach, but it seamlessly handles the transition back and forth between normal and lognormal, and can be implemented for all your interest rates at once. If rates are safely above zero, there is no effect; it only kicks in when it needs to. And, unlike the *Shift Method*, it has no arbitrary parameters.

Consider the nonlinear mapping (transform) and its inverse:

*g(r) = r* for *r ≥ r₀*, and *g(r) = r₀ exp(r/r₀ − 1)* for *r < r₀*;

*r = g* for *g ≥ r₀*, and *r = r₀ (1 + ln(g/r₀))* for *g < r₀*.

This has the property that it is everywhere continuous to first order: at the breakpoint *r₀*, we have *g(r₀) = r₀* and *g′(r₀) = 1*. Below, we graph both *y = r* and *y = g(r)* with the breakpoint set at *r₀ = 1*.

Note that we always have *g(r) > 0*, so we can require the following assumption to hold everywhere:

**Assumption:** *g = g(r)* is lognormal. If we also choose *r₀ = 1*, we immediately obtain that *ln g(r) = r − 1* for *r < 1*, while *g(r) = r* for *r ≥ 1*.

Thus if *g = g(r)* is lognormal, *r* is lognormal above *r = 1*, and normal below *r = 1*. *This is exactly what we want*.
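The transform and its inverse can be sketched in a few lines of Python (a minimal illustration; the function names are mine, and rates are assumed to be quoted in percent so that the breakpoint is *r₀ = 1*):

```python
import math

R0 = 1.0  # breakpoint (rates quoted in percent, so 1.0 means 1%)

def g(r):
    """Lognormal-bridge transform: identity above the breakpoint, exponential below."""
    return r if r >= R0 else R0 * math.exp(r / R0 - 1.0)

def g_inv(y):
    """Inverse transform; y = g(r) is always positive."""
    return y if y >= R0 else R0 * (1.0 + math.log(y / R0))

# Always positive, even for negative rates, and exactly invertible
print(g(1.0), g(0.0), g(-1.0))   # 1.0, e**-1 ≈ 0.3679, e**-2 ≈ 0.1353
print(g_inv(g(-1.0)))            # -1.0: round-trips through negative territory
```

Above the breakpoint the mapping is the identity, so it has no effect on rates that are safely positive; below it, *g* stays strictly positive no matter how negative *r* becomes.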

**Application I:** *Computing volatility.* Given a time series *{r₁, r₂, …, r_N}*, we replace it with the “mapped” one *{g₁, g₂, …, g_N}*. From the equation above, we see that the estimated standard deviation of *g* can be interpreted as both the normal volatility and the lognormal volatility in the respective regions *r ≤ 1* and *r ≥ 1*. Similarly, we use the transformed time series to compute *correlations* with other factors, once again treating the *g*-series as lognormal.

In practice, we don’t really replace the time series, but instead only modify the return calculation: the return at time *t* becomes *ln(gₜ/gₜ₋₁)*, which reduces to the ordinary lognormal return *ln(rₜ/rₜ₋₁)* when both rates are at or above 1, and to the absolute change *rₜ − rₜ₋₁* when both are below 1.
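The modified return calculation can be sketched as follows (same conventions as before: rates in percent, breakpoint *r₀ = 1*, illustrative function names):

```python
import math

R0 = 1.0  # breakpoint (rates in percent)

def g(r):
    """Lognormal-bridge transform."""
    return r if r >= R0 else R0 * math.exp(r / R0 - 1.0)

def bridge_return(r_curr, r_prev):
    """Return computed on the transformed rates rather than the raw rates."""
    return math.log(g(r_curr) / g(r_prev))

# Above the breakpoint this is the ordinary lognormal return...
print(bridge_return(3.06, 3.00))     # ln(3.06/3.00) ≈ 0.0198
# ...below it, it collapses to the absolute change (a normal return),
# and it is perfectly well defined for negative rates.
print(bridge_return(0.07, 0.05))     # 0.02
print(bridge_return(-0.25, -0.30))   # 0.05
```

Only this one function changes; the historical data itself is left untouched.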

**Example:** *Comparing volatilities computed the standard way, and by using the transform*

In this example, historical interest rates never fell below zero, so we could still compute the lognormal volatility. However, that volatility becomes artificially inflated when rates get very low. The green line is the interest rate level (reading off the right axis), and the horizontal red line is the breakpoint, *r = 1*. The red columns represent the rolling volatility of the transformed interest rate time series, and the blue columns are the volatilities computed from the untransformed rates. We can see how the volatility calculated from the untransformed rates persists at very high levels once interest rates drop below 1.0, even though this is not reflected in the actual movements of the interest rate; it is simply an artifact of lognormality for low rates. When rates remain at or above 1.0, the two volatility calculations are the same, but they begin to differ once rates drop below 1.0.

**Comparison of Standard vs. Transformed**

Of course, had rates dropped below zero the lognormal calculation would have failed completely.

**Application II:** *HVAR.* HVAR requires a historical time series of returns, which is then applied to the current interest rate. Now, however, the returns are extracted from the *g*-series; they are applied to the transformed current rate, and the result is inverse-transformed back to a rate.
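A single HVAR scenario under this scheme can be sketched as (a minimal illustration with the same conventions as above; the function names are hypothetical):

```python
import math

R0 = 1.0  # breakpoint (rates in percent)

def g(r):
    """Lognormal-bridge transform."""
    return r if r >= R0 else R0 * math.exp(r / R0 - 1.0)

def g_inv(y):
    """Inverse transform."""
    return y if y >= R0 else R0 * (1.0 + math.log(y / R0))

def hvar_scenario(r_now, r_hist_curr, r_hist_prev):
    """Shock today's rate with one historical return, computed through the bridge."""
    hist_return = math.log(g(r_hist_curr) / g(r_hist_prev))   # return from the g-series
    return g_inv(g(r_now) * math.exp(hist_return))            # apply and invert

# The 2 bp historical move at a 0.05% level, applied to today's 3% rate:
# a ~6 bp shock rather than the 120 bp artifact of raw lognormal returns.
print(hvar_scenario(3.00, 0.07, 0.05))   # ≈ 3.0606

# If today's rate sits below the breakpoint, the shock is applied additively:
print(hvar_scenario(0.50, 0.05, 0.07))   # 0.48, i.e. a 2 bp decline
```

Because the inverse transform is applied at the end, the simulated rate can legitimately land below zero when the scenario calls for it.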

**Conclusion**

The lognormal bridge is a simple and natural approach to the challenges presented by low and negative interest rates. Using this transform, all of the difficulties of the *ad hoc* approaches simply *vanish*. There is no need to ignore or massage historical data; only the way in which returns are calculated from that data is adjusted.

More generally, this approach can be applied to other situations in which asset prices turn negative, so that volatilities and correlations can still be estimated from the actual time series. Moreover, we can leverage this approach to create an options model that similarly transitions between the lognormal and normal processes (to be described in a future note).
