
Volatility Cones - A Secret Weapon in Options Trading?

This post was originally published here

Taking inspiration from this old paper published by the Montreal Exchange, below is a simple model demonstrating how to calculate volatility cones in Python.

Volatility cones can be used to forecast volatility over different time horizons, the idea being that when the implied volatility on an option is outside of the bounds of our cones, it is either high (overpriced) or low (underpriced) relative to our historical norms. Volatility cones also demonstrate the mean-reverting nature of volatility over time, whereby options closer to expiry exhibit greater swings in volatility than those expiring further in the future.

We start by providing a historical volatility calculation. To keep the model simple, we use a rudimentary time measure of 20 business days per month (rather than the exact number of business days in each month), and the following implementation, which takes the sample standard deviation of the log returns and annualises it over 252 trading days.

def calc_sigma(N, X):
    return sqrt(sum((X - X.mean())**2) / float(N - 1)) * sqrt(252.0)
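As a quick sanity check, the function can be called on a small array of hypothetical daily log returns (the numbers below are made up purely for illustration):

import numpy as np
from numpy import sqrt

# calc_sigma as defined above
def calc_sigma(N, X):
    return sqrt(sum((X - X.mean())**2) / float(N - 1)) * sqrt(252.0)

# 20 days of hypothetical log returns with ~1% daily moves
returns = np.random.normal(0.0, 0.01, 20)

# Annualised volatility, roughly 0.16 (i.e. ~16%) for moves of this size
print(calc_sigma(20, returns))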

Once we have our volatility calculation, we then calculate our historical volatilities over different time horizons, covering 1, 3, 6, 9 and 12 month periods.

days_to_expiry = [20, 60, 120, 180, 240]

For each of these time periods we calculate three values to generate our cones: an average, and upper and lower bounds. The calc_sigmas function (shown in the full listing below) computes historical volatility over rolling windows of the given length, stepping forward 20 days at a time, and returns the lowest, mean and highest values found.

for expiry in days_to_expiry:
        np_lower, np_mean, np_upper = calc_sigmas(expiry, close_data)
        lower.append(np_lower)
        means.append(np_mean)
        upper.append(np_upper)

In the paper, the minimum and maximum values for our different expiries have been taken. However, it would be more statistically rigorous to take 95% upper and lower confidence limits, i.e. a z-score of roughly 2, for our upper and lower bounds. This would help ensure that our signals are not significantly affected by single outliers in our calculations. However, with our chosen data set the historical minimum and maximum volatilities provide a better illustration of how to use volatility cones, so they will remain.

Should you wish to experiment with confidence limits instead, the code is commented out in the following section of the calc_sigmas function.

# Uncomment the following three lines to use z scores instead of minimum
# and maximum sigma values
#
# z_score=2.0
# interval = sigmas.std() * z_score
# return mean - interval, mean, mean + interval
#
return sigmas.min(), mean, sigmas.max()
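Another variation, not used in the paper or in the model here, would be to take empirical percentiles of the window sigmas instead of a symmetric z-score interval. A minimal sketch (the percentile levels below are arbitrary) might look like:

import numpy as np

def percentile_bounds(sigmas, lower_pct=5.0, upper_pct=95.0):
    # Hypothetical alternative to min/max or z-score bounds:
    # empirical percentiles of the window sigmas
    mean = sigmas.mean()
    return (np.percentile(sigmas, lower_pct),
            mean,
            np.percentile(sigmas, upper_pct))

# e.g. percentile_bounds(np.array([0.18, 0.22, 0.25, 0.31, 0.27]))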

Then all that is left is to obtain our daily historical volatility values for the stock and plot these, along with the average implied volatility on the ATM options we are interested in. The implied volatility is obtained from a vendor in this example, as calculating it would be a task worthy of a post in itself.

plot(days_to_expiry, lower,
         days_to_expiry, means,
         days_to_expiry, upper,
         x, historical_sigma_20d[-limit:],
         x, imp_vol_data_30d[-limit:])
show()

In the paper a single chart is shown containing two distinct sets of data - the historical and implied volatility over our time period, alongside the upper, lower and mean historic volatilities over different durations. This may be misleading for some readers, so I present them individually.

This gives us the below charts.

[Figure: AAPL volatility cones, 20 day volatilities and 240 day volatilities]

The first chart displays the cones themselves. The cone shape is created by plotting our upper and lower volatility bounds over the different time periods. This illustrates that over the longer term historical volatility mean reverts to ~22.5%, and that the dispersion of volatility (the funnel effect) narrows as time to expiry increases.

The second and third charts show the historical versus implied volatility of our options over monthly and annual periods. You can see how significantly the historical and implied volatility differ from one another during our observation period. The bounds of our cones have been included in these plots to provide context from the cones.

In our second chart, because the historical volatility uses a simple 20 day equally weighted moving window, any significant change in volatility affects the following 20 days of values, resulting in the two column shapes in the historical volatility plot.
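To see why, consider a toy series of quiet returns with a single large move in the middle; the rolling 20 day sigma (a sketch using the same calc_sigma formula, with made-up numbers) stays elevated for exactly 20 readings before the spike drops out of the window:

import numpy as np
from numpy import sqrt

def calc_sigma(N, X):
    return sqrt(sum((X - X.mean())**2) / float(N - 1)) * sqrt(252.0)

# 60 quiet days with one hypothetical 5% return in the middle
returns = np.full(60, 0.001)
returns[30] = 0.05

# Rolling 20 day volatility: jumps at the spike and stays high for the
# next 20 readings, then falls back once the spike leaves the window
rolling = [calc_sigma(20, returns[i - 20:i]) for i in range(20, len(returns))]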

Conversely in our annual volatility calculation, the implied and historical volatilities are heading in completely different directions!

To correctly leverage the power of the cones, one would look to enter short volatility positions when volatility is touching the upper band, and long positions when it is touching the lower band (in a similar manner to Bollinger bands). In our specific sample there are limited opportunities for this, due to AAPL being such a liquid stock, but therein lies the challenge of the search for profitable opportunities.
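As a rough sketch of that signal logic (the function and the numbers below are hypothetical, not part of the script at the end of the post):

def cone_signal(current_vol, lower_bound, upper_bound):
    # Naive Bollinger-style rule: short volatility at the upper band,
    # long volatility at the lower band, otherwise stand aside
    if current_vol >= upper_bound:
        return 'short volatility'   # vol looks rich versus history
    if current_vol <= lower_bound:
        return 'long volatility'    # vol looks cheap versus history
    return 'no trade'

# e.g. cone_signal(0.35, lower_bound=0.15, upper_bound=0.30) -> 'short volatility'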

It is also important to pay attention to what implied volatility is doing relative to historical. As implied volatility is derived from the market price of an option, when it is high, an option is going to be more expensive than when it is low. To put this into context, say we decide that historical volatility is too high (i.e. it exceeds the upper cone boundary) at around day 80 in our 20 day volatilities plot. As implied volatility is somewhat lower at this point (~10% less), and is hovering around our historical average, the price of options at this point will probably be lower than our model says it should be. Thus we will not be able to profit as much from going short volatility here (if at all), as the market is pricing the options lower than we believe they should be.

The opposite is true around day 120 if we were to try to go long volatility: implied volatility is ~15% greater than historical, resulting in the actual option price being more expensive than our model says it should be, making it potentially difficult to put on a profitable trade.

If you found an opportunity where you did wish to take advantage of these mispricings, then when you consider the implied volatility low for the ATM options, you could purchase a straddle (a put and a call at the same strike) at, or on either side of, the ATM price depending on the available option strikes (strictly a strangle if the strikes differ). You would thus be taking a position that suggests the options are underpriced compared with their historical norm. Conversely, when the volatility is too high you could write both out-of-the-money put and call options to profit from the options being more expensive than normal. These are just a couple of the many possible strategies you could use to trade volatility.
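For illustration only, the payoff of a long straddle at expiry (the strike and premiums below are entirely hypothetical) can be sketched as:

def long_straddle_payoff(spot_at_expiry, strike, call_premium, put_premium):
    # Buy one call and one put at the same strike; the position profits
    # only if the underlying moves far enough to cover both premiums,
    # i.e. realised volatility turns out higher than the market implied
    call_leg = max(spot_at_expiry - strike, 0.0) - call_premium
    put_leg = max(strike - spot_at_expiry, 0.0) - put_premium
    return call_leg + put_leg

# Hypothetical numbers: strike 100, 3.0 paid for each leg
# long_straddle_payoff(112, 100, 3.0, 3.0) ->  6.0 (large move, profit)
# long_straddle_payoff(101, 100, 3.0, 3.0) -> -5.0 (small move, loss)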

The full source code for the model is listed below, and available along with test data here.

from numpy import log, sqrt
import numpy as np
import pandas as pd
from pylab import axhline, figure, legend, plot, show

def main():
    prices = pd.read_csv('AAPL.csv', index_col=0, parse_dates=True)
    prices.sort_index(inplace=True)

    imp_vol = pd.read_csv('AAPL_IMP_VOL.csv', index_col=2, parse_dates=True)
    imp_vol.sort_index(inplace=True)

    prices['Adj Returns'] = \
        calculate_log_returns(prices['Adj Close'].values)
    close_data = prices['Adj Returns'][-300:].values
    imp_vol_data_30d = imp_vol['30d iv mean'][-300:].values
    imp_vol_data_360d = imp_vol['360d iv mean'][-300:].values

    days_to_expiry = [20, 60, 120, 180, 240]

    lower = []
    means = []
    upper = []

    for expiry in days_to_expiry:
        np_lower, np_mean, np_upper = calc_sigmas(expiry, close_data)
        lower.append(np_lower)
        means.append(np_mean)
        upper.append(np_upper)

    historical_sigma_20d = calc_daily_sigma(20, close_data)
    historical_sigma_240d = calc_daily_sigma(240, close_data)

    limit = max(days_to_expiry)
    x = range(0, limit)

    fig = figure()
    ax1 = fig.add_subplot(3, 1, 1)
    plot(days_to_expiry, lower, color='red', label='Lower')
    plot(days_to_expiry, means, color='grey', label='Average')
    plot(days_to_expiry, upper, color='blue', label='Upper')
    axhline(lower[0], linestyle='dashed', color='red')
    axhline(lower[-1], linestyle='dashed', color='red')
    axhline(upper[0], linestyle='dashed', color='blue')
    axhline(upper[-1], linestyle='dashed', color='blue')
    ax1.set_title('Volatility Cones')
    legend(bbox_to_anchor=(1., 1.), loc=2)

    ax2 = fig.add_subplot(3, 1, 2)
    plot(x, historical_sigma_20d[-limit:], label='Historical')
    plot(x, imp_vol_data_30d[-limit:], label='Implied')
    axhline(lower[0], linestyle='dashed', color='red')
    axhline(upper[0], linestyle='dashed', color='blue')
    ax2.set_title('20 Day Volatilities')
    ax2.set_xlim(ax1.get_xlim())
    ax2.set_ylim(ax1.get_ylim())
    legend(bbox_to_anchor=(1., 1.), loc=2)

    # We only want to plot implied vol. where we have a value for historical
    imp_vol_data_360d[np.where(np.isnan(historical_sigma_240d))] = np.nan

    ax3 = fig.add_subplot(3, 1, 3)
    plot(x, historical_sigma_240d[-limit:], label='Historical')
    plot(x, imp_vol_data_360d[-limit:], label='Implied')
    axhline(lower[-1], linestyle='dashed', color='red')
    axhline(upper[-1], linestyle='dashed', color='blue')
    ax3.set_title('240 Day Volatilities')
    ax3.set_xlim(ax1.get_xlim())
    ax3.set_ylim(ax1.get_ylim())
    legend(bbox_to_anchor=(1., 1.), loc=2)
    show()

def calc_sigmas(N, X, period=20):
    # Historical volatility over rolling windows of N returns,
    # stepped forward `period` days at a time
    start = 0
    end = N

    results = []

    while end <= len(X):
        sigma = calc_sigma(N, X[start:end])
        results.append(sigma)
        # print('N: {}, sigma: {}'.format(N, sigma))
        start += period
        end += period

    sigmas = np.array(results)
    mean = sigmas.mean()

    # Uncomment the following three lines to use z scores instead of minimum
    # and maximum sigma values
    #
    # z_score=2.0
    # interval = sigmas.std() * z_score
    # return mean - interval, mean, mean + interval
    #
    return sigmas.min(), mean, sigmas.max()

def calc_daily_sigma(lookback, data):
    # Rolling historical volatility: one annualised sigma per day,
    # calculated over a trailing window of `lookback` returns
    results = np.zeros(len(data))
    start = 0
    end = lookback
    results[start:end] = np.nan
    while end < len(data):
        results[end] = calc_sigma(lookback, data[start:end])
        start += 1
        end += 1
    return results

def calc_sigma(N, X):
    # Annualised sample standard deviation of the log returns in X
    # (matches the calculation shown at the top of the post)
    return sqrt(sum((X - X.mean())**2) / float(N - 1)) * sqrt(252.0)

def calculate_log_returns(pnl):
    lagged_pnl = lag(pnl)
    returns = log(pnl / lagged_pnl)

    # All values prior to our position opening in pnl will have a
    # value of inf. This is due to division by 0.0
    returns[np.isinf(returns)] = 0.
    # Additionally, any values of 0 / 0 will produce NaN
    returns[np.isnan(returns)] = 0.
    return returns

def lag(data):
    lagged = np.roll(data, 1)
    lagged[0] = 0.
    return lagged

if __name__ == '__main__':
    main()



3 replies

  1. I think real vs implied vol is like comparing apples and oranges.

    Obviously what you're describing here is pretty orthodox, and you know the pitfalls of these types of comparisons like anyone else.

    Nevertheless makes me uneasy when making these types of comparisons.

    Need to write up something on this sometime...

  2. Hi, this is a great article!
    Thank you for the script.

    When I run it I am getting an IndexError: list index out of range.
    I am guessing it has something to do with parsing AAPL_IMP_VOL.

    How could I make it work?
    Thanks!

    1. Have you pulled all of the files out of the git repo? If so, you should be able to run it using:

      python volatility_cones.py

      This is using Python 2.7.7.
