The Kelly Criterion in Applied Portfolio Selection

The Kelly Criterion

Derived by John L. Kelly (1956), the criterion recommends what fraction of a bankroll to put on a bet with positive expectation. Kelly showed that $$\frac{p \cdot (b+1) - 1}{b}$$ optimizes the growth rate of wealth if the game is repeated many times, where p is the probability of winning the bet and b is the net odds, i.e. what you get back in excess of the amount wagered. For example, if you are offered to have your wager tripled for a correct coin flip guess (net odds b = 2), the Kelly criterion would advise you to bet $$\frac{0.5 \cdot (2+1) - 1}{2} = 0.25$$, i.e. a quarter of your bankroll.
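
To make the arithmetic concrete, here is a tiny R helper (the name kelly_bet is mine, purely for illustration) that reproduces the coin flip example:

# Kelly fraction for a single bet: p = win probability, b = net odds received on a win
kelly_bet <- function(p, b) (p * (b + 1) - 1) / b
kelly_bet(p = 0.5, b = 2) # 0.25, i.e. bet a quarter of the bankroll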

While the concept is popular in the betting universe and sometimes in options trading, it seems not to be a common concept in portfolio selection. However, the concept is rather easy to transfer into portfolio selection as all it does is optimizing the long term growth rate. What we would like to find is the fraction that optimizes the following function:

# negative sum of log growth; optim minimizes this, which maximizes sum(log(1 + f*x))
sum_logx <- function(f, x) {
  -sum(log(1 + f * x))
}

x is a vector of expected future returns of a stock (which we have yet to conceptualize) and f is the fraction to be calculated. First of all, we need to make an assumption about the expected return. As an example, let's assume we expect a 3% annualized return for a stock (e.g. because that is approximately in line with economic growth expectations). Second, we need to make assumptions about the underlying risk/volatility. I derive the risk from the history of returns. At this point I often get criticized by the value investor camp, because it has a flavor of the efficient market hypothesis, beta, Black-Scholes, the capital asset pricing model (CAPM), the LTCM crash, etc.

Here are a few arguments why basing future risk on past risk seems legitimate to me:

  1. Past and present risk (absolute values of daily returns) are highly correlated (see the short check after this list)
  2. The risky stocks from yesterday usually are the risky stocks of tomorrow
  3. NOT using historical volatility information at all cannot be better
  4. If you believe that volatility is not the whole picture of risk, there are ways to incorporate additional measures of risk (as we will see later)
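
As a minimal check of argument 1 (my own sketch, reusing the Amazon data that also serves as the example below): absolute daily returns cluster, so their autocorrelation stays clearly positive over many lags.

library(quantmod)
getSymbols("AMZN", from = "2007-01-01")
# autocorrelation of absolute daily log returns (volatility clustering)
abs.returns <- abs(as.numeric(na.omit(diff(log(AMZN[, 6])))))
acf(abs.returns, lag.max = 30) # autocorrelations remain well above zero for many lags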

Looking at the sum_logx function, one can see that the Kelly criterion relaxes the strong assumptions implicit in the CAPM (normally distributed, independent returns; constant correlation between assets), because it takes each data point "as is" instead of averaging out outliers by calculating a standard deviation. Therefore, more realistically, the Kelly criterion is capable of capturing a large part of the fat tails of the return distribution.
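
To illustrate this point, here is a small sketch (my own toy data, not part of the original analysis) comparing the empirical optimum of sum_logx with the well-known Gaussian approximation of the Kelly fraction, f ≈ μ/σ², on a return series that is mostly well behaved but contains a few crashes:

# toy returns: mostly "normal", plus ten -50% crashes
set.seed(1)
x <- c(rnorm(990, mean = 0.01, sd = 0.04), rep(-0.5, 10))
mean(x) / var(x)                           # Gaussian (mean/variance) approximation of the Kelly fraction
optim(par = 0.2, fn = sum_logx, x = x)$par # empirical log-wealth optimum, noticeably smaller,
                                           # because the crashes hurt log wealth disproportionately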

An example

I use the Amazon stock as a mere example. This is not a recommendation to trade the stock.

library(quantmod)
stock <- "AMZN" # Stock of Amazon
getSymbols(stock, from="2007-01-01")

Using the very early days of Amazon would probably not be representative of future risk, as Amazon has matured since then. However, the time span should ideally cover all phases of the economic cycle, so I used about 10 years, including the 2008 financial crisis.

Now I calculate weekly and monthly returns based on the adjusted closing price.

stock.m <- to.monthly(get(stock))[,6] # adjusted close, monthly
stock.w <- to.weekly(get(stock))[,6]  # adjusted close, weekly
d.stock.m <- as.numeric(na.omit(diff(stock.m)/lag(stock.m))) # simple monthly returns
d.stock.w <- as.numeric(na.omit(diff(stock.w)/lag(stock.w))) # simple weekly returns

Now, to incorporate my expected return, I center the time series around that value. One might argue about this procedure, but I find it more plausible than e.g. feeding past returns into my models (as some people do), since past returns are not a predictor of future returns. So this is my best guesstimate.

r <- 0.03 # annualized return
r.m <- (1+r)^(1/12)-1 # monthly
r.w <- (1+r)^(1/52)-1 # weekly
# now subtract mean of past return and add expected return.
stock.future.m <- d.stock.m - mean(d.stock.m) + r.m
stock.future.w <- d.stock.w - mean(d.stock.w) + r.w

With this I basically keep the empirical distribution of returns to represent risk, while plugging in my personal return expectation.
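
A quick sanity check (not in the original post) confirms that the shifted series keep the empirical shape but carry exactly the assumed expected return:

all.equal(mean(stock.future.m), r.m) # TRUE
all.equal(mean(stock.future.w), r.w) # TRUE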

Then I optimize the function sum_logx using the defined vectors of expected returns, for both weekly and monthly data.

optim(par=0.2,fn=sum_logx,x=stock.future.m)$par # par being the starting value of the optimizer
# [1] 0.2282031
optim(par=0.2,fn=sum_logx,x=stock.future.w)$par
# [1] 0.2045313

This comes up with numbers around 20% of the bankroll. Keep in mind, however, that this relies on the assumptions that the expected return is 3% and that the return distribution of the next year comes from the same source as the return distribution of the last decade. Furthermore, it is important to note that overbetting does more harm than underbetting. Therefore, many people recommend betting "half Kelly" to be rather on the safe side, which slightly underperforms, than on the risky side, which only waits for the big wave that wipes it out.
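
A half-Kelly position based on the monthly estimate above is then simply half of the optimizer output:

0.5 * optim(par=0.2, fn=sum_logx, x=stock.future.m)$par # roughly 0.11, i.e. about 11% of the bankroll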

Now, what if I wanted an even more conservative estimate? Let's assume I expect the risk of bankruptcy over the next twelve months to be 0.4% higher than is reflected by the market. I went for this very simple solution:

d.stock.m <- c(d.stock.m,rep(-1,10)) # append 10 artificial total losses to the monthly returns
d.stock.w <- c(d.stock.w,rep(-1,10)) # append 10 artificial total losses to the weekly returns
stock.future.m <- d.stock.m - mean(d.stock.m) + r.m # recenter again around the expected return
stock.future.w <- d.stock.w - mean(d.stock.w) + r.w

For each of the 10 years, I include one (~0.4%) total loss (-1). In my opinion it is always advisable to think about the probability of a total loss, because listed stocks are by definition the ones that are not (yet) bankrupt (survivorship bias). The empirical distribution (my approach) has much fatter tails than the normal distribution (conventional approach), but it does not cover the events that have not taken place so far.

optim(par=0.2,fn=sum_logx,x=stock.future.m)$par # par being the starting value of the optimizer
# [1] 0.02796875
optim(par=0.2,fn=sum_logx,x=stock.future.w)$par
# [1] 0.02576172

Summary

As can be seen, the Kelly criterion is rather optimistic in general but very sensitive to total losses. I use the procedure as a tool to get a first impression of a stock I am thinking of investing in. However, there is more to look at, e.g. the diversification benefit of the stock within an existing portfolio. I will cover this topic, how to optimize the Kelly criterion for a portfolio of multiple stocks, in a subsequent post.

Appendix: Kelly Criterion as an R-function

For convenience here is the code of the function I use:

# negative sum of log growth; optim minimizes this, which maximizes sum(log(1 + f*x))
sum_logx <- function(f, x) {
  -sum(log(1 + f * x))
}
library(quantmod)
kelly <- function(stock, r=0.03, from="1970-01-01", blowups=0, par=0.2) {
  # download the price history so that get(stock) below can find it
  getSymbols(stock, from=from, env=parent.frame(2))
  stock.m <- to.monthly(get(stock))[,6] # adjusted close, monthly
  stock.w <- to.weekly(get(stock))[,6]  # adjusted close, weekly
  d.stock.m <- as.numeric(na.omit(diff(stock.m)/lag(stock.m)))
  d.stock.w <- as.numeric(na.omit(diff(stock.w)/lag(stock.w)))
  r.m <- (1+r)^(1/12)-1 # expected monthly return
  r.w <- (1+r)^(1/52)-1 # expected weekly return
  if(blowups>0) { # optionally append artificial total losses
    d.stock.m <- c(d.stock.m,rep(-1,blowups))
    d.stock.w <- c(d.stock.w,rep(-1,blowups))
  }
  # recenter the empirical distributions around the expected return
  stock.future.m <- d.stock.m-mean(d.stock.m)+r.m
  stock.future.w <- d.stock.w-mean(d.stock.w)+r.w
  return(list(
    "Monthly"=optim(par=par,fn=sum_logx,x=stock.future.m)$par,
    "Weekly"=optim(par=par,fn=sum_logx,x=stock.future.w)$par,
    "Timeframe"=nrow(get(stock)))) # number of daily data points used
}
# use like this:
kelly("AAPL")
kelly("MSFT",blowups=2)

Appendix: Fat tails

As mentioned above, stock returns are not normally distributed; therefore the CAPM, compared with the advice of the Kelly criterion, tends to "overbet".

library(quantmod)
library(ggplot2)
getSymbols("AMZN", from="2007-01-01")
# daily log returns of the adjusted close
empiric_returns <- as.numeric(na.omit(diff(log(AMZN[,6]))))
# QQ plot against the normal distribution; the reference line makes tail deviations easier to see
ggplot(data.frame(returns = empiric_returns), aes(sample = returns)) + stat_qq() + stat_qq_line()

Plotting the empirical quantiles vs. the theoretical quantiles, we see that returns outside the range of two standard deviations on either side are much more frequent than the normal distribution would suggest. This, combined with high leverage, was a central reason for the collapse of LTCM.
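
To put a rough number on the fat tails (a small sketch using the empiric_returns vector from above), one can count the daily moves beyond four standard deviations and compare with what a normal distribution would predict for a sample of this size:

sum(abs(empiric_returns - mean(empiric_returns)) > 4 * sd(empiric_returns)) # observed >4-sigma days
length(empiric_returns) * 2 * pnorm(-4) # expected count under normality, well below 1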