The Kelly Criterion in Applied Portfolio Selection - Part 2

Previous blog post on the Kelly Criterion

As pointed out in a previous blog post, the Kelly Criterion is an interesting option for deciding on position sizes in portfolio selection. While the previous post looked at single stocks, today I will show how to optimize position sizes for a portfolio with multiple stocks.

The core function

At the core of my portfolio optimization is this function:

opt_portfolio <- function(shares, dpf, maxshare) {
  # calculate the portfolio's return vector
  exp_returns <- dpf %*% shares
  # Kelly objective: negative sum of log growth
  obj <- -sum(log(1 + exp_returns))
  # penalty to force the shares to sum to 1 (100%)
  weight.penalty <- 1000 * (1 - sum(shares))^2
  # penalty for positions above the maximum allowed share
  maxpen <- sum(shares[shares > maxshare]) * 1000
  return(obj + weight.penalty + maxpen)
}

The function has three parameters. shares is the vector of position sizes (one per stock) that will be estimated by the optimizer. dpf is a matrix with one column per stock and one row per period, containing the relative price movements, e.g. diff(stock)/lag(stock). The maxshare option can be used to cap the position size of any single stock.

exp_returns <- dpf %*% shares calculates the daily (or weekly) portfolio returns. obj is the Kelly objective, the negative sum of the log returns; the more volatile the portfolio, the larger obj becomes. Since we are going to minimize the function, low values, i.e. low volatility, are preferred. The next line is a trick to restrict the optimizer to weights that sum to 1 (100%): if the sum of all position sizes is 1, weight.penalty is 0 (its minimum), and if the sum deviates from 1, weight.penalty quickly increases. The second penalty term works the same way: if a position is sized above the maximum allowed share, the penalty kicks in. Finally the function returns the value that the optimizer will try to minimize.
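
To see what the function does in isolation, here is a minimal sketch with a made-up return matrix for two fictitious stocks (the toy data and variable names are mine, not part of the original post):

# Minimal sketch (toy data): two fictitious stocks with 100 weekly returns,
# a low-volatility one and a high-volatility one, optimized directly.
set.seed(42)
toy_dpf <- cbind(rnorm(100, mean = 0.001, sd = 0.02),   # low volatility
                 rnorm(100, mean = 0.001, sd = 0.06))   # high volatility
res <- optim(par = c(0.5, 0.5), fn = opt_portfolio,
             dpf = toy_dpf, maxshare = 1, method = "Nelder-Mead")
round(res$par, 2)  # most of the weight should land on the low-volatility stock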

The wrapper function

Now to feed stock data into this function I wrote a wrapper function that prepares the matrix of returns and calls optim().

library(quantmod)
library(dplyr)

opt_portfolio_wrapper <- function(stocks, r=rep(0.03,length(stocks)), maxshare=1, daily=FALSE, short=FALSE) {
  # check which stocks are already in the workspace, otherwise download the data
  for(stock in stocks) {
    if(!exists(stock)) getSymbols(stock, from="1970-01-01", env=parent.frame(2))
  }
  # transform the return vector from yearly to daily or weekly
  if(daily==TRUE) {
    r <- (1+r)^(1/250)-1
  } else {
    r <- (1+r)^(1/52)-1
  }
  portfolio <- NULL
  for(stock in stocks) {
    if(daily) assign(stock, get(stock)[,6])
    if(!daily) assign(stock, to.weekly(get(stock))[,6])
  }
  # merge all stocks together
  portfolio <- Reduce(function(...) merge(..., all=TRUE), mget(stocks))
  # build returns by diff()/lag()
  d.portfolio <- data.frame(na.omit(diff(portfolio)/stats::lag(portfolio)))
  # center around the mean to eliminate past performance as an information for portfolio selection
  d.portfolio.future <- scale(d.portfolio, scale=FALSE)
  # add the (personally) expected return for each stock
  for(i in 1:ncol(d.portfolio.future)) {
    d.portfolio.future[,i] <- d.portfolio.future[,i] + r[i]
  }
  # lower and upper bounds for the parameters: 0 to 1, or -1 to 1 if short positions are allowed
  # (note: Nelder-Mead ignores these bounds; enforcing them would need a box-constrained method such as L-BFGS-B)
  lower <- rep(ifelse(short==TRUE,-1,0), length(stocks))
  upper <- rep(1, length(stocks))
  # starting values for the optimizer
  start <- rep(1/length(stocks), length(stocks))
  res <- optim(start, opt_portfolio, dpf=d.portfolio.future, maxshare=maxshare, method="Nelder-Mead")
  Portfolio <- data.frame("Share"=res$par, "Stock"=stocks) %>% arrange(desc(Share))
  return(Portfolio)
}

Most of the function is commented between the lines. I introduced two options: daily and short. By default, short positions are not allowed and the returns are calculated on a weekly, not daily, basis. One thing that is important to me is that I do not use the stock returns as they are but center them around the mean return that I expect. If I did not do this, the algorithm would always pick the historically best-performing stock(s). However, past performance is not a good predictor of future performance, and one has to do one's homework and build one's own (and realistic) expectations.
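
To make the centering step concrete, here is a small standalone sketch with made-up numbers (the object names are mine, for illustration only):

# Sketch: centering strips the historical mean return of each column, then a
# chosen expected return is added back, so only the shape of the return
# distribution (the risk) enters the optimization.
set.seed(1)
hist_returns <- matrix(rnorm(200, mean = c(0.010, -0.005), sd = 0.03),
                       ncol = 2, byrow = TRUE)      # two stocks, different past performance
centered <- scale(hist_returns, scale = FALSE)      # column means are now exactly 0
r_weekly <- (1 + 0.03)^(1/52) - 1                   # 3% p.a. converted to a weekly rate
future <- centered + r_weekly                       # both stocks now share the same expectation
colMeans(future)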

Testing

To test the function I select some random stocks (not a recommendation to trade these stocks).

rstocks <- c("AAPL","GOOG","AMZN","GIS")
opt_portfolio_wrapper(rstocks)
# Share Stock
# 1 0.80851815 GIS
# 2 0.09979312 AAPL
# 3 0.06343397 GOOG
# 4 0.02829082 AMZN

This tells the following story: Assuming all stocks have the same expected yearly return of 3%, the long-term growth rate of wealth (the Kelly criterion) is maximized by investing mostly in the stock with the lowest volatility (here GIS). At the same time this is a hint that the assumption of equal expected returns for all stocks is too strong. One could now play with the function to find out how high the expected return of a risky stock has to be for it to be included in an existing portfolio.
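
For example, one could sweep over a grid of expected returns for one of the stocks and watch when its share starts to grow (the grid and the loop below are my illustration, assuming the objects from the previous listing are still in the workspace):

# Sketch: raise the expected yearly return of AMZN (third element of r)
# step by step and see how its Kelly-optimal share reacts.
for(r_amzn in seq(0.03, 0.07, by = 0.01)) {
  pf <- opt_portfolio_wrapper(rstocks, r = c(0.03, 0.03, r_amzn, 0.03))
  cat("expected return AMZN:", r_amzn,
      "-> share:", round(pf$Share[pf$Stock == "AMZN"], 3), "\n")
}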

Summary

I personally use this tool as a rough orientation for how stocks from my watchlist would contribute to the risk-return profile of my portfolio. I usually use the maxshare option and rather “flat”, moderate return expectations, because the Kelly criterion strongly favors stocks with better return expectations. For example, if I add 1% of additional expected return for GOOG, the numbers change drastically:

opt_portfolio_wrapper(rstocks, r=c(0.03,0.04,0.03,0.03))
# Share Stock
# 1 0.749115293 GIS
# 2 0.200639803 GOOG
# 3 0.051410711 AAPL
# 4 -0.001125542 AMZN

I find the tool useful because it keeps me from overbetting on very volatile stocks and gives hints as to if and when risky stocks are worth betting on.

Other

There are a few additional points worth mentioning:

  • while opt_portfolio_wrapper() gives the shares that make up the Kelly-optimal portfolio, it does not tell how much one should bet on the whole portfolio. Approximately, this can be calculated by dividing the expected portfolio return by the annualized variance (see the sketch after this list). One could also adapt the kelly() function from the previous blog post to get a number that incorporates the fact that the return distribution is fat-tailed and non-normal.
  • I excluded some of the options that my function originally had for the sake of readability. Options that I found interesting are:
    • transaction costs
    • “fixed shares” (e.g. my pension account is invested in index funds and bonds; when picking stocks, I might want to incorporate the fact that I am already exposed to these securities)
    • a from option for getSymbols() if the historic returns should be restricted to a certain range
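
As a rough sketch of the approximation mentioned in the first bullet (the function name and the port_returns argument are my own illustration; port_returns is assumed to be a vector of weekly portfolio returns built with the Kelly-optimal shares and the personally expected returns already mixed in):

# Rule of thumb: overall fraction ~ expected portfolio return / annualized variance.
kelly_fraction_portfolio <- function(port_returns, periods_per_year = 52) {
  mu_annual  <- (1 + mean(port_returns))^periods_per_year - 1  # annualized expected return
  var_annual <- var(port_returns) * periods_per_year           # annualized variance
  mu_annual / var_annual
}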

The Kelly Criterion in Applied Portfolio Selection

The Kelly Criterion

Derived by John L. Kelly (1956), the criterion recommends a certain fraction of the bankroll to be put on a bet with positive expectation. Kelly showed that the fraction $$\frac{p \cdot (b+1) - 1}{b}$$ maximizes the growth rate of wealth if the game is repeated many times, where p is the probability of winning the bet and b is the net odds, i.e. what you get back in excess of the amount wagered. For example, if you are offered to have your wager tripled for a correct coin-flip guess, the Kelly criterion would advise you to bet $$\frac{0.5 \cdot (2+1) - 1}{2} = 0.25$$ of your bankroll.
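
For convenience, the formula can be written as a small R helper (the function name is mine); the coin-flip numbers reproduce the example above:

kelly_bet <- function(p, b) (p * (b + 1) - 1) / b  # p: win probability, b: net odds
kelly_bet(p = 0.5, b = 2)  # tripled wager on a fair coin flip -> 0.25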

While the concept is popular in the betting universe and sometimes in options trading, it does not seem to be common in portfolio selection. However, it is rather easy to transfer to portfolio selection, as all it does is optimize the long-term growth rate. What we would like to find is the fraction f that minimizes the following function:

sum_logx <- function(f, x) {
  -sum(log(1 + f * x))
}

x is a vector of expected future returns of a stock (which we have yet to conceptualize) and f is the fraction to be calculated. First of all, we need to make assumptions about our expected return. As an example, let's assume we expect a 3% annualized return for a stock (e.g. because that is approximately in line with economic growth expectations). Second, we need to make assumptions about the underlying risk/volatility. I derive the risk from the history of returns. At this point I often get criticized from the value investors' camp because this has a flavor of the efficient market hypothesis, beta, Black-Scholes, the capital asset pricing model (CAPM), the LTCM crash, etc.

Here are a few arguments why basing future risk on past risk seems legitimate to me:

  1. Past and present risk (absolute values of daily returns) are highly correlated
  2. The risky stocks from yesterday usually are the risky stocks of tomorrow
  3. NOT using historical information of volatility at all cannot be better
  4. If you believe that volatility is not the whole picture of risk, there are ways to incorporate additional measures of risk (as we will see later)

By looking at the sum_logx function, one can see that the Kelly criterion relaxes the strong assumptions that are implicit in the CAPM (normally distributed independent returns, constant correlation between assets) because it takes each data point “as is” instead of averaging outliers away into a standard deviation. The Kelly criterion is therefore, more realistically, capable of covering a large part of the fat tails of the return distribution.
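
To illustrate the difference, here is my own sketch with simulated, fat-tailed returns (not part of the original post): the normal-theory shortcut f ≈ μ/σ² only sees the mean and the variance, while the full-distribution optimum reacts to every single crash observation.

# Simulated weekly returns: mostly mild weeks plus a few -30% crash weeks.
set.seed(1)
x <- c(rnorm(990, mean = 0.005, sd = 0.02), rep(-0.3, 5))
f_gauss <- mean(x) / var(x)                        # Gaussian shortcut mu/sigma^2
f_full  <- optimize(function(f) sum(log(1 + f * x)),
                    interval = c(0, 3.3), maximum = TRUE)$maximum
c(gaussian = f_gauss, full_distribution = f_full)  # the shortcut suggests betting more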

An example

I use the Amazon stock as a mere example. This is not a recommendation to trade the stock.

library(quantmod)
stock <- "AMZN" # Stock of Amazon
getSymbols(stock, from="2007-01-01")

Using the very early days of Amazon is probably not representative of future risk, as Amazon has matured. However, the time span should ideally include all phases of the economic cycle, so I used about 10 years, including the financial crisis of 2008.

Now I calculate weekly and monthly returns based on the adjusted closing price.

stock.m <- to.monthly(get(stock))[,6]
stock.w <- to.weekly(get(stock))[,6]
d.stock.m <- as.numeric(na.omit(diff(stock.m)/lag(stock.m)))
d.stock.w <- as.numeric(na.omit(diff(stock.w)/lag(stock.w)))

Now, to incorporate my expected return, I center the time series around that value. One might argue about this procedure, but I find it more plausible than, e.g., putting past returns into my models (as some people do), since past returns are not a predictor of future returns. So this is my best guesstimate.

r <- 0.03 # annualized return
r.m <- (1+r)^(1/12)-1 # monthly
r.w <- (1+r)^(1/52)-1 # weekly
# now subtract mean of past return and add expected return.
stock.future.m <- d.stock.m - mean(d.stock.m) + r.m
stock.future.w <- d.stock.w - mean(d.stock.w) + r.w

With this I basically keep the empirical distribution of returns to represent the risk and add my personal expectation.

Then I optimize the function sum_logx using the defined vectors of expected returns, for both weekly and monthly returns.

optim(par=0.2,fn=sum_logx,x=stock.future.m)$par # par being the starting value of the optimizer
# [1] 0.2282031
optim(par=0.2,fn=sum_logx,x=stock.future.w)$par
# [1] 0.2045313

This comes up with numbers around 20% of the bankroll. Keep in mind, however, that this relies on the assumptions that the expected return is 3% and that the return distribution of the next year comes from the same source as the return distribution of the last decade. Furthermore, it is important to note that overbetting does more harm than underbetting. Therefore many people recommend betting “half Kelly”, preferring the safe side that slightly underperforms over the risky side that waits for the big wave only to be wiped out.
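
To get a feeling for this asymmetry, one can compare the achieved log growth at fractions below and above the optimum (a small sketch of mine, reusing sum_logx and stock.future.w from above):

# Compare the realized log growth at half, full and 1.5x the Kelly fraction;
# with fat-tailed returns, overbetting typically costs more than underbetting.
growth <- function(f, x) sum(log(1 + f * x))   # higher is better
f_star <- optim(par = 0.2, fn = sum_logx, x = stock.future.w)$par
sapply(c(half = 0.5, full = 1.0, overbet = 1.5) * f_star, growth, x = stock.future.w)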

Now what if I wanted an even more conservative estimate? Let's assume I expect the risk of bankruptcy over the next twelve months to be 0.4% higher than is reflected by the market. I went for this very simple solution:

d.stock.m <- c(d.stock.m,rep(-1,10))
d.stock.w <- c(d.stock.w,rep(-1,10))
stock.future.m <- d.stock.m - mean(d.stock.m) + r.m
stock.future.w <- d.stock.w - mean(d.stock.w) + r.w

For each of the 10 years, I include one (~0.4%) total loss (-1). In my opinion it is always advisable to think about the probability of a total loss, because listed stocks are by definition the ones that are not (yet) bankrupt (survivorship bias). The empirical distribution (my approach) has much fatter tails than the normal distribution (the conventional approach), but it does not cover events that have not taken place so far.

optim(par=0.2,fn=sum_logx,x=stock.future.m)$par # par being the starting value of the optimizer
# [1] 0.02796875
optim(par=0.2,fn=sum_logx,x=stock.future.w)$par
# [1] 0.02576172

Summary

As can be seen, the Kelly criterion is rather optimistic in general but very sensitive to total losses. I use the procedure as a tool to get a first impression of a stock I am thinking of investing in. However, there is more to look at, e.g. the diversification benefit of the stock within an existing portfolio. I will cover this topic - how to optimize the Kelly criterion for a portfolio of multiple stocks - in a subsequent post.

Appendix: Kelly Criterion as an R-function

For convenience here is the code of the function I use:

sum_logx <- function(f, x) {
  -sum(log(1 + f * x))
}

library(quantmod)

kelly <- function(stock, r=0.03, from="1970-01-01", blowups=0, par=0.2) {
  getSymbols(stock, from=from, env=parent.frame(2))
  stock.m <- to.monthly(get(stock))[,6]
  stock.w <- to.weekly(get(stock))[,6]
  d.stock.m <- as.numeric(na.omit(diff(stock.m)/lag(stock.m)))
  d.stock.w <- as.numeric(na.omit(diff(stock.w)/lag(stock.w)))
  r.m <- (1+r)^(1/12)-1
  r.w <- (1+r)^(1/52)-1
  if(blowups>0) {
    d.stock.m <- c(d.stock.m, rep(-1, blowups))
    d.stock.w <- c(d.stock.w, rep(-1, blowups))
  }
  stock.future.m <- d.stock.m - mean(d.stock.m) + r.m
  stock.future.w <- d.stock.w - mean(d.stock.w) + r.w
  return(list(
    "Monthly"   = optim(par=par, fn=sum_logx, x=stock.future.m)$par,
    "Weekly"    = optim(par=par, fn=sum_logx, x=stock.future.w)$par,
    "Timeframe" = nrow(get(stock))))
}
# use like this:
kelly("AAPL")
kelly("MSFT", blowups=2)

Appendix: Fat tails

As mentioned above, stock returns are not normally distributed; therefore the CAPM, compared to the advice of the Kelly criterion, tends to “overbet”.

library(quantmod)
library(ggplot2)
getSymbols("AMZN", from="2007-01-01")
empiric_returns <- na.omit(diff(log(AMZN[,6])))
ggplot(data.frame(empiric_returns), aes(sample=empiric_returns)) + stat_qq()

Plotting the empirical quantiles vs. the theoretical quantiles, we see that returns outside the range of two standard deviations on either side are much more frequent than the normal distribution would suggest. This, combined with high leverage, was a central reason for the collapse of LTCM.
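
A quick numerical companion to the QQ plot (my own check, reusing empiric_returns from above):

# Compare how often daily log returns fall beyond 2 and 3 standard deviations
# with the frequencies a normal distribution would predict.
z <- as.numeric(scale(empiric_returns))
c(empirical_2sd = mean(abs(z) > 2), normal_2sd = 2 * pnorm(-2),
  empirical_3sd = mean(abs(z) > 3), normal_3sd = 2 * pnorm(-3))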

Webscraping with R using a Raspberry Pi

Setting up the Raspberry Pi

After the basic setup, i.e.

  • bought a Raspberry Pi Starter Kit
  • flashed the SD Card with Raspbian
  • ran raspi-config
  • installed R with apt-get install r-base, which installed R 3.1.1

I started to install the R packages usually needed for my cron-job tasks (mostly webscraping). I ran into problems with the rvest package because several dependencies could not be installed. Maybe there is a more efficient way, but these are the steps I took:

Install packages for webscraping

To install the XML-related R packages (needed by rvest), I needed libxml2 on the system; the version available via apt-get did not work for me, so I built it manually:

wget ftp://xmlsoft.org/libxml2/libxml2-2.9.2.tar.gz
tar -xzvf libxml2-2.9.2.tar.gz
cd libxml2-2.9.2/

I also needed python-dev to make libxml2 compile.

sudo apt-get update
sudo apt-get install python-dev

Then built libxml2:

./configure --prefix=/usr --disable-static --with-history && make
sudo make install

I also had problems with the curl package. The installation output suggested installing libcurl4-openssl-dev, therefore:

sudo apt-get install libcurl4-openssl-dev

The last problem was the openssl package. Again, I followed the suggestion from the failed R package installation and installed libssl-dev:

sudo apt-get install libssl-dev

After that, rvest installed nicely. However, it took quite a while for the Pi to install all dependencies.

Webscraping Example – A simple frost warning for my plants

A simple task my Raspberry Pi does for me is sending a frost warning to my email if, at 6 pm, the weather forecast for the night goes below 3 °C. For this I got an API key at openweathermap.org. Mind that openweathermap.org does not like frequent requests (more than one per 10 minutes); at the beginning I got blocked.

You can then request some JSON for your city ID using your APPID (API Key):

library(jsonlite)
wd_json <- fromJSON("http://api.openweathermap.org/data/2.5/forecast/city?id=CITY_ID_GOES_HERE&APPID=YOUR_API_KEY_GOES_HERE")

Then tidy the data and extract the values needed. Temperatures are in Kelvin, so we need to convert them to Celsius. The date is transformed to POSIX time.

wd <- wd_json$list
wd$Datum <- as.character(as.POSIXct(wd$dt, origin="1970-01-01", tz="Europe/Berlin"))
wd$Celsius_min <- wd$main$temp_min-273.15
wd$Celsius_max <- wd$main$temp_max-273.15
wd$Celsius_mean <- wd$main$temp-273.15

Sending results via email

Now for the part sending a mail:

library(sendmailR)
library(xtable)
# keep only forecasts within the next 24 hours
wd <- wd[as.POSIXct(Sys.time()+86400) > wd$Datum,]
if(any(wd$Celsius_min < 3)) {
  dispatch <- print(xtable(wd[wd$Celsius_min < 3, c("Datum","Celsius_min","Celsius_mean","Celsius_max")]), type="html")
  msg <- mime_part(paste0('<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0
Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
<title>HTML demo</title>
<style type="text/css">
</style>
</head>
<body><h2>Frostwarnung</h2>',
  dispatch,
  '</body>
</html>'))
  ## Override content type.
  msg[["headers"]][["Content-Type"]] <- "text/html"
  from <- sprintf("<sendmailR@%s>", Sys.info()[4])
  to <- "<YOUR@EMAIL_GOES_HERE.COM>"
  subject <- paste("Frostwarnung", date())
  body <- list(msg)
  sendmail(from, to, subject, body, control=list(smtpServer="ASPMX.L.GOOGLE.COM"))
}

Finally, we have to tell the Raspberry Pi to run the script daily in the early evening. Save the .R file and add it to your crontab:

crontab -e

The first time you use crontab, you are asked to choose an editor; the easiest (at least for me) is nano. Add the following line:

00 18 * * * Rscript ~/path_to_your/script.R

This adds the script to your cron jobs, scheduling it to run at 18:00 every day.