Beating up the bookies: why it was no fluke

The AFL model was absolutely dominant this year. Science says it can happen again.


Two statisticians are sitting in an airport departure lounge.

Statsman A: “What’s that strange looking device in your hand luggage mate?”

Statsman B: “Oh that, it’s a bomb.”

Statsman A: “Jesus! A bomb? Why are you taking a bomb on our flight?”

Statsman B: “Well I calculated the chances of me being unfortunate enough to be on a flight with a bomb as about 1 in 100,000.  But the chance of me being on a flight with two bombs is basically zero.”

… drum roll… cheap cymbal crash.

Anyway where were we?

Ah yes… digging out useful statistical tools for the sports investor.

OK, here is a knockout easy-to-apply tool which, if you haven’t seen it, will make you wonder how you got this far in your betting life without it.

It’s called the chi-squared goodness of fit test (who names this stuff?), and χ2 is its symbol.

When you want to determine whether your model is significantly different from the bookmaker's model (and hopefully superior), you can apply the chi-squared goodness of fit test. Basically you have a tally of wins and losses, and you simply compare that tally to the wins and losses that would be expected from the bookmaker's odds (corrected for the vig).

It helps answer this sort of question:

“I’ve had a good season, how confident can I be in upping my stakes for next season?”

This is best illustrated by example, so let's look at the profitable Champion Bets model from the AFL season just gone.

Step 1: Invert the rated odds, taking them back to probability, e.g. $2 becomes 0.5

Step 2: Add up the probabilities

Step 3: Do the same for the bookmaker's odds, BUT correct the total of probabilities for an expected vig of 4% to 5% for sports betting – i.e., divide the total of the probabilities by approximately 1.05

Step 4: Tally up the wins and losses actually achieved
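The four steps above can be sketched in a few lines of Python. The odds and results below are made up for illustration, not the real AFL data:

```python
# Sketch of Steps 1-4 with hypothetical odds and results (not the real AFL figures).

def implied_probability(decimal_odds):
    """Step 1: invert decimal odds back to a probability, e.g. $2.00 -> 0.5."""
    return 1.0 / decimal_odds

# Hypothetical bets: (model's rated odds, bookmaker's odds, did it win?)
bets = [
    (2.00, 1.90, True),
    (3.50, 3.20, False),
    (1.80, 1.75, True),
    (2.40, 2.20, False),
]

VIG = 1.05  # assume a 4-5% bookmaker margin

# Step 2: sum of the model's implied probabilities = wins expected by the model
model_expected_wins = sum(implied_probability(model) for model, _, _ in bets)

# Step 3: same for the bookmaker's odds, deflated by the vig
bookie_expected_wins = sum(implied_probability(bookie) for _, bookie, _ in bets) / VIG

# Step 4: tally the wins actually achieved
actual_wins = sum(1 for _, _, won in bets if won)

print(model_expected_wins, bookie_expected_wins, actual_wins)
```

Run over a full season's bets, the three totals give you the "expected vs actual" table below.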

[Table: expected wins under the model and under the bookmaker's vig-corrected odds, versus wins actually achieved, for each bet type]

The first thing to note is the close alignment between the model's expected performance and what actually happened: 24 expected against 23 achieved for Totals bets, whereas the bookmaker's odds implied only 19 wins.

This may be enough for some of you: there is evidence of a superior model for the season across the three bet types, so let’s go down to the pub.

But did the model just get lucky? What confidence have we that it is indeed a superior assessment method to that held by the bookmakers?

Enter the chi-squared goodness of fit test.

Set out results like this:

[Table: actual win/loss tallies (orange) alongside the expected tallies (blue), laid out as two Excel ranges]

Now in Excel, use the function =CHITEST(actual range, expected range), which here is =CHITEST(orange range, blue range). (In Excel 2010 and later the same function also goes by =CHISQ.TEST.)
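If you'd rather not use Excel, the same number falls out of a few lines of Python. The tallies below are made up (assuming a hypothetical 50-bet season), not the article's figures:

```python
import math

# Hypothetical tallies (not the real AFL data): actual wins/losses versus the
# wins/losses implied by the bookmaker's vig-corrected odds, over 50 bets.
actual   = [23, 27]
expected = [19, 31]

# Chi-squared statistic: sum of (observed - expected)^2 / expected.
chi2 = sum((o - e) ** 2 / e for o, e in zip(actual, expected))

# With two categories (win/lose) there is one degree of freedom, and the
# p-value P(X^2 >= chi2) reduces to erfc(sqrt(chi2 / 2)) -- the same number
# Excel's =CHITEST(actual, expected) returns for these two ranges.
p_value = math.erfc(math.sqrt(chi2 / 2))

print(round(chi2, 3), round(p_value, 3))
```

The smaller the p-value, the harder it is to explain the gap between actual and expected as luck.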

How easy is this?

Interpretation

With 0.05 being the traditional bar a p-value must get beneath for statistical significance, if I were advising an unhappy bookmaker after the season, it would go like this:

1. At-odds bets: 0.369. Not significant; don't lose any sleep for now.

2. Totals bets: 0.194. Not significant, but watch next season.

3. Line bets: they have you on the ropes. At 0.026 there is only about a 2.6% chance of results this good coming from lucky guessing; in all likelihood they have a superior algorithm driving their model.

Outstanding effort by the AFL guys, bring on next season.