A football match is a complicated process. An enormous number of different factors affect its result, and these factors appear in various combinations, mixed in the blender called chance. The whole is almost impossible for human cognition to perceive objectively.

During a football match, countless incidents occur on the pitch that even the most educated football professional does not observe. I have joyously greeted the recent development of statistics that record every single event in a football match, along with the open distribution of this data to stats nerds over the last five years. Historical data is a great help in building mathematical models that estimate the probabilities of future match results. Information, and high-class analysis derived from it, is available in vast amounts. Can the constant increase of information at some point become useless, or even harmful, for analysing sports events? I believe it can. The question is not the quality of information but its quantity and our ability to process it. Less can be more.
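The kind of probability model alluded to here can be sketched very simply. A common baseline (assumed for illustration only, not a description of any particular bettor's method) treats each team's goal count as an independent Poisson variable and sums over scorelines; the goal rates 1.6 and 1.1 below are hypothetical.

```python
import math

def poisson_pmf(k, lam):
    # Probability of exactly k goals when goals follow a Poisson(lam) distribution
    return math.exp(-lam) * lam ** k / math.factorial(k)

def match_probabilities(home_rate, away_rate, max_goals=10):
    """Estimate P(home win), P(draw), P(away win) from expected goal rates,
    assuming the two teams' goal counts are independent Poisson variables.
    Scorelines above max_goals contribute negligibly and are truncated."""
    home_win = draw = away_win = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = poisson_pmf(h, home_rate) * poisson_pmf(a, away_rate)
            if h > a:
                home_win += p
            elif h == a:
                draw += p
            else:
                away_win += p
    return home_win, draw, away_win

# Hypothetical rates: home side expected to score 1.6 goals, away side 1.1
probs = match_probabilities(1.6, 1.1)
```

The point of the essay applies directly to a model like this: its two inputs already capture the core essence of a match, and piling dozens of further factors into the rate estimates is exactly where the information swamp begins.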

 

“The question is not the quality of information but its quantity and our ability to process it.”

 

Excellent statistics services, such as Opta and StatsZone, deconstruct football matches into small, single events. Every movement of the ball by every single player on the field is turned into statistical figures. Countless metrics can be built for the performance of teams and players, whether for a single match or across a whole season. For sport-specific analysis this is a great help in identifying and fixing problems in a player’s game. In sports betting and probability estimation, however, the purpose is to find the core essence behind complicated phenomena and to model that essence in as reliable and sustainable a manner as possible.

It feels difficult not to take advantage of first-class, publicly available information. A simple approach feels wrong, because we – sports bettors – feel we are conceding an edge to the betting market if we settle for less. There are algorithms in the betting market that consider huge amounts of specific and historical data for every single match. An algorithm that processes this vast amount of data will inevitably take irrelevant information into account as well. This can result in “tårta på tårta”, as they say in Sweden – literally “cake on cake” – meaning an equation so overburdened with excess that it is difficult to perceive and examine.

 

“After making thousands of bets, a sports bettor gathers valuable experience and develops an intuitive talent for reading market movement to inform his betting decisions.”

 

It is important for a sports bettor to know why he is winning. The movements of the betting market are difficult to predict nowadays. After making thousands of bets, a sports bettor gathers valuable experience and develops an intuitive talent for reading market movement to inform his betting decisions. In the long run it is just as valuable to hit the brakes when intuitive thinking raises warning signals as it is to make a bet based on a positive profit estimate. If the process pulls the sports bettor into its whirlpool and he sinks into too much information, the probability estimate that comes out of the analysis is the product of so many factors that the analyst’s mind is no longer able to generate those valuable warning signals.

I have also drowned in the information swamp, and the result was a messy and complicated equation. Constant analysis of oversized data sets is a strain, and it disturbs concentration. When you are focused and absorbed in the work, it is painful to accept that you cannot observe, know or understand everything. It has paid off to concentrate on what is relevant and to eliminate excess information – the warning signals are on again.

 

This writing was originally published in the fall of 2014.