Soccer at Sloan 2021: A summary

Greetings readers, it’s time for some analytics.

Last month was the MIT Sloan Sports Analytics Conference. In pre-pandemic times, this was one of the big ones on the sports analytics conference circuit. If you’re into tennis, think of the Slams — it was one of those. (Also, if you’re into tennis, which of the Slams was it most like? Twee, traditional Wimbledon? Fun, ‘Happy Slam’ Aussie Open?)

Although it’s an American conference with US sports taking prominence, soccer has long been represented among the research papers submitted. This year, soccer-related papers were judged first- and second-best in the research paper competition too. I’m going to be summarising one of them, and two other soccer analytics papers, here. First will come a brief summary for those just interested in the headlines from researchers at the forefront of the field, and then I’ll tackle each in a bit more detail for those who want it.

The papers I’m looking at are:

  • ‘Routine Inspection: A playbook for corner kicks’ by Laurie Shaw (Harvard University) and Sudarshan Gopaladesikan (Benfica) [link here]
  • ‘Making Offensive Play Predictable: Using a Graph Convolutional Network to understand defensive performance in soccer’ by Michael Stöckl, Thomas Seidl, Daniel Marley, and Paul Power (Stats Perform) [link here]
  • ‘Leaving Goals On The Pitch: Evaluating decision-making in soccer’ by Maaike Van Roy, Pieter Robberechts, Wen-Chi Yang, Luc De Raedt, and Jesse Davis (KU Leuven) [link here]

I’ll note here that there’s a chance I’ve misunderstood something in the papers. If that’s the case, check the online version of this piece for updates. (If I make any really bad errors, I may send out a clarification email.)

The lowdown

I can’t think of many unifying features of these three papers, so this lowdown will be slightly fractured. Each paper looks at different things, uses different techniques, and seeks to achieve different ends.

Making Offensive Play Predictable from the folks at Stats Perform is, as a research paper, kind of a ‘foundational’ work. Using tracking data, it creates three models — two fairly familiar to the analytics field (expected pass completion, expected threat), one quite original (expected receiver) — and tantalises us with their combined uses in analysis. It also introduces the detection of ‘active runs’, where a player makes a burst to notably increase their likelihood of being the player who’ll receive the next pass. In terms of defending, you can see when defences are forcing attackers to make active runs away from goal (i.e. dropping deep) to receive; when players are forced to change their mind about who to pass to; and whether a defending team is forcing their opponents to pass to different areas or make less safe passes than they usually do.

Routine Inspection is also somewhat foundational, but in a much more specific way. Instead of creating models or metrics that are used in open play, it creates a kind of dictionary of run types used at attacking corner kicks. On the defensive end of things, the paper also identifies whether players are marking zonally or player-to-player.

Meanwhile, Leaving Goals On The Pitch is much more focused on application, specifically on whether teams should actually shoot more from distance. Since the rise of expected goals it’s been analytics orthodoxy that teams should shoot less from range, but this paper sought to identify the situations in which it was actually better to take the shot. Spoilers: if you’re a bad team, it’s probably not worth hanging onto the ball if you’re already in range; and, in general, the benefit of shooting more is quite small, but there.

Together, these three papers really do give a good thematic indication of what ‘analytics’ can do. It can poke an assumption about the game and seek to confirm or refute it; it can run tasks to free up time for analysts to do other work; and it can create models and metrics to help further analyse the game.

And now for a research paper minute-by-minute liveblog

Well, not really, but a more thorough summary. I’ll tackle them one by one, and hopefully this won’t run longer than newsletters are permitted to go.

A playbook for corner kicks

‘Routine Inspection: A playbook for corner kicks’ by Laurie Shaw (Harvard University) and Sudarshan Gopaladesikan (Benfica)

[link here]

Although I skimmed over this paper in the summary above, it was the deserved winner of the research paper competition. What it does doesn’t sound like a sparkly headline, but it seems like really meaningful work.

To start with, Shaw and Gopaladesikan identified the ‘target locations’ of players’ attacking runs at corners. ‘Target location’ was defined as where the player was either one second after the first on-ball action of the corner, or two seconds after the corner was taken, whichever came first. The researchers split these locations into seven clusters of ‘active’ players, with other locations outside the box not considered as part of the analysis.

Once they’d identified the players who were actually involved in making runs at the corner, they used them to identify starting location clusters too (they found six). Both of these steps used a Gaussian Mixture Model.
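
For a flavour of what that clustering step might look like in code, here’s a minimal sketch of fitting a Gaussian Mixture Model to run target locations with scikit-learn. The data, coordinates, and parameter choices below are mine for illustration, not the paper’s actual pipeline:

```python
# A minimal, hypothetical sketch of clustering corner-kick run target
# locations with a Gaussian Mixture Model. The data here is invented;
# the paper's own preparation of the tracking data will differ.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Pretend target locations (x, y) in metres for attacking players at corners,
# e.g. where each player was one second after the first on-ball action
# (or two seconds after the corner was taken, whichever came first).
target_locations = rng.uniform(low=[88.0, 14.0], high=[105.0, 54.0], size=(500, 2))

# Seven clusters for the 'active' target zones, as in the paper.
gmm = GaussianMixture(n_components=7, covariance_type="full", random_state=0)
zone_labels = gmm.fit_predict(target_locations)

# Each run now has a target-zone label; the cluster means are the zone centres.
print(gmm.means_)
print(zone_labels[:10])
```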

There were two things I found interesting about this. The first is that they were computationally finding these zones, rather than using a coach-led system. One isn’t necessarily better than the other, but I think it’s always worth noting when these decisions are made. The second is that, despite using tracking data, Shaw and Gopaladesikan weren’t using the paths of player runs, as some NBA research has done. I imagine that doing it this way makes the computation easier, but corners have always seemed to me like the part of the game that would most likely cause problems in the tracking data (and I imagine it could depend a lot on your provider).

Those six starting and seven target zones gave Shaw and Gopaladesikan 42 possible runs (hello to Douglas Adams). However, runs usually occur together in regular patterns. The researchers used non-negative matrix factorisation to help create 30 run combinations, or ‘features’. Each of the 1723 corners in their sample could be constructed using combinations of these 30 features.
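
To make the factorisation step a little more concrete, here’s a rough sketch of what it might look like with scikit-learn, assuming a corners-by-run-types count matrix. The contents of the matrix are invented; only the dimensions echo the paper:

```python
# Hypothetical sketch: factorising a corners x run-types matrix into 30
# 'features' (recurring run combinations) with non-negative matrix
# factorisation. The matrix here is random; in the paper each row would be
# a corner and each column one of the 42 start-zone -> target-zone run types.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
corners_by_runs = rng.integers(0, 3, size=(1723, 42)).astype(float)

nmf = NMF(n_components=30, init="nndsvda", max_iter=500, random_state=0)
corner_weights = nmf.fit_transform(corners_by_runs)   # shape (1723, 30)
run_features = nmf.components_                        # shape (30, 42)

# Each corner is (approximately) a non-negative combination of the 30 features.
print(corner_weights.shape, run_features.shape)
```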

The next part of their paper does use domain experts alongside pure computerising. Shaw and Gopaladesikan decided that, for coding defensive corners, they wouldn’t focus on deciding whether the system as a whole was ‘zonal’ or not — after all, as they point out, systems are rarely wholly zonal or wholly player-to-player. Instead, they sought to work on the individual player-level.

Analysts at Benfica worked with the researchers, doing two things:

  1. come up with metrics that could be used to predict the role of a defender at a corner, to be used as parameters for a model
  2. watch 500 corners and tag whether the defenders were marking zonally or player-to-player (resulting in 3907 defenders) to form a training set for the model

Shaw and Gopaladesikan used XGBoost (the coincidentally named implementation of gradient boosted decision trees) and found they could determine whether defenders were marking zonally or player-to-player with a classification accuracy of 83.4% (±2.1%).
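
As a rough illustration of that pipeline (with made-up defender features and tags, not the metrics Benfica’s analysts actually designed), the training and evaluation step might look something like this:

```python
# Hypothetical sketch of the defender-role classifier: gradient boosted trees
# (XGBoost) trained on analyst-tagged corners. The features and data are
# invented stand-ins for the metrics designed with Benfica's analysts.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_defenders = 3907

# Made-up per-defender features at the moment the corner is taken.
X = np.column_stack([
    rng.uniform(0, 10, n_defenders),   # e.g. distance to nearest attacker
    rng.uniform(0, 20, n_defenders),   # e.g. distance to own goal
    rng.uniform(0, 5, n_defenders),    # e.g. movement relative to nearest attacker
])
y = rng.integers(0, 2, n_defenders)    # analyst tag: 1 = player-to-player, 0 = zonal

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(scores.mean(), scores.std())
```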

This kind of methodology allowed them to show an example in the paper, comparing the shots conceded by systems with four zonal players against those with two zonal players. The systems with four zonal players conceded more shots, but of worse quality.

As the two authors of the paper note, the next step would be to determine which types of attacking corner are more effective against which types of defensive corner. I also wonder whether something as simple as outswinger vs inswinger could have an impact on the success of defensive systems.

Making Offensive Play Predictable

‘Making Offensive Play Predictable: Using a Graph Convolutional Network to understand defensive performance in soccer’ by Michael Stöckl, Thomas Seidl, Daniel Marley, and Paul Power (Stats Perform)

[link here]

It’s worth noting at the start that the team also presented this work at the Stats Perform Pro Forum (I wrote about that here). Their presentation at the Forum covered somewhat different ground from the paper; a video of that presentation is here, and there’s a tweet thread from Paul Power about the paper and the wider work here.

Although this paper, and the associated presentations, introduce a bunch of models and applications for them, the centrepiece of the paper itself appears to be the methodology. As the title says, the researchers use a Graph Convolutional Network to create their models, and there’s a section of the paper that explains why.

Explaining what a Graph Convolutional Network is, and the reasons for using one, is slightly beyond my level of understanding, but this is what I gather:

  • Tracking data is ‘unstructured’ — you can’t stick it in a table, which means you can’t use machine learning techniques that are based on tabular datasets
  • Graphs* are a way of dealing with this unstructured data
  • As well as this, some previous work has used tracking data frames as images. Using graphs reduces the amount of computational power required

*For those reading this who, like me, only know of one type of ‘graph’ (things like line charts): this type of ‘graph’ is simply a set of nodes connected by lines (edges). In the case of this paper’s methodology, the nodes for defensive players were masked out for some of the models.
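
For a concrete (and heavily simplified) picture of what a ‘graph’ input might look like, here’s a sketch of turning a single frame of tracking data into node features and an adjacency matrix, the typical inputs to a graph convolutional network. The feature choices and the fully connected edges are my own assumptions, not the paper’s construction:

```python
# Rough, hypothetical sketch: turning one frame of tracking data into graph
# inputs (node features + adjacency matrix). Feature choices and the fully
# connected edge structure are my simplification, not the paper's approach.
import numpy as np

rng = np.random.default_rng(7)

# One frame: 22 players, each a node with (x, y, vx, vy) features.
positions = rng.uniform(low=[0, 0], high=[105, 68], size=(22, 2))
velocities = rng.normal(0, 2, size=(22, 2))
node_features = np.hstack([positions, velocities])        # shape (22, 4)

# Fully connected graph, with edge weights decaying with distance between players.
dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
adjacency = np.exp(-dists / 10.0)
np.fill_diagonal(adjacency, 0.0)

# A graph convolutional layer would then propagate node features along these
# weighted edges; masking out the defenders (as some of the paper's models do)
# just means zeroing their rows and columns before message passing.
print(node_features.shape, adjacency.shape)
```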

This being Stats Perform, the sample was large: 1200 matches of tracking data. That’d be a hell of a lot of frames of data, but they don’t need all of them. For the xThreat model they just used “the frames relating to the moment of passing events”, while the xPass and xReceiver models additionally included frames taken a half-second and one second prior to passes.

The reason for this is that, at the very start of a passing action, “players’ movements already indicate where the ball will be played to some degree”. Including the frames from half a second and a full second before the action helps to prevent ‘overfitting’ the model.

There’s more detail on the modelling process in the paper, but they show a table with accuracy and logloss figures in comparison to similar metrics created through different methods. “The loss and accuracy of all three GNN models were better than or the same as the metrics of the respective baseline model,” they write.

After creating the models, the paper then discusses a ‘disruption map’ — essentially a heat map of the models’ results for a team that, for example, can be compared between single games and a full-season sample. Through that, you can get a sense of how well a defence performed against their opponent.
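
The paper has its own construction for this, but my rough mental model, sketched below with invented data, is a pitch grid where a single match’s per-pass model outputs are compared against the team’s season-long baseline; the grid size and the choice of output are my assumptions, not theirs:

```python
# Hypothetical sketch of a 'disruption map': bin a model's per-pass outputs
# (e.g. expected pass completion) onto a pitch grid for one match and compare
# with the same team's season-long baseline. Grid size and data are invented.
import numpy as np

def pitch_grid_average(x, y, values, bins=(12, 8), pitch=(105.0, 68.0)):
    """Average a per-event model output over a coarse pitch grid."""
    extent = [[0, pitch[0]], [0, pitch[1]]]
    sums, _, _ = np.histogram2d(x, y, bins=bins, range=extent, weights=values)
    counts, _, _ = np.histogram2d(x, y, bins=bins, range=extent)
    return np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)

rng = np.random.default_rng(3)
# Season baseline uses many passes; a single match uses far fewer.
season = pitch_grid_average(rng.uniform(0, 105, 20000), rng.uniform(0, 68, 20000),
                            rng.uniform(0.6, 0.95, 20000))
match = pitch_grid_average(rng.uniform(0, 105, 500), rng.uniform(0, 68, 500),
                           rng.uniform(0.5, 0.95, 500))

# Negative cells: zones where the defence held the attack below its usual level.
disruption = match - season
print(disruption.shape)
```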

There’s a section that expands on that idea and gives examples of applications, but I’ll skip over that for the sake of space. The authors of the paper then wanted to look at whether defensive players were player-oriented or ball-oriented in their defending. Instead of trying to train a model, they got domain experts to give definitions which they could work from. This is also where they got the definition for ‘active off-ball runs’ (“an attacker moves at high speed to increase their probability of being a receiver”), which the paper then illustrates some uses of.

In terms of the methods used to deal with tracking data, this paper’s a really interesting one to look at. It offers a different approach to ones that it appears others have used in the past, while discussing some of the previous methods as part of the paper. This discussion, and the references section, make for a tremendous starting point for anyone looking to get up to speed with the field.

Leaving Goals On The Pitch

‘Leaving Goals On The Pitch: Evaluating decision-making in soccer’ by Maaike Van Roy, Pieter Robberechts, Wen-Chi Yang, Luc De Raedt, and Jesse Davis (KU Leuven)

[link here]

The third and final paper on this newsletter’s list is the one most geared towards actual findings, with some pretty interesting headlines. It features a fascinating stat early on, that high-volume long-range shooters like Christian Eriksen, Paul Pogba, Harry Kane, Kevin de Bruyne, Heung-Min Son, Eden Hazard, and Gylfi Sigurdsson combined for a long-distance conversion rate of 6.5% across the 2017/18 and 2018/19 seasons. Meanwhile, the possessions where these players made a touch in that range but didn’t shoot only resulted in a goal 2.1% of the time.

This leaves open the question of whether teams should be taking more shots from distance. Doing so would potentially forgo better-quality shots later in the move, but who knows whether those chances would actually arrive? That’s the problem with tackling this question: you’re dealing with counterfactuals.

To address this, the researchers model how a team generally plays, taking two seasons’ worth of data and training a Markov Decision Process model to capture its tendencies in possession.

By my understanding, this approach looks at the likelihood of a player/team either shooting from a location, moving the ball to another location, or losing possession. These ‘locations’ are, in this case, the cells of a 22x17 grid covering the attacking half of the field (given that the model is interested in scoring patterns, the defensive half is assigned as one single zone). The paper designates ‘long-distance shooting locations’ to be more or less the width of the 18-yard box and from 18 to 30 yards from the byline, although this’ll be most important later on when we’re thinking about the counterfactual shots.
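
At its simplest, estimating those tendencies amounts to counting what a team does from each zone of the grid. Here’s a heavily simplified sketch of that idea, with a made-up three-zone state space and invented possessions standing in for the paper’s 22x17 grid and real event data:

```python
# Heavily simplified, hypothetical sketch of the tendency-estimation idea:
# per zone, estimate the probability of shooting, moving the ball on, or
# losing it, from a team's possessions. The zones and sequences are invented.
from collections import Counter, defaultdict

# Each possession is a sequence of (zone, action) pairs; actions are
# 'move', 'shot' or 'loss'. These sequences are purely illustrative.
possessions = [
    [("zone_A", "move"), ("zone_B", "shot")],
    [("zone_A", "move"), ("zone_C", "loss")],
    [("zone_B", "shot")],
    [("zone_C", "move"), ("zone_B", "loss")],
]

action_counts = defaultdict(Counter)
for possession in possessions:
    for zone, action in possession:
        action_counts[zone][action] += 1

# Maximum-likelihood estimate of the team's tendencies in each zone.
for zone, counts in action_counts.items():
    total = sum(counts.values())
    probs = {action: n / total for action, n in counts.items()}
    print(zone, probs)
```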

So, the researchers ran this model on 17 teams’ data (the ones present in the Premier League in both 2017/18 and 2018/19), giving them a 76-match sample of how each team moves the ball around the field. The first obstacle in the problem, tackled.

(Interestingly, they slip in a method for determining the intended end location of incomplete passes, using Gradient Boosted Trees Ensembles and “the characteristics of the actions and what has happened prior to the actions”.)
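
As a purely hypothetical sketch of that side-method (the features and data below are invented; the paper only says it uses the characteristics of the actions and what happened before them), it might look something like this:

```python
# Hypothetical sketch: a gradient boosted trees ensemble predicting where an
# incomplete pass was intended to end up. Features and data are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(5)
n_passes = 2000

# Made-up pass features: start x/y, observed end x/y of the failed pass, etc.
X = rng.uniform(0, 1, size=(n_passes, 6))
# Intended end location (x, y), e.g. labelled from similar completed passes.
y = rng.uniform(0, 1, size=(n_passes, 2))

model = MultiOutputRegressor(GradientBoostingRegressor(n_estimators=100, max_depth=3))
model.fit(X, y)
intended_end = model.predict(X[:5])
print(intended_end)
```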

But there’s another obstacle that the paper attempts to overcome too. If a team decides that it wants to shoot more often from long range, there’s no guarantee that they’ll keep up their current quality of long-distance shots. They might just start swinging whenever they get the chance, and that might mean worse-quality attempts. So what should you take as the goal probability of shots that never happened?

For each zone, the researchers looked at the distribution of xG values for shots in that zone and used it as the basis for assigning xG to ‘new’ shots. For example, if a team is increasing their number of long-range attempts, the researchers reason that the shots will likely be of lower quality and so the xG assigned is at the lower end of the distribution. If a team is decreasing the number of long-distance shots, they’ll take away ones from the lower end of the xG distribution.
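
Here’s a small sketch of how that assignment might work, with an invented xG distribution for a single zone and a percentile cut-off of my own choosing rather than anything taken from the paper:

```python
# Hypothetical sketch of assigning xG to counterfactual shots: added shots get
# values from the lower end of the zone's observed xG distribution, removed
# shots come off the bottom first. Distribution and cut-off are my inventions.
import numpy as np

rng = np.random.default_rng(11)
zone_xg = np.sort(rng.beta(2, 40, size=300))   # pretend observed xG values in one zone

def added_shot_xg(observed_xg, n_new, quantile=0.25):
    """Give 'new' shots xG values drawn from the bottom quantile of the zone."""
    cutoff = np.quantile(observed_xg, quantile)
    low_end = observed_xg[observed_xg <= cutoff]
    return rng.choice(low_end, size=n_new, replace=True)

def removed_shot_xg(observed_xg, n_removed):
    """Remove shots starting from the lowest xG values in the zone."""
    return np.sort(observed_xg)[:n_removed]

print(added_shot_xg(zone_xg, 5))
print(removed_shot_xg(zone_xg, 5))
```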

And so, with the model of teams’ play tendencies and the method for adding or taking away counterfactual shots, all you need to do now is run the code (for which the researchers used PRISM, a probabilistic model checker).

The results are interesting. Yes, teams do seem to be leaving goals on the table, but not many. A uniform 20% increase in shots across the entire ‘long-distance’ area would result in just an extra 0.5-1.0 goals per season, and only for the top half of teams. This gets better if teams focus on just the long-distance areas they do comparatively well in, but only rises to around 1.6 extra goals at the very top.

Interesting as well is the comparison of different teams. The paper displays a heat map of the long-distance zone for four different teams (Chelsea, Everton, Huddersfield, Man Utd) with the probability that they’ll create a better shooting chance later in a sequence of play. For Chelsea and United, there are large patches where they could have something like a 20% chance of getting a better shot in the same sequence. For Huddersfield, it looks like they barely crack 15% anywhere.

As the paper points out, an extra goal or two in a season is not nothing, even if it isn’t a lot. But, I think, this is a pretty thorough approach to a very interesting question, and one which challenges analytics orthodoxy as well, which is worth doing every once in a while.

There are questions I’d have, like what happens if a team changed tactics partway through the two-season sample used in the Markov model, and how that would affect the tendencies. But I also feel like one has to cut some slack to people looking to investigate this kind of analytical counterfactual.


If you’re reading this text, thanks very much for (presumably) taking the time to read all of this. I truly hope that it has been useful and/or informative to some of you (although my greater hope is simply that it’s an accurate reflection of the work).