
Final Four Demonstrates Systemic Expert Prediction Problems

March 31, 2023

Editor’s Note: Our goal at JohnWallStreet is to make the reader a little smarter each day.

Our new Friday columnist will help us to achieve that objective.

Adam Grossman (VP, business insights and analytics, Excel Sports Management) will be publishing his weekly Revenue Over Replacement column on Friday mornings under the JWS banner.

Readers can expect insights at the intersection of sports, partnerships, strategy, data, and technology from Adam’s unique perspective as an industry leader, entrepreneur, professor, and author. 

I hope you enjoy our latest addition. I know you'll be smarter for it. 

-JohnWallStreet

Final Four Demonstrates Systemic Expert Prediction Problems

March Madness lived up to its name this year, with the NCAA Men’s basketball tournament culminating in a collection of teams that had a low probability of making it to the Final Four. While millions of fans are lamenting their busted brackets, it is likely that many experts fared no better with their selections.

That was the case at CBS Sports. An article entitled “2023 NCAA Tournament bracket predictions: March Madness expert picks, upsets, winners, Final Four” indicates that none of the outlet’s experts’ picks to win the national championship made it past the Sweet Sixteen, and just one of the ten experts correctly picked a single team to make the Final Four.

Gary Parrish had the University of Connecticut losing to the University of Houston in the National Semifinals.

That is not to pick on the CBS Sports team specifically, but to highlight a problem with expert predictions more generally.

In a study titled “Insights Into the Accuracy of Social Scientists’ Forecasts of Societal Change,” three professors asked 100 teams of social scientists “with access to historical data of month-by-month forecasts” to make predictions about the COVID-19 pandemic. They found these experts’ predictions were:

  • Frequently less accurate than those made by the public

  • Often worse than those created by simple statistical models

The study, one of the largest conducted to date, replicates the findings described in multiple books, papers, and studies, including Superforecasting: The Art and Science of Prediction by Philip Tetlock, Thinking, Fast and Slow by Daniel Kahneman, The Signal and the Noise by Nate Silver, and The Wisdom of Crowds by James Surowiecki.

Yogi Berra, Niels Bohr, and/or Sam Goldwyn captured the main insight from all these works best when they said, “It’s tough to make predictions, especially about the future.”

The challenge for sports industry professionals is that making predictions about the future is often a core element of their job description. Sports betting aside, leaders are often asked to make strategic decisions on where to invest an organization’s financial, relationship, and human capital.

If experts are not successful at making predictions, then how can anyone expect these organizations to make the right choices?

The authors of the most recent study on experts have three recommendations for how to improve predictions.

The first is that experts should limit prediction making to their specific field of expertise. At first glance, the CBS Sports example highlighted earlier seems to run counter to that recommendation.

However, a closer examination shows a more general problem with journalists making forecasts. Specifically, journalists are typically experts in storytelling, narrative creation, and reporting. They are not experts in predicting the outcome of games (i.e., they are not professional sports bettors).

The second recommendation is to take predictions from experts from different fields to improve accuracy for complex problems. There is a reason we highlighted a group of published works on expert predictions rather than a single study. The collected work includes input from scientists, economists, psychologists, and journalists (among many others) that have all converged on the problem.
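To make the intuition behind aggregation concrete, here is a minimal sketch (in Python, with entirely made-up numbers) of why averaging independent forecasts tends to beat a typical individual expert: uncorrelated errors partially cancel in the average. None of these figures come from the study; they are purely illustrative.

```python
# A toy illustration (with made-up numbers) of the "wisdom of crowds" effect:
# averaging several noisy, independent forecasts tends to cancel out
# individual errors, so the crowd average is usually closer to the truth
# than a typical individual expert.
import random

random.seed(42)

TRUE_VALUE = 100.0   # the quantity being forecast (hypothetical)
NUM_EXPERTS = 10     # number of independent experts (hypothetical)
NOISE = 15.0         # each expert's forecast error, std. dev. (hypothetical)

# Each expert's forecast = truth + independent noise.
forecasts = [random.gauss(TRUE_VALUE, NOISE) for _ in range(NUM_EXPERTS)]

crowd_average = sum(forecasts) / len(forecasts)

avg_individual_error = sum(abs(f - TRUE_VALUE) for f in forecasts) / len(forecasts)
crowd_error = abs(crowd_average - TRUE_VALUE)

print(f"Average individual error: {avg_individual_error:.1f}")
print(f"Error of the crowd average: {crowd_error:.1f}")
```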

The third recommendation is to rely on simple algorithms and models, which often work better than expert predictions. “Simple” algorithms can work better because they follow a set of instructions that are used to make predictions without falling prey to biases that lead to worse predictions. 
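As an illustration of what a “simple” algorithm can look like, below is a minimal sketch of a fixed-rule forecaster in the spirit of the statistical baselines the study used for month-by-month data: it extrapolates the recent month-over-month trend, following the same instructions every time regardless of narrative or hindsight. The data, function name, and window size are hypothetical; the study’s actual benchmark models may differ.

```python
# A sketch of a "simple statistical model": given month-by-month historical
# data (hypothetical values here), extend the average recent month-over-month
# change one month ahead. The rule is fixed in advance, so it cannot be
# swayed by compelling stories or hindsight bias.

def linear_trend_forecast(history: list[float], window: int = 3) -> float:
    """Forecast the next value by extending the average month-over-month
    change across the last `window` observations."""
    recent = history[-(window + 1):]
    avg_change = (recent[-1] - recent[0]) / (len(recent) - 1)
    return history[-1] + avg_change

# Hypothetical monthly index values (e.g., some societal indicator).
monthly_values = [48.0, 50.5, 51.0, 53.5, 54.0, 56.5]

print(f"Next-month forecast: {linear_trend_forecast(monthly_values):.1f}")
```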

What often works best in both predictive and descriptive analysis, however, is having algorithms and humans work together to solve problems. There is arguably no hotter topic in the business world right now than artificial intelligence, and no company in the space has received more attention than OpenAI.

While OpenAI’s algorithms are complex, the core of the company’s ChatGPT platform essentially leverages large language models to make text predictions in response to user queries. In layman’s terms, OpenAI has trained ChatGPT’s models on millions of pieces of text collected from the internet and fine-tuned its algorithms (think: correcting/minimizing errors in responses) based on feedback from both machines and people.

This approach enables the platform to “answer” questions on a variety of topics. The “answer” comes from text predictions made by ChatGPT based on the training it has received from many similar questions and use cases. The platform generates responses more quickly, and frequently more accurately, than humans alone, but it would not be possible without human input.
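To give a flavor of what “text prediction” means, here is a deliberately tiny sketch of next-word prediction using bigram counts. This is not OpenAI’s method; ChatGPT’s models are vastly larger and fine-tuned with human feedback, but the core idea of predicting the next token from patterns in training text is the same. The training text below is invented for illustration.

```python
# A toy bigram model (not OpenAI's actual method, which is far more complex)
# illustrating the core idea of text prediction: count which word tends to
# follow which in the training text, then emit the most likely next word.
from collections import Counter, defaultdict

training_text = (
    "experts make predictions . simple models make predictions . "
    "good predictions beat bad predictions ."
)

# Count word-to-next-word transitions in the training text.
transitions: dict[str, Counter] = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word` in training."""
    if word not in transitions:
        return "?"
    return transitions[word].most_common(1)[0][0]

print(predict_next("make"))         # -> "predictions"
print(predict_next("predictions"))  # -> "."
```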

I recently spoke to OpenAI’s Head of Azure OpenAI Enablement, Adam Goldberg, about the need for human interaction alongside machines on a recent podcast. Goldberg said, “You need to know what you're asking the model to do. Any need to evaluate the output requires humans in the loop. It's not all about AI. It's AI with humans.”

Focusing on domain expertise, sourcing information from multiple experts, and working with algorithms (whether simple or complex) may not save your NCAA bracket from being busted next year. However, integrating these recommendations should help sports industry professionals tasked with making predictions about the future.

About the Author:

Adam Grossman is the Vice President of Business Insights & Analytics at Excel Sports Management. He works with companies, sports properties, media rights holders, athletes, agencies, and events to determine the value of their most important assets. He is also a professor in Northwestern University’s Master’s in Sports Administration program and the co-author of The Sports Strategist: Developing Leaders for a High-Performance Industry.