Complexity and Prediction, Part 1 of 2: The SFI Time Series Prediction Competition
[A]lthough the past is never completely knowable, it is more knowable than the future.[1]
…John Lewis Gaddis, Historian

[T]he world’s most prominent specialists are rarely held accountable for their predictions, so we continue to rely on them even when their track records make clear that we should not. One study compiled a decade of annual dollar-to-euro exchange-rate predictions made by 22 international banks… The banks missed every single change of direction in the exchange rate. In six of the 10 years, the true exchange rate fell outside the entire range of all 22 bank forecasts.[2]
…David Epstein
While researching my upcoming book, I took an enjoyable side trip into the prediction of future behavior: a Santa Fe Institute (SFI) sponsored contest in which teams predicted future results from past data.
In 1991, SFI sponsored a Time Series Prediction and Analysis Competition to investigate the state of the art in scientific data analysis, especially its ability to predict future events. This competition matters for those of us in business because it judged how effectively teams could predict the future from past data.
Of course, this competition was held over 30 years ago, and algorithms, data sets, and computing power have become far more sophisticated in the intervening years. Yet, having trained as a mathematician, applied numerical analysis in aerospace and supercomputing, and spent decades working with prediction in business, I find the fundamental challenges of prediction remain as difficult today as they were in 1991.
The Competition
While we know a great deal about what happened in the past, predicting the future is a serious challenge — one so difficult that predictions of the future can never be fully trusted. Each business, then, must decide:
- To what degree it will attempt to predict the future
- To what degree it will take risks based on those predictions
In this competition, teams were given data in the form of time series: data points recorded as time passes. To create a valid test, SFI gave teams only the first part of each series and kept the rest hidden. From this partial data, teams had to develop predictions of what they thought would happen next, based entirely on the past data in hand. In other words, SFI knew how each story ended; the teams didn’t.
As described in the deeply technical book Time Series Prediction by Weigend and Gershenfeld, six sets of data were provided from experiments and other real-world sources. One, for example, held results from a laboratory physics experiment, another offered observed data from sleep apnea patients, and another tracked currency exchange rates. The data sets were substantial for the early 1990s, each including as many as 30,000 to 100,000 data points.
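To make that setup concrete, here is a minimal sketch of this kind of held-out test. The random-walk series, the naive persistence forecast, and the normalized-error score below are my own illustrative assumptions, not the competition's actual data or scoring code.

```python
# A minimal sketch, assuming a made-up random-walk series and a simple
# normalized-squared-error score; not the competition's data or scoring code.
import numpy as np

def score_forecast(full_series, visible_len, forecast):
    """Normalized mean squared error of a forecast against the hidden tail.
    A score near 1.0 is roughly 'no better than guessing the hidden mean'."""
    hidden = full_series[visible_len:visible_len + len(forecast)]
    mse = np.mean((forecast - hidden) ** 2)
    return float(mse / np.var(hidden))

# The organizers hold the full series; a team sees only the visible prefix.
rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(size=1000))   # stand-in for one real data set
visible = series[:900]                      # the portion a team receives

# A naive "persistence" forecast: repeat the last observed value 100 steps ahead.
naive_forecast = np.full(100, visible[-1])
print("persistence NMSE:", round(score_forecast(series, 900, naive_forecast), 3))
```

The point of the structure, not the particular numbers: the team never touches the hidden tail, so the score reflects genuine prediction rather than a fit to data already seen.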
While the challenge of predicting the future from existing data is common in business, businesses never have the opportunity to evaluate their prediction methods in isolation. So what did SFI find in this competition?
“There are strong bounds on what can be known”
Beyond attempting to predict the future, each team also described its approach, estimated its prediction errors, characterized the underlying systems, and attempted to determine which data, if any, had been generated by equations.[3][4] The conclusions should be sobering for anyone demanding business predictions of the future.
- No algorithm predicted well for all data sets. The best algorithm for predicting forward in time depended on the specific situation and data. This is consistent with the truth from complexity that there are no universal answers — best answers depend on unique situations.
- An algorithm that best predicted the short term was not best for the long term. Similarly, those best for the long term were rarely best for the short term.
- All successful teams explored the data before attempting to find good algorithms for prediction.[5] Researchers found that patterns in the data hinted at which prediction methods might work best.
- Black box approaches did NOT work. Offered to businesses by many vendors, “Black Box” systems are given data and predict the future without companies knowing how they arrived at that prediction — they do their work in the dark using pre-defined assumptions of “best methods.” In this test, as in business, the accuracy of black boxes was quite poor.
- The best predictions were by algorithms. These algorithms were more accurate than humans visually inspecting the data.
- The worst predictions were also from algorithms — humans visually inspecting the data were more accurate. Businesses tend to ignore how rapidly algorithms propagate errors.
- A better fit to past data did NOT mean an algorithm predicted accurately. Byron Sharp notes that modelers tend to tweak models “trying to get better and better fits to historic data…. This sounds sensible but there is a very high risk of ‘over fitting’ … so the fit to historic data is better but it’s even worse at predicting the future.”[6] (A short sketch of this over-fitting trap follows this list.)
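To illustrate the over-fitting trap Sharp describes, here is a minimal sketch, not drawn from the competition: it fits a simple and a very flexible polynomial to the "past" portion of a made-up series, then checks both against the hidden "future." The series, the polynomial degrees, and the split point are all illustrative assumptions.

```python
# A minimal sketch, assuming a made-up series and illustrative model choices;
# not the competition's data or any team's method.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical series: a slow trend plus a cycle plus noise.
t = np.linspace(0.0, 1.2, 120)
y = 2.0 * t + np.sin(8.0 * t) + rng.normal(scale=0.2, size=t.size)

# Only the first part is "known"; the remainder plays the hidden future.
split = 100
t_past, y_past = t[:split], y[:split]
t_future, y_future = t[split:], y[split:]

def rmse(pred, actual):
    """Root mean squared error between predicted and actual values."""
    return float(np.sqrt(np.mean((pred - actual) ** 2)))

# Fit a simple and a very flexible polynomial to the past only, then
# compare how well each fits history versus how well it forecasts.
for degree in (2, 12):
    coeffs = np.polyfit(t_past, y_past, degree)
    fit_err = rmse(np.polyval(coeffs, t_past), y_past)
    forecast_err = rmse(np.polyval(coeffs, t_future), y_future)
    print(f"degree {degree:2d}: fit-to-past RMSE = {fit_err:.3f}, "
          f"forecast RMSE = {forecast_err:.3f}")
```

The flexible model wins on the fit to history and loses, often badly, on the forecast, which is exactly the pattern the competition results and Sharp both warn about.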
Predicting the future turns out to be exceptionally difficult.
What Business Should Learn
It should be striking that this competition shows a thoroughness we never have in business. Remember, all predictions of future behavior in business are made without a safety net for critiquing the methods used to make them. As a result, we can rarely test and challenge our prediction methods, even though processes only improve when they can be challenged.
Also, since a business is a complex adaptive system, it will adapt as we proceed. Further, businesses adapt in response to results, creating a feedback loop that affects future results. Worse, businesses also adapt to perceptions of results (right or wrong) through the same feedback loops. Thus, a business which believes it has performed well will adapt based on that belief, even if what it sees is a false positive.
Businesses also face high risk from analysts “fitting equations to the past.” Many executives want to rely on a heuristic assuming that any method which better fits the past also predicts more accurately. This is NOT true. The more accurately an algorithm fits the past, the more we should question its value for the future. All predictions of the future are wrong; fortunately, some are less wrong than others. The problem is that we must sort out which are less wrong in ways which are valuable for us.
This competition involved situations where results were wholly numerical, meaning they could be accurately represented by numbers. Within businesses and other complex adaptive systems, numbers capture only a part of any result. How much? It is rare that an accumulation of measures captures more than 50% of a situation. So while incredibly important, numbers do not ensure a whole understanding.
One critical lesson is that reality is always messy, far messier than can ever be reflected in the precise, neat numbers companies prefer to believe. The future in business is always uncertain. Success, then, requires that we learn to choose numbers carefully, evaluate them while aware of how easily evaluations can mislead, and then process what we learn with human intelligence, instinct, and judgement.
Fortunately, humans evolved to thrive within such situations of uncertainty. And that should give us confidence as we go forth and do business.
Stay tuned for Part 2 of this post where I consider, among other issues, the difference between anticipating possible futures and trying to predict.
Footnotes:
[1] The Landscape of History: How Historians Map the Past, John Lewis Gaddis
[2] https://www.theatlantic.com/magazine/archive/2019/06/how-to-predict-the-future/588040/
[3] Teams were given six sets of data covering a laboratory physics experiment, an example with physiological data, a currency exchange rate example, a four-dimensional series created for the competition, astrophysical data from a variable star, and an unfinished Bach fugue.
[4] Time Series Prediction, Weigend and Gershenfeld, Addison-Wesley, 1994, p. 7.
[5] Time Series Prediction, Weigend and Gershenfeld, Addison-Wesley, 1994, pp. 7, 14.
[6] https://medium.com/@ProfByron/how-i-changed-my-mind-about-global-warming-f603a8aca3da
©2026 Doug Garnett — All Rights Reserved
Doug’s book about the value of complexity science and business success will be published by Columbia Business Press later in 2026. Through his company, Protonik LLC, Doug Garnett consults with companies as they design and bring to market new and innovative products. He has taught marketing, consumer behavior, and advertising at Portland State University since 2001.
You can read more about Doug’s unusual background (math, aerospace, supercomputers, consumer goods & national TV ads) at www.Protonik.net. Doug is a member of the RetailWire.com braintrust, where he engages in discussions of retail challenges. And, together with his podcast partner Shahin Khan, he discusses current issues in marketing and business on The Marketing (And Everything Else) Podcast, available on Google, Spotify, the OrionX website, and Apple Podcasts.
Categories: Complexity in Business