Scenario Analysis Misconceptions (3 of 3): AI is the Answer

When operating at the tail of the distribution, scenario analysis can be extremely helpful in identifying vulnerabilities. As we noted earlier this week, applying scenario analysis effectively to the growing number of public policy risks embedded in portfolios requires risk managers and quantitative analysts to stop treating policy risk as a random, event-based, exogenous shock (Misconception 2) and to stop approaching scenario selection as if it were a game of darts (Misconception 1).


In the search for a more robust approach, the temptation is great to off-load scenario analysis to artificial intelligence (AI).


Resist.


It is true that AI processes can identify correlations faster and better than mere humans. It is equally true that AI can combine a large number of potential public policy events into a nearly infinite set of combinations to assess possible outcomes. And it is true that AI processes can deliver significant advances in “enhanced cognition” that help risk managers and scenario analysis architects assess risks faster and better than before.


Having just written an entire chapter for The AI Book on the future of work and how AI will deliver dramatic enhancements and efficiencies for knowledge workers, I personally see clearly the comparative advantage of using AI processes to amplify the reach and utility of scenario analysis. But it is important not to overstate the opportunity.


Significant limitations impair the potential effectiveness of AI when used for scenario analysis purposes. These limitations are amplified during the current pandemic era, as described below. Using AI appropriately in the scenario analysis context requires an understanding of the technology’s limitations as well as its promise.



Data Deficits


As discussed last week, the pandemic creates a “when bad data happens to good models” situation. To catch up, see Two Data Deficits and Why Data Deficits Matter.




Specifically, missing data generates gaps that interfere with modeling processes and render risk measurements built on low-quality or incomplete information sub-par at best and misleading at worst.
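A minimal sketch with synthetic data illustrates the mechanics (every figure below is invented): when the most recent stress observations arrive late, a simple historical value-at-risk estimate quietly understates tail risk just when it matters most.

```python
# Illustrative sketch with synthetic data: a reporting-delay gap quietly
# biases a simple historical risk measure (99% value-at-risk) downward.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical daily return history: a calm regime followed by a stress regime.
calm = rng.normal(0.0, 0.01, 750)        # roughly three years of quiet markets
stress = rng.normal(-0.002, 0.04, 60)    # roughly three months of turbulence
full_history = np.concatenate([calm, stress])

# Simulate the data gap: the most recent stress observations arrive late
# (delayed regulatory or issuer reporting), so the model never sees them.
gapped_history = full_history[:-45]

def hist_var(returns, level=0.99):
    """Historical value-at-risk: the loss exceeded (1 - level) of the time."""
    return -np.quantile(returns, 1 - level)

print(f"99% VaR, complete data: {hist_var(full_history):.2%}")
print(f"99% VaR, gapped data:   {hist_var(gapped_history):.2%}")
# The gapped estimate understates tail risk precisely when it matters most.
```

Nothing in the toy calculation depends on the particular numbers; any gap that disproportionately removes stress observations will pull the risk measure downward.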


The problem expands exponentially in the AI context.

Artificial intelligence is neither a crystal ball nor merely industrial-strength algorithmic calculation.

Most processes currently collected under the heading “AI” consist of programming a machine to draw conclusions from vast amounts of data. These “deep learning” processes enable AI to solve problems and deliver answers beyond their original programming.


Operating through neural networks, the AI process learns much like the human brain learns: by creating new connections that did not previously exist.
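For readers who want to see the metaphor in code, the toy sketch below trains a tiny two-layer network to reproduce the XOR rule. It is not any particular production system; the point is simply that the programmer supplies only data and an error signal, and the decision rule emerges in the weights (the "connections").

```python
# Toy sketch of "learning by adjusting connections": a tiny two-layer network
# learns the XOR rule from examples rather than from explicit programming.
import numpy as np

rng = np.random.default_rng(1)

# Inputs with a constant bias column appended, and the XOR targets.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized "connections" (weights) between the layers.
W1 = rng.normal(size=(3, 8))   # input -> 8 hidden units
W2 = rng.normal(size=(9, 1))   # hidden units (+ bias) -> output

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # Forward pass: the network's current guess.
    hidden = sigmoid(X @ W1)
    hidden_b = np.hstack([hidden, np.ones((4, 1))])  # append bias unit
    output = sigmoid(hidden_b @ W2)

    # Backward pass: nudge every connection to reduce the error.
    err_out = (output - y) * output * (1 - output)
    err_hid = (err_out @ W2[:-1].T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden_b.T @ err_out
    W1 -= 0.5 * X.T @ err_hid

print(np.round(output, 2))  # typically close to [[0], [1], [1], [0]]
```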

Early successes have involved finding new solutions to games with rigid rules, as illustrated by the now-famous matches in which DeepMind’s AlphaGo defeated human professional Go players and created new moves in the process. Similar earlier examples occurred in chess.


How did the computer devise entirely new moves?

The system effectively writes its own code based on the data sets provided.

HOW those new connections are created remains a mystery. The problem for low data environments becomes immediately obvious when viewed from this perspective.


AI systems deployed to conduct scenario analysis in the public policy context do not have enough data to learn, much less operate, optimally.

We will save for another day the deep dive into appropriate data sets for quantifying public policy risk and running robust scenario analysis. For now, the main point is that because the majority of public policy data is unstructured verbal data, translating that data into usable inputs for scenario parameterization is a far from trivial exercise. AI might be able to win a game of Go (where the rules are highly structured), but it still has a hard time distinguishing between a cat and a leopard-print sofa.
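As a purely hypothetical illustration of why this is non-trivial, consider the kind of crude keyword tagging that often serves as a first pass at structuring verbal data. The categories, lexicon, and scoring below are invented; real pipelines are far more involved, which is precisely the point.

```python
# Purely hypothetical sketch: a crude first pass at turning unstructured
# policy language into structured features. The categories, keywords, and
# scoring are invented for illustration; real pipelines are far more involved.
from collections import Counter

POLICY_LEXICON = {
    "tightening": {"tariff", "sanction", "restriction", "ban"},
    "easing": {"stimulus", "relief", "guarantee", "moratorium"},
}

def crude_policy_features(statement: str) -> dict:
    """Count lexicon hits per category in a single policy statement."""
    tokens = Counter(statement.lower().split())
    return {
        category: sum(tokens[word] for word in words)
        for category, words in POLICY_LEXICON.items()
    }

statement = "The ministry announced a loan guarantee and a repayment moratorium"
print(crude_policy_features(statement))  # {'tightening': 0, 'easing': 2}
```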


These are solvable problems. But even if we restrict the challenge to structured data and the traditional economic aggregates used in financial modeling (e.g., loss-given-default rates, leverage ratios, turnover, default probabilities, and other proxies for credit risk), delays in regulatory reporting due to the pandemic have created data gaps. The actual regulatory definition of default has changed, at least temporarily.


These data gaps generate significant challenges for machines and people alike, since pre-pandemic data may not provide a solid foundation for risk estimation. In the AI context, the data situation increases the risk that AI-driven scenario analyses generate spurious results, because deep learning processes do not provide visibility into how their leaps of logic were made.


Decision Rules


AI systems generally come in two flavors: supervised and unsupervised. In either instance, risk managers must be cautious when configuring the processes in order to avoid skewing the results.


In the supervised context, configuration options can constrain the input data (see the discussion above), the decision rules, or both. Good arguments exist for using supervised AI in scenario analysis, since only a subset of theoretically available policy options is realistically open to most decision-makers. But the sad history of the 20th century alone illustrates that many otherwise unimaginable policy choices in fact were made. The first two decades of the 21st century also provide a wide range of innovative policy responses to economic distress, most recently in response to the pandemic.


The mere act of restricting decision rules and/or data inputs can introduce implicit bias into the system and skew the outputs.


For example, if the AI system has previously been trained on game theory data that prioritizes zero-sum-game outcomes, the scenarios generated by the system will progressively become more dystopian with each iteration.

Or consider some real-life examples. An AI system trained on the Maastricht Treaty might never deliver scenarios in which Member States violated the debt-to-GDP ratio and/or the deficit limits. Prior to March 2020, an AI system trained on the U.S. Congress might never deliver scenarios involving direct cash payouts from the U.S. federal government to individuals and businesses in response to a health crisis.
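The bias is mechanical. The sketch below is purely hypothetical (the action menu, the scenario generator, and every name in it are invented for illustration), but it shows how a constrained action set makes the policy response that actually occurred in 2020 impossible for the model to imagine.

```python
# Hypothetical sketch: a toy scenario generator whose action menu reflects
# only pre-2020 decision rules. The menu and scenarios are invented.
import random

PRE_2020_ACTIONS = ["rate cut", "asset purchases", "regulatory forbearance"]
NOVEL_ACTION = "direct cash payments to individuals"  # observed in 2020

def generate_scenarios(action_menu, n=1000, seed=42):
    """Sample n toy scenarios, each a pair of policy responses from the menu."""
    rng = random.Random(seed)
    return [tuple(rng.choices(action_menu, k=2)) for _ in range(n)]

scenarios = generate_scenarios(PRE_2020_ACTIONS)
hits = sum(NOVEL_ACTION in scenario for scenario in scenarios)
print(f"Scenarios containing '{NOVEL_ACTION}': {hits} of {len(scenarios)}")
# Always zero: the constrained menu makes the 2020 policy response unthinkable
# to the model, no matter how many scenarios it generates.
```

Loosen the menu and the scenario appears; constrain it and no amount of computing power will surface it.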


Unsupervised scenario analysis conducted by AI systems generates a different set of challenges. The pandemic era by definition is a low data environment at the tail of the distribution.

Training an AI system based on current activity risks redefining the tail as the center of the distribution.

People joke about the “new normal,” but adjusting to a new reality for a period of years is very different from automatically drawing conclusions about future policy trajectories based only on worst-case or stress scenarios.
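A stylized calculation makes the danger concrete. The figures below are synthetic and purely illustrative; the point is only that the same shock looks unremarkable when the model is calibrated exclusively to the stress window.

```python
# Stylized calculation with synthetic data: calibrating only to the stress
# regime relabels what used to be a deep-tail event as a routine one.
import numpy as np

rng = np.random.default_rng(3)
long_history = rng.normal(0.0, 0.01, 2500)     # ~10 years of ordinary daily returns
stress_window = rng.normal(-0.003, 0.05, 120)  # ~6 months of crisis-era returns

shock = -0.08  # a hypothetical -8% day

z_long = abs(shock - long_history.mean()) / long_history.std()
z_short = abs(shock - stress_window.mean()) / stress_window.std()

print(f"Full-history calibration : {z_long:.1f} sigmas from the mean")
print(f"Stress-window calibration: {z_short:.1f} sigmas from the mean")
# Calibrated only to the stress window, the same shock looks unremarkable,
# so the model quietly stops treating it as a tail scenario.
```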


Conclusion


Efficiency gains notwithstanding, AI processes deployed in scenario analysis of public policy risks are far more likely to generate misleading outputs if they are told to treat all potential future policy actions as equally likely. Efforts to constrain outputs and modeling processes through external parameterization create the risk of spurious accuracy by effectively putting a human finger on the automated scale.


AI technology is not yet sufficiently mature to be used effectively and accurately for scenario analysis purposes. Thoughtful human engagement is required to curate the inputs and evaluate the outputs.

AI does not create any shortcuts to measuring and managing public policy risks. Unless and until we have robust, transparent, and objective data regarding public policy activity, most AI-powered scenario analysis outputs at a minimum need to be subjected to serious scrutiny by real, live human beings.

We at BCMstrategy, Inc. are working hard to generate more robust data suitable for measuring public policy risks. Stay tuned!

BCMstrategy, Inc. is a start-up company that is bringing the data revolution to the policy intelligence business through patented technology. The company measures and visualizes public policy risks using a web-based platform. Individuals can access the PolicyScope platform and the daily PolicyScope Risk Monitor. Subscribe today from our website. Customized, enterprise-level data delivery via APIs is available as well. To schedule a demo and to explore enterprise-level deployments, please contact us.


(c) 2020 by BCMstrategy, Inc.