Earlier this week, I had the honor of being included as a speaker at the IMF's first Artificial Intelligence and Machine Learning Symposium in Washington DC. It was impressive and energizing to spend two days discussing with colleagues the innovation underway within the IMF, academia, and leading global companies.
My prepared remarks appear below. They provide a high-level overview of what we are working on at BCM.
It is a pleasure and an honor to contribute to this important session at the IMF. I approach our task today with a fair bit of humility since the IMF has been “doing big data” since it was created during World War II!
Let me also thank the IMF for including a start-up. This is the first speech discussing our pioneering work on policy risk quantification.
My role today is to focus on one new frontier in big data analysis, ML and AI relevant to the macroeconomics profession: policy risk quantification.
I know what many of you are thinking, particularly in this age of polarized politics, unreliable polls, unpredictable electoral results, and rising nationalism. You are thinking that policy trend quantification and projection is akin to throwing darts at a dartboard. Results include wild outcomes even when the target is hit.
Your role here at the IMF is to provide policymakers with reliable economic projections so that reasonable decisions can be made regarding fiscal, monetary, and macroprudential policy choices. Your challenge is to provide reasonable estimates based on solid data. Your hope is that science-based analysis will constrain choices to a limited set of rational outcomes that maximize utility for society as a whole. As policymakers, you additionally need to be sensitive to how your words and actions are being read by markets, analysts, and other policymakers.
Wouldn’t it be nice to have a way to chart momentum and trends in thinking in economic policy? If you had such a chart, you could conduct “information triage” and more realistic scenario analysis. You could make more efficient decisions about which speeches or research papers or regulatory pronouncements you need to read on any given day.
Welcome to the toolkit we have been pioneering since 2010. Last year, the system, which has a five-year track record of success in anticipating policy moves globally, was awarded a patent for translating words into numbers in order to identify patterns and correlations among those words and accurately anticipate outcomes based on that quantification. In 2018, we will be making the system and its data available to a broader range of clients.
Welcome to the world of “enhanced cognition,” where advanced technology accelerates human analytical process. This is a universe where technology amplifies rather than replaces human engagement. New jobs are being created in the process, particularly as the automation wave extends to new areas like policy risk analysis. We are operating at the innovation frontier as well, creating new kinds of jobs at our small company.
Let’s start with a definition of political risk. Risk, in its purest form, is the uncertainty regarding a shift away from the status quo. In other words, an unexpectedly good decision can feel as risky as a negative decision even when the impacts differ widely.
When looking to quantify risks in this area, it is crucial to distinguish between political risk and policy risk.
· Political risk, properly defined, focuses on election outcomes and leadership changes.
· Policy risk, in contrast, refers to the shift away from existing policies executed by individuals with authority to make decisions.
After the last two years, I think we can all agree that political risk is high globally and that populist political trends render electoral outcomes increasingly unpredictable.
But even when polling outcomes resemble a random walk, policy formation AFTER an election is far more stable than the screaming headlines would lead you to believe. Policy formation follows a prescribed, albeit dynamic, process. This is true in all governments, regardless of how the leadership has been selected.
Consider in this context the observation from Richard Thaler, the Nobel-winning father of behavioral economics, that systematic risk has been significantly under-estimated. His seminal research of course showed just how irrational economic actors can be. But the most overlooked aspect of Dr. Thaler’s work, in my humble opinion, is that irrational choices made by human beings can be anticipated. His popular book Nudge suggested provocatively that choices can be manipulated by narrowing the field of options available to individual consumers through targeted incentives.
Let me suggest to you that policymakers present an easier population to assess than individual economic actors. Policymakers face clear and significant restrictions on the kinds of decisions they can make and how they can make those decisions. In addition, they must signal their intent in advance in order to convince constituents, pundits, analysts, markets, and other policymakers at home and abroad to validate their choice.
In other words: policy risk is not limited to polling outcomes. The tail of the distribution for decision-making is significantly narrower than that of individual economic actors.
A range of factors on any given day will constrain or amplify policymakers' choices, thus affecting the relative amount of policy risk, with spillovers that will impact economic policy.
In other words: today’s decisions establish parameters for tomorrow’s choices. This is not about using the past to anticipate outcomes. It is about measuring the feedback loop in the policymaking process in a dynamic manner.
Let me suggest to you one reason for pervasive under-estimates of systematic risk observed by Thaler: until recently we did not have the tools to measure the amount of risk associated with policy shifts. We could not generate metrics keyed to the policy lexicon, the language of policy. We could only measure the hypothetical economic impact of proposed policy shifts. Modern technology makes it possible to quantify the risks associated with that language and scale the analysis dramatically.
Some of you may still be skeptical about the possibility of translating the language of policy into numbers. Let me encourage you to think expansively about how concepts can be expressed and communicated.
Does any material difference exist among the ideas expressed in hieroglyphics, in characters, in letters, in a chart, or in an algorithmic code? Let me suggest the answer to that question is NO. These are all different methods of communicating complex concepts. Natural Language Processing, Machine Learning, and Artificial Intelligence together make it possible to quantify the policy lexicon and identify previously under-appreciated correlations in usage that signal potential conceptual divergences and convergences. These technologies make it possible to translate “unstructured” language (words and images) into machine-readable language (numbers and code) to which one can apply the latest, greatest suite of computational innovations. The application of these technologies to the policy arena and related trend measurement analytics is the area addressed by our patented system.
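To make this concrete, the basic idea of translating policy language into comparable numbers can be sketched in a few lines. This toy example is my own illustration, not the patented system: it converts two hypothetical central-bank statements into term-frequency vectors and scores their conceptual convergence with cosine similarity.

```python
import math
from collections import Counter

def vectorize(text):
    """Translate a passage of policy language into a term-frequency vector."""
    tokens = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
    return Counter(t for t in tokens if t)

def cosine_similarity(a, b):
    """Score convergence in language usage: 1.0 means identical word usage."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Two invented statements, used only to illustrate the mechanics.
fed = vectorize("Inflation risks warrant a gradual tightening of monetary policy.")
ecb = vectorize("Monetary policy normalisation should proceed at a gradual pace given inflation risks.")
print(round(cosine_similarity(fed, ecb), 2))  # ≈ 0.58: partial conceptual convergence
```

A real system would of course operate on far richer representations than word counts, but the principle is the same: once language is numeric, convergence and divergence become measurable quantities that can be tracked over time.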
Let’s take a case study in economic policy. On a daily basis, you must make complex, nuanced decisions about the likely path policy will take regarding GDP growth rates, inflation dynamics, wealth creation, monetary policy, and the role of legal tender in a society. Closer to home, shifts in thinking about the role of the SDR – especially a digital SDR – will have a material impact on global growth rates as well as the IMF’s internal staffing growth rate.
Perhaps more importantly, you are on the cusp of creating new fields in economic research as your processes adapt to reflect new economic interactions and new methods of value creation in the digital age. The Director of the Fiscal Affairs Department here at the IMF is encouraging you to think about new ways to measure GDP. Among other things, re-thinking how to measure economic activity will require you to analyze new data sets and operate in new ways. It’s an exciting time to be an economist.
Rather than plow through a mountain of speeches and official statements, it is much faster to measure the relative momentum of different ideas in order to determine which issue requires your attention on any given day.
To answer the question posed in a subsequent session at this symposium: YES, you DO need to read all that. But you will no longer need to read in a linear, sequential manner. Enhanced cognition will make your reading and analytical processes more efficient and effective. This in turn will make your actions and strategy formation more effective. Our proven policy quantification process facilitates the pattern identification process so that you can see how policy concepts are evolving globally as you prepare the next Global Financial Stability Review, the next Article IV review, or even labor on drafting communiques and other official statements.
We can also measure the composition of the momentum. Remember the old saying about politicians being full of hot air, full of sound and fury, signifying nothing? Since 2010, we have been successfully quantifying this dynamic and disaggregating rhetoric from concrete developments in order to accelerate the analytical process and facilitate decisions based on facts rather than emotion.
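The rhetoric-versus-development disaggregation can also be illustrated in miniature. The cue lists and classifier below are invented for this example and bear no relation to the actual methodology; they simply show how tagging statements by type turns a stream of official language into a measurable composition.

```python
from collections import Counter

# Hypothetical cue lists: verbs signalling concrete action vs. aspirational rhetoric.
ACTION_CUES = {"adopted", "signed", "enacted", "raised", "voted", "published"}
RHETORIC_CUES = {"should", "must", "hope", "urge", "believe"}

def classify(sentence):
    """Tag a statement as concrete action, rhetoric, or neutral."""
    words = {w.strip(".,").lower() for w in sentence.split()}
    if words & ACTION_CUES:
        return "action"
    if words & RHETORIC_CUES:
        return "rhetoric"
    return "neutral"

statements = [
    "The central bank raised its policy rate by 25 basis points.",
    "We believe inflation should return to target over the medium term.",
]
# The composition of momentum: how much is concrete, how much is hot air.
tally = Counter(classify(s) for s in statements)
print(dict(tally))
```

Aggregated over thousands of statements and tracked through time, even a crude tally like this begins to separate signal from sound and fury.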
Beyond information triage, the system makes it possible to conduct advanced scenario analysis by modeling market movements and shifts in economic aggregates in relation to specific decisions. Quantification also makes it possible to anticipate with greater accuracy – and less emotion – the range of likely decisions possible at any given moment in time based on the relationship between language used in speeches and decisions as well as underlying economic data.
The result is a more accurate assessment of economic policy trends which can lay the foundation for better decisions going forward.
I would welcome your questions.