
# AI Training Data & Scenario Analysis: A Geopolitics Challenge

Today (Wednesday, 14 October 2020), we are halfway through the annual autumn meetings of the global economy. Central bank governors and finance ministers meet under the headings of various G and F Groups (G20, G7, Financial Stability Board) and I Groups (IMF, IBRD) as well as non-government side sessions (IIF, IIB, IIE, Bretton Woods Committee, Toronto Centre, among others) to set the tone and policy focus for the next six months.

The pandemic may interfere with the valuable in-person elements of these meetings, but it has not slowed the deluge of reports, data, recommendations, and signalling that point the way forward. The pandemic has also famously accelerated reliance on a range of digital tools in the workspace, turbo-charging the transition towards The Distributed Age.

The situation creates significant challenges and opportunities for those seeking to generate automated scenario analysis and forward trend projections powered by artificial intelligence. The "magic" of AI-driven systems is that they provide accelerated (if not always transparent) ways to analyze large amounts of data to identify previously unappreciated correlations among disparate elements.

Even when AI systems are instructed not to let precedent dictate anticipated outcomes, they remain bound by the structured data used as their training set.

So AI systems used for scenario analysis in finance and economics miss important elements when they fail to incorporate data from the annual spring and autumn meetings.

Two specific challenges greet AI engineers during these meetings.

## Challenge 1: Knowing That the Data Exists

First, and most importantly, the people who program AI systems are rarely, if ever, subject matter experts. Many AI engineers literally do not know that these meetings exist. Fortunately, at BCMstrategy, Inc. we do not have that problem. Consider yesterday morning's Momentum Measurement generated by our patented technology that powers the PolicyScope Platform:

It is rare to see FinTech issues generate more activity than COVID-19. During this week in particular, before we even start delving into the details, we can see that international policymakers are devoting more time and decision-making attention to next-generation financial innovation issues than to battling the economic fallout associated with the pandemic.

We knew that FinTech issues in general and central bank digital currency/stablecoin issues specifically would generate increased momentum, because on Friday the ECB announced its formal public exploration of CBDC issuance and on Monday China provided additional details regarding its numerous ongoing CBDC pilot projects.

The PolicyScope Platform Momentum Measurement above picked up more than the expected flurry of releases from the Financial Stability Board. It also picked up this hardline statement from Group of Seven Finance Ministers and Central Bank Governors:

A deeper dive into PolicyScope Platform data delivers additional insight into the specific focus of activity: central bank digital currency, payment systems, LIBOR reform, and stablecoin regulation.

## Challenge 2: Knowing How To Use the Data

Second, the shifts in policy priorities articulated during annual meetings literally change the range of the possible. Even if AI engineers are aware of the meetings, they have no foundation for curating data in ways that maximize the value of a training set, because they lack the knowledge and experience to interpret the highly technical and often subtle shifts embedded within public policy language.

Specifically, the autumn international policy meetings generate a large amount of unstructured data (i.e., words), which normally is not easily incorporated into AI systems. Even AI systems configured to ingest unstructured data often rely on sentiment analysis mechanisms to convert verbal data into structured integers. But funneling policy language through a sentiment filter can corrupt the data, discarding important technical signals in the effort to identify emotion or intention.
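To make the point concrete, here is a minimal, hypothetical sketch (the word lists, statements, and scoring rule are all invented for illustration) of how a lexicon-based sentiment score can assign the same integer to statements with opposite regulatory implications:

```python
import re

# Toy lexicon-based sentiment scorer: a deliberately simple stand-in for the
# kind of sentiment step described above. Word lists are invented.
POSITIVE = {"support", "strengthen", "welcome", "progress"}
NEGATIVE = {"risk", "concern", "failure", "crisis"}

def sentiment_score(text: str) -> int:
    """Return (# positive words) - (# negative words): one structured integer."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Two hypothetical statements with opposite technical implications for the
# regulatory perimeter collapse to the same neutral integer.
expand = "Ministers will extend the regulatory perimeter to cover stablecoin arrangements."
defer = "Ministers will defer any extension of the regulatory perimeter for stablecoin arrangements."

print(sentiment_score(expand), sentiment_score(defer))  # → 0 0
```

The technical signal (expand versus defer) is exactly what vanishes when the language is reduced to an emotion score.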

For example, consider stablecoin regulation and central bank digital currency policy. Since Friday of last week, a steady stream of announcements and research from the official sector has made clear that policymakers actively seek to expand the regulatory perimeter and accelerate existing initiatives, both to bring payment systems into the 21st century and to respond effectively to challenges from private sector payments solutions.

But all the activity in this area tends to crowd out attention to the COVID-19 initiatives which are fundamentally changing the financial system from the inside out. Consider:

Are you surprised that the count is so high? We are not. As our PolicyScope Platform data shows, the vast majority of policy initiatives are hiding in plain sight.

Or consider the transition towards market-based benchmarks. The pandemic has actually strengthened policymaker commitments to meeting the end-2021 deadline for eliminating the LIBOR reference rate.

Again, we are not surprised. Policy activity regarding the LIBOR transition remained elevated throughout the pandemic and is picking up momentum now that autumn has arrived. The only real lull was during summer vacation season.


One under-appreciated consequence of the pandemic is that it renders much historical data irrelevant. Policymakers make different kinds of decisions when faced with this situation.

Everything from monetary policy targeting to non-performing loan policy to stablecoin regulation generates breaks in traditional time series data, even as it creates the foundation for reliance on a range of alternative data, including data derived from language.
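As a stylized illustration of such a break (the numbers and the detection rule below are invented for this sketch), a simple rolling-window comparison can flag where a policy regime shift leaves the pre-break history behind:

```python
import statistics

def detect_break(series, window=4, threshold=2.0):
    """Return the first index where the mean of the next `window` points
    shifts more than `threshold` pre-window standard deviations."""
    for i in range(window, len(series) - window):
        before = series[i - window:i]
        after = series[i:i + window]
        sd = statistics.stdev(before) or 1.0  # guard against zero variance
        if abs(statistics.mean(after) - statistics.mean(before)) > threshold * sd:
            return i
    return None

# Stylized weekly policy-activity counts: a stable regime, then a jump.
counts = [10, 11, 9, 10, 11, 10, 30, 32, 29, 31, 30, 33]
print(detect_break(counts))  # → 4 (forward window first overlaps the jump at index 6)
```

A model trained only on the pre-break segment would systematically misread the post-break level, which is why alternative data matters after a shock like the pandemic.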

AI-powered scenario analysis makes it possible to implement nowcasting approaches regarding public policy trajectories. But for scenario analysis to generate meaningful, valuable insights, it is crucial to train the AI systems on relevant, transparent data from credible sources.
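One generic way to sketch such a nowcast (an invented illustration, not the patented PolicyScope methodology) is to compare recent policy-activity volume against a trailing baseline:

```python
def momentum(daily_counts, recent=5, baseline=20):
    """Ratio of the mean of the last `recent` days to the mean of the
    `baseline` days before them; values above 1.0 signal rising activity."""
    head = daily_counts[-recent:]
    tail = daily_counts[-(recent + baseline):-recent]
    return (sum(head) / len(head)) / (sum(tail) / len(tail))

# 25 days of stylized counts: a flat baseline, then a pickup in the final week.
counts = [8] * 20 + [12, 14, 15, 16, 18]
print(momentum(counts))  # → 1.875
```

Even a crude ratio like this only produces a meaningful signal if the underlying counts come from relevant, transparent, credible sources, which is the point of the paragraph above.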

Our patented process provides a way to translate the language of the public policy process quickly, easily, automatically, and consistently, without importing bias or sentiment into the data. We create credible and reliable alternative data that takes AI-powered scenario analysis to the next level.

You and your team can escape the reaction function and declare independence from information overload. We can help you take charge of information flows to spot the signal in the noise of the public policy process. If you would like to explore how this alternative data can help you acquire actionable, next-generation insights that deliver alpha as well as superior risk management options, please contact us today.
