Updated: Sep 25, 2019
Occasionally, someone expresses skepticism that policy language can be quantified. Let's debunk that perspective.
Policymakers make public their intentions for a range of reasons. Sometimes they want to smoke out the opposition or find the weak points in a policy. Sometimes they want to warn private sector actors to change their evil ways if they want to avoid additional compliance burdens. Sometimes they want to send signals to their counterparts at home and abroad in order to nudge into action complementary activities that can amplify a trend. And sometimes they are just thinking aloud.
The ability to convert words into numbers means that all these words can be quantified and visualized. Once visualized, trends in activity can be spotted more easily. The result delivers information triage and strategic advantages to users, particularly compared with the time-consuming task of scrolling through voluminous text in multiple emails often cluttered with duplicative content.
Translating words into numbers is far from straightforward. As discussed HERE, word clouds and sentiment analysis provide deeply flawed and potentially biased mechanisms for discerning policy intention from policy words. A quick illustration makes the point elegantly.
Consider the two word clouds from the September monetary policy decisions at the Federal Reserve and the ECB:
Counting words (a core function of many NLP programs) tells the reader nothing about the direction of policy. In fact, it tells the reader nothing. Is anyone surprised that the most used words in a monetary policy statement have something to do with monetary policy? Were you even able to discern which word cloud corresponds to which policymaker? Spoiler alert: the only clue lies within the TLTRO reference.
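The point about word counting can be seen in a few lines of code. The sketch below uses hypothetical excerpts (not actual Fed or ECB text) and a simple frequency count: in both cases the top word is generic monetary policy vocabulary, revealing nothing about the direction of policy.

```python
from collections import Counter
import re

# Hypothetical easing-style excerpts, for illustration only --
# not actual Federal Reserve or ECB language.
fed_like = ("Inflation remains muted and the Committee will act as "
            "appropriate to sustain the expansion, with inflation near "
            "the objective and the policy rate adjusted to support inflation.")
ecb_like = ("The Governing Council expects the key rates to remain at "
            "present or lower levels until inflation robustly converges, "
            "and inflation dynamics warrant rate action to support "
            "inflation convergence.")

def top_words(text, n=5):
    """Tokenize, drop very short stopword-like tokens, count frequencies."""
    tokens = [w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 3]
    return Counter(tokens).most_common(n)

print(top_words(fed_like))  # "inflation" dominates both lists...
print(top_words(ecb_like))  # ...whether policy is easing or tightening
```

Both statements surface the same domain vocabulary at the top of the list, which is exactly why a word cloud built from these counts cannot distinguish one policymaker from another, let alone one policy direction from another.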
One core mechanism for gleaning sentiment from text focuses on measuring the distance between words, with weightings often given to the most frequently appearing or emotion-heavy words. So now consider the word clouds for two recent speeches on stress testing, one from the United States and one from Europe:
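A toy version of distance-based sentiment scoring shows the weakness directly. The sketch below (hypothetical word lists and sentences, not any production NLP pipeline) weights sentiment words by their proximity to a topic word. Two sentences pointing in opposite policy directions receive an identical score:

```python
# Toy proximity-weighted sentiment scorer, for illustration only.
# Real systems use embeddings, but the distance-weighting idea is similar.
POSITIVE = {"support", "strong", "improve", "growth"}
NEGATIVE = {"risk", "weak", "decline", "uncertainty"}

def proximity_sentiment(tokens, topic, window=5):
    """Score sentiment words near `topic`, weighted by 1/distance."""
    score = 0.0
    anchors = [i for i, t in enumerate(tokens) if t == topic]
    for i in anchors:
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j == i:
                continue
            weight = 1.0 / abs(j - i)
            if tokens[j] in POSITIVE:
                score += weight
            elif tokens[j] in NEGATIVE:
                score -= weight
    return score

hawkish = "strong growth warrants raising the policy rate despite risk".split()
dovish = "weak growth warrants lowering the policy rate amid uncertainty".split()
print(proximity_sentiment(hawkish, "rate"))  # identical score for
print(proximity_sentiment(dovish, "rate"))   # opposite policy directions
```

One sentence implies tightening, the other easing, yet the distance-weighted sentiment score cannot tell them apart, because the sentiment-bearing words sit at the same distances from the topic word in both.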
Subject matter experts will draw many interesting insights from word-count visualizations such as word clouds. But measuring the distance between the main words would generate little to no insight into the direction of macroprudential stress-testing policy in either the United States or Europe, let alone at the global level.
These word clouds illustrate a core truth: sentiment analysis based on emotion is not well suited to delivering predictive analytics in the public policy context. Why? Because, as discussed HERE, political speech is always sovereign action, not sentiment. And when policymakers do use sentiment, it is not always indicative of policy direction. For example, a legislator can express concern about or disapproval of a policy just before voting in favor of it.
Political science, jurisprudence, and concrete experience in the policy process point towards a better way for quantifying the language of policy than counting words or inferring emotion from the words: focusing on the action implied by specific words and phrases within very specific policy contexts.
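An action-focused approach can be sketched with a few hypothetical rules. The example below is a deliberately simplified illustration of the idea, not the patented system: specific phrases are mapped to action levels within a trade policy context, so a sentence that merely expresses concern scores lower than one announcing a measure taking effect.

```python
# Minimal sketch of action-oriented scoring: hypothetical phrase rules
# mapping policy language to an action level rather than a sentiment score.
ACTION_RULES = [
    # (phrase, action weight) -- higher means closer to concrete action
    ("hereby imposes", 3),
    ("will take effect", 3),
    ("intends to propose", 2),
    ("is considering", 1),
    ("expressed concern", 0),  # emotion-heavy wording, but no action
]

def action_score(sentence):
    """Return the highest action weight triggered by any rule phrase."""
    s = sentence.lower()
    return max((w for p, w in ACTION_RULES if p in s), default=0)

print(action_score("The administration is considering additional tariffs."))  # 1
print(action_score("Section 301 duties will take effect on October 1."))      # 3
print(action_score("The senator expressed concern about the tariffs."))       # 0
```

Unlike word counts or sentiment weights, these scores track what the language commits the speaker to doing, which is why daily aggregates of such measurements can reveal a policy reaction function.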
Advanced technology (for which we hold a patent) now makes it possible to deliver automated analysis and quantification of the action in policy language. When aggregated, those daily measurements also provide profound insights into the global policy reaction function for individual issues.
Consider, for example, the noise regarding trade wars this year. If you are a portfolio manager, you want to be sure that your investments are not unduly buffeted by policy-related volatility in October when global policymakers convene for their annual meetings. How volatile has policy really been?
Anyone familiar with the policy process will not be surprised to see that policymakers talk about trade wars far more than they act on them. But low levels of action are not indicative of low importance or impact, since a single tariff increase can carry significant adverse economic consequences.
The aggregate chart confirms an intuitive outcome: trade war activity has increased throughout 2019, taking an early summer breather as policymakers shifted the G20 agenda toward an area where greater agreement existed: the digital economy. The interactive chart above provides immediate access to source documents, delivering unparalleled transparency as well as objectivity for journalists, advocates, and investors to see -- and prepare for -- pending policy shifts as they occur.
Whether you call it superforecasting or nowcasting, the outcome is the same: measuring sovereign language (rather than counting words or scoring sentiment) delivers to users a range of strategic advantages based on concrete, objective, transparent data from the policy cycle.
Interested in participating in targeted Pilot Projects or the Early Adopter Program?
We actively seek a diverse user base even at this early stage.