Last week the International Monetary Fund (IMF) and the World Bank held their spring meetings in Washington, DC. Despite worries about rising geopolitical threats, populist trade protectionism, and the pressures of increased migration to Europe, both organizations released generally optimistic forecasts of global economic growth for 2017.
I decided to examine how the IMF and the World Bank produce these forecasts. Since the IMF is one of the few entities in Washington required to monitor the health of the global financial system, one assumes there is an actual or virtual “financial war-room” that provides early warning about the next financial crisis or bubble. Given the IMF’s and the World Bank’s unique responsibilities, it stands to reason that their employees regularly use twenty-first-century software and forecasting tools.
More specifically, one would also expect the IMF and the World Bank to utilize data science, machine learning, artificial intelligence, and other newfangled analysis methods. FutureSource found that both organizations are highly adept at making their data openly available to outside researchers, and that both employ some of the best economists and researchers in the world. Their well-trained experts travel the globe, gather data firsthand, and apply appropriate research methods effectively.
But both organizations appear to still rely on Microsoft Excel and other traditional and dated software. Neither has completely embraced modern data science techniques or machine intelligence algorithms for their predictions and forecasts.
The first implication of the IMF’s and World Bank’s less evolved research and analysis is speed and timeliness. Currently, the IMF’s World Economic Outlook and the World Bank’s Global Economic Prospects are issued and updated only twice a year. If machine learning and artificial intelligence were used, these forecasts could be updated quarterly, monthly, or even weekly.
The second implication is competition. Traditional methods of forecasting and prediction are undergoing rapid change. Alternative academic approaches that eschew conventional mathematical modeling, such as behavioral finance and complexity theory, burst onto the scene after the 2008-2009 global financial crisis. Some authors are calling for economic forecasting to borrow techniques from physics and meteorology. Some professors are using alternative data, such as Google search statistics, to create real-time measures of inflation. Startup entrepreneurs are releasing daily gross domestic product growth statistics and incorporating satellite imagery to report on “economic intelligence.” Crowdsourced predictions from “non-experts” have been around for years. Shouldn’t the IMF and World Bank keep up with these trends?
Current Forecasting Processes
The IMF is mandated under Article IV of its Articles of Agreement to conduct “surveillance” of the global economy. This means it “highlights possible risks to stability and advises on needed policy adjustments.” Analysis and forecasts are conducted at both the micro level and the macro level, with at least one visit per year to each country by an IMF economic team. Teams interview numerous stakeholders, such as government leaders, entrepreneurs, and labor unions, for the latest insights and trends.
Around 1,200 IMF economists work in six functional departments, such as fiscal affairs, research, and statistics. Economists also staff the geographical area departments, such as Europe and Sub-Saharan Africa. The IMF has 189 member countries, and each country is assigned at least one economist. Typically, a four- or five-member team, including various functional experts and research assistants, analyzes each country.
How does the IMF currently conduct its forecasts for its “World Economic Outlook”?
It uses a “bottom-up” approach. Economic teams create a separate macroeconomic model for each country. Meanwhile, economists in Washington devise a macro, or global, model with assumptions about interest rates, exchange rates, unemployment, and inflation. The country forecasts are then aggregated, and “through a series of iterations,” statistical analysis at the micro level is integrated with the macro level. The result is the World Economic Outlook forecast for economic growth that is so often cited in the media.
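To make the bottom-up idea concrete, here is a minimal sketch in Python of how country-level forecasts could be rolled up into a global number using GDP weights. The countries, figures, and weights are purely illustrative; this is not the IMF’s actual model.

```python
# Minimal sketch of "bottom-up" aggregation: country-level growth forecasts
# rolled up into a global figure, weighted by each country's share of world GDP.
# All numbers below are illustrative placeholders, not actual IMF figures.
import pandas as pd

country_forecasts = pd.DataFrame({
    "country": ["USA", "China", "Germany", "India"],
    "growth_forecast_pct": [2.3, 6.5, 1.6, 7.2],    # hypothetical country-team outputs
    "gdp_share_of_world": [0.24, 0.15, 0.05, 0.03],  # hypothetical weights
})

# Weighted average of the country forecasts yields the "global" growth number
global_growth = (
    country_forecasts["growth_forecast_pct"]
    * country_forecasts["gdp_share_of_world"]
).sum() / country_forecasts["gdp_share_of_world"].sum()

print(f"Aggregated growth forecast: {global_growth:.2f}%")
```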
What type of software does the IMF use?
It does not appear that the IMF is fully utilizing modern data science languages, libraries, frameworks, and environments such as R, RStudio, Python, NumPy, Anaconda, SQL, Hadoop, Spark, or MongoDB. According to IMF job listings, research assistants are expected to collaborate with economists using Microsoft Excel, in addition to other traditional econometric analysis tools. These include established econometric packages for forecasting and simulation such as TSP, EViews, AREMOS, and RATS.
Many of these programs are several decades old. TSP was first conceived in the 1960s and was last updated in 2009. The Windows-based EViews is popular, versatile, robust, well supported, and updated often by its developers, but its lineage runs back through MicroTSP to the TSP tradition of the 1960s. AREMOS is considered old-hat by many; its heyday was mostly in the 1980s. RATS specializes in time-series analysis and has grown into a more powerful program, but it dates back to the halcyon days of Fortran.
To update its research methodologies, however, the IMF has held “big data” symposiums and statistical forums to “raise awareness among Fund staff about the potential use of data science to supplement work processes.”
The IMF also deserves high marks for making its economic data openly available for journalists, academics, and the public to analyze. The IMF has a “Free Data Portal” with numerous data sets from each member country. It also has a web app for visualizing data geographically and a mobile app with data features. These initiatives allow independent data scientists to share software scripts for IMF analysis projects through code repositories such as GitHub.
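As an illustration of what that openness makes possible, here is a minimal sketch of pulling a series from the IMF’s public data service in Python. The SDMX JSON endpoint and the series code shown are assumptions based on the IMF’s published API pattern and should be checked against the current documentation.

```python
# Minimal sketch of querying the IMF's public data service.
# The endpoint and series code follow the IMF's documented SDMX JSON pattern
# (dataservices.imf.org) but are assumptions; verify against the current docs.
import requests

BASE = "http://dataservices.imf.org/REST/SDMX_JSON.svc/CompactData"
# International Financial Statistics (IFS), monthly US consumer price index (assumed code)
url = f"{BASE}/IFS/M.US.PCPI_IX?startPeriod=2015&endPeriod=2017"

response = requests.get(url, timeout=30)
response.raise_for_status()
payload = response.json()

# Inspect the top-level structure before parsing any deeper
print(list(payload.keys()))
```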
How Does the World Bank Compare?
The World Bank is more concerned with poverty reduction than with global macroeconomic surveillance, but it does engage in analysis and research because of its constant interaction and collaboration with member governments. Like the IMF, it goes the extra mile to make its data openly available. The World Bank’s Open Government Data Toolkit has a wealth of options and tools, including a very transparent description of the accuracy, reliability, strengths, and weaknesses of its data, as well as how the data are collected.
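For comparison, here is a minimal sketch of querying the World Bank’s public data API for a GDP growth series. The indicator code (NY.GDP.MKTP.KD.ZG, annual GDP growth in percent) and the v2 endpoint follow the Bank’s documented API, but the details should be verified against its documentation.

```python
# Minimal sketch of pulling annual GDP growth for one country from the
# World Bank's open data API (v2). Indicator and endpoint per the public docs.
import requests

url = (
    "https://api.worldbank.org/v2/country/BRA/indicator/NY.GDP.MKTP.KD.ZG"
    "?format=json&date=2010:2016&per_page=100"
)
response = requests.get(url, timeout=30)
response.raise_for_status()
metadata, observations = response.json()  # first element is paging metadata

for obs in observations:
    print(obs["date"], obs["value"])
```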
The World Bank’s economic forecast, the 2017 Global Economic Prospects, also uses country-specific models, but the Bank relies on vector autoregressions (VARs), time-series econometric models familiar to economists. These regressions use a Cholesky ordering, meaning the variables are placed in the model from most to least exogenous so that shocks can be identified. In other words, the World Bank is appropriately taking into consideration “outside the model” risks such as “global financial market uncertainty,” “domestic financial market or political uncertainty,” “short-term interest rates,” and other hazards. Many forecasters were criticized during the 2008-2009 financial crisis for not including such “Black Swan” risks in their models.
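For readers who want to see what such a model looks like in code, here is a minimal sketch of a small VAR whose orthogonalized impulse responses rely on a Cholesky decomposition, so the column order encodes the assumed ranking from most to least exogenous. It uses the statsmodels library and simulated data; it is not the Bank’s actual specification.

```python
# Minimal sketch of a Cholesky-identified VAR using statsmodels.
# The data are simulated; a real exercise would use observed series.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 200
# Columns ordered most exogenous first: global uncertainty, short rate, GDP growth
data = pd.DataFrame(
    rng.normal(size=(n, 3)),
    columns=["global_uncertainty", "short_rate", "gdp_growth"],
)

model = VAR(data)
results = model.fit(2)  # fixed lag length of 2 for simplicity

# Orthogonalized impulse responses: Cholesky identification follows column order
irf = results.irf(periods=8)
print(irf.orth_irfs.shape)  # (periods + 1, n_vars, n_vars)
```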
Thus, the World Bank uses straightforward and traditional methods of modeling and forecasting. All of the econometric forecasting packages listed above can perform these estimations; even more general analytical tools such as Stata and SPSS, favored by many social scientists, could be used to estimate these regressions. Indeed, according to a World Bank career listing for new economists, the Bank asks that applicants be proficient in Microsoft Excel and Stata.
The World Bank may be moving a bit faster than the IMF in integrating data science and machine learning into its forecasts. A recent hiring notice from the Bank calls for applications for a data scientist position. The advertisement reveals that the Bank has a “big data” group in its Global Operations Knowledge Management unit. According to the listing, the candidate would use machine learning and natural language processing techniques.
A recent World Bank blog post also posited that machine learning techniques and satellite data have advantages when analyzing agriculture and activity in villages for signs of economic growth. The post says these tools are more efficient than “an army of enumerators carrying clipboards and pencils out in the field, interviewing people about their changing fortunes.”
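The general idea is straightforward to sketch: train a model to predict a local economic indicator from satellite-derived features. The example below uses scikit-learn with simulated stand-ins for features such as night-light intensity; it is illustrative only, not the Bank’s pipeline.

```python
# Minimal sketch: predict a local economic indicator from satellite-derived
# features (e.g., night-light intensity, cropland share). All data are simulated.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
n_villages = 500
night_lights = rng.gamma(2.0, 1.5, n_villages)     # simulated luminosity feature
cropland_share = rng.uniform(0, 1, n_villages)     # simulated land-cover feature
consumption = 0.8 * night_lights + 2.0 * cropland_share + rng.normal(0, 0.5, n_villages)

X = np.column_stack([night_lights, cropland_share])
X_train, X_test, y_train, y_test = train_test_split(X, consumption, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```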
Is it a problem that analysts at the IMF and World Bank still rely on Excel spreadsheets rather than data science tools and machine intelligence algorithms?
IMF/World Bank research methods for forecasting are mostly behind the times. Today’s data scientists use languages such as R, SQL, and Python. Scripts written in these languages are quicker and more efficient than Excel or Stata workflows. Just a few lines of code can run regressions, and it is easy to add machine learning algorithms that can generate large numbers of predictions and simulations. Moreover, R, SQL, and Python make it easier to “join” data from disparate sources, which is ideal for aggregating data from 189 countries. These languages also make it easier for economists to collaborate, which matters when the IMF alone has 1,200 economists.
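As a rough illustration of the point, the sketch below joins country-level data from two hypothetical sources on a shared country code and fits a regression in a handful of lines; all names and numbers are placeholders.

```python
# Minimal sketch of joining two data sources and running a regression.
# The data below are hypothetical placeholders keyed on ISO country codes.
import pandas as pd
import statsmodels.formula.api as smf

growth = pd.DataFrame({"iso3": ["BRA", "IND", "NGA", "VNM"],
                       "growth": [1.1, 7.0, 0.8, 6.2]})
indicators = pd.DataFrame({"iso3": ["BRA", "IND", "NGA", "VNM"],
                           "investment": [15.5, 27.1, 14.8, 24.0],
                           "trade": [24.6, 40.7, 21.1, 184.7]})

# One line to join the sources, one line to fit an OLS regression on the result
merged = growth.merge(indicators, on="iso3", how="inner")
results = smf.ols("growth ~ investment + trade", data=merged).fit()
print(results.params)
```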
Python has another advantage when working with big data platforms such as Hadoop, Spark, and MongoDB. The IMF and World Bank could use these platforms to analyze social media from various countries or to integrate satellite imagery for real-time economic development analysis.
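A minimal PySpark sketch of that kind of workflow might look like the following; the records, schema, and column names are placeholders, not actual IMF or World Bank data.

```python
# Minimal PySpark sketch: aggregate country-month records at scale.
# A real job would read from HDFS or S3; here we build placeholder rows inline.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("country-aggregation").getOrCreate()

rows = [("BRA", "2017-01", 0.82, 1200.0), ("BRA", "2017-02", 0.85, 1150.0),
        ("IND", "2017-01", 0.64, 2300.0), ("IND", "2017-02", 0.70, 2410.0)]
df = spark.createDataFrame(rows, ["country_code", "month", "nightlight_index", "export_value"])

monthly = (df.groupBy("country_code", "month")
             .agg(F.avg("nightlight_index").alias("avg_nightlights"),
                  F.sum("export_value").alias("total_exports")))
monthly.show()
```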
Adopting these data science techniques would help with speed and timeliness, enabling the release of monthly reports instead of twice-yearly forecasts. Holding a monthly “contest” between predictions from human analysts and predictions from machine intelligence is another way to generate content quickly.
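Scoring such a contest is simple in principle: compare each set of forecasts against realized outcomes using a common error metric. The sketch below uses mean absolute error with purely illustrative numbers.

```python
# Minimal sketch of scoring a monthly forecast "contest" between analysts and a
# model against realized outcomes. All numbers are illustrative.
import numpy as np

actual = np.array([2.1, 2.4, 1.9, 2.6])            # realized growth, four months
analyst_forecast = np.array([2.0, 2.5, 2.2, 2.4])
model_forecast = np.array([2.2, 2.3, 1.8, 2.7])

def mean_absolute_error(pred, truth):
    return np.mean(np.abs(pred - truth))

print("Analyst MAE:", mean_absolute_error(analyst_forecast, actual))
print("Model MAE:  ", mean_absolute_error(model_forecast, actual))
```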
To keep up with the competition, the IMF and World Bank could recruit economists from different schools of thought who adhere to alternative theories. They could also bring in insights from researchers in the hard sciences, psychology, anthropology, and sociology. Partnering with startups is another way to bring in the latest technologies and ideas.
Relevance is another challenge that should be addressed. Market research conducted via reader surveys could yield insights into what people want from forecasts. The IMF does have a survey available online, but it could be better targeted toward policymakers. The reports are also very long: the World Bank’s last issue was 276 pages, and the IMF’s last update comes in five separate downloads. Shorter documents would be easier to advertise and disseminate.
The good news is that it is not difficult to build data science teams that would modernize forecasting and predictions. The IMF and World Bank are elite organizations with ample resources. They already have an existing pool of talent and a reputation for accuracy. Therefore, a virtual or actual “financial war-room” that can predict the next economic crisis is an attainable goal.
Brent M. Eastwood, PhD, is the Founder and CEO of GovBrain Inc., which predicts world events using machine learning, artificial intelligence, natural language processing, and data science. He is a former military officer and an award-winning economic forecaster. Brent has founded and led companies in sectors such as biometrics and immersive video. He is also a Professorial Lecturer at The George Washington University’s Elliott School of International Affairs.