Problems with economic models (Ofer Abarbanel online library)

Most economic models rest on a number of assumptions that are not entirely realistic. For example, agents are often assumed to have perfect information, and markets are often assumed to clear without friction. Or, the model may omit issues that are important to the question being considered, such as externalities. Any analysis of the results of an economic model must therefore consider the extent to which these results may be compromised by inaccuracies in these assumptions, and there is a growing literature debunking economics and economic models.

Restrictive, unrealistic assumptions

Provably unrealistic assumptions are pervasive in neoclassical economic theory (also called the “standard theory” or “neoclassical paradigm”), and those assumptions are inherited by the simplified models built on that theory. (A model based on a flawed theory cannot transcend the limitations of that theory.) Joseph Stiglitz’s 2001 Nobel Prize lecture reviews his work on information asymmetries,[1] which contrasts with the assumption of “perfect information” made in standard models. Stiglitz surveys many aspects of these faulty standard models, and the faulty policy implications and recommendations that arise from their unrealistic assumptions.

Economic models can be such powerful tools in understanding some economic relationships that it is easy to ignore their limitations. One tangible example where the limits of economic models allegedly collided with reality, but were nevertheless accepted as “evidence” in public policy debates, involved models to simulate the effects of NAFTA, the North American Free Trade Agreement. James Stanford published his examination of 10 of these models.[2][3]

The fundamental issue is circular reasoning: embedding one’s assumptions as foundational “input” axioms in a model, then proceeding to “prove” that, indeed, the model’s “output” supports the validity of those assumptions. Such a model is consistent with similar models that have adopted those same assumptions. But is it consistent with reality? As with any scientific theory, empirical validation is needed if we are to have any confidence in its predictive ability.

If those assumptions are, in fact, fundamental aspects of empirical reality, then the model’s output will correctly describe reality (if it is properly “tuned”, and if it is not missing any crucial assumptions). But if those assumptions are not valid for the particular aspect of reality one attempts to simulate, then it becomes a case of “GIGO” (Garbage In, Garbage Out).
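The circularity can be illustrated with a toy sketch (all numbers and equations hypothetical, not drawn from any model discussed here): if market clearing is wired in as an input axiom, the model’s output will always exhibit market clearing, whatever data are fed in.

```python
# Toy sketch (hypothetical linear supply and demand): a model that
# *assumes* market clearing can only ever report market clearing.

def clearing_price(a, b, c, d):
    """Solve demand a - b*p = supply c + d*p for the clearing price p."""
    return (a - c) / (b + d)

p = clearing_price(a=100, b=2, c=10, d=1)  # p = 30.0
demand = 100 - 2 * p                       # 40.0
supply = 10 + 1 * p                        # 40.0

# Excess supply (e.g., unemployment) is zero by construction:
# clearing was an axiom of the model, not a finding.
print(p, demand - supply)                  # 30.0 0.0
```

However the four hypothetical parameters are varied, the reported gap between demand and supply is zero, because the solver was defined to equate them; the “result” merely restates the assumption.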

James Stanford outlines this issue for the specific Computable General Equilibrium (“CGE”) models that were introduced as evidence into the public policy debate by advocates for NAFTA.[4][5]

Despite the prominence of Stiglitz’ 2001 Nobel prize lecture, the use of arguably misleading neoclassical models persisted in 2007, according to these authors:[6]

The working paper, “Debunking the Myths of Computable General Equilibrium Models”,[7] provides both a history and a readable theoretical analysis of what CGE models are, and are not. In particular, despite their name, CGE models employ neither the Walrasian general equilibrium framework nor the Arrow–Debreu general equilibrium framework. CGE models are thus highly distorted simplifications of the theoretical frameworks collectively called “the neoclassical economic paradigm”, frameworks whose core assumptions Joseph Stiglitz had largely discredited.

In the “Concluding Remarks” (p. 524) of his 2001 Nobel Prize lecture, Stiglitz examined why the neoclassical paradigm, and the models based on it, persisted despite his publication, over a decade earlier, of seminal results showing that information asymmetries invalidate core assumptions of that paradigm and its models.

In the aftermath of the 2007–2009 global economic meltdown, the profession’s alleged attachment to unrealistic models has increasingly been questioned and criticized. After a week-long workshop, one group of economists released a paper highly critical of their own profession’s allegedly unethical use of unrealistic models; its abstract offers an indictment of fundamental practices.[8]

Omitted details

A great danger inherent in the simplification required to fit the entire economy into a model is omitting critical elements. Some economists believe that making the model as simple as possible is an art form, but the details left out are often contentious. For instance:

  • Market models often exclude externalities such as pollution. Such models are the basis for many environmentalist attacks on mainstream economists: it is argued that if the social costs of externalities were included in the models, their conclusions would be very different, and models are often accused of leaving out these terms because of economists’ pro-free-market bias.
  • In turn, environmental economics has been accused of omitting key financial considerations from its models. For example, the returns to solar power investments are sometimes modelled without a discount factor, so that the present utility of solar energy delivered in a century’s time is treated as exactly equal to that of energy from a gas-fired power station today.
  • Financial models can be oversimplified by relying on historically unprecedented arbitrage-free markets, probably underestimating the chance of crises, and under-pricing or under-planning for risk.
  • Any omitted variable, as well as errors in the values of included variables, can lead to erroneous results.
  • Model risk: There is a significant amount of model risk inherent in current mathematical modelling approaches to economics, which must be taken into account when using them. Ideally, an economic theory would be built on sound economic principles, tested across many free markets, and proven valid. However, empirical evidence has been alleged to indicate that the principles of economics hold only under very limited conditions that are rarely met in real life, and that no scientific testing methodology is available to validate hypotheses. Decisions based on economic theories that cannot be scientifically tested can give people a false sense of precision, which is misleading and can compound logical errors.
  • Natural economics: Economics is concerned with both ‘normal’ and ‘abnormal’ economic conditions. An objective scientific study should not be restricted by an assumption of normality when describing actual economies: much empirical evidence shows that “anomalous” behaviour, such as market “bubbles” and market “herding”, can persist for a long time in real markets.
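The discounting point above can be made concrete with a short sketch (all figures hypothetical): with no discount factor, a benefit delivered in a century counts the same as one delivered today, while even a modest annual rate shrinks it to a small fraction.

```python
# Sketch with hypothetical figures: present value of a future benefit
# under different annual discount rates.

def present_value(future_value, rate, years):
    """Discount a value received `years` from now back to the present."""
    return future_value / (1 + rate) ** years

benefit = 1_000_000  # hypothetical value of energy delivered in year 100

pv_no_discount = present_value(benefit, 0.00, 100)  # 1,000,000: equal to today
pv_discounted = present_value(benefit, 0.03, 100)   # roughly 52,000 at 3%

print(pv_no_discount, round(pv_discounted))
```

A model that omits the discount factor is, in effect, fixing the rate at zero, which is what produces the equivalence between future solar energy and present-day gas-fired energy criticized above.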

References

  1. ^Joseph E. Stiglitz. 2001 Nobel Prize lecture: “Information and the change in the paradigm of economics” (PDF).
  2. ^James Stanford. “Continental Economic Integration: Modeling the Impact on Labor,” Annals of the American Academy of Political and Social Science, Mar 1993, V526 pp. 92–110
  3. ^James Stanford. 1993. “Free Trade and the Imaginary Worlds of Economic Modelers”.
  4. ^Aponte, Robert. “NAFTA and Mexican Migration to Michigan and the U.S.” (PDF).
  5. ^Rick Crawford (1996). “Computer-assisted Crises”, in Gerbner, George; Mowlana, Hamid; Schiller, Herbert I. (eds.), Invisible Crises: What Conglomerate Control of Media Means for America and the World. Westview. ISBN 978-0-8133-2072-4.
  6. ^“Projected Benefits of the Doha Round Hinge on Misleading Trade Models” (PDF).[dead link]
  7. ^“Debunking the Myths of Computable General Equilibrium Models” (PDF). Archived from the original (PDF) on March 25, 2009. SCEPA Working Paper 01-2008.
  8. ^Colander, D.; Goldberg, M.; Haas, A.; Juselius, K.; Kirman, A.; Lux, T.; Sloth, B. (2009). “The Financial Crisis and the Systemic Failure of the Economics Profession”. Critical Review. 21 (2–3): 249. doi:10.1080/08913810902934109.