The identification problem means that to get results, an econometrician has to feed in something other than data on the variables in the simultaneous system. I will refer to things that get fed in as facts with unknown truth value (FWUTV) to emphasize that although the estimation process treats the FWUTV's as if they were facts known to be true, the process of estimating the model reveals nothing about their actual truth value. The current practice in DSGE econometrics is to feed in some FWUTV's by "calibrating" the values of some parameters and to feed in others via tight Bayesian priors. As Olivier Blanchard (2016) observes with his typical understatement, "in many cases, the justification for the tight prior is weak at best, and what is estimated reflects more the prior of the researcher than the likelihood function."
This is more problematic than it sounds. The prior specified for one parameter can have a decisive influence on the results for others. This means that the econometrician can search for priors on seemingly unimportant parameters. --Paul Romer (2016), "The Trouble With Macroeconomics." [HT: LOTS OF PEOPLE]
Two relatively recent methodological pieces by Paul Romer, an influential economist, have received a lot of media attention: the one quoted above and another (about mathiness). "The Trouble with Macroeconomics" argues that macroeconomics has regressed "into pseudoscience." It partially explains this regress in terms of the sociology of the discipline, with (i) a dynamic in which "respect for highly regarded leaders evolves into a deference to authority," and (ii) a research culture that values a certain kind of technical sophistication (e.g. 20). This part of the paper made a splash because it did not pull punches and named names. I am going to ignore this side of the paper, although it reminded me a lot of bits of professional philosophy at various times. (Romer explicitly compares the state of affairs with the situation in String Theory as presented by Lee Smolin.)* Here I focus on Romer's more technical and evidential criticisms of macroeconomics.
The key technical issues are related to an issue that is known as the 'identification problem.' To put the point informally and in very traditional terms (familiar from the long history of science before the development of econometrics): observation (data) cannot decide among observationally equivalent models (even with the very best statistical techniques). The research community cannot figure out the parameters of an equation or model that track causes in the world (or the model-world). This is especially difficult when, as Romer notes, variables are part of a simultaneous system. And "when the number of variables in a model increases, the identification problem gets much worse." (6) So far so good.
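To make the point concrete, here is a minimal sketch (in Python; the numbers are mine, not Romer's) of observational equivalence in the textbook supply-and-demand system: two different structural parameterizations imply exactly the same observable moments of prices and quantities, so no amount of data on (p, q) alone can decide between them.

```python
# Structural model (textbook supply and demand; illustrative numbers only):
#   demand: q = a - b*p + u,  u ~ N(0, su2)
#   supply: q = c + d*p + v,  v ~ N(0, sv2),  with u and v independent
# Solving for the equilibrium gives the reduced form, whose observable
# moments (means, variances, covariance of p and q) are:

def observable_moments(a, b, c, d, su2, sv2):
    s = b + d
    mean_p = (a - c) / s
    mean_q = (a * d + c * b) / s
    var_p = (su2 + sv2) / s**2
    var_q = (d**2 * su2 + b**2 * sv2) / s**2
    cov_pq = (d * su2 - b * sv2) / s**2
    return (mean_p, mean_q, var_p, var_q, cov_pq)

# Two different structures: a symmetric market vs. a steep demand curve
# with a flat supply curve (and rescaled shock variances)...
structure_1 = dict(a=10, b=1.0, c=2, d=1.0, su2=1.0, sv2=1.0)
structure_2 = dict(a=14, b=2.0, c=4, d=0.5, su2=2.5, sv2=0.625)

# ...imply identical observable moments:
print(observable_moments(**structure_1))  # (4.0, 6.0, 0.5, 0.5, 0.0)
print(observable_moments(**structure_2))  # (4.0, 6.0, 0.5, 0.5, 0.0)
```

Since the (p, q) data cannot separate the two structures, identification has to come from something fed in from outside those data -- an instrument, an exclusion restriction, a calibrated parameter, or a prior. That is Romer's FWUTV point.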
Now, leaving aside the sociological issues noted above, Romer identifies two further problematic features of the research practices of the modeling community: (iii) the most sophisticated models introduce what Romer calls "imaginary forcing variables" that (a) do not refer to any underlying economic behaviors (or emergent properties of such behavior) -- a kind of fudge factor -- and that (b) are used to make sure that the model gives unique answers (and so -- Romer does not mention this -- can be used in a policy context). These imaginary forcing variables are treated in satirical fashion by Romer and also allow him to refer to phlogiston. This is actually rather unfair to phlogiston theory, because phlogiston was, in fact, taken to be a real entity that could be measured and that explained observations (and in some conceptual schemes it was measured, but leave that aside).
And, as the quoted passage above notes, (iv) the abuse of Bayesianism, which makes it too easy for scholars to produce results. One problem here is that the priors are not being set by well-confirmed background theory (there is none, so the likelihood function is pretty much meaningless), but by the interests of the researcher in producing results (for career advancement, policy status, etc.). This kind of abuse of Bayesianism is entirely foreseeable, and one wishes that philosophers of science, who as a community have grown ever more enchanted with Bayesianism, would recognize that in application Bayesianism is liable to a certain sort of systematic abuse and, recall, expert overconfidence. (There are actually multiple abuses here: fake precision, too much permissiveness, and a focus on confirmation rather than on stress-testing the concepts.) This is not to deny the utility of Bayesianism as one of many research instruments, but its advocates tend to ignore the down-side risks of adopting it in non-idealized scientific communities.
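Blanchard's worry about tight priors is easy to exhibit in the simplest conjugate setting (a sketch of mine, not from Romer's paper): with a normal likelihood and a normal prior on the mean, the posterior mean is a precision-weighted average of the prior mean and the sample mean, so a sufficiently tight prior hands back roughly the researcher's prior no matter what the data say.

```python
# Conjugate normal-normal model: y_i ~ N(theta, sigma2) with sigma2 known,
# and prior theta ~ N(mu0, tau2). The posterior mean weights the prior
# mean and the sample mean by their respective precisions.

def posterior_mean(mu0, tau2, ybar, n, sigma2):
    prior_precision = 1.0 / tau2
    data_precision = n / sigma2
    return (prior_precision * mu0 + data_precision * ybar) / (
        prior_precision + data_precision
    )

# 100 observations with sample mean 5: the data point clearly away from 0.
ybar, n, sigma2 = 5.0, 100, 1.0

loose = posterior_mean(mu0=0.0, tau2=100.0, ybar=ybar, n=n, sigma2=sigma2)
tight = posterior_mean(mu0=0.0, tau2=1e-4, ybar=ybar, n=n, sigma2=sigma2)

print(loose)  # ~5.0: with a loose prior, the data dominate
print(tight)  # ~0.05: the tight prior dominates, despite the same data
```

When a tight prior like this sits on a parameter that the likelihood does not identify in the first place, the "estimate" is simply the prior dressed up with a posterior standard error -- which is exactly the complaint in the Blanchard quotation above.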
Both (iii-iv) are connected to, and facilitate, a more general problem with the research culture in macroeconomics. There is a general bias toward confirming models rather than stress-testing the particular variables/concepts that enter into the model (something I have been saying for over a decade now [see also this piece by Abe Stone for a more general point]). With the demise of Popper's philosophy of science, it has been very difficult to get people to see that stress-testing of concepts and measures is important. How to design theories and models that make such stress-testing possible, and how to turn data into high quality evidence about parameters, is for another time (although I warmly recommend that folk read George Smith's work on Isaac Newton; see, e.g., Closing the Loop.)**