Rethinking Expectations: The Way Forward for Macroeconomics

Roman Frydman and Edmund S. Phelps

Print publication date: 2013

Print ISBN-13: 9780691155234

Published to Princeton Scholarship Online: October 2017

DOI: 10.23943/princeton/9780691155234.001.0001


Principled Policymaking in an Uncertain World

Chapter:
Chapter Thirteen: Principled Policymaking in an Uncertain World
Source:
Rethinking Expectations
Author(s):

Michael Woodford

Publisher:
Princeton University Press
DOI: 10.23943/princeton/9780691155234.003.0014

Abstract and Keywords

This chapter examines the reasons for the focus on the analysis of monetary policy rules rather than on decisions about individual policy actions, as well as the extent to which such a focus continues to be appropriate in the light of subsequent events—changes in central banks' approach to monetary policy in the decades since the publication of the Phelps microfoundations volume, along with the reconsideration of macroeconomic theory and policy that is necessary in the wake of the global financial crisis. The chapter first explains why recognition of the importance of expectations led to an emphasis on policy rules in the theoretical literature before discussing possible types of policy commitments in the case of monetary policy. It then analyzes the theory of monetary policy after the global financial crisis and reframes the debate over policy rules versus discretionary policy by proposing an approach it calls “principled policymaking.”

Keywords:   monetary policy, central banks, macroeconomic theory, financial crisis, expectations, policy rules, policy commitments, discretionary policy, principled policymaking

13.1 Introduction

A crucial legacy of Phelps et al. (1970) has been recognition of the importance of economic agents’ anticipations as a determinant of macroeconomic outcomes. This has had many profound consequences for macroeconomic analysis. Among them is that the subsequent theoretical literature on monetary policy has focused on the analysis of monetary policy rules rather than on decisions about individual policy actions. The present chapter considers the reasons for this development and the extent to which such a focus continues to be appropriate in the light of subsequent events—changes in central banks’ approach to monetary policy in the decades since the publication of the Phelps volume, and even more crucially the reconsideration of macroeconomic theory and policy that is necessary in the wake of the global financial crisis.

13.2 Rule-Based Policy or Discretion?

13.2.1 Policy Rules as an Object of Study

There are at least two important reasons why recognition of the importance of expectations led to an emphasis on policy rules in the theoretical literature (for further discussion, see Woodford 2003: 14–24). First, if one is to specify agents’ anticipations in an economic model using the common hypothesis of rational expectations (RE), one cannot answer questions about the predicted effect of a given policy action, even in the context of a particular economic model, unless one also specifies expected future policy over an indefinite future and for all potential future contingencies. One cannot solve for the consequences of a given action (say, purchases of a certain quantity of Treasury securities by the Fed this month) without specifying what people expect about future outcomes (e.g., future inflation), both with and without the action in question. Under the hypothesis of “rational” (or, more properly, model-consistent) expectations, what people expect in either case should be what the model predicts will occur; but that will depend on what is assumed about future policy, and in what respects it does or does not change as a result of the policy change that one wishes to analyze.

Hence the object of analysis must always be a complete specification of current and future policy, including how policy can be expected to respond to all possible future developments. In other words, the only possible object of analysis is a complete policy strategy (e.g., see Sargent 1993). This does not mean that such an approach cannot be used to analyze the consequences of approaches to policy other than ones for which policymakers consciously follow a rule; the analyst might postulate a systematic pattern of conduct—which it is furthermore assumed that the public should also be able to predict—even if it is neither announced nor consciously formulated by policymakers themselves. But once the object of study is defined as the comparative advantages of alternative systematic patterns of conduct, the only goal of normative policy analysis must be to propose a systematic pattern of conduct that it would be desirable for the policy authority to follow, in a sufficiently faithful way for the pattern to be predictable. In other words, even if positive analyses of policy in particular times and places do not necessarily assume that policymakers consciously follow a rule, a normative analysis must recommend a rule that should be followed systematically, rather than an individual action that is appropriate to some particular situation.

There is a second reason for the recent literature’s focus on policy rules. In this case “rule” has a more specific meaning, namely, a prescription that constrains the policymaker to behave in a way other than what would be judged desirable using a sequential optimization procedure of the kind criticized by Kydland and Prescott (1977). Kydland and Prescott criticize “discretionary” policy, by which they mean sequential optimization each time an action must be taken, with no constraints resulting from prior commitments. This sequence of optimizing decisions—in which no more is decided at any one time than is necessary to determine the action that must be taken at that time—can be contrasted with an alternative form of optimization, in which an overall pattern of conduct that will be followed forever after is chosen once and for all. Note that the complaint about discretionary policy is not that it is unsystematic or unpredictable—as conceived by Kydland and Prescott, it involves a clear objective that is pursued consistently over time, and in their analysis of the consequences of such behavior, they assume (following RE methodology) that it is completely predictable.

The Kydland-Prescott critique of such a sequential approach to decisionmaking is rather that it fails to internalize the consequences of people’s anticipation of systematic patterns in the policymaker’s conduct. Each time that a decision must be made about a specific action (say, the level at which the federal funds rate should be maintained for the next six weeks), people’s prior expectations about that action are a fact about the past that can no longer be affected. Thus, an analysis of the consequences of the action for the policymaker’s objectives assumes no possibility of influencing those expectations, even though a different systematic approach to choice regarding this action could have given people a reason to have had different expectations, and shaping expectations is relevant to the achievement of the policymaker’s objectives. In general, a superior outcome can be achieved (according to the RE analysis) through commitment by the policy authority to behave in a systematically different way than a discretionary policymaker would wish to behave ex post; this requires commitment to follow a rule. The key feature of the Kydland-Prescott conception of a policy rule is thus the element of advance commitment, which is contrasted with ad hoc decisionmaking at the time when action is necessary.
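
To make the gain from commitment concrete, here is a minimal numerical sketch in the spirit of the Barro-Gordon variant of the Kydland-Prescott example; the Phillips-curve slope, the loss weight, and the output target are illustrative assumptions, not values taken from this chapter.

```python
# Illustrative Kydland-Prescott / Barro-Gordon sketch (parameters are assumptions).
# Phillips curve: y = b*(pi - pi_e); period loss: L = pi^2 + lam*(y - y_star)^2, with y_star > 0.

b, lam, y_star = 1.0, 0.5, 1.0

def loss(pi, pi_e):
    y = b * (pi - pi_e)
    return pi**2 + lam * (y - y_star)**2

# Discretion: optimize each period taking expectations pi_e as given.
# The first-order condition plus rational expectations (pi_e = pi) gives the
# familiar inflation bias pi = lam*b*y_star, with no output gain (y = 0).
pi_discretion = lam * b * y_star
print("discretion: pi =", pi_discretion, "loss =", loss(pi_discretion, pi_discretion))

# Commitment: the rule pi = 0 is chosen in advance, so expectations adjust (pi_e = 0).
pi_commitment = 0.0
print("commitment: pi =", pi_commitment, "loss =", loss(pi_commitment, pi_commitment))

# Output is the same (y = 0) in both cases, but inflation and the loss are strictly
# higher under discretion: the sequential optimizer fails to internalize the effect
# of its systematic behavior on expectations, which is the point of the critique.
```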

Even a brief review of these familiar arguments raises an important question. Does not a recognition of the possibility (indeed, the inevitability, eventually) of nonroutine change undermine the desirability of commitment to a policy rule? In a theoretical exposition of the advantages of policy commitment—such as the examples presented by Kydland and Prescott—it is easy to assume that possible future states in which the policymaker may find herself can be enumerated in advance, and that a commitment can be chosen ex ante that specifies what will be done in each state if it is reached. In practice, this will not be possible, for reasons that go beyond a mere assertion that the number of possible future states is very large (the elements of some infinite-dimensional space). There are often developments that are not simply elements in a large space of possibilities the dimensions of which were conceptualized in advance, but that instead were inconceivable previously. Policymakers are then confronted not simply with the question of whether it is now desirable to behave differently than they were expected to behave in such a situation, but with a need to think afresh about a type of situation to which they have given little prior thought. The experience of policymakers after the unexpected eruption of the global financial crisis in the summer of 2007 underlines the relevance of this possibility, if further proof were needed.

It is fairly obvious that the existence of nonroutine change of this sort undermines the desirability of a certain conception of a policy rule: one where a rule is understood to mean a fully explicit formula that prescribes a precise action for any possible circumstance. Nonetheless, it does little to reduce the relevance of the abovementioned reasons why the recent literature on monetary policy has focused on the evaluation of policy rules. It does not eliminate the need to assess policy strategies, rather than individual decisions considered in isolation, even if such strategies cannot realistically be supposed to represent complete specifications of behavior in all possible circumstances. Nor does it eliminate the potential benefits from requiring policy decisions to be based on general principles, rather than on an ad hoc judgment about what will achieve the best outcome under current circumstances.

13.2.2 Policy Analysis without RE

Strategies and principles would be irrelevant only if one were to view decisionmakers as responding mechanically to the current economic environment and not on the basis of anticipations that can be influenced by the announced policy commitments of a central bank; that is, only if one were to deny the relevance of the “modern” turn advocated by Phelps et al. (1970). In fact, a variety of approaches to dynamic economic analysis has been proposed that still allow a role for anticipations that should take into account what is known about central bank policy commitments, without imposing the strong form of expectational coordination implied by the postulate of RE.

One example is the concept of “calculation equilibrium” proposed by Evans and Ramey (1992). Evans and Ramey propose that individuals make decisions that are optimal for a particular anticipated future evolution of the economy (extending, in principle, indefinitely into the future); they also propose that individuals possess a correct model of the economy, in the sense that they are able to correctly predict the evolution of the variables they wish to forecast under a particular conjecture about the way that others expect the economy to evolve. People’s expectations can then be disciplined by requiring them to result from a calculation using the economic model, starting from an expectation about others’ expectations. Evans and Ramey relax, however, the RE assumption that everyone must forecast a future evolution that is predicted by the commonly agreed-on model under the assumption that others predict precisely that same evolution. Instead, they propose that individuals start with some initial conjecture about the future path of economic variables and progressively refine this forecast by calculating (at each stage in an iterative process) the evolution that should be predicted if others are expected to forecast using the output of the previous stage’s calculation. (The thought process that this involves is like the one described by Keynes [1936] in his famous analysis of the “beauty contest.”)
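
The iterative thought process can be made concrete with a stylized one-variable example of my own (not Evans and Ramey's model): the outcome depends partly on a fundamental and partly on the average forecast, x = (1 - a)*v + a*E[x], and an agent refines an initial conjecture by repeatedly asking what the model predicts if others forecast using the previous round's output.

```python
# Stylized "calculation" sketch (a toy model assumed for illustration, not Evans and Ramey 1992).
a, v = 0.8, 1.0          # expectational feedback and fundamental value (assumptions)
conjecture = 3.0         # arbitrary initial conjecture about the outcome x

def model_prediction(expected_x):
    """What the (assumed correct) model implies for x, given others' forecast of x."""
    return (1 - a) * v + a * expected_x

K = 5                    # calculations truncated after a finite number of iterations
for k in range(1, K + 1):
    conjecture = model_prediction(conjecture)
    print(f"iteration {k}: forecast = {conjecture:.4f}")

# Pursued indefinitely, the iteration converges to the REE fixed point x = v = 1.0;
# truncating at a finite K leaves a gap of order a**K, so outcomes can deviate for a
# time from the REE prediction even though everyone's model of the economy is correct.
```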

If this iterative calculation were pursued to the point of convergence—so that a forecast were eventually obtained with the property that expecting others to forecast that way would lead to the same forecast—the resulting forecast would correspond to an RE equilibrium (REE) of the model used by decisionmakers. But Evans and Ramey assume instead (like Keynes) that in practice decisionmakers will truncate such calculations after a finite number of iterations; they propose that calculation costs limit the number of iterations that it is reasonable for a decisionmaker to undertake (and propose a particular stopping rule that need not concern us). Given the truncation of the expectation calculations, dynamic phenomena are possible—even assuming that people’s model of the economy is actually correct—that would not occur in an RE analysis. These include asset “bubbles” that last for some time (though not indefinitely) and are sustained by beliefs that are consistent with the economic model, based on a belief about others’ beliefs that is also consistent with others’ understanding the model, and so on for a finite number of iterations. But ultimately the asset bubble depends on higher-order beliefs that will be disconfirmed.

The eductive stability analysis proposed by Guesnerie (2005) similarly assumes that individuals make decisions that are optimal for a particular anticipated future evolution of the economy, and that they each possess a correct model of the economy. It further imposes the stronger restriction that both of these things are “common knowledge” in the sense in which that term is used in game theory: each individual’s beliefs are consistent with knowledge that all others know that all others know [and so on ad infinitum] that these things are true. Nonetheless, as Guesnerie stresses, only under rather special circumstances are RE beliefs the only ones consistent with such a postulate. (It is in this case that Guesnerie refers to the REE as eductively stable, and hence as a reasonable prediction of one’s model.) Under more general circumstances, he proposes that one should consider the entire set of possible paths for the economy that can be supported by beliefs consistent with common knowledge of rationality (the analog of the “rationalizable” outcomes considered by Bernheim [1984] and Pearce [1984]). This includes paths along which fluctuations in asset prices occur that are sustained purely by changing conjectures about how others will value the assets in the future—conjectures that must be consistent with similar rationalizability of the conjectured future beliefs. Guesnerie proposes that policies should be selected with an eye on the entire set of rationalizable outcomes associated with a given policy; for example, it may be desirable to eliminate the risk of fluctuations due to arbitrary changes in expectations by choosing a policy for which a unique REE is eductively stable—but this is a criterion for policy design, rather than something that can be taken for granted.
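
A toy calculation, again my own construction rather than Guesnerie's example, conveys the flavor of eductive stability in the same one-variable setting: start from common knowledge only that the outcome lies in a wide interval, iterate the set of outcomes consistent with common knowledge of rationality, and ask whether that set shrinks to the unique REE.

```python
# Toy eductive-stability sketch for x = (1 - a)*v + a*E[x] (illustrative assumptions only).
def rationalizable_set(a, v, lo0, hi0, rounds=6):
    lo, hi = lo0, hi0
    for k in range(1, rounds + 1):
        # Outcomes consistent with beliefs concentrated on the currently surviving set...
        image = sorted(((1 - a) * v + a * lo, (1 - a) * v + a * hi))
        # ...intersected with the initial common-knowledge restriction [lo0, hi0].
        lo, hi = max(image[0], lo0), min(image[1], hi0)
        print(f"  round {k}: surviving outcomes in [{lo:.3f}, {hi:.3f}]")
    return lo, hi

print("weak feedback (a = 0.6): the set collapses onto the unique REE, x = v = 1")
rationalizable_set(a=0.6, v=1.0, lo0=-10.0, hi0=10.0)

print("strong feedback (a = 1.4): the set does not shrink; many rationalizable outcomes")
rationalizable_set(a=1.4, v=1.0, lo0=-10.0, hi0=10.0)
```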

In the approach proposed by Woodford (2010), a given policy is again associated with an entire set of possible outcomes, rather than with a unique prediction, and it is argued that one should seek a policy that ensures the greatest possible lower bound for the average level of welfare, over the set of outcomes associated with the policy. The set of possible outcomes corresponds to a set of possible (not perfectly model-consistent) beliefs about the economy’s future evolution that people may entertain. In this approach, however, the set of possible beliefs is disciplined not by a requirement that the evolution in question be rationalizable using a theory of others’ behavior (more generally, be consistent with knowledge of the correct model of the economy), but rather by a requirement that subjective beliefs not be grossly out of line with actual probabilities—an assumption of “near-rational expectations.” For example, events that occur with complete certainty (according to the policy analyst’s model) are assumed to be correctly anticipated, though events that occur with probabilities strictly between zero and one may be assigned somewhat incorrect probabilities. A parameter (which indexes the analyst’s degree of concern for robustness of policy to departures from model-consistent expectations) determines how large a discrepancy between subjective and model-implied probabilities is to be contemplated. This approach requires policymakers to contemplate equilibrium outcomes that differ from the REE prediction to a greater or lesser extent, depending on policy and other aspects of the economic structure. For example, for a given value of the robustness parameter, equilibrium valuations of long-lived risky assets can depart to a greater extent from their “fundamental” (REE) values when the short-term riskless rate of return is lower, so that the anticipated future sale price of the asset accounts for a larger share of its current valuation.
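
The max-min structure of this proposal can be sketched schematically; the welfare function, the one-dimensional policy variable, and the width of the belief band below are purely illustrative assumptions and are not taken from Woodford (2010). For each candidate policy one computes the worst case over admissible belief distortions, and then picks the policy with the best worst case.

```python
import numpy as np

delta = 0.5                                # robustness parameter: width of the belief band (assumption)
policies = np.linspace(-1.0, 1.0, 201)     # candidate one-dimensional policy settings (assumption)
distortions = np.linspace(-delta, delta, 101)

def welfare(phi, e):
    # Toy objective: losses from the expectational gap e interacting with policy phi,
    # plus a secondary objective pulling policy toward phi = 0.4 (all assumptions).
    return -((phi + e) ** 2) - 0.25 * (phi - 0.4) ** 2

best_under_re = policies[np.argmax([welfare(p, 0.0) for p in policies])]
worst_case = np.array([min(welfare(p, e) for e in distortions) for p in policies])
robust_policy = policies[np.argmax(worst_case)]

print("policy optimal under exactly model-consistent beliefs:", round(best_under_re, 2))
print("max-min policy over near-rational beliefs:", round(robust_policy, 2))
# The robust choice guards against the worst admissible belief distortion, and so
# generally differs from the policy that would be chosen assuming RE exactly.
```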

Each of these concepts assumes less-perfect coordination of expectations than does the hypothesis of RE, and so they may provide a more plausible basis for policy analysis following structural change. Yet in each case, the central bank’s commitments regarding future policy will influence the set of possible subjective forecasts consistent with the hypothesis. In the proposals of Evans and Ramey (1992) or of Guesnerie (2005), this is because the mapping from given conjectures about others’ forecasts to what one should oneself forecast (using the model of the economy) is influenced by the central bank’s public commitments regarding its conduct of policy. In the Woodford (2010) proposal, it is because the degree of discrepancy between given subjective beliefs and model-consistent beliefs will depend on policy commitments. Hence in any of these approaches, a comparative evaluation of alternative monetary policies will require a specification of the entire (state-contingent) future path of policy, and not simply a current action, just as in the case of REE analysis. Similarly, there will be potential benefits from commitment relative to the outcome under discretionary policy. Indeed, Woodford (2010) finds that when the policymaker wishes to choose a policy that is robust to departures from fully model-consistent expectations, the advantages of commitment over discretionary policy are even greater than when one assumes that agents in the economy will necessarily have RE.

Regardless of the degree of expectational coordination that is assumed, is it still reasonable for a central bank to commit itself in advance to a rule, chosen based on one view of what possible future contingencies may arise, even though future situations may well arise that were not contemplated at all?

I believe that the argument that rule-based policymaking is necessarily foolhardy in a world where nonroutine change occurs depends on too narrow a conception of what is involved in following a rule. In particular, it is important to recognize that there are different levels at which it is possible to describe the process through which policy decisions are to be made. Judgment may be exercised in the application of policy to particular circumstances (at a more concrete level of description of the policy) even though the judgment is used to determine the implications of a rule (at a more general level of description) that has been stated explicitly in advance. In other words, a rule may be stated in advance at a general level of description, even though its application to specific circumstances, at a more concrete level of description, requires an exercise of judgment.

I illustrate this idea with a more detailed discussion of possible types of commitments in the case of monetary policy.

13.3 Alternative Levels of Policy Commitment: The Case of Monetary Policy

One might imagine a rule for the conduct of monetary policy being specified at any of four distinct levels of description of the policy in question. These involve increasing degrees of abstraction as one proceeds to higher level descriptions.

The lowest level is what I call the “operational” level. At the most concrete level of description, monetary policy (under routine conditions, rather than those during the recent crisis) involves a decision about a quantity of bank reserves to inject or withdraw each day, typically through open-market purchases or repo transactions. One might imagine that a monetary policy rule should be a specific formula that would tell the Trading Desk of the Federal Reserve Bank of New York (or the corresponding branch of another central bank) which trades to execute each day, as a function of various observable conditions. McCallum (1988, 1999) argues for a policy rule that is operational in this sense, and so proposes rules that specify equations for the adjustment of the monetary base.

The literature on monetary policy rules has instead often discussed specifications at a second, somewhat higher level, which I call the “instrument” level. At most central banks, the key decision of the policy committee (again, under routine conditions) is the choice of an operating target for a particular overnight interest rate—the federal funds rate, under the operating procedures of the Federal Reserve since at least the mid-1980s. The decision about how to achieve this target through market trades is then delegated to staff members with these operational responsibilities, or is at any rate determined without having to convene the central bank’s policy committee (i.e., the committee that chooses the operating target for the instrument of policy: the Federal Open Market Committee, in the case of the United States). One might imagine that a monetary policy rule should be a specific formula that determines what the correct target for the federal funds rate should be at each point in time, as a function of observable variables. The celebrated Taylor rule (Taylor 1993) is of this form, and so are most empirical characterizations of policy through estimation of a central bank reaction function and most of the normative proposals considered in the theoretical literature.
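
For reference, the rule proposed in Taylor (1993) sets the funds rate as a linear function of inflation and the output gap; the sketch below simply writes out that formula, with Taylor's original coefficients of 0.5 and the conventional 2 percent values for the equilibrium real rate and the inflation target.

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Taylor (1993): i = r* + pi + 0.5*(pi - pi*) + 0.5*(output gap), all in percent."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

print(taylor_rule(inflation=2.0, output_gap=0.0))   # 4.0: the benchmark setting when both are on target
print(taylor_rule(inflation=3.0, output_gap=-1.0))  # 5.0: the rule responds to both arguments
```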

A still higher level description of policy is possible, however, at least in the case of those central banks basing their policy decisions on a clear intellectual framework that remains constant over the course of many meetings of the policy committee and is articulated with some degree of explicitness in the bank’s public communications. This level can be referred to as the “policy-targets” level. A central bank may determine the correct instrument setting (i.e., the operating target for the policy rate) at each meeting of the policy committee on the basis of previously specified targets for other macroeconomic variables that are expected to be indirectly influenced by the path of the policy rate.

In particular, a forecast-targeting regime (Svensson 1999, 2005; Woodford 2007) involves choosing a target for the policy rate at each meeting that is consistent with the anticipated path for the policy rate required (according to the policy committee’s analysis) for the economy’s projected evolution to satisfy a quantitative target criterion. A policy rule might be specified by announcing the particular target criterion that will guide such deliberations; Svensson calls such a prescription a “targeting rule” (as opposed to an “instrument rule,” e.g., the Taylor rule). This is in fact the level at which central banks have most been willing to commit themselves to explicit criteria for the conduct of policy. Many central banks now have explicit quantitative targets for some measure of medium-run inflation; a few have also been fairly explicit about the criteria used to judge whether near-term economic projections are acceptable, and about the way in which projections for variables other than inflation are taken into account (Qvigstad 2006).
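
The mechanics of a forecast-targeting decision can be sketched as follows: invert a projection of medium-run inflation to find the policy rate consistent with a target criterion of the form "projected inflation equals 2 percent." The reduced-form projection function and its coefficients below are invented for illustration; they do not come from this chapter or from any central bank's model.

```python
# Schematic forecast-targeting sketch (toy projection model; all numbers are assumptions).
def inflation_projection(policy_rate, shock=0.0):
    """Projected medium-run inflation, declining in the policy rate (toy reduced form)."""
    return 2.0 + shock - 0.5 * (policy_rate - 4.0)

def rate_satisfying_criterion(target=2.0, shock=0.0, lo=0.0, hi=10.0, tol=1e-6):
    """Bisect for the policy rate whose projection satisfies the target criterion.
    This rate is the instrument implied by the targeting rule, not part of the rule itself."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if inflation_projection(mid, shock) > target:
            lo = mid            # projection too high: a higher rate is needed
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(rate_satisfying_criterion(shock=0.0), 2))  # 4.0: neutral setting with no shock
print(round(rate_satisfying_criterion(shock=1.0), 2))  # 6.0: an inflationary shock calls for a higher rate
```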

Finally, it is possible, at least in principle, for a central bank’s policy commitments to be formulated at a still higher level, which I call the “policy-design” level. At this level, one would specify the principles on which policy targets are chosen, given a particular model of the way that monetary policy affects the economy. A commitment to specified principles at this level could be maintained in the face of a change in either the structure of the economy or policymakers’ understanding of that structure, even though it might well be appropriate to modify a central bank’s policy targets in light of such change. I do not think that any central banks have yet made explicit statements committing themselves to principles of policy design at this level of abstraction. But the formulation of useful principles at this level has been a goal of at least a part of the research literature on normative monetary policy. I believe that the quest for useful principles at this level becomes more important the more seriously one takes the likelihood of nonroutine change.

13.3.1 At Which Level of Specification Is Policy Commitment Appropriate?

Note that these four distinct levels are mutually compatible ways of describing a central bank’s policy; the same policy might simultaneously be correctly described at each of these levels. Hence, when contrasting possible specifications of monetary policy “rules” of these four distinct types, one is not necessarily talking about policies that are different, in terms of the actions that they would require a central bank to take under particular circumstances. But the levels of description differ in the degree to which it is useful to imagine specifying a rule for policy in advance.

At each successively lower level of the specification, one comes closer to saying precisely what the central bank ultimately must do. At each lower level, finer institutional details about the precise mechanism through which monetary policy affects the economy become relevant. And finally, at each lower level, it is appropriate for the central bank to be prepared to adjust course more frequently on the basis of more recent information. In practice, decisions will be reviewed more frequently, the lower the level is. For example, in the case of the Federal Reserve, decisions at the operational level are adjusted daily, and sometimes more often, during periods of market turmoil. In contrast, decisions at the instrument level are scheduled for review only 8 times a year, though occasionally intermeeting changes in the funds rate target are judged necessary. The policy committees of inflation-targeting central banks reconsider the appropriateness of the planned path for the policy rate 8 or 12 times a year, but the inflation target itself remains unchanged for years. Yet even inflation targets change from time to time; for example, the Bank of England’s official target was changed at the end of 2003, and the European Central Bank slightly changed its definition of price stability after a review of its monetary policy strategy in 2003. The Bank of Canada’s inflation target has been modified several times since its introduction in 1991 and is reviewed at 5-year intervals. And surely the possibility of such changes in the light of changing knowledge is entirely appropriate.

The degree to which it is either possible or useful to articulate the principles on which decisions are made also differs greatly depending on the level of specification. I think that few monetary economists or central bankers—even among those who are strong proponents of rule-based policy and central bank transparency—would argue that there is a need for explicit policy commitments at the operational level. The literature on the consequences of alternative policy rules generally assumes that any nonnegative target for the policy rate can be implemented with a high degree of accuracy over time scales (a day or two) that are quite short compared to those that matter for the effects of interest rate policy on economic activity and inflation. It is similarly assumed that the open-market operations required to implement the policy have few if any consequences for the objectives of policy, other than through their effects on interest rates. Moreover, although in principle the same policy prescription (say, adherence to the Taylor rule) should have an equivalent formulation at the operational level, it would be complex to describe this in detail (i.e., to give a precise algorithm that would allow the correct operational decision to be computed under all possible circumstances). A simplified description at the operational level might instead be practical. However, if this is regarded as an actual commitment about how policy will be conducted, it would be less successful at achieving the desired outcome with regard to the state-contingent evolution of the policy instrument, and hence less successful at achieving the central bank’s higher level stabilization objectives. Hence insistence on an operational policy commitment would have clear costs.

Deviations from a bank’s routine approach to the implementation of its interest rate target are often necessary at times of crisis, as increased uncertainty leads to a sudden increase in demands for liquidity. For example, Sundaresan and Wang (2009) describe the special measures introduced by the Fed to deal with unusual liquidity needs around the time of the millennium date change (the so-called Y2K scare), in such a way as to minimize the consequences of this unusual behavior for money market interest rates. An inability to respond in this way, owing to the existence of a rigid policy commitment at the operational level, would likely have meant greater disruption of financial markets. At the same time, the benefits of such a low-level commitment seem minimal. The most important argument for the desirability of a lower level commitment is that accountability of the central bank to the public is increased by specifying exactly what must be done in terms that can be verified by outside observers. But one cannot really say that a commitment at the instrument level, without specifying in advance the precise operational decisions required, reduces accountability to any significant extent, given central banks’ degree of success at achieving their interest rate targets over short time horizons in practice.

It is less obvious that a description of policy at a level of abstraction higher than the instrument level should suffice. Indeed, the literature on the quantitative evaluation of policy rules has almost exclusively focused on rules specified as formulas to determine the value of a policy instrument that is under relatively direct control of the central bank (see, for example, the review of this literature by Taylor and Williams 2011). Nonetheless, I think there are important advantages to considering rules specified by target criteria that need not involve any variable directly controlled by the central bank.

A first question is whether a mere specification of a target criterion suffices to fully determine outcomes under the policy, so that one can compare the outcomes associated with alternative policies. This issue is undoubtedly of practical relevance. For example, a criterion that only involves projected outcomes 2 or more years in the future (as is true of the explicit commitments of many inflation-targeting central banks) is one that is unlikely to imply a determinate solution; there will be alternative paths by which the economy could reach a situation consistent with the criterion, and in such a case the target criterion fails to fully determine policy. In my view, it is important to adopt a target criterion that does fully determine (but not overdetermine) a particular equilibrium. But this is a property that one can analyze given a specification of the target criterion alone; one need not specify the policy at a lower level. Giannoni and Woodford (2010) illustrate how this kind of calculation can be undertaken assuming RE and using a structural model of the economy that specifies the constraints on feasible equilibrium paths of the target variables. The model need not even include the additional model equations required to determine the evolution of the central bank’s policy instrument. Giannoni and Woodford also describe a general approach to the derivation of target criteria that guarantees, among other desiderata, that the target criterion necessarily determines a unique bounded REE. There is also a question whether a given interest rate feedback rule determines a unique REE; one argument for the importance of choosing a rule that conforms to the Taylor Principle is that in many models, rules with weaker feedback from inflation to the interest rate operating target have been found to result in indeterminacy of equilibrium (e.g., Woodford 2003: 252–261).
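
The determinacy question for interest rate rules can be checked numerically. The sketch below does so for the textbook three-equation New Keynesian model with a contemporaneous rule i = phi_pi*pi + phi_x*x; the parameter values are illustrative assumptions, and the check is the standard Blanchard-Kahn eigenvalue condition rather than anything specific to this chapter.

```python
# Determinacy check for i = phi_pi*pi + phi_x*x in the textbook New Keynesian model
# (sigma, beta, kappa below are illustrative values, not estimates).
import numpy as np

def is_determinate(phi_pi, phi_x=0.0, sigma=1.0, beta=0.99, kappa=0.1):
    # Write the IS curve, Phillips curve, and policy rule as E_t z_{t+1} = A z_t,
    # with z = (output gap, inflation). Neither variable is predetermined, so a
    # unique bounded REE requires both eigenvalues of A to lie outside the unit circle.
    A = np.array([
        [1 + sigma * phi_x + sigma * kappa / beta, sigma * (phi_pi - 1 / beta)],
        [-kappa / beta,                            1 / beta],
    ])
    return bool(np.all(np.abs(np.linalg.eigvals(A)) > 1.0))

print(is_determinate(phi_pi=1.5))  # True: feedback satisfying the Taylor Principle
print(is_determinate(phi_pi=0.8))  # False: weak feedback, equilibrium is indeterminate
```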

It is true that the literature on this topic typically assumes RE, and one might wonder instead how precisely the predicted evolution of variables (e.g., inflation and real activity) is pinned down if one admits that people in the economy may not all anticipate the evolution that the policy analyst’s own model predicts. I believe that a consideration of this issue is another important part of an analysis of the desirability of a proposed target criterion. But this question can also be analyzed without any need to specify the associated evolution of the central bank’s interest rate instrument, as illustrated by Woodford (2010). Moreover, specification of a policy commitment at the level of an instrument rule (or central bank reaction function), rather than through a target criterion, only increases the degree of uncertainty about the equilibrium outcome that results from doubts about whether people in the economy will have model-consistent expectations. This is because the relation between the interest rate reaction function and the evolution of the variables of interest (the “target variables” in terms of which the central bank’s stabilization objectives are expressed) is more indirect than the relation between the target criterion and the paths of these variables. In addition, the number of ways in which departures from model-consistent expectations can skew the outcome implied by the policy is correspondingly larger.

The analysis by Evans and Honkapohja (2003) illustrates this point, though they do not express the matter in quite this way. They analyze a standard New Keynesian model (the one analyzed by Clarida et al. [1999] under the RE assumption) under a particular hypothesized alternative to RE, namely least-squares learning dynamics. They compare predicted outcomes under the learning dynamics in the case of two different policy specifications that would be regarded as equivalent under the assumption of RE. (Both imply a determinate REE, and the REE evolution of the endogenous variables that each determines is the same.) The two rules are each described in their paper as interest rate reaction functions. However, the one they call the “expectations-based policy rule” is the equation determining the instrument-level interest rate; they obtain this equation by inverting their model’s structural equations to determine the interest rate as a function of observed private sector expectations, whether those correspond to the model-consistent expectations or not. This derivation implies that the evolution of inflation and the output gap satisfy a particular target criterion (a type of flexible inflation target) that can be expressed in terms of those two variables alone. Systematic adherence to this rule is equivalent to commitment to the target criterion, rather than to any particular equation specifying the instrument as a function of “objective” factors without reference to people’s expectations. The alternative rule that they consider (the “fundamentals-based policy rule”) is instead a formula for the instrument-level setting as a function of the exogenous state of the world at each point in time. The two rules are chosen so that they would both determine precisely the same equilibrium evolution for the economy, under the assumption of RE. Yet Evans and Honkapohja show that under least-squares learning dynamics, a commitment to the target criterion (i.e., expectations-based policy) leads to convergence to the REE, whereas commitment to the instrument rule results in unstable dynamics.

A second question is whether specification of a target criterion, rather than a reaction function for the instrument, is a useful way of providing a guideline for policymakers in their deliberations. Of course, a monetary policy committee has to decide on the level of overnight interest rates, so the target criterion alone does not provide them with sufficient information to discharge their duty. Nonetheless, a target criterion relating the paths of some of the variables that the policy committee wishes to stabilize seems to be the appropriate level of detail for a prescription that a policy committee can agree to use to structure its discussions, that can be explained to new members of the committee, and that can ensure some degree of continuity in policy over time. Special factors are likely to be important at each meeting when deciding on the level of interest rates consistent with fulfillment of the target criterion; hence it is difficult to impose too much structure on this kind of deliberation without the committee members feeling that their procedures are grossly inadequate for dealing with the complexity of the situation. The considerations involved in deciding whether a particular target criterion is sensible are instead less likely to change constantly.

Indeed, there are important theoretical reasons to expect that a desirable target criterion will depend on fewer details about the current economic environment than would a desirable specification of a reaction function. Giannoni and Woodford (2010) show how to construct robustly optimal target criteria that implement an optimal response to shocks, regardless of which types of shocks are more important or of the degree of persistence, forecastability, and so on of the shocks that occur. The coefficients of an optimal reaction function will instead depend on the statistical properties of the shocks. Because each shock to the economy is always somewhat different from any other, there will always be new information about the particular types of disturbances that have most recently occurred, making advance commitment to a particular reaction function inconvenient. The types of structural change that imply a change in the form or coefficients of the desirable target criterion instead occur more infrequently, though they certainly also occur.

As an example of the undesirability of strict commitment to an instrument rule, consider the consequences of the disruption of financial markets during the global financial crisis of 2007–2009. Prior to the crisis, other U.S. dollar money market interest rates moved closely with changes in the federal funds rate, so that adjustment of the Federal Open Market Committee’s target for the funds rate (which in turn resulted in actions that kept the effective funds rate very close to that target, on virtually a daily basis) had direct implications for other rates as well. But during the crisis, many other short-term rates departed substantially from the path of the funds rate. For example, one closely monitored indicator, the London Interbank Offered Rate (LIBOR) for the U.S. dollar—to which the lending terms available to many nonfinancial borrowers are automatically linked—had always remained close to the market forecast of the average funds rate over the corresponding horizon (as indicated by the overnight interest rate swap rate). But after the summer of 2007, a spread that had previously been extremely stable, at 10 basis points or less, became highly volatile and at certain times reached several percentage points.

The same kind of Taylor rule for the federal funds rate as a function of general macroeconomic conditions (inflation and real activity) that might be appropriate at other times should not be expected to remain a reliable indicator when the relation between the funds rate and other market interest rates changes. For example, in the simulations of Cúrdia and Woodford (2010a), an inflexible commitment to the standard Taylor rule leads to overly tight policy when a financial disturbance increases spreads between the funds rate and the rates faced by other borrowers. Yet commitment to a target criterion is not subject to the same critique. Even if the central bank’s target criterion involves only the projections for inflation and some measure of aggregate real activity, if the bank correctly accounts for the consequences of financial disruptions on the monetary transmission mechanism in the forecast-targeting exercise, it will necessarily be sensitive to changing financial conditions when choosing a path for its policy rate. Likewise, it will modify its implementation procedures, if necessary, the more effectively to keep the policy rate close to that path.

One might counter that such an example shows only that a central bank must be willing to consider modifications of its commitment to an instrument rule occasionally, under sufficiently unusual circumstances. Indeed, John Taylor himself (Taylor 2008) proposed a modification of his celebrated rule during the crisis, under which the funds rate target (given specific values for the current inflation rate and output gap) should be adjusted downward one-for-one with any increase in the spread between LIBOR and the Overnight Indexed Swap (OIS) rate. However, even this proposed modification is unlikely to provide as accurate a guideline as would be provided by commitment instead to a target criterion and the use of many indicators to determine the instrument setting required to implement it. Taylor’s quest for a simple reaction function that can be stated publicly in advance of the decisions made when using it requires him to choose a single indicator of financial conditions—the LIBOR-OIS spread at one particular term. But in fact there are many market rates and asset prices that influence economic decisions, and the different possible spreads and other indicators of financial conditions behave differently, especially during periods of financial instability (e.g., see Hatzius et al. 2010). Commitment to a target criterion rather than to a specific reaction function automatically allows a large number of indicators to be taken into account when judging the instrument setting that is required for consistency with the target criterion. In addition, the indicators that are considered and the weights given to them can easily be changed when economic conditions change.
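
The modification just described can be written down directly. In the sketch below, the 10-basis-point "normal" level of the spread and the floor at zero adjustment are my assumptions, used only to make the one-for-one adjustment concrete.

```python
def spread_adjusted_rule(inflation, output_gap, libor_ois_spread,
                         r_star=2.0, pi_star=2.0, normal_spread=0.1):
    """Standard Taylor rule, adjusted down one-for-one with the widening of the
    LIBOR-OIS spread above an assumed normal level (all rates in percent)."""
    standard = r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap
    return standard - max(libor_ois_spread - normal_spread, 0.0)

print(spread_adjusted_rule(2.0, 0.0, libor_ois_spread=0.1))  # 4.0 under normal conditions
print(spread_adjusted_rule(2.0, 0.0, libor_ois_spread=1.1))  # 3.0 after a 100-basis-point widening
```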

Hence I would argue that the level at which it is most valuable for a central bank to make an explicit commitment—one that can be expected to guide policy for at least some years into the future—is that of a target criterion (for a detailed discussion, see Woodford 2007). This criterion can then be used to guide policy decisions at the instrument level through commitment to a forecast-targeting procedure for making instrument decisions. In turn these can be used to guide policy decisions at the operational level by making the staff in charge of operations responsible for achieving the operating target over a fairly short time horizon, without any need to specify the requisite market interventions. Note that the process used to derive the instrument path and the concrete market transactions required to implement it should take into account changes in market conditions, including ones that may not have been foreseeable when the target criterion was adopted.

Although I believe it is useful for policymakers to articulate a policy commitment at the level of a target criterion, the kind of commitment that I have in mind does not preclude reevaluation of the target criterion, if there is a significant change in the policy authority’s view of the relevant conditions. For example, there may be progress in understanding how the economy works. The benefits obtained from an explicit policy commitment are not vitiated by allowing for occasional reconsideration of the target criterion, if the authority remains committed to choosing the new target criterion in accordance with its higher level commitment to particular principles of policy design.

These highest level principles will include, of course, a specification of the ultimate goals that the policy targets are intended to serve. (In the theory of monetary policy expounded in Woodford [2003], for example, the ultimate goal is assumed to be the maximization of the expected utility of a representative household.) But there are other important principles that deserve to be articulated. For example, I have proposed that when policy targets are reconsidered, they should be chosen from what I have called a “timeless perspective” (Woodford 1999).

13.3.2 Policy Design from a Timeless Perspective

By “choice from a timeless perspective” I mean that the rule of conduct that is chosen is the one that the policy authority would have wished to commit itself to—had it then had the knowledge of the economy’s structure that it has now—at a time far enough in the past for all possible consequences of the public’s anticipation of the bank’s systematic pattern of conduct to be taken into account. I argue that this is a desirable criterion for choice even though, at the time that the new target criterion is actually adopted, the public has already anticipated whatever it has anticipated up until that point, and these past expectations can no longer be affected by the current decision. They can only be fulfilled or disappointed.

This proposal is somewhat in the spirit of John Rawls’s (1971) interpretation of social contract theory, according to which citizens should accept as binding the principles of justice to which they have not actually voluntarily submitted themselves, on the grounds that these principles are ones that they should have been willing to choose in a hypothetical “original position,” from which—not yet knowing anything about the actual situation that they will occupy in society—they would not make choices that seek to take advantage of the particular circumstances of the individual that they actually become. The doctrine of the timeless perspective similarly argues that a central bank should agree to be bound by principles that it would have accepted before reaching its current situation, at a time when it would have considered that situation as only one possibility among others.

A commitment to always choose new policy targets from a timeless perspective means that the occasion of a reconsideration of the policy targets can never be used as an excuse for reneging on previous policy commitments simply because the policymaker’s incentives are different ex post (when the effects of the anticipation of her actions need no longer be taken into account) than they were ex ante (when such effects were internalized). In the absence of a commitment to this principle—if, instead, the policy authority simply chooses the new target criterion associated with the best possible equilibrium from the current date—the need to reconsider policy targets from time to time raises difficulties similar to those discussed in the critique of discretionary policy by Kydland and Prescott (1977). In fact, this approach would reduce precisely to discretionary policy in the sense of Kydland and Prescott, if the policy target were reconsidered each time a policy action must be taken. The problem is less severe if reconsiderations are less frequent, but the question of why frequent reconsiderations should not be justifiable would itself have to be faced. Strictly speaking, the state of knowledge will constantly be changing, so that if reconsideration is justified when the policy authority’s model of the economy changes, there is no obvious limit to the frequency of possible reconsiderations. Moreover, a policy authority that is not committed to the choice of targets from a timeless perspective would have a constant incentive to use any pretext, however minor, to call for reconsideration of the policy targets that it has previously announced but does not wish to adhere to.

With a commitment to choose the target criterion from a timeless perspective, it is no longer essential to prespecify the kinds of situations in which it will be legitimate to reconsider the target criterion. When this principle is followed, a reconsideration will always lead the policy authority to reaffirm precisely the same target criterion as it chose on the previous occasion if there has been no change in its model of the economy. In this case it will, as a practical matter, not make sense to go through the necessarily laborious process of debating the appropriateness of the target criterion except when there has been a substantial change in the authority’s view of the economy’s functioning. Hence reconsiderations of the target criterion should occur much less frequently than reconsiderations of the operating target for the policy rate, as stated above.

13.4 The Theory of Monetary Policy after the Global Financial Crisis

A thorough discussion of the kind of target criterion appropriate for a central bank to adopt is beyond the scope of this chapter. However, some brief remarks may nonetheless be appropriate about an issue that is likely to be on the minds of many at present: To what extent have the dramatic complications facing central banks during the recent global financial crisis shown that ideas about rule-based policymaking that were popular (in academic circles and at some central banks) prior to the crisis must be thoroughly reconsidered? And should such reconsideration cast doubt on the very wisdom of proposing that central banks articulate policy commitments on the basis of economic models that must always be regarded as (at best) provisional attempts to comprehend a complex and ever-changing reality?

Although reassessments of the theory of monetary policy in the light of the crisis have only begun, a few conclusions are already clear. Disruption of the normal functioning of financial markets, of the kind observed during the crisis, certainly affects the connection between central bank market interventions and the bank’s policy rate, the connection between that policy rate and other equilibrium rates of return, and hence the bank’s ability to achieve its stabilization objectives. It follows that the appropriate policy decisions, at least at the operational and instrument levels, will surely be affected.

Commitment to a mechanical rule specified at one of these lower levels is unwise under such circumstances. For example, an inflexible commitment to the standard Taylor rule will lead to policy that is too tight in the case of financial disturbances, as illustrated by the simulations of Cúrdia and Woodford (2010a). But as argued in the previous section, monetary policy recommendations that are expressed in the form of a target criterion are not so obviously problematic. In fact, except in quite special circumstances, taking account of financial market imperfections should also have consequences for the form of a desirable target criterion. For example, in the presence of financial distortions, there are additional appropriate stabilization goals for policy that could safely be neglected if the financial system could be relied on to function efficiently. (The minimization of financial distortions becomes an additional stabilization goal, in addition to the traditional concerns for price stability and an efficient aggregate level of resource utilization, because of the implications of financial intermediation for the efficiency of the composition of expenditure and of production, and not just for their aggregate levels.) These additional concerns almost certainly imply that an ideal target criterion should involve additional variables beyond those that would suffice in a world with efficient financial intermediation. Nonetheless, the severity of the distortions resulting from the neglect of such refinements is probably not as great in the case of commitment to a target criterion as in the case of commitment to an instrument rule for the federal funds rate. At any rate, this is what the simulations reported in Cúrdia and Woodford (2010a) suggest.

Another special problem for many central banks raised by the crisis is that the zero lower bound on short-term nominal interest rates became a binding constraint on the use of traditional interest rate policy to achieve the desired degree of monetary stimulus. A situation in which the constraint binds is theoretically possible, but practically unlikely, in the absence of substantial disruption of the financial system; hence the issue was ignored in many analyses of optimal monetary policy rules prior to the crisis.

This constraint certainly changes what can be achieved by interest rate policy and must be taken into account when choosing an appropriate state-contingent path for the policy rate. However, it does not mean that an appropriate criterion for choosing the path for the policy rate is necessarily much different from the kind that would have been recommended by the standard literature. Eggertsson and Woodford (2003) show that even when the zero lower bound is expected sometimes to bind, an optimal policy commitment can still be characterized by commitment to a particular target criterion. Although the optimal target criterion in this case is slightly more complex than those recommended in the literature (which assumed the bound would never be a binding constraint), Eggertsson and Woodford also show that a particular type of simpler target criterion already advocated in the theoretical literature continues to provide a fairly good approximation to optimal policy (at least in the numerical example that they analyze) even in the case of a crisis that causes the zero lower bound to bind for a substantial number of quarters.

The key feature that is required for a targeting regime to have desirable properties when the zero lower bound binds is for the target criterion to involve a price-level target path, rather than only a target for the rate of inflation looking forward. A purely forward-looking approach to inflation targeting can lead to a very bad outcome when the zero lower bound constrains policy, as shown by Eggertsson and Woodford (2003), because the central bank may be unable to prevent undershooting of its target while the constraint binds, yet will permanently lock in any unwanted price declines that occur, because it continues to target inflation in a purely forward-looking way once it regains control of aggregate expenditure. An expectation that this will occur leads to expectations of a deflationary bias to policy (to the extent that people correctly understand how the regime will work), which make the zero lower bound on nominal interest rates an even tighter constraint; in such a regime, expectations of deflation and contraction become self-fulfilling, amplifying the effects of the original disturbance. In contrast, in the case of commitment to a price-level target path, any undershooting of the path implies a greater degree of future inflation that will be required to “catch up” to the target path. Hence (again, to the extent that people correctly understand how the regime will work) undershooting should create inflationary expectations that, by lowering the anticipated real rate of return associated with a zero nominal interest rate, will tend to automatically limit the degree of undershooting that occurs.
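
The arithmetic behind the catch-up property is simple; the numbers below (a 3 percent cumulative price-level shortfall and a three-year catch-up horizon) are made-up values used only to illustrate the contrast described in the text.

```python
pi_target = 0.02         # target inflation rate / slope of the price-level target path
undershoot = 0.03        # assumed cumulative shortfall of the (log) price level during the slump
catch_up_years = 3       # assumed horizon for returning to the price-level path

# Purely forward-looking inflation targeting: the shortfall is never made up,
# so expected inflation once the bound stops binding is just the target rate.
print("forward-looking inflation target:", pi_target)

# Price-level target path: the shortfall must be made up, so expected inflation over
# the catch-up period exceeds the target rate, lowering the expected real rate at a
# zero nominal rate and thereby limiting the undershooting in the first place.
print("price-level path (catch-up phase):", round(pi_target + undershoot / catch_up_years, 4))
```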

The simple target criterion proposed by Eggertsson and Woodford is actually one that has already been recommended as an optimal target criterion in a variety of simple New Keynesian models that abstracted from the zero lower bound. In fact, target criteria that involve a target path for the price level, and not simply a target for the rate of inflation going forward, have been found to be more robust in the sense of reducing the extent to which economic stabilization suffers as a result of errors in achieving the target. The greater robustness of this form of target criterion to difficulties caused by a failure to achieve the target owing to the zero lower bound is closely related to the other robustness results.

Financial disruptions also require reconsideration of the traditional doctrine that interest rate policy is the sole tool a central bank should use for macroeconomic stabilization and that policy can be conducted while maintaining a balance sheet made up solely of short-term Treasury securities. I would argue that the traditional doctrine is a sound one, as long as financial markets operate with a high degree of efficiency. But disruption of the ability of private parties to arbitrage effectively between different markets, as during the recent crisis, creates a situation in which targeted asset purchases by the central bank and/or special credit facilities serving particular classes of institutions become additional relevant dimensions of central bank policy.

Cúrdia and Woodford (2011) analyze the effects and illustrate the potential usefulness of these additional dimensions of policy in the context of a dynamic stochastic general equilibrium model with credit frictions. They find, however, that the existence of potential additional dimensions of policy does not greatly change the principles for choosing an appropriate target path for the policy rate. Hence these dimensions do not call into question the desirability of a forecast-targeting framework for addressing that issue or even justify departure from a conventional form of target criterion. The extent to which the central bank is able to limit anomalous behavior of credit spreads through unconventional policies will matter, of course, for the appropriate path of the policy rate, as Cúrdia and Woodford show through numerical examples. But this kind of modification of interest rate policy will automatically occur under the forecast-targeting procedure; it does not require a change in the target criterion.

The effective use of unconventional dimensions of policy also requires that policy be conducted within a systematic framework that involves some degree of advance commitment of policy actions, rather than in a purely discretionary fashion. The reasons are similar to those advanced in discussions (p.409) of conventional interest rate policy. Once again, the effects of policy depend not only on current actions (e.g., the quantity and type of assets that the Fed purchases this month) but also on expectations about future policy (whether this month’s purchases are only the start of an intended sequence of further purchases, how long it intends to hold these assets on its balance sheet, etc.). Given this, a purely discretionary approach to policy, which chooses a current action to achieve some immediate effect without internalizing the consequences of having been anticipated to act in that way, is likely to be quite suboptimal. In particular, the introduction of unconventional measures ought to be accompanied by an explanation of the anticipated exit strategy from these measures.

The crisis has also led to much discussion of the extent to which monetary policy (of the Fed in particular) during the real estate boom contributed to the occurrence or severity of the crisis. This raises the question of whether, even during times when financial markets appear to be functioning well, monetary policy decisions need to take into account their potential consequences for financial stability. This is not a topic that is yet well understood, but it is surely an important one for central bankers to study. In Woodford (2012), I consider some standard arguments for trying to separate this issue from monetary policy deliberations and conclude that they do not justify avoiding the inquiry.

To the extent that the risk of financial crisis is endogenous and is influenced by monetary policy, this is a concern that has not been addressed in traditional analyses of optimal monetary policy rules (e.g., Woodford 2003; Cúrdia and Woodford 2010a,b, 2011). Hence the target criteria for setting monetary policy proposed in the traditional literature are not necessarily appropriate when one takes this additional consideration into account.17 This is an example of a circumstance under which it might be justifiable for a central bank to modify its existing policy commitment at the policy-targets level, though in a way that is consistent with its existing higher-level commitments at the policy-design level.

In Woodford (2012), I give an example of how this might be done. In the simple model proposed there, the optimal target criterion for interest rate policy involves not only the projected path of the price level and of the output gap but also the projected path of a “marginal crisis risk” variable. This variable measures the degree to which marginal adjustments of the policy rate are expected to affect the risk of occurrence of a financial crisis (weighted by the expected welfare loss in the event of such a crisis). In (p.410) periods when the marginal crisis risk is judged to be negligible, the recommended procedure would reduce to flexible price-level targeting of the kind discussed in Woodford (2007). But when the risk is not negligible, the target criterion would require the central bank to tolerate some undershooting of the price-level target path or output relative to the natural rate (or both) to prevent a greater increase in the marginal crisis risk.
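
For illustration only, such a criterion might take a form along the following lines; this is a stylized sketch rather than the exact condition derived in Woodford (2012), and the weights $\lambda$ and $\xi$ are placeholders.

\[
p_t + \lambda x_t + \xi\,\gamma_t \;=\; \tilde{p}^{*}_t,
\]

where $\gamma_t$ denotes the projected marginal crisis risk described above and $\xi > 0$ the weight attached to it. When $\gamma_t$ is negligible, the criterion reduces to flexible price-level targeting; when it is not, satisfying the criterion requires accepting some undershooting of the price-level and output-gap terms, which is exactly the trade-off described in the text.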

Although the adoption of such a procedure would require a departure from recent conventional wisdom, in that it would allow some sacrifice of conventional policy targets to reduce crisis risk, it would maintain many salient characteristics of the kind of policy regime advocated in the precrisis literature. It would still be a form of inflation-targeting regime (more precisely, a form of price-level targeting regime). Such a procedure would not only ensure relative constancy of the inflation rate that people would expect in the medium run (i.e., a few years in the future), but it would also in fact ensure constancy of the long-run price-level path, regardless of the occurrence either of occasional financial crises or of (possibly more frequent) episodes of nontrivial marginal crisis risk.

13.5 Conclusion

I am not suggesting that the recent crisis provides no grounds for reconsideration of previously popular doctrines about central banking. On the contrary, it raises many new issues, some of which are already the topics of an active literature. However, I will be surprised if confronting these issues requires wholesale abandonment of the lessons for policy emphasized by the literature on policy rules. Among the insights that I think most likely to be of continuing relevance is the recognition that suitably chosen policy commitments can substantially improve on the macroeconomic outcomes that could be expected from purely discretionary policy, even when those chosen to exercise the discretion are policymakers of superb intelligence and insight into current economic conditions.

There is an important respect, however, in which prior thinking about the advantages of policy rules should be modified in the light of recent events. It has been common in the theoretical literature to draw a sharp distinction between policy rules—understood as completely specified prescriptions for action under all possible contingencies—and purely discretionary policy, as if these two poles represent the only intellectually coherent positions. I have argued instead for both the possibility and the desirability of an intermediate position, in which there are multiple levels of description of policy; policy can and should be specified in advance at the level of the general principles in accordance with which decisions will be made, whereas judgment that cannot be reduced to a mechanical formula will necessarily be involved in the (p.411) application of those principles to concrete situations. I have shown in detail how multiple levels of description are possible in the case of monetary policy decisions. I believe that this reformulation of what is understood by a policy rule can increase both the practical relevance of theoretical prescriptions for monetary policy and the political legitimacy of decisionmaking by central banks.

References

Bibliography references:

Bernheim, Douglas. (1984) “Rationalizable Strategic Behavior,” Econometrica 52: 1007–1028.

Clarida, Richard, Jordi Gali, and Mark Gertler. (1999) “The Science of Monetary Policy: A New Keynesian Perspective,” Journal of Economic Literature 37: 1661–1707.

Cúrdia, Vasco, and Michael Woodford. (2010a) “Credit Spreads and Monetary Policy,” Journal of Money, Credit and Banking 42(s1): 3–35.

———. (2010b) “Conventional and Unconventional Monetary Policy,” Federal Reserve Bank of St. Louis Review 92: 229–264.

———. (2011) “The Central-Bank Balance Sheet as an Instrument of Monetary Policy,” Journal of Monetary Economics 58: 54–79.

Eggertsson, Gauti, and Michael Woodford. (2003) “The Zero Interest-Rate Bound and Optimal Monetary Policy,” Brookings Papers on Economic Activity 2003(1): 271–333.

Evans, Charles. (2010) “Monetary Policy in a Low-Inflation Environment: Developing a State-Contingent Price-Level Target.” Speech given at the 55th Economic Conference, Federal Reserve Bank of Boston, October 16. Available at: http://www.chicagofed.org/webpages/publications/speeches/2010/10_16_boston_speech.cfm.

Evans, George W., and Seppo Honkapohja. (2003) “Expectations and the Stability Problem for Optimal Monetary Policy,” Review of Economic Studies 70: 807–824.

Evans, George W., and Garey Ramey. (1992) “Expectation Calculation and Macroeconomic Dynamics,” American Economic Review 82: 207–224.

Friedman, Benjamin M., and Kenneth N. Kuttner. (2011) “Implementation of Monetary Policy: How Do Central Banks Set Interest Rates?” in Benjamin M. Friedman and Michael Woodford (eds.), Handbook of Monetary Economics, volume 3B. Amsterdam: Elsevier, pp. 1345–1438.

Frydman, Roman, and Michael D. Goldberg. (2011) Beyond Mechanical Markets: Asset Price Swings, Risk, and the Role of the State. Princeton, NJ: Princeton University Press.

Giannoni, Marc P., and Michael Woodford. (2010) “Optimal Target Criteria for Stabilization Policy.” NBER Working Paper 15757, National Bureau of Economic Research, Cambridge, MA.

Guesnerie, Roger. (2005) Assessing Rational Expectations 2: Eductive Stability in Economics. Cambridge, MA: MIT Press.

(p.412) Hatzius, Jan, Peter Hooper, Frederic S. Mishkin, Kermit L. Schoenholtz, and Mark W. Watson. (2010) “Financial Conditions Indexes: A Fresh Look after the Financial Crisis.” NBER Working Paper 16150, National Bureau of Economic Research, Cambridge, MA.

Keynes, John Maynard. (1936) The General Theory of Employment, Interest and Money. New York: Macmillan.

Kydland, Finn E., and Edward C. Prescott. (1977) “Rules Rather Than Discretion: The Inconsistency of Optimal Plans,” Journal of Political Economy 85: 473–491.

McCallum, Bennett T. (1988) “Robustness Properties of a Rule for Monetary Policy,” Carnegie-Rochester Conference Series on Public Policy 29: 173–203.

———. (1999) “Issues in the Design of Monetary Policy Rules,” in John B. Taylor and Michael Woodford (eds.), Handbook of Macroeconomics, volume 1C. Amsterdam: Elsevier, pp. 1483–1530.

Pearce, David. (1984) “Rationalizable Strategic Behavior and the Problem of Perfection,” Econometrica 52: 1029–1050.

Phelps, Edmund S., G. C. Archibald, and Armen A. Alchian (eds.). (1970) Microeconomic Foundations of Employment and Inflation Theory. New York: W. W. Norton.

Qvigstad, Jan F. (2006) “When Does an Interest Rate Path ‘Look Good’? Criteria for an Appropriate Future Interest Rate Path: A Practitioner’s Approach.” Staff Memo 2006/5, Norges Bank, Oslo, Norway.

Rawls, John. (1971) A Theory of Justice. Cambridge, MA: Harvard University Press.

Sargent, Thomas J. (1993) “Rational Expectations and the Reconstruction of Macroeconomics,” in Rational Expectations and Inflation, second edition. New York: HarperCollins, pp. 1–18.

Sundaresan, Suresh, and Zhenyu Wang. (2009) “Y2K Options and the Liquidity Premium in Treasury Markets,” Review of Financial Studies 22: 1021–1056.

Svensson, Lars E. O. (1999) “Inflation Targeting as a Monetary Policy Rule,” Journal of Monetary Economics 43: 607–654.

———. (2003) “What Is Wrong with Taylor Rules? Using Judgment in Monetary Policy through Targeting Rules,” Journal of Economic Literature 41: 426–477.

———. (2005) “Monetary Policy with Judgment: Forecast Targeting,” International Journal of Central Banking 1: 1–54.

Taylor, John B. (1993) “Discretion versus Policy Rules in Practice,” Carnegie-Rochester Conference Series on Public Policy 39: 195–214.

———. (2008) “Monetary Policy and the State of the Economy.” Testimony before the Committee on Financial Services, U.S. House of Representatives, February 26. Available at: http://www.stanford.edu/~johntayl/Onlinepaperscombinedbyyear/2008/Monetary_Policy_and_the_State_of_the_Economy.pdf.

Taylor, John B., and John C. Williams. (2011) “Simple and Robust Rules for Monetary Policy,” in Benjamin M. Friedman and Michael Woodford (eds.), Handbook of Monetary Economics, volume 3B. Amsterdam: Elsevier, pp. 829–860.

Woodford, Michael. (1999) “Commentary: How Should Monetary Policy Be Conducted in an Era of Price Stability?” in New Challenges for Monetary Policy. Kansas City, MO: Federal Reserve Bank of Kansas City, pp. 277–316.

———. (2003) Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton, NJ: Princeton University Press.

(p.413) ———. (2007) “Forecast Targeting as a Monetary Policy Strategy: Policy Rules in Practice.” NBER Working Paper 13716, National Bureau of Economic Research, Cambridge, MA.

———. (2008) “Principles and Public Policy Decisions: The Case of Monetary Policy.” Paper presented at the Yale Law School Legal and Economic Organization seminar, New Haven, CT, March 6. Available at: www.columbia.edu/~mw2230/PrinciplesandPolicy_YLS.pdf.

———. (2010) “Robustly Optimal Monetary Policy with Near-Rational Expectations,” American Economic Review 100: 274–303.

———. (2011) “Optimal Monetary Stabilization Policy,” in Benjamin M. Friedman and Michael Woodford (eds.), Handbook of Monetary Economics, volume 3B. Amsterdam: Elsevier, pp. 723–828.

———. (2012) “Inflation Targeting and Financial Stability,” Sveriges Riksbank Economic Review 2012: 7–32. (p.414)

Notes:

I thank Amar Bhidé, Roman Frydman, and Andy Haldane for helpful comments, and the Institute for New Economic Thinking for research support.

(1.) On the importance for policy analysis of confronting the occurrence of nonroutine change, see Frydman and Goldberg (2011).

(2.) This assumes that the process would converge, if pursued far enough. In the examples considered by Evans and Ramey (1992), this is the case, and their interest is in the alternative forecasts that remain possible when the calculation is instead truncated after a finite number of iterations. But such an algorithm need not converge at all, nor need there be a unique limiting forecast independent of the initial conjecture, as Guesnerie (2005) emphasizes.

(3.) Of course, this would still only be an equilibrium relative to the model that they happen to believe in, because the iterative calculation is merely a check on the internal consistency of their forecasting and is not a proof that it must correctly describe how the world will actually evolve. Thus, such a conception of how people forecast could still allow for surprises, at which times there might be an abrupt change in the model that people believe and hence in the way that they forecast.

(4.) Note that what is relevant is the discrepancy between the subjective beliefs and what the model predicts should happen if people hold those beliefs, and not the discrepancy between subjective and REE beliefs. These may be quite different, if the model’s prediction for the economy’s evolution is highly sensitive to subjective beliefs.

(5.) The first three levels are distinguished in Woodford (2007: 5–9), which also discusses the possible specification of policy rules at the different levels.

(6.) For the distinction between instrument choice and the decisions involved in implementation of that decision, see, for example, Friedman and Kuttner (2011).

(7.) The distinction between policy prescriptions that are specified at the instrument level (instrument rules) and those specified at the policy-targets level (targeting rules) has been stressed in particular by Svensson (2003).

(8.) See Woodford (2007: 21–25) for further discussion of the Norges Bank procedures as a particularly explicit example of a forecast-targeting approach.

(9.) This behavior is illustrated in Woodford (2003: 529–530) in the context of a simple example.

(10.) See Woodford (2008, 2011: 743–748) for further discussion of this issue in general terms.

(11.) A general method for deriving an optimal target criterion given the policy authority’s stabilization objective and its economic model, in a way that conforms to this principle, is explained in Giannoni and Woodford (2010).

(12.) See Woodford (2010) for further discussion of these issues.

(13.) This issue had not been entirely neglected in the theoretical literature on optimal monetary policy prior to the crisis. Thanks to Japan’s experience since the late 1990s, the consequences of a binding zero lower bound had already been the topic of fairly extensive analysis prior to 2008.

(14.) Eggertsson and Woodford (2003) analyze the issue in the context of a dynamic stochastic general equilibrium model with perfectly functioning financial markets, but Cúrdia and Woodford (2010b) show how the same analysis applies to a model with credit frictions in which the zero lower bound comes to bind as a result of a disruption of financial intermediation.

(15.) Although no central bank has yet adopted a target of this kind, there has recently been some discussion in the Federal Reserve System of the advantages of doing so, at least as a temporary measure when the zero lower bound constrains policy, as it has in recent years. In particular, see Evans (2010).

(16.) These errors may arise from imperfect knowledge on the part of the central bank, stemming either from poor estimates of parameters of the bank’s structural model or from mistaken judgments of the economy’s current state. See Woodford (2007, 2011: 741–742) for further discussion of this topic.

(17.) Of course, the significance of the problem will depend on both the degree to which the risk of financial crisis is predictably influenced by monetary policy and the extent to which such risk cannot be adequately controlled using other policy tools (improved regulation, macroprudential supervision, etc.). But I do not believe that we can confidently conclude at this time that the problem is negligible.