
Econométrie Appliquée: Modèles de Consommation, by Georges Rottier


Econométrie Appliquée: Modèles de Consommation, by Georges Rottier. Review by: Carl Erik Sarndal. Journal of the American Statistical Association, Vol. 74, No. 366 (Jun., 1979), p. 510. Published by: American Statistical Association. Stable URL: http://www.jstor.org/stable/2286376



510 Journal of the American Statistical Association, June 1979

Econométrie Appliquée: Modèles de Consommation. Georges Rottier. Paris: Dunod, Gauthier-Villars, 1975. 284 pp.

This book in the French language is based on notes from a first course in econometric methods in demand analysis taught by the author at the University of Paris I-Panthéon-Sorbonne. Written for students following the French equivalent of a master's degree program in economics, the book would also, as the author rightly points out, serve well for self-study as an introduction to the subject for practitioners in industry and administration, particularly for individuals concerned with marketing and forecasting. The presentation is geared toward real-life applications and requires only a modest background in mathematics and statistical theory. Two appendices give elements of matrix algebra and of relevant statistical theory.

Emphasizing the central role of model building in econometrics, the author first discusses the specification of models in studies of consumer demand. He then presents least squares estimation theory for linear models, followed by standard hypothesis testing theory and analysis of variance interpretations. Problem areas such as misspecification of variables, errors of measurement in variables, autocorrelation, and nonlinearity are discussed. Macroeconomic relationships and theory of simultaneous equations are then presented with reference to data-oriented examples.

This well-written book is of definite interest and value to francophone students, but probably to a lesser extent to a general North American audience. Apart from the language difficulty, the applications are set against the background of the French economy.

CARL ERIK SARNDAL
The University of British Columbia

Dynamic Programming and Stochastic Control. Dimitri P. Bertsekas. New York: Academic Press, 1976. xv + 397 pp. $22.50.

The aim of this book is to provide a unified treatment of sequential decision making under uncertainty by means of dynamic programming. In so doing, the author simultaneously covers a large area of stochastic control theory, optimal stopping problems, probabilistic dynamic programming, and in particular, Markov decision analysis. This has necessitated a fairly general setup, but generality is exactly what the author wishes to provide. In fact, in his preface, the author states that he is attempting to make as few structural assumptions on his basic model as possible so that readers can appreciate the importance and generality of concepts such as risk, feedback, sufficient statistics, adaptivity, contraction mappings, and the principle of optimality in many varied and seemingly unrelated settings. Engineers can benefit from this treatment through their primary interest in stochastic control theory, operations researchers through their interest in Markov decision models, and statisticians through their interest in optimal stopping problems. Of course, this is not to say that all three groups of researchers, plus others, will not be interested in all of the topics in the book; they very likely will be.

The opening chapter is somewhat more than the usual introduction. A brief but fairly rigorous account of the development of expected utility as a criterion for decision making is given so that the reader without such background can see why maximizing expected monetary value is not always desirable. In this chapter, the author also exposes us to some of the types of models and the aspects of optimal solutions, such as the certainty equivalence principle, that we will see in more detail later on.
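The point about expected monetary value can be made with a small numerical illustration (the gamble, the sure amount, and the utility function below are my own choices, not taken from the book): a gamble can beat a sure payment in expected money yet lose to it in expected utility under a risk-averse utility.

```python
import math

# Illustrative gamble (not from the book): win 100 with probability 0.5, else 0.
outcomes = [(100.0, 0.5), (0.0, 0.5)]
sure_amount = 40.0

# Expected monetary value of the gamble: 50, which exceeds the sure 40.
emv = sum(x * p for x, p in outcomes)

# A risk-averse (concave) utility, here u(x) = log(1 + x), chosen for illustration.
def u(x):
    return math.log1p(x)

eu_gamble = sum(u(x) * p for x, p in outcomes)  # expected utility of gambling
eu_sure = u(sure_amount)                        # utility of the sure amount

# Under u, the sure 40 is preferred despite its lower expected monetary value.
print(emv, eu_gamble, eu_sure)
```

The concavity of u is what drives the reversal: large wins are worth less per dollar than early dollars, so averaging over outcomes penalizes the gamble.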

Chapter 2 introduces the basic dynamic programming model for a finite time horizon and its recursive solution. The principle of optimality is not only explained in a convincing manner, but a proof of this intuitive notion is also provided. Although I will comment more on notation later on in this review, I will say right now that the author is very careful in the use of notation. For example, he denotes an action by u, but to show that an action depends on the state x, he uses the notation μ(x) to mean the function of x which might take on the value u. Thus a policy becomes a sequence of functions (μ1, μ2, ...) that might eventually assume the values (u1, u2, ...). To understand this thoroughly is clearly of paramount importance for any student wishing to go anywhere in dynamic programming, and the notation given here, I believe, clarifies and reinforces, rather than confuses, this issue.
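The finite-horizon recursion and the policy-as-functions notation can be sketched in a few lines of code; the states, actions, costs, and transitions below are invented for illustration and are not the book's model.

```python
# Finite-horizon dynamic programming by backward recursion (illustrative numbers).
N = 3                      # horizon
states = [0, 1, 2]
actions = [0, 1]

def cost(k, x, uu):        # stage cost g_k(x, u), made up for this sketch
    return (x - uu) ** 2 + uu

def step(x, uu):           # deterministic transition f(x, u), clamped to the state space
    return min(2, max(0, x + uu - 1))

def terminal(x):           # terminal cost g_N(x)
    return x

J = {x: terminal(x) for x in states}   # J_N
policy = [None] * N                    # will hold mu_0, ..., mu_{N-1}

for k in range(N - 1, -1, -1):
    mu, Jk = {}, {}
    for x in states:
        best_u = min(actions, key=lambda uu: cost(k, x, uu) + J[step(x, uu)])
        mu[x] = best_u
        Jk[x] = cost(k, x, best_u) + J[step(x, best_u)]
    policy[k] = mu                     # mu_k is a function of the state x
    J = Jk

# Each mu_k maps a state x to an action u = mu_k(x), so the policy is a
# sequence of functions (mu_0, mu_1, mu_2) that, once states are realized,
# assumes the action values (u_0, u_1, u_2) -- the notation the review praises.
print(J, policy)
```

The dictionaries `mu` make the point concretely: a policy stores functions of the state, not fixed actions, which is exactly what the μ(x) notation is guarding.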

Chapter 3 is an excellent, involved chapter that presents four models from different areas and solves them fairly completely. These include the linear control model with quadratic costs, along with the associated concepts of controllability, observability, and stability; the optimality of (s, S) policies in the stochastic inventory problem with setup cost; Mossin's model of dynamic portfolio selection in the presence of several risky assets and several particular forms of utility functions; and a particular optimal stopping problem, the asset selling problem.
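The asset selling problem, at least in its simplest form, lends itself to a short numerical sketch (the offer distribution, horizon, and absence of discounting are my assumptions, not the book's treatment): backward recursion over the remaining periods yields a continuation value, and the optimal rule accepts the first offer at or above it.

```python
# Optimal stopping for a simple asset-selling problem (illustrative numbers).
# Offers are i.i.d. over 'offers' with the given probabilities; a rejected
# offer is lost; an unsold asset at the horizon is worth 0; no discounting.
offers = [0.0, 50.0, 100.0]
probs = [0.2, 0.5, 0.3]
N = 5                  # number of remaining selling opportunities

V = 0.0                # value with 0 periods left (nothing can be sold)
reservation = []       # reservation[k]: continuation value with k+1 periods left
for _ in range(N):
    # With value-to-go V, accept an offer w exactly when w >= V.
    V_next = sum(p * max(w, V) for w, p in zip(offers, probs))
    reservation.append(V)
    V = V_next

# The thresholds rise as more periods remain: with one period left any offer
# is taken (threshold 0), while with five left only high offers are accepted.
print(V, reservation)
```

The monotone thresholds are the point: the value of waiting grows with the number of chances left, which is what makes the optimal policy a reservation-price rule.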

In Chapter 4, the assumption of perfect state information is dropped, and we are introduced to the concept of sufficient statistics for making optimal future decisions. Again, the linear control problem with quadratic costs is discussed, but now the problem of state estimation is treated as well. In fact, a 20-page appendix to this chapter, a very worthwhile addition to the book, is devoted to least squares estimation and the Kalman filter. This chapter also shows how the posterior probability distribution on the current state vector acts as a suitable "state" in the dynamic programming recursion and briefly mentions Sondik and Smallwood's recent results in this area. The author then uses this concept of a posterior distribution as a state to solve several sequential problems, most notably, the hypothesis testing problem that results in the sequential probability ratio test.
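The hypothesis-testing application can be made concrete with a small sketch (the Bernoulli hypotheses, prior, and stopping boundaries below are invented for illustration, not drawn from the book): the posterior probability of H1 is the sufficient "state", updated by Bayes' rule after each observation, and the test stops when it crosses either boundary.

```python
# Sequential probability ratio test with the posterior as the "state".
# Illustrative setup: a Bernoulli stream with H0: p = 0.4 versus H1: p = 0.6;
# stop when the posterior P(H1 | data) leaves the interval (lo, hi).
p0, p1 = 0.4, 0.6
lo, hi = 0.05, 0.95          # stopping boundaries on the posterior

def sprt(observations, prior=0.5):
    post = prior             # P(H1) before any data: the initial "state"
    for n, x in enumerate(observations, start=1):
        l0 = p0 if x == 1 else 1 - p0    # likelihood of x under H0
        l1 = p1 if x == 1 else 1 - p1    # likelihood of x under H1
        # Bayes update: the posterior is all we need to carry forward.
        post = post * l1 / (post * l1 + (1 - post) * l0)
        if post >= hi:
            return "accept H1", n, post
        if post <= lo:
            return "accept H0", n, post
    return "continue", len(observations), post

# A run of successes drives the posterior across the upper boundary.
decision, n, post = sprt([1] * 10)
print(decision, n, round(post, 3))
```

With a fixed prior, thresholding the posterior is equivalent to Wald's classical thresholds on the likelihood ratio; the sketch simply makes the "posterior as state" view of the recursion explicit.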

Chapter 5 is a good collection of suboptimal methods for solving dynamic programming problems and some of the properties of these methods. These include state space and/or action space grid approximations and the author's own convergence properties of these approximations, open-loop controllers, naive feedback controllers, open-loop feedback controllers, and partial open-loop feedback controllers. The most important discussion of these latter three controllers concerns their "adaptivity," namely, whether they perform better than open-loop controllers, that is, controllers that use no feedback at all. Fortunately for readers, the author provides very lucid proofs of the adaptive property when it exists and also provides several counterexamples.

The final three chapters treat problems with an infinite time horizon: discounted problems, undiscounted total cost problems, and average cost problems. Although the author has not chosen to label these chapters "Markov decision models," the theory and examples are practically identical to the theory and examples from Markov decision theory. Nevertheless, the treatment of this area is much more thorough and extensive than any other single treatment I have seen to date. For example, the author covers all of the theory and most of the examples from the relevant chapters of Ross's well-known book, Applied Probability Models with Optimization Applications, but he does much more besides. This includes his discussion of so-called positive and negative dynamic programming and also his potentially useful scheme for using successive approximations in the average cost problem.

The author states that this book could be used for a one-semester or one-year course for upper-level undergraduates or graduate students who have had a course in probability theory as well as calculus, real analysis, vector-matrix algebra, and elementary optimization theory. After reading the book and using it for a graduate-level course, I am convinced that students need quite a lot more mathematics than the minimum required by the author.

There are several reasons for this. First of all, the breadth of material requires a knowledge of many diverse areas: some real analysis here, some convexity and optimization results there, some (a good deal of) probability theory here, some matrix algebra there, and so on. Indeed, students need to know, not merely have been exposed to, all of these areas before they can tackle this book. Second, because the author attempts to gain as much generality as possible, the notation is really quite frightening, with subscripts and superscripts everywhere. (This is not to say that his notation is excessive, which it is not.) Researchers or students who have been in math for years will not let this notation upset them, but the relative newcomer will never get past it! Finally, the problems are so difficult that a student who needs problems for routine practice will find almost no help here. In fact, a large percentage of the problems are recent results from journal articles, and to make these possible for all but the advanced researcher, the author has supplied lengthy hints. My reaction to the problems is that they are there to be read (for the results they contain) rather than to be worked, especially by students.

This book is definitely an excellent addition to the books that use dynamic programming to solve stochastic control problems. In fact,
