In under the wire
Jan. 31st, 2013 08:10 pm

As you may recall, my resolution this year was to finish one book per month and write about it. Well, I finished a book this month. It's a bit of a cheat, because it's a short book and I read it this afternoon. The book is Marshall's Tendencies: What Can Economists Know? by John Sutton, a short monograph on some common fallacies in macroeconomic modeling. A few notes:
Sources of "error" (random variation or uncontrolled factors) in models are important. What kinds of models and assumptions are useful in 'messy' situations like social sciences and economics? This short book purports that Marshall's idea of economic models being like the tides--large effect "central tendencies" that can explain relationships in conjunction with secondary or smaller "noise" that is less important--is in many cases a fallacy.
He contends that two views of this indeterminacy arise: a "class of models" approach vs. an "unobservability" view. I would say that the whole "data underdetermines theory" issue arises in both? There is discussion of generalizability (skepticism about empirical vs. causal models) and of how much we can assume that past patterns are stable.
He talks a bit about Keynes and Hayek, and mentions specifically (twice) that Hayek's vision was taken to an extreme by a school that completely eschews empirical checking of theories against data: I think this is a reference to the Austrian School. The difficulty of finding stable, predictable elements in messy systems pervades economics and social science, but that does not mean we should abandon the search for common-sense presuppositions of empirical modeling (nor that such presuppositions don't already exist).
The author mentions: "In preparing these lectures, I have had in mind an ideal reader: this is someone who already knows, from studying other fields, how a successful theory based on formal mathematical models works. But he or she has only recently stumbled upon economics, and though accepting the practical importance of its agenda, is more than a little skeptical as to what may be gained by writing down formal mathematical models in this area...." I seem to fit the bill, although the author also touches on a few concepts I don't understand particularly well, e.g., Nash equilibrium.
There is a good discussion of abstracting a system's general features rather than writing out all of its components, using, for instance, the Carnot cycle for engines: many different physical engines reduce to this idealized piston, so results derived for it apply to all of them. His notes on setting bounds on outcomes remind me of the Roberts and Pashler article on theories fit to data, in that both take a fundamentally "data set"-based approach; this book motivates it a little more by discussing uncontrolled exogenous variables. He goes over some examples where empirical theory works well (option pricing, auctions) and some where it does not (unemployment vs. job posting availability).
It is also nice to see that Sutton agrees with me that lots of theories are in some sense utilitarian; regarding the success of Black-Scholes-type stock option pricing he says: "What is of interest here, relative to the agenda of the present lecture, is that we are dealing with a situation in which the true model--the underlying model of stock price movements--is known to a degree of precision adequate for the purpose at hand."
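(As an aside of my own, not from the book: the Black-Scholes call price he's alluding to fits in a few lines of Python. The parameter values in the comment are purely illustrative.)

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """European call price under Black-Scholes.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: volatility.
    """
    d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# e.g. at-the-money: S=K=100, T=1, r=5%, sigma=20% -> price ~ 10.45
```
The point Sutton is making is that this "true model" of price movements is accurate enough for the purpose at hand, not that it is literally true.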
They question "rational actors" as possibly being unrealistic, and it's funny too b/c the whole rational action thing is apparently a justification for applying some kind of optimization function to your math. Yeeeeah.
He mentions at the end that fully abandoning Marshall's unsatisfactory analogy of the tides can lead to excessive pessimism (and he singles out the Austrian school again for that). He does actually set some bounds on outcomes based on concepts of "viability" and "stability" that seem more broadly generalizable to economic markets. Still, to me the examples seemed simplistic.
Overall, a good read, except that the author uses footnotes way too much. It is okay to integrate some of those "asides" into the text. Stop making me treat this book as random access.
Great quote at the end of chapter 1: "Different situations call for different approaches. There is no recipe for research."
Related: Rogeberg and Melberg on the flaw in modern economics and how to fix it. I haven't gotten ahold of the article yet, but this summary is good, especially in how it decomposes economic theories into four different realms of proof: mathematical, "as-if" empirical, causal, and welfare.
Also related (linked on fb just today by a friend who's a PhD psychologist): "Priming: does it really exist?" An interesting example of the difficulty of experimental repeatability and theory proving. (Incidentally, Kahneman was the speaker at CMU's doctoral hooding ceremony in April. It was actually a good keynote lecture.)
Other progress: I am still in the middle of several texts, reading in parallel instead of in series, I guess.
Scientific Method in Practice: still on Chapter 5 (deductive logic). Chapter 6 is "Probability"; I think I can skip/skim that one, since I've got plenty of experience with probability.
Experimental Design: I've read chapters I-III of R.A. Fisher's Design of Experiments, and a related article: "Misunderstandings between experimentalists and observationalists about causal inference". Imai et al. JRSSA, 2008, vol 171 part 2, pp 481--502. I'm going to have to extend my borrowing time on this book. It's pretty neat to read the words from the horse's mouth, as it were. The lady tasting tea. Darwin's plants. "Student's" t-test. It's quite enlightening to think of the simpler aspects of randomization and the implications of model-based statistical theory on experimental design. On the other hand, the JRSSA article goes into more depth and is providing some good context for the modern repercussions of the book's history lesson.
Incidentally, I learned from Gauch's book that Aristotle originally applied the term "scientia" to those conclusions that were absolutely provable through uncontested assumptions and logical deduction alone. But this standard has to be relaxed, given the unobservability of truth about the real world without iteration and induction. From reading more about experiments/fixed effects/randomization etc., I see the links between the frequentist "fixed effect" paradigm and this standard of truth. The above books/articles (especially the JRSSA one) are helping me understand that point of view, even if I don't quite believe it yet ;)
Sources of "error" (random variation or uncontrolled factors) in models are important. What kinds of models and assumptions are useful in 'messy' situations like social sciences and economics? This short book purports that Marshall's idea of economic models being like the tides--large effect "central tendencies" that can explain relationships in conjunction with secondary or smaller "noise" that is less important--is in many cases a fallacy.
He purports that two views for this indeterminacy arise: a "class of models" approach vs. an "unobservability" view. I would say that the whole "data underdetermines theory" aspect arises in both? There is discussion on generalizability (skepticism in the face of empirical vs. causal models) and how much we can assume that patterns in the past are stable.
He talks a bit about Keynes and Hayek, and mentions specifically (twice) that Hayek's vision was taken to an extreme by a school that completely eschews empirical checking of theories to data: I think this is in reference perhaps to the Austrian School. The difficulties of finding stable and predictable elements of messy systems are a problem that pervade economics and social science, but that does not mean that we should abandon the search for common-sense presuppositions of empirical modeling (nor that such presuppositions don't already exist).
The author mentions: In preparing these lectures, I have had in mind an ideal reader: this is someone who already knows, from studying other fields, how a successful theory based on form mathematical models works. But he or she has only recently stumbled upon economics, and though accepting the practical importance of its agenda, is more than a little skeptical as to what may be gained by writing down formal mathematical models in this area.... I seem to fit the bill, although the author also only touches on a few concepts that I don't particularly understand all that well, eg, Nash equilibrium.
There is a good discussion of generalizing abstract features as opposed to writing out all components of a system, in for instance the Carnot cycle for engines. Many different physical engines can be reduced to this form of a piston working, and thus results can be devised for them. Notes on setting boundaries of outcomes remind me of the Roberts and Pashler article on theories fit to data in that they take a fundamentally "data set"-based approach. But this book motivates it a little more in discussing uncontrolled exogenous variables. They go over some examples where empirical theory works well (option pricing, auctions) and some where it does not (unemployment vs. job posting availability).
It is also nice to see that Sutton agrees with me on the notion that lots of theories are in some sense utilitarian; regarding the success of Block-Scholes type stock option pricing he says: What is of interest here, relative to the agenda of the present lecture, is that we are dealing with a situation in which the true model--the underlying model of stock price movements--is known to a degree of precision adequate for the purpose at hand.
They question "rational actors" as possibly being unrealistic, and it's funny too b/c the whole rational action thing is apparently a justification for applying some kind of optimization function to your math. Yeeeeah.
They mention at the end that fully abandoning Marshall's unsatisfactory analogy of tides can lead to excessive pessimism (and they single out the Austrian school again for that). They actually do set some bounds on actions based on concepts of "viability" and "stability" that seem more broadly generalizable to economic markets. Still, to me the examples seemed simplistic.
Overall, a good read. Except the author uses footnotes way too much. It is okay to integrate some of those "asides" into the text. Stop making me treat this book as random access.
Great quote at the end of chapter 1: Different situations call for different approaches. There is no recipe for research.
Related: Rogeberg and Melberg on the flaw in modern economics and how to fix it. I haven't gotten ahold of the article yet, but this summary is good, especially in how it decomposes economic theories into four different realms of proof: mathematical, "as-if" empirical, causal, and welfare.
Also Related (linked on fb just today from a friend who's a PhD psychologist): Priming: does it really exist?. An interesting example of the difficulty of experimental repeatability and theory proving. (incidentally, Kahnemann was the speaker at CMU's doctoral hooding ceremony in April. It was actually a good keynote lecture).
Other progress: I am still in the middle of several texts, reading in parallel instead of series, I guess.
Scientific Method in Practice: Still on Chapter 5 (deductive logic). Chapter 6 is "Probability": I think I can probably skip/skim that one being as I've got plenty of experience with probability.
Experimental Design: I've read chapters I-III of R.A. Fisher's Design of Experiments, and a related article: "Misunderstandings between experimentalists and observationalists about causal inference". Imai et al. JRSSA, 2008, vol 171 part 2, pp 481--502. I'm going to have to extend my borrowing time on this book. It's pretty neat to read the words from the horse's mouth, as it were. The lady tasting tea. Darwin's plants. "Student's" t-test. It's quite enlightening to think of the simpler aspects of randomization and the implications of model-based statistical theory on experimental design. On the other hand, the JRSSA article goes into more depth and is providing some good context for the modern repercussions of the book's history lesson.
Incidentally, I learned from Gauch's book that Aristotle originally applied the term "Scientia" to those conclusions that were absolutely provable through uncontested assumptions and logical deduction alone. But this has to be relaxed in the face of the inobservability of truth from the real world without iterations and induction. From reading more about experiments/fixed effects/randomization etc. I see the links between the frequentist "fixed effect" paradigm and this standard of truth. The above books/articles (especially the JRSSA one) are helping me understand that point of view, even if I don't quite believe it yet ;)