Problems with Popular Nonfiction
Posted 27 January 2017 - 10:34 PM
From Steven Pinker's New York Times review of Gladwell's What the Dog Saw:
An eclectic essayist is necessarily a dilettante, which is not in itself a bad thing. But Gladwell frequently holds forth about statistics and psychology, and his lack of technical grounding in these subjects can be jarring. He provides misleading definitions of “homology,” “sagittal plane” and “power law” and quotes an expert speaking about an “igon value” (that’s eigenvalue, a basic concept in linear algebra). In the spirit of Gladwell, who likes to give portentous names to his aperçus, I will call this the Igon Value Problem: when a writer’s education on a topic consists in interviewing an expert, he is apt to offer generalizations that are banal, obtuse or flat wrong.
The banalities come from a gimmick that can be called the Straw We. First Gladwell disarmingly includes himself and the reader in a dubious consensus — for example, that “we” believe that jailing an executive will end corporate malfeasance, or that geniuses are invariably self-made prodigies or that eliminating a risk can make a system 100 percent safe. He then knocks it down with an ambiguous observation, such as that “risks are not easily manageable, accidents are not easily preventable.” As a generic statement, this is true but trite: of course many things can go wrong in a complex system, and of course people sometimes trade off safety for cost and convenience (we don’t drive to work wearing crash helmets in Mack trucks at 10 miles per hour). But as a more substantive claim that accident investigations are meaningless “rituals of reassurance” with no effect on safety, or that people have a “fundamental tendency to compensate for lower risks in one area by taking greater risks in another,” it is demonstrably false.
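For anyone rusty on the punchline: an eigenvalue of a matrix A is a scalar λ for which some nonzero vector v satisfies Av = λv. A minimal sketch (the matrix here is an arbitrary illustration, nothing from the review):

```python
import numpy as np

# Eigenvalues/eigenvectors of a small symmetric matrix.
# For each eigenpair, A @ v equals lam * v.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)
print(np.sort(eigvals))  # -> [1. 3.]

# Verify the defining identity A v = lambda v for each pair
# (columns of eigvecs are the eigenvectors).
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```

Not exactly an obscure concept to garble into "igon value."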
Posted 27 January 2017 - 10:38 PM
And specifically on how pop science is better in essay form than book form:
The reasoning in “Outliers,” which consists of cherry-picked anecdotes, post-hoc sophistry and false dichotomies, had me gnawing on my Kindle. Fortunately for “What the Dog Saw,” the essay format is a better showcase for Gladwell’s talents, because the constraints of length and editors yield a higher ratio of fact to fancy. Readers have much to learn from Gladwell the journalist and essayist. But when it comes to Gladwell the social scientist, they should watch out for those igon values.
Posted 01 February 2017 - 02:20 AM
I wish I had time to write a detailed critique of Mark Lilla. I am still upset that a professor can write things like "structuralism is a social science."
Posted 01 February 2017 - 02:50 AM
well, he's a political scientist...
Posted 01 February 2017 - 03:35 AM
Those books are so well-reviewed, too.
Posted 01 February 2017 - 11:30 PM
Posted 01 February 2017 - 11:46 PM
Lévi-Strauss came up with structuralism. Deliver me.
(But you probably mean the Gladwell-oid genre, which is different.)
Posted 02 February 2017 - 12:34 AM
Possibly they're less well-edited because they're not meant to stand on their own? If the essays were that good, why not publish them in magazines as they're written?
Posted 15 April 2017 - 01:33 AM
As much as I enjoy hating on Michael Lewis, Tamsin Shaw's review of The Undoing Project in the NYRB is pretty unfair, and pedantic to the point of being almost incoherent: http://www.nybooks.c...d-manipulators/
These passages are probably worse to read than anything from Lewis's book:
In Thinking, Fast and Slow, Kahneman characterizes System One intuitions as fast and automatic, whereas System Two reasoning is slow and deliberate. In other words, he characterizes our intuitive judgments phenomenologically, by describing the speed and effortlessness with which they come to us. They are, in effect, snap judgments. When philosophers describe our reliance on intuition, however, they are not concerned with the phenomenology of judgments per se but with the architecture of justification.
We have to rely on intuition, they contend, where our discursive justifications come to an end, for instance in the fundamental laws of logic, such as the principle of noncontradiction, or basic rules of inference. We cannot justify our belief in these laws in ways that don’t beg further questions. Our justification for employing them rests on our finding them self-evident. We cannot deliberate rationally without them. Since they are the necessary basis for any deliberative thought, we cannot characterize mental functions as straightforwardly belonging to an intuitive System One or a deliberative System Two.
Paraphrased: "Ha ha, philosophers like me mean something different when we say 'intuition', therefore Kahneman and Tversky are wrong"
Similarly, when people’s judgments appear to be affected by irrelevant stimuli, for example a reminder of our mortality seeming to make us more risk-averse (priming effects, that is), a very large number of potential causal factors would have to be ruled out before such irrational biases could be confidently described as features intrinsic to System One. If it is not a simple task to divide thinking into two separate systems, it will not be easy to reduce the complex interactions between unconscious biases, background beliefs, and deliberation in any given case to an identifiable and systematic error.
These objections, if correct, would suggest that many of the psychological experiments Kahneman cites in Thinking, Fast and Slow would be impossible to replicate. And indeed the very year that it was published a replicability crisis emerged in the field of psychology, but most severely in social psychology. The psychologist Ulrich Schimmack has recently created a Replicability Index that analyzes the statistical significance of published results in psychology. He and his collaborators, Moritz Heene and Kamini Kesavan, have applied this to the studies cited in Thinking, Fast and Slow to predict how replicable they will be, assigning letter grades to each chapter. Kahneman and Tversky’s own work gets good grades, but many other studies fare very poorly. The chapter on priming, for example, gets an F. As reported in Slate, the overall grade of the chapters assessed so far is a C-. Kahneman has posted a gracious response to their findings, regretting that he cited studies that used such small sample sizes.
Which is an implausible and stupid interpretation of the replication crisis; the crisis is mostly explained by really shoddy application of statistics.
It saddens me how poorly this crowd appears to understand relatively basic things, preferring instead to spout pretentious intellectual bullshit.
Posted 15 April 2017 - 01:42 AM
That second quote is stupid because the replication crisis has very little to do with Shaw's supposed "complex interactions between unconscious biases, background beliefs, and deliberation in any given case to an identifiable and systematic error" and everything to do with those tiny sample sizes and poor statistical practice.
It's stupid rhetorical sleight of hand. If you think the prospect theory and behavioral economics project is wrong, find a real counterexample, not one that's already been explained by something other than your shitty pretentious hypothesis.
It's 2017 and the mandarins at the NYRB apparently still can't even spot their own scientific illiteracy. Or more likely they can, and they're assigning all the psych pieces to Shaw to make some stupid wrongheaded point about the primacy of philosophy over psychology. Assholes.
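The tiny-sample-size point is easy to demonstrate without any philosophy. A minimal simulation sketch (all parameters are made up for illustration, not drawn from any cited study): with small n, a real but modest effect rarely reaches significance, and when it does, the estimated effect is badly inflated, so an honest replication at the same n will usually fail.

```python
import math
import random

def simulate(n, true_effect, trials=5000, rng=random.Random(0)):
    """Fraction of studies reaching significance, and how inflated the
    effect estimate is among the 'significant' ones (winner's curse)."""
    sig = []
    for _ in range(trials):
        # two groups of n draws, sd = 1, true mean difference = true_effect
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(true_effect, 1) for _ in range(n)]
        diff = sum(b) / n - sum(a) / n
        se = math.sqrt(2 / n)        # known-variance standard error
        if diff / se > 1.96:         # crude one-sided z test
            sig.append(diff)
    power = len(sig) / trials
    inflation = (sum(sig) / len(sig)) / true_effect if sig else float("nan")
    return power, inflation

for n in (10, 200):
    power, inflation = simulate(n, true_effect=0.3)
    print(f"n={n:4d}  power={power:.2f}  "
          f"significant estimates are {inflation:.1f}x the true effect")
```

With n=10 per group, only about one study in ten detects the effect, and those that do overestimate it severalfold; with n=200, power is high and the estimates are close to the truth. That, not "complex interactions between unconscious biases, background beliefs, and deliberation," is what failed replications of small studies look like.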
Posted 17 July 2018 - 03:39 AM