11 Comments

I'm increasingly of the belief that a significant fraction of academic studies are essentially worthless.


Scientific studies are written to be published. Like many things, there is an element of "teaching to the test": writing what the editors and peer reviewers want to see. The gatekeepers want to see strong and novel results. They don't care so much about the functional implications. Meanwhile, in the real world, all we care about are the functional implications.


Like a better-written and less long-winded version of what Judea Pearl was talking about in "The Book of Why".


I find instrumental variable approaches an ingenious solution to many of the ills mentioned above. Somehow IV is still fairly confined to the field of economics, probably because econ is full of simultaneous-equation problems and genuine instruments are fiendishly difficult to find. I find "The Wind of Change: Maritime Technology, Trade, and Economic Development" by Luigi Pascali a great example of this approach to analysing observational data. Using the change in effective distance between countries caused by the switch from sail to steam-powered ships as an instrument, Pascali estimates the effect of trade on per capita income. Note that the size of the change in shipping time from steamboat technology (effective distance) is a geographical feature, largely determined by a country's location vis-à-vis the relevant trade winds. This means the change in effective distance cannot itself be an effect of per capita income, nor can it affect per capita income other than through trade levels. This allows the author to extract the causal arrow from the data, rather than assuming it a priori. Pretty cool.
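In case a concrete illustration helps, here is a minimal two-stage least squares sketch of the IV idea, with entirely made-up data standing in for effective distance, trade, and income (this is not Pascali's actual data or specification, just the mechanics):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

def slope(x, y):
    """Simple regression slope of y on x (covariance / variance)."""
    c = np.cov(x, y)
    return c[0, 1] / c[0, 0]

# Hypothetical data-generating process: U is an unobserved confounder
# that affects both trade and income; Z is the instrument.
U = rng.normal(size=n)
Z = rng.normal(size=n)                                  # e.g. change in effective distance
trade = 0.8 * Z + 0.5 * U + rng.normal(size=n)
income = 1.0 * trade + 0.7 * U + rng.normal(size=n)     # true causal effect of trade = 1.0

# Naive OLS of income on trade is biased by the confounder U.
ols = slope(trade, income)

# Two-stage least squares:
# Stage 1: regress trade on the instrument, keep the fitted values.
trade_hat = slope(Z, trade) * Z
# Stage 2: regress income on the stage-1 fitted values.
iv = slope(trade_hat, income)

print(f"naive OLS estimate: {ols:.2f}")   # ~1.19, pulled away from 1.0 by confounding
print(f"IV (2SLS) estimate: {iv:.2f}")    # ~1.0, the true effect
```

The key assumption doing the work is the one described above: the instrument moves the outcome only through the treatment, so the second-stage slope recovers the causal effect even though a naive regression cannot.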


You may also be interested in this paper, which elaborates on your point that "When writing papers, never say 'cause'. Instead use words like 'associated' or 'risk factor' exactly as if they meant 'cause'." The paper argues that authors should openly acknowledge their causal assumptions and that the taboo against causal statements is bad. Additionally, it argues that with more advanced analytic techniques (I think related to DAGs and methods by Judea Pearl), better assessments of causality are possible: https://journals.sagepub.com/doi/pdf/10.1177/1745691620921521

I don't know much about causal modeling.

On another note, related to "coding", I remember another paper making this point more formally. Their point was that a measurement A (e.g., years of college) is correlated with a "true construct" B (e.g., education), and controlling for A only helps insofar as A is correlated with B. To the extent that it isn't, B could still confound the relationship between X and Y.
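A quick simulation sketch of that point, with made-up numbers: B confounds X and Y, we only observe a noisy proxy A of B, and adjusting for A shrinks the spurious association but does not eliminate it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical setup: B is the true confounder, A is a noisy measurement of B,
# and X has NO causal effect on Y.
B = rng.normal(size=n)
A = B + rng.normal(size=n)                 # proxy: correlated with B, but imperfect
X = 1.0 * B + rng.normal(size=n)
Y = 1.0 * B + rng.normal(size=n)           # true effect of X on Y is zero

def coef_of_x(X, Y, controls=None):
    """OLS coefficient on X from a regression of Y on X (+ optional controls)."""
    cols = [np.ones(n), X] + ([] if controls is None else list(controls))
    M = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(M, Y, rcond=None)
    return beta[1]

print(f"no adjustment:        {coef_of_x(X, Y):.2f}")        # ~0.5, pure confounding
print(f"adjusting for A:      {coef_of_x(X, Y, [A]):.2f}")   # ~0.33, smaller but still nonzero
print(f"adjusting for true B: {coef_of_x(X, Y, [B]):.2f}")   # ~0, confounding removed
```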

Oct 13, 2022·edited Oct 13, 2022

Quick note, off topic but worth mentioning: exercise is NOT what determines how fat people are. The truth is way more complicated than that and exercise is actually a very inefficient method to lose weight.
