Freakonomics
by Stephen Dubner & Steven Levitt
- Personal Finance

As Freakonomics shows, economics ultimately boils down to the study of incentives. Incentives are all around us, whether natural or manufactured (by a parent, teacher, boss, politician or economist). In this episode, we talk about four major areas of economics, using a few stories and studies from the book to flesh out the concepts:
– Moral incentives VS Economic incentives
– Information Asymmetry
– Correlation VS Causation
– Risk Assessment
Grab a copy of Freakonomics here: https://www.bookdepository.com/Freakonomics-Steven-D-Levitt/9780061956270/?a_aid=adamsbooks
Freakonomics Summary
At its roots, economics is simply the study of incentives. It’s all about how we use our limited resources in an attempt to satisfy our unlimited wants and needs. It’s about the trade-offs we make each day. For example, we’re incentivised to go to work and trade our time (a limited resource) for money to buy the things we want. As toddlers, we’re curious to see what’s going on up there on the stove, so we reach up and touch it; when we get burned, we’re incentivised not to touch it again. Our parents praising us for a good score on our 4th grade maths test is an incentive to do some extra homework the night before the next test.
If these incentives don’t occur organically, it’s up to people to create them. A trip to the toy store might be how a parent incentivises their child to eat all of their veggies, or a government agency might fine a company that doesn’t pay its taxes correctly. People in positions of power – parents, teachers, bosses, politicians – set up incentives that encourage people to do more ‘good’ things and fewer ‘bad’ things.
Written by Steven Levitt, an economist, and Stephen Dubner, a journalist, Freakonomics reveals ‘the hidden side of everything’.
Moral Incentives VS Economic Incentives
In Freakonomics we read about Israeli economists who studied how to reduce the number of parents picking their kids up late from day care. They measured the number of late pickups per week over a six-week period to get a baseline figure, then instituted a $3 fine for every late pickup. They measured late pickups for another six-week period and found that after the fine was introduced, the number of late pickups went… UP! The incentive had backfired. Before, being on time was ‘the right thing to do’ and being late meant you might be seen as a bad parent; now, for just $3, you’d bought your way out of the guilt and got yourself some cheap extra babysitting. Switching from a moral incentive (doing the right thing) to an economic incentive (be on time or pay) had unintended consequences. The economists then removed the fine because it clearly didn’t work, but it was too late: late pickups didn’t drop back to pre-fine levels, because parents had already absolved themselves of the ‘right thing’ mindset and late pickups were now expected.
In Predictably Irrational (see book #XX), Dan Ariely describes a similar concept: social norms VS market norms. Interesting things happen when we confuse the two or transition from one to the other. Another study involved an attempt to increase the number of blood donations in the UK in the 1970s. Economists thought that a small monetary reward, given after people had already donated, would be a goodwill gesture to thank them for their time. Instead, they found it made the frequency of donations go DOWN. The reward shifted the moral incentive to an economic one: what used to be a generous, philanthropic act that made people feel good about themselves became a painful and uncomfortable way to make a few bucks.
Correlation VS Causation
Freakonomics shows that two things can happen together at the same time without one necessarily MAKING the other happen. And just because A leads to B doesn’t mean that B leads to A as well. In 2007, a study measured the relative length of index fingers compared to ring fingers in high school students. It found that boys with higher ring-to-index finger ratios scored higher on the maths portion of the SAT, and girls with relatively longer ring fingers had better verbal reasoning scores. This is ‘correlation’ – the two things occur together. But it would be very hard to convince me that this is ‘causation’ – that a longer ring finger makes you better at maths. If that were true, then rather than studying and doing maths practice questions in the lead-up to your exam, you should spend that time stretching out your ring finger and your scores would go through the roof… Hardly seems believable.
We mix up correlation and causation all the time. As a species, we evolved to attach causal stories to the things happening around us. Two cavemen go for a walk; one gets bitten by a snake and dies a few hours later. We link the two events with a story telling us that snakes are dangerous and we should avoid being bitten. It might just have been an unlucky coincidence – your mate had an unrelated heart attack and happened to have been bitten by a harmless snake earlier that day – but the ancestors who were a little more cautious around snakes tended to survive long enough to pass on their genes. Today, though, we need to be very careful not to be tricked by statistics. First, distinguish between correlation and causation (do A and B just happen at the same time, or does one cause the other?); and if there is causation, be careful to specify its direction (does A cause B, or does B cause A?).
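To make the trap concrete, here’s a small toy simulation (our own sketch, not from the book): two variables that never influence each other still correlate strongly, because a hidden third factor – think of age or puberty driving both finger growth and test scores – pushes them both around.

```python
import random

random.seed(0)

# Toy example: a hidden third factor drives both A and B, so they
# correlate strongly even though neither causes the other.
n = 1000
hidden = [random.gauss(0, 1) for _ in range(n)]     # the confounder
a = [h + random.gauss(0, 0.5) for h in hidden]      # e.g. 'finger ratio'
b = [h + random.gauss(0, 0.5) for h in hidden]      # e.g. 'test score'

def corr(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var_x = sum((xi - mx) ** 2 for xi in x)
    var_y = sum((yi - my) ** 2 for yi in y)
    return cov / (var_x * var_y) ** 0.5

print(f"correlation between A and B: {corr(a, b):.2f}")  # strongly positive
```

A and B here are statistically linked, yet changing one would do nothing to the other – exactly the situation where stretching your ring finger won’t raise your maths score.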
Risk Assessment
We humans are horrible at assessing risk. People are more afraid of terrorists than of heart disease, yet the number of people dying in terrorist incidents is almost zero – a rounding error compared with the number who die from heart failure. We’d rather send our daughter to play with the neighbour who has a pool in the backyard than with the neighbour whose parents keep a gun in a locked case in their bedroom closet. But there is roughly one child drowning per 11,000 backyard pools (550 drownings per year across 6 million pools), compared with one child killed per 1,000,000+ guns (175 child deaths per year from 200m+ guns in the US). By these numbers, a swimming pool is roughly 100 times more dangerous to children than a gun.
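The arithmetic behind that comparison is worth spelling out. Here’s a quick back-of-the-envelope check using the figures quoted above (they’re rough estimates, and the ratio lands around 105, in the same ballpark as the figure in the text):

```python
# Back-of-the-envelope check of the pool-vs-gun numbers quoted above.
drownings_per_year = 550
pools = 6_000_000
child_gun_deaths_per_year = 175
guns = 200_000_000

risk_per_pool = drownings_per_year / pools       # ≈ 1 in 11,000
risk_per_gun = child_gun_deaths_per_year / guns  # ≈ 1 in 1,140,000

print(f"1 drowning per {pools / drownings_per_year:,.0f} pools")
print(f"1 gun death per {guns / child_gun_deaths_per_year:,.0f} guns")
print(f"pools are ~{risk_per_pool / risk_per_gun:.0f}x more dangerous per item")
```

The point isn’t the exact multiplier; it’s that dividing deaths by the number of pools or guns gives a per-item risk, and the per-pool risk dwarfs the per-gun risk.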
Our risk assessment is completely skewed. The authors highlight three key factors that screw it up:
- CONTROL: If we feel in control, we view things as safer. We prefer to drive rather than fly, even though far more people die in car crashes than in plane crashes. When we’re in the driver’s seat we feel in control, but in a plane we relinquish all control to the pilot, so it feels riskier. Similarly, we can’t control a terrorist setting off a bomb, but we think we can control how much chocolate and ice cream we eat…
- PRESENT vs FUTURE: We weight the things that could kill us now more heavily than the things that could kill us in the future. A terrorist can kill us right now, but eating too much McDonald’s won’t kill us for a few decades.
- AVAILABILITY: We see the news stories of the horrific plane crashing into the side of a mountain or the kid playing with a gun that accidentally goes off, so these feel more likely than the guy who fell asleep at the wheel and quietly veered off the road, or the child who drowned in a pool without any news coverage. A simple equation might be: Risk = Hazard + Outrage. There is a massive outrage factor when it comes to kids playing with guns or terrorists hijacking planes, so those scenarios feel riskier.
Visit this link to get your copy from our Top 50