Incentives Matter

It’s sad how much of an advantage econ 101 and basic game theory can give you in the real world. Most people don’t have a coherent internal model of how organizations work. Even the brightest few will usually just have a vague sense that accountability or culture or some other bucket of things matter. An intuition for what good and bad look like rather than a mental model. Understanding that incentives shape behaviour and that most but not all people act in accordance with their incentives puts you far ahead of the pack.

Example:

In tech, teams are composed of engineers and product owners. The engineers build things. The product owner(s) decide what subset of possible things to build. One of the clients I work with has a product which is bad. Most of their projects tend to devolve into unworkable messes and need to be rewritten. There are many reasons for this. One of them is that product owners don’t focus on making their product work or be useful. They don’t focus on maximising revenue or user count or user satisfaction or any other metric which is a good proxy for value. Instead they focus on shipping features. They are determined to ship feature X by date Y. Even if feature X will deliver little to no value compared to, say, improving performance. Even if Y is unrealistic and X needs to be watered down to the point that it’s useless and its inclusion is nothing more than a box-ticking exercise.

Why do they do this? Simple. Incentives. If you look at the incentive structure of the product owners, they are not assessed on the revenue the product delivers, its conversion, or any other measure of value. Instead they are handed a list of features by an executive or other firms, negotiate when to deliver those features, and are then judged on whether the features are delivered on time. Deliver on time and all is well. Don’t and you’ll have career problems. Surprise, surprise: people who are not rewarded for maximising value, and who are actively punished for pursuing it (because working on high-value features means delaying lower-value but already-committed features), won’t optimise for value.

Disaggregate Ideology

Imagine you’re living in Germany in the 1930s. You’re still you with your current ethics and beliefs. The Nazis’ ideology includes a number of claims/beliefs:

  • Jews should be killed
  • Aggressive wars of expansion are necessary
  • Dissent should be criminalized
  • 1 + 1 = 2

Does disagreeing with the first three statements mean you should also disagree with the fourth? If you’re optimising for truth, the answer seems to be no. You should judge independent beliefs independently. Yet in reality most people tend to group otherwise unrelated claims based on which political faction or ideology happens to have claimed them first. This is bad. In theory it could be useful: if you don’t have time to examine every belief/claim, assigning higher truth value to claims from groups you trust seems reasonable. In practice, most of the time I’ve seen people do this it has effectively blinded them to ideas outside of the mainstream and served to help them perpetuate their existing beliefs. In my experience it also has a darker side. If a bad group’s ideas can be dismissed out of hand, it’s all too easy to decide a group is bad, dismiss its claims or hold them to higher standards, and then conclude the group really is bad because you don’t believe its claims, and so on.

A few examples of tribes/ideologies that I instinctively dislike but whose claims I partly agree with:

  • Modern feminism says gender norms are bad. I agree.
  • White Nationalists say white people are oppressed/victims of systemic racism. To some extent, I agree.
  • Islamists say the west is corrupt and decadent. 23% of American children live in single mother households. To an extent, I agree.
  • Communists say that heavy capital has disproportionate power. I agree.
  • A flavor of the libertarian right says welfare is bad because it goes to undeserving people. To some extent I agree.

Money Money Money

I’m in the first year of my career. I make more than my whole family put together. I save around £800 a month. Many people I work with make drastically more than me and yet don’t save a penny. This is normal. Most people are irresponsible. You shouldn’t be.

101

Generally speaking, being good with money is simple. Spend less than you earn. Save the difference in index funds. A few caveats:

  • Index funds are the best choice now because they are low risk and outperform other kinds of investment. That may change. The further from 2019 you’re reading this, the more likely it is that index funds are no longer the best option.
  • Spending more than you earn is fine when you’re intentionally drawing down your savings in a sustainable manner. For example, if you retire/FIRE.
  • This advice applies to normal people in normal situations. If you live in a society where your wealth can be taken from you or devalued easily, investing in other, more permanent goods may be better. This is why the Middle Eastern branch of my family would invest in education/signalling, gold and property abroad. Money can be seized, houses repossessed, currency devalued by sanctions. A medical degree is always valuable because every society, even one in the midst of war or breakdown, values doctors.
  • There are other exceptions. Use your judgement.
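To make the compounding concrete, here’s a minimal sketch of what “save the difference in index funds” looks like over time. The £800/month figure is the one from above; the 5% annual real return and the horizons are assumptions for illustration, not predictions.

```python
# Rough sketch: growth of a fixed monthly contribution at an assumed return.
# £800/month comes from the section above; the 5% annual real return and the
# horizons are illustrative assumptions, not predictions.

def future_value(monthly: float, annual_return: float, years: int) -> float:
    monthly_rate = annual_return / 12
    balance = 0.0
    for _ in range(years * 12):
        balance = balance * (1 + monthly_rate) + monthly  # grow, then contribute
    return balance

for years in (10, 20, 30):
    print(f"{years} years: £{future_value(800, 0.05, years):,.0f}")
```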

On Debt

Going into debt for luxury goods/consumption is almost always bad. Taking on debt to invest in yourself, e.g. through education, can be good but still merits serious thought.

On Meaning

For the miser: Money is not meaning. You should not judge your success in life by the amount of money you make or have. You should not spend your time thinking about how much money you have except when doing so is productive. Material wealth is a means to an end, not an end in itself.

For the ascetic: Money does matter. It can buy wisdom, safety, a better future for your family, a longer life, time with people you love and many other things which have value. It can also be wasted on luxuries and consumption.

Scoring

A rough scale of how well you’re doing in terms of financial prudence:

  • Shit tier: Spend more than you earn. Have large amounts of (consumption) debt.
  • Bad tier: Not in debt but living paycheck to paycheck.
  • Okay: Saving ~15% of income. Emergency fund covering 6 months expenditure.
  • Good: Saving ~30% of income. Emergency fund covering 1 year.
  • God tier: 50% savings rate.
  • Marx tier: 70%+ savings rate.

My stats

  • Emergency Fund: 2 months (but I can move back in with my parents so actually more like 6 months)
  • Saving Rate: 27% + 8% pension = 35%
  • Tier: Okay
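As a sketch, the scale above can be written as a small function. One reading (and the one consistent with my own “Okay” rating despite a 35% savings rate) is that the lower tiers require both the savings rate and the emergency fund to clear the bar. The thresholds come from the list; everything else is illustrative.

```python
# Sketch of the scale above. One reading: the Okay/Good tiers require both the
# savings rate and the emergency fund to clear the bar; God/Marx tiers are
# stated only in terms of savings rate. Thresholds come from the list above.

def tier(savings_rate: float, emergency_months: float, consumption_debt: bool = False) -> str:
    if consumption_debt or savings_rate < 0:
        return "Shit tier"
    if savings_rate >= 0.70:
        return "Marx tier"
    if savings_rate >= 0.50:
        return "God tier"
    if savings_rate >= 0.30 and emergency_months >= 12:
        return "Good"
    if savings_rate >= 0.15 and emergency_months >= 6:
        return "Okay"
    return "Bad tier"

# My stats from above: 35% savings rate, ~6 months of effective emergency fund.
print(tier(0.35, emergency_months=6))  # -> "Okay"
```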

Housing and Invisible Problems

I’m on the train. It’s a Saturday and I’m with my girlfriend, travelling to my parents’ house on the outskirts of London. Behind me a man talks about the merits of a 4-bedroom vs a 2-bedroom apartment and how the former costs less per person. I’m a software engineer, just over a year and a half into my career. I make £55k. It’s more than everyone in my family put together. I pay ~£1,000 a month in rent, which is around 35% of my salary.

As the train moves I can see London passing by outside the window. It’s so large. So many people spend so much on housing. Either directly through the money they pay for rent or indirectly with the time they spend commuting. A one-hour commute door to door is two hours a day. Two hours a day is 10 hours per week, 40 per month. A whole additional week working each month without pay or career progress. It’s such a waste.

It’s interesting how some problems are not talked about despite their impact. There are rational reasons to neglect high-impact issues. They can be intractable: the result of adversarial political processes where any intervention will require prolonged conflict against strong coalitions. They can be too costly to solve. They can be solvable and tractable in theory, but their structure can make coordination or coalition-building difficult. If a problem cannot be solved by a single individual or institution and also cannot rally a coalition, it becomes intractable. Still, is this really the case for housing? Maybe. Partially. But I think it’s something else.

The housing crisis has a few characteristics:

  • There are seemingly no Pareto-optimal solutions. Even ignoring specific policies and looking at the problem in the abstract, any policy which radically reduces housing prices or the rate of price growth harms homeowners.
  • There is no accountability. No single person or institution is responsible for housing. It is the crux of no one’s career. Neither bureaucrats nor politicians are incentivised enough to care about it.
  • It’s an invisible problem. The suffering, the poor pushed out of cities, the families damaged by absent parents who spend their lives on the train. All of these harms are slow and cumulative. None are immediately visible or traceable to a single cause.

These partly explain why there is so little political will devoted to the topic. Still, there’s one more important reason: ignorance. Few people realize that the housing crisis is a product of our actions. Most see it as inevitable or normal or somehow a natural result of the market, not as a result of insane overregulation and government failure. When a problem seems natural as opposed to man-made, people care less.

Optimisers vs Drones

Since I gained enough social skills to function in society and organisations, I’ve gradually come to notice a distinction between people. When working as part of an organisation, most people don’t optimise for or really care about outcomes. They just do the kind and amount of work that is normal. A small proportion of people are different. They do care about outcomes, whether that be the firm’s wellbeing or making a great product. They don’t just do what is expected. Instead they look to make the greatest impact. They optimise processes, try to improve their team’s ways of working, try to challenge bad or ineffective policies etc…

Concept labels are useful. Let’s call the first group of people drones. Let’s call the second optimisers. Drones are not independent actors. They are, by and large, a reflection of their environment. In a good firm they do well and are an asset. In a bad organisation they internalise and perpetuate pathological behaviours. A very good culture and team dynamic can transform drones into optimisers, but that’s inordinately hard to do and requires managers who are great leaders and can forge a tribal identity.

Good examples of optimisers are John Boyd and Tara Mac Aulay. Both fought against the current to implement changes which had disproportionate impacts.

When hiring, you should be on the lookout for optimisers. It’s not the only criterion by any means, and an optimiser with an IQ of 50 won’t be much help, but in most high-skill professions, and especially in leadership positions, an optimiser is far more impactful and trustworthy than a drone.

In life, you should aim to be an optimiser and not a drone. This isn’t easy. Going against the flow can have significant personal costs. More than that, thinking for yourself is a skill. It’s like a muscle. If you haven’t done it for most of your life, for whatever reason, your muscle has atrophied, and it’s a long, hard process to get from that state to one where you have a healthy mind and take ownership of your work, your team and your effects on the world.

Identity 101

The philosophy of identity asks a simple question: what makes me, me? It’s valuable because its answer has a lot of implications:

  • Whether killing one of two identical simulations with billions of identical people is murder or not
  • Whether me today is the same person as me tomorrow (if not, the non-identity problem kicks in)
  • Whether uploaded minds are the same as the physical person they were uploaded from.
  • Whether sufficiently similar people count as one person.
  • Etc…

There are a few basic theories of identity.

The first theory is naive physicalism. Who I am is defined by the physical vessel I inhabit. I am me because I have my body. The problem is that this is highly counterintuitive in a number of situations. It says that if I transplant my brain into a cyborg body I am no longer me, which seems wrong because I am the same consciousness with the same memories and thoughts and feelings. It says that if I lose an arm, I am less me, and if I lose enough of my body I am not me. It doesn’t really make sense.

The second theory is continuism. This is the one most people hold. It says that I am me as long as there is a continuous line of consciousness. Even though in 20 years I may be very different from myself today in a number of ways, I would still be me because there is a continuous consciousness that links those two points in time. The problem with this theory is that it’s also counterintuitive. If over the course of 20 years I gradually metamorphose into a fish with an effective human IQ of 0.5, continuism says I’m still the same person. That seems wrong. A goldfish is not me even if its consciousness is directly linked to mine by an unbroken line of experience. There are also other, weaker objections about things like interruptions in consciousness caused by, say, sleep or dying and then receiving CPR.

The final theory, and the one that best aligns with my intuitions, is one I like to call the personspace proximity theory of identity. There are X traits or attributes that a person has. Age. Sight. Hair colour. Memories. Character. Intelligence. Etc… We consider some subset of these traits to be morally relevant to determining a person’s identity. Let’s call that set N. That gives us an N-dimensional space in which a person is a point. Identity is that point. That is you. The further a person moves from that point, the less you they are. Eventually they move far enough, let’s say into goldfish territory, and the difference is so great that they are no longer who they once were. This theory is nice because it avoids the problems of the physicalist and continuist theories. It’s also nice because it’s not discrete. Sudden cliff-edge discontinuities in personhood are strange. Binary identities are weird. Moral reality is continuous, not discrete.
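Here’s a minimal sketch of the idea in code. The trait names, the weights and the “goldfish” cutoff are all invented for illustration, and weighted Euclidean distance is just one possible choice of metric; the point is only that identity comes in degrees of distance, not as a binary.

```python
import math

# Sketch of the personspace proximity theory: a person is a point in an
# N-dimensional trait space, and identity fades with distance from that point.
# Trait names, weights and the cutoff are illustrative assumptions; weighted
# Euclidean distance is just one possible metric.

TRAITS = ["memories", "character", "intelligence", "values"]
WEIGHTS = {"memories": 2.0, "character": 2.0, "intelligence": 1.0, "values": 1.5}

def distance(a: dict, b: dict) -> float:
    return math.sqrt(sum(WEIGHTS[t] * (a[t] - b[t]) ** 2 for t in TRAITS))

def degree_of_identity(a: dict, b: dict, cutoff: float = 2.0) -> float:
    """1.0 = fully you; 0.0 = far enough away that you are no longer you."""
    return max(0.0, 1.0 - distance(a, b) / cutoff)

me_today = {"memories": 1.0, "character": 1.0, "intelligence": 1.0, "values": 1.0}
me_in_20_years = {"memories": 0.7, "character": 0.8, "intelligence": 1.0, "values": 0.9}
goldfish = {"memories": 0.0, "character": 0.0, "intelligence": 0.005, "values": 0.0}

print(degree_of_identity(me_today, me_in_20_years))  # ~0.74: changed, but still mostly you
print(degree_of_identity(me_today, goldfish))        # 0.0: goldfish territory, no longer you
```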

Against Pascal’s Wager

Pascal’s wager says that we should believe in God because the cost of not believing could be eternity in hell while the cost of believing is zero. It’s wrong in a few obvious ways.

  • There is an infinitely large space of possible omnipotent beings. Many would punish faith, not reward it. Hence having faith is not a strictly dominant strategy (see the sketch after this list).
  • Believing is not costless.
    • Submission to evil is bad (yes, most gods are evil.)
    • Having inaccurate beliefs about the world is bad. (If your utility function contains a term for belief accuracy)
    • Making yourself more vulnerable to religious infohazards. (If you believe religion is bad and seductive, and accepting some of its tenets makes you more vulnerable to others.)
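To make the first point concrete, here’s a toy expected-value sketch. The hypotheses, probabilities and payoffs are all invented for illustration; the point is only that once the hypothesis space contains gods who punish faith, believing is no longer a dominant strategy.

```python
# Toy expected-value sketch of the first objection: once the hypothesis space
# includes gods that punish faith, belief is no longer a dominant strategy.
# All probabilities and payoffs are invented for illustration (finite numbers
# stand in for Pascal's infinities to keep the comparison well defined).

HYPOTHESES = [
    # (name, probability, payoff if you believe, payoff if you don't)
    ("no god",              0.90,     0,     0),
    ("god rewarding faith", 0.05,  1000, -1000),
    ("god punishing faith", 0.05, -1000,  1000),
]

def expected_value(believe: bool) -> float:
    return sum(p * (v_yes if believe else v_no) for _, p, v_yes, v_no in HYPOTHESES)

print("EV(believe)       =", expected_value(True))   # 0.0
print("EV(don't believe) =", expected_value(False))  # 0.0
# With a symmetric hypothesis space the wager's asymmetry disappears: the
# argument only goes through if you privilege one particular god a priori.
```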

I think a person’s ability to understand and refute Pascal’s-wager-type arguments is a good litmus test for general argumentative ability, at least in philosophy.