Housing and Invisible Problems

I’m on the train. It’s a Saturday and I’m with my girlfriend, travelling to my parents’ house on the outskirts of London. Behind me a man talks about the merits of a 4-bedroom vs a 2-bedroom apartment and how the former costs less per person. I’m a software engineer. Just over a year and a half into my career. I make £55k. It’s more than everyone in my family put together. I pay ~£1,000 a month in rent, which is around 35% of my salary.

As the train moves I can see London passing by outside the window. It’s so large. So many people spend so much on housing, either directly through the money they pay in rent or indirectly with the time they spend commuting. A one-hour commute door to door is two hours a day. Two hours a day is 10 hours per week. 40 per month. A whole additional working week each month, without pay or career progress. It’s such a waste.
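The arithmetic, as a back-of-the-envelope check (the five-day week, four-week month and the 40-hour working week I’m comparing against are assumptions):

```python
# Back-of-the-envelope commute arithmetic. The five-day week, four-week
# month and 40-hour working week are assumptions.
one_way_hours = 1
daily = 2 * one_way_hours      # there and back
weekly = daily * 5             # 10 hours
monthly = weekly * 4           # 40 hours

print(f"{monthly} hours/month = {monthly / 40:.0f} unpaid working week(s)")
```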

It’s interesting how some problems are not talked about despite their impact. There are rational reasons to neglect high-impact issues. They can be intractable: the result of adversarial political processes where any intervention requires prolonged conflict against strong coalitions. They can be too costly to solve. They can be solvable and tractable in theory, but structured in ways that make coordination or coalition building difficult. If a problem cannot be solved by a single individual or institution and also cannot rally a coalition, it becomes intractable in practice. Still, is this really the case for housing? Maybe. Partially. But I think it’s something else.

The housing crisis has a few characteristics:

  • There are seemingly no Pareto-optimal solutions. Even ignoring specific policies and looking at the problem in the abstract, any policy which radically reduces housing prices or the rate of price growth harms homeowners.
  • There is no accountability. No single person or institution is responsible for housing. It is the crux of no one’s career. Neither bureaucrats nor politicians are incentivised enough to care about it.
  • It’s an invisible problem. The suffering, the poor pushed out of cities, the families damaged by absent parents who spend their lives on the train: all of these harms are slow and cumulative. None are immediately visible or traceable to a single cause.

These are partly why there is so little political will devoted to the topic. Still, there’s one more important reason: ignorance. Few people realise that the housing crisis is a product of our actions. Most see it as inevitable, or normal, or somehow a natural result of the market, not as a result of insane overregulation and government failure. When a problem seems natural as opposed to man-made, people care less.

Optimisers vs Drones

Since I gained enough social skills to function in society and organisations, I’ve gradually come to notice a distinction between people. When working as part of an organisation, most people don’t optimise for, or really care about, outcomes. They just do the kind and amount of work that is normal. A small proportion of people are different. They do care about outcomes, whether that be the firm’s wellbeing or making a great product. They don’t just do what is expected. Instead they look to make the greatest impact. They optimise processes, try to improve their team’s ways of working, challenge bad or ineffective policies, etc…

Concept labels are useful. Let’s call the first group of people drones. Let’s call the second optimisers. Drones are not independent actors. They are, by and large, a reflection of their environment. In a good firm they do well and are an asset. In a bad organisation they internalise and perpetuate pathological behaviours. A very good culture and team dynamic can transform drones into optimisers, but that’s inordinately hard to do and requires managers who are great leaders and can forge a tribal identity.

Good examples of optimisers are John Boyd and Tara Mac Aulay. Both fought against the current to implement changes which had disproportionate impact.

When hiring, you should be on the lookout for optimisers. It’s not the only criterion by any means. An optimiser with an IQ of 50 won’t be much help. But in most high-skill professions, and especially in leadership positions, an optimiser is far more impactful and trustworthy than a drone.

In life, you should aim to be an optimiser and not a drone. This isn’t easy. Going against the flow can have significant personal costs. More than that, thinking for yourself is a skill. It’s like a muscle. If you haven’t used it for most of your life, for whatever reason, it has atrophied, and it’s a long, hard process to get from that state to one where you have a healthy mind and take ownership of your work, your team and your effects on the world.

Identity 101

The philosophy of identity asks a simple question: what makes me, me? It’s valuable because its answer has a lot of implications:

  • Whether terminating one of two identical simulations, each containing billions of identical people, is murder.
  • Whether the me of today is the same person as the me of tomorrow (if not, the non-identity problem kicks in).
  • Whether uploaded minds are the same as the physical person they were uploaded from.
  • Whether sufficiently similar people count as one person.
  • Etc…

There are a few basic theories of identity.

The first theory is naive physicalism. Who I am is defined by the physical vessel I inhabit. I am me because I have my body. The problem is that this is highly counterintuitive in a number of situations. It says that if I transplant my brain into a cyborg body I am no longer me, which seems wrong because I am the same consciousness with the same memories, thoughts and feelings. It says that if I lose an arm I am less me, and that if I lose enough of my body I am no longer me at all. It doesn’t really make sense.

The second theory is continuism. This is the one most people hold. It says that I am me as long as there is a continuous line of consciousness. Even though in 20 years I may be very different from myself today in a number of ways, I would still be me because there is a continuous consciousness linking those two points in time. The problem is that this theory is also counterintuitive. If over the course of 20 years I gradually metamorphose into a fish with an effective human IQ of 0.5, continuism says I’m still the same person. That seems wrong. A goldfish is not me, even if its consciousness is directly linked to mine by an unbroken line of experience. There are also other, weaker objections about things like interruptions in consciousness caused by, say, sleep, or by dying and then receiving CPR.

The final theory, and the one that best aligns with my intuitions, is one I like to call the personspace proximity theory of identity. There are X traits or attributes that a person has. Age. Sight. Hair colour. Memories. Character. Intelligence. Etc… We consider some subset of these traits to be morally relevant to determining a person’s identity. Let’s call that set N. That gives us an N-dimensional space in which a person is a point. Identity is that point. That point is you. The further a person moves from that point, the less you they are. Move far enough, say into goldfish territory, and the difference is so great that you are no longer who you once were. This theory is nice because it avoids the problems of the physicalist and continuist theories. It’s also nice because it’s not discrete. Sudden cliff-edge discontinuities in personhood are strange. Binary identities are weird. Moral reality is continuous, not discrete.
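Since the theory is basically geometric, it’s easy to sketch in code. Everything below, the traits, the weights and the cut-off, is my own illustrative assumption, not part of the theory itself:

```python
import math

# Toy sketch of personspace proximity. The traits, weights and threshold
# are illustrative assumptions, not part of the theory itself.
TRAITS = ["memories", "character", "intelligence"]
WEIGHTS = {"memories": 2.0, "character": 1.5, "intelligence": 1.0}
THRESHOLD = 1.0  # beyond this distance you are "no longer the same person"

def distance(a: dict, b: dict) -> float:
    """Weighted Euclidean distance between two points in personspace."""
    return math.sqrt(sum(WEIGHTS[t] * (a[t] - b[t]) ** 2 for t in TRAITS))

me_today = {"memories": 1.0, "character": 1.0, "intelligence": 1.0}
me_in_20_years = {"memories": 0.7, "character": 0.9, "intelligence": 1.0}
goldfish = {"memories": 0.0, "character": 0.1, "intelligence": 0.005}

print(distance(me_today, me_in_20_years))  # ~0.44: within the threshold, still me
print(distance(me_today, goldfish))        # ~2.05: past the threshold, not me
```

Note that the raw distance is the honest output here. The boolean threshold is just a convenience, and it is exactly where discreteness sneaks back in.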

Against Pascal’s Wager

Pascal’s wager says that we should believe in God because the cost of not believing could be an eternity in hell, while the cost of believing is zero. It’s wrong in a few obvious ways:

  • There is an infinitely large space of possible omnipotent beings. Many would punish faith, not reward it. Hence having faith is not a strictly dominant strategy (see the sketch after this list).
  • Believing is not costless.
    • Submission to evil is bad (yes, most gods are evil).
    • Having inaccurate beliefs about the world is bad (if your utility function contains a term for belief accuracy).
    • Making yourself more vulnerable to religious infohazards (if you believe religion is bad and seductive, accepting some of its tenets makes you more vulnerable to the rest).
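A toy payoff table makes the first bullet concrete. The gods and payoffs below are made-up illustrative assumptions; the point is purely structural: once the hypothesis space contains a god that punishes faith, “believe” is no longer a dominant strategy.

```python
# Toy payoff table for Pascal's wager. The gods and payoffs are made-up
# illustrative assumptions, not theology.
#                             (payoff if you believe, payoff if you don't)
payoffs = {
    "no god":                 (0.0,   0.0),
    "god who rewards faith":  (1e9,  -1e9),
    "god who punishes faith": (-1e9,  1e9),  # e.g. one that rewards honest doubt
}

# "Believe" is strictly dominant only if it does at least as well in EVERY
# possible state of the world. A single faith-punishing god breaks that.
dominant = all(believe >= disbelieve for believe, disbelieve in payoffs.values())
print("belief is a dominant strategy:", dominant)  # False
```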

I think a person’s ability to understand and refute Pascal’s-wager-style arguments is a good litmus test for general argumentative ability, at least in philosophy.

Against torture

If you read the public discourse, the standard argument for torture is that in some situations the moral harm of torturing someone is outweighed by the moral benefit of preventing some other bad thing. The standard response is to talk about the value of life and human rights. One reason the two sides seldom convince one another is that deontological argumentation is not persuasive to a consequentialist, and vice versa.

What’s a more persuasive consequentialist argument against torture? There are some common arguments: torture doesn’t work; torture causes backlash which strengthens our enemies. I won’t bother repeating these. Some are less common, more broadly applicable, and not brought up often enough. The simplest of these is that clear moral norms are useful and their erosion has serious long-term costs which likely outweigh any short-term gains. One such norm is that society and the state should not abuse human beings. Allowing torture erodes that norm. Another argument worth considering is that our states cannot be trusted to use torture judiciously and morally. Hence even if torture is morally acceptable in certain situations, its legalisation or tolerance would mean it is used in many situations where it is immoral.

I think each of these arguments could have a whole article written about it, with flowery dialogue, facts and complex argumentation. For the norm-erosion argument, it would be about the long moral journey the West has made, how deeply rooted certain beliefs and norms are, and how quickly they can fall away, leaving the worst of society free to come out. It would paint a vivid picture of how our humanitarianism shapes us, how it stems from both Christianity and later the Enlightenment. It would paint an equally vivid picture of the times in history where we rejected these norms and where that led. The Holocaust. The civil wars in Yugoslavia and elsewhere. Slavery. Etc… For the untrustworthy-state argument, it would be a Chomskyesque long exposé of the horrors the US and UK have committed abroad, from funding and arming horrific, torturous groups in Latin America, to consistently crushing secular Arab nationalism and feeding the forces of reaction, to the support for the Khmer Rouge, the mass killing of “socialists” in Indonesia, and so on. At the end it would chronicle or interview people wrongly tortured by the US or UK and the effect it had on them, possibly also talking to a torturer about why they found torture dehumanising and wrong, before ending on a note saying that most experts believe torture is ineffective.

Weaving stories is satisfying. It’s also encouraged and rewarded by our society. It feels good. It isn’t good. A story’s persuasiveness only loosely correlates with its truth value. Most normal people can’t extract arguments from a narrative. Most people, normal or not, don’t even try to. Complexity and stories don’t reveal truth. They hide it.

Counterpoints:

  • Maybe the norm argument is just an unfalsifiable justification I made up to rationalise my existing view that torture is wrong.
  • There are plausible arguments in favour of torture. Some of them also aren’t common in the public discourse. By choosing not to make them and instead giving only the anti-torture argument, I’m lying by omission.

Not all good things are just

There is a distinction between whether an act or event was good and whether it was right. Goodness refers to effects. Rightness refers to justice. The difference between the two is that justice takes motives into account. This is why it is possible to simultaneously believe that colonialism was wrong and that it was good. Wrong because the colonisers often systematically oppressed and abused their subjects. Good because it brought education, peace, medicine, modern agriculture and various other benefits.
This is not to say these are my positions. Just that there is a difference between something being good and something being just, and that distinction is worth remembering.

Infinite regress, circularity or unjustified beliefs

I spent some time talking about metaphysics with my other half. There are physical facts. Jupiter pulls smaller things towards it. The sun is hot. Any epistemic system needs to explain physical facts. Why is the sun hot? Because of nuclear fusion. Why is nuclear fusion happening? Because the sun is large enough for its gravity to force atoms together. The problem is that explaining one fact or law requires referencing another, different fact or law. Saying that X is the case because X is the case isn’t acceptable. We need a deeper reason. The problem then is that for every law or fact you use to explain something, you generate another question asking for that fact or law to be explained in turn. There are only four possible ways this kind of chain can end:

  • Infinite regress
  • Circularity
  • Axiomatic/foundational/unjustified beliefs
  • A magic fact/law which is self-explaining and requires no explanation.

Circularity and infinite regress are unsatisfying and illogical. Systems which accept them are usually just trying to hide the fact that they, like any system of beliefs, rest on axiomatic beliefs which are not empirically justified. Finding a magic self-justifying law or fact seems implausible at best, if not downright impossible: claims like “A happens because A happens” don’t satisfy any reasonable notion of explanation. Hence the only option left is accepting that any thought system will inevitably rest on some kind of bedrock which is not justifiable (there’s a toy sketch of these chain endings after the list below). For our current science, that bedrock could include axiomatic beliefs like:

  • That the future will be like the past in certain important ways (e.g. gravity won’t just disappear next Tuesday).
  • That our empirical observations about the universe are mostly true (we’re not in a simulation being fed false input).
  • That logic is true.

Here’s a good general rule: Any system which claims it assumes nothing is lying or badly wrong.