Project Idea: Lifeboat Initiative

There are many existential risks which would either destroy human civilisation or set us back thousands of years. Many of these risks could be mitigated by building sealed, self-sufficient underground cities where humanity could survive to rebuild/repopulate the earth.

Types of risks this could potentially help with:

  • super-lethal engineered pandemics/nano-plagues
  • certain kinds of memetic catastrophes
  • decade long ice ages or hothouse earth type scenarios
  • full nuclear exchange (this wouldn’t wipe out civilisation, just Europe + America + Asia, so the project would only really be relevant in future scenarios where nuclear war becomes more widespread/damaging due to tech advancement)

Key problems assuming funding/resources are available:

  • Achieving a sufficient level of isolation from the external environment + self-sufficiency.
  • If we’re looking at 100+ years we need a viable breeding human population. That’s a lot of people.
  • Maintaining the population during long periods when no crisis is evident + stopping people from breaking in when a crisis is evident (the latter is solved by having a remote enough location)
  • Maintaining political neutrality + support

Thoughts on the justification of state coercion

Over the years I’ve thought a bit about justifications for state violence. I think it’s useful for me to lay out my reasoning here.

States exist. States use violence and coercion all the time. If you refuse to give them money, they’ll fine you. If you refuse to pay the fine they’ll imprison you. If you resist imprisonment they’ll kill you. The same applies for refusing to abide by other laws, from attempting to murder another person to attempting to employ someone the state deems should not be allowed to work (illegal immigrants, people without the right occupational licensing, children, etc…)

By default coercing people is wrong. Hence by default we should assume state use of violence is wrong unless there’s another justification.

In some cases, the state’s use of violence is justified on the same grounds that an individual’s use of violence would be justified: because the violence is done in order to stop a rights infringement. e.g: A terrorist wants to shoot up a school but police shoot him dead. A mugger attacks people and the state imprisons him to prevent further attacks.

This defence is convincing but it only applies to a very small subset of state activity, namely enforcing laws against violent or property crimes. The vast majority of state action, e.g: laws on minimum house sizes, product regulations, mandatory education, etc… are not covered by this. What then justifies those other acts?

One argument is the social contract argument, namely that people consent to being governed (meaning "living under the threat of violence if they disobey for their entire lives"). I don’t find this argument plausible for the reasons laid out in this article. The TLDR is that people don’t actually ever agree to such a contract, and the various arguments suggesting that people "implicitly" or "hypothetically" agree are not persuasive.

Another argument is the "democracy" or voting argument. It says that since people vote for a government, the government’s actions are justified. I don’t find this argument persuasive. Majority support for violence does not in itself justify violence. If I invite someone to my house and we vote on whether I get to have their money, the vote going in my favour doesn’t mean I can take their money or use violence on them if they refuse to hand it over. Similarly, if my workplace agrees that women shouldn’t show their bare skin, the fact that 51% or even 99.9% vote in favour does not make it okay to do violence to women who won’t wear veils.

The final argument is the utility argument. The argument loosely goes: coercion is bad but so is living in Somalia, which is what the alternative to a state is. Essentially we trade off some rights violation in the form of coercion for a great deal of overall utility. This is the argument I’m most sympathetic to, but with a few caveats:

  • I think violating rights to gain utility is only justified when the increase in utility is very large compared to the rights being traded away
  • I think this justification still only works for a small % of the coercion existing states engage in
  • I don’t think this justifies paternalistic coercion, which is what much of our states’ coercion amounts to.

The conclusion this leads me to is fairly simple. I think our states morally ought to coerce far less. Some coercion is justified by the large increases in net utility, but much coercion is not. Banning the consumption of highly addictive drugs which are likely to turn people into addicts who then commit crimes: okay. Forcing people to put away X% of their salary in order to get government health insurance: not okay.

Good code and good tests lead to one another. The reverse is also true

In software engineering we write code and we write tests. I think there’s a bi-directional relationship between unit tests and good code.

I think writing good code, where good roughly means clean, makes writing unit tests easy.

I think writing unit tests makes writing good code easy.

I think that this leads to a self-reinforcing dynamic where bad, untested code tends to get worse and more untested over time while good code tends to get better and more tested.

Why does this relationship exist? Let’s look at it from a "Why is good code easy to test" perspective:

  • Small functions that do one thing are easy to write tests for. It’s clear what the paths are and hence what needs to be tested.
  • Pure functions, or functions where impure things (e.g: API clients) are passed in (aka dependency injection), are easy to write tests for. No mocking and minimal implementation knowledge is required.
  • Code with clear typing is easy to write tests for. You know the range of possible inputs you need to test for (e.g: is someVar nullable?). You know what the outputs are or should be.

The converse of all of these is true as well.
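To make the dependency injection point concrete, here’s a minimal Python sketch (the function, the client and the data shapes are all invented for illustration). Because the impure dependency is passed in, the test needs only a tiny fake object, no mocking library:

```python
def total_owed(user_id, client):
    """Sum the unpaid invoice amounts for a user.

    `client` is whatever fetches the invoices (an HTTP client in
    production, a fake in tests) -- the function's own logic stays pure.
    """
    invoices = client.get_invoices(user_id)
    return sum(inv["amount"] for inv in invoices if not inv["paid"])


class FakeInvoiceClient:
    """Test double: no network, no mocking framework, no patching."""

    def get_invoices(self, user_id):
        return [
            {"amount": 100, "paid": True},
            {"amount": 40, "paid": False},
            {"amount": 25, "paid": False},
        ]


# The test knows nothing about HTTP, retries or auth -- only behaviour.
assert total_owed("user-1", FakeInvoiceClient()) == 65
```

If `total_owed` instead constructed its own HTTP client internally, the test would have to reach in and patch module internals, tying it to implementation details.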

Why do tests encourage good code? I think the reasons boil down to writing tests making bad code painful. You feel the pain when you have large methods which do multiple things. You feel the pain when typing is unclear and you have no idea what inputs to test for. Most of all, you feel the pain when the way impurity is handled is bad because you have to reach deep inside your code and mock, making your tests fragile and closely tied to implementation.

That’s one thing. Another thing is that having good tests makes refactoring safe and painless. You feel free to mess around with breaking up the logic in different ways. Bad tests are either insufficient, making you worry about breaking things by accident without knowing, or tied to implementation, making any refactor a pain as you know you’ll have to tinker with tests.

As always, I think craftsmanship in software requires not cutting corners. I also think that bad code is essentially a collective action problem, something I’ll write more about later.

Equilibrium shifts and moral takeovers

Societies, social groups and institutions often seem to rapidly jump from one moral equilibrium to another. Why does this happen?


One model goes like this. Most people are cowardly and self-interested. They care about career success, being liked and so on. They do have some moral preferences but will only express and act on them if there are no significant social costs to doing so.

Imagine you start with a space which is 30/30/30 split between purples, yellows and greens. All three groups have slightly different moral views and express them.

Imagine that purples do two things:

  • gain positions of power in the space
  • begin to punish those who openly disagree with purple views

What happens? A few things.

  • Non-purples are less likely to express their views/disagreement and when they do so, will do it in smaller groups or one-on-one. (To use the Wait But Why model, they speak less often and when they do they have a smaller amplification factor.)
  • It now appears that purple views dominate => further self-censorship
  • The fewer people publicly disagree with purple, the easier it becomes to single out and punish those who do dissent
  • The less public disagreement there is, the more pro-purple arguments dominate the idea space people are exposed to and the more people become genuinely purple

The model here is a simple one where intolerance and even mild social costs are a good way to suppress dissent. The other main factors are preference falsification and speaking out being a collective action problem.

Why do rapid jumps from one equilibrium to another take place? Usually because the initial equilibrium is unstable. Once an event triggers an initial wave of speaking out, people suddenly realize that 1) there are no longer large personal costs to dissent and 2) many others actually agree with them.
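This tipping dynamic can be illustrated with a toy threshold model (my own sketch with made-up numbers, loosely in the spirit of classic cascade models): each person dissents publicly only once the share of people already dissenting exceeds their personal cost threshold.

```python
import random

random.seed(0)
N = 10_000
# Most people need substantial social cover before they'll speak out.
thresholds = [min(max(random.gauss(0.3, 0.1), 0.0), 1.0) for _ in range(N)]


def equilibrium_share(initial_share, rounds=200):
    """Iterate 'speak iff enough others already speak' to a fixed point."""
    share = initial_share
    for _ in range(rounds):
        share = sum(1 for t in thresholds if t <= share) / N
    return share


# A small wave of dissent fizzles out: the silent equilibrium is restored.
print(equilibrium_share(0.05))  # close to 0

# A large enough triggering event tips the space into open dissent.
print(equilibrium_share(0.30))  # close to 1
```

The interesting feature is that nobody’s private preferences change between the two runs; only the starting level of visible dissent does.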

Subvocal lie detectors and technological determinism

People have written about futures where technological developments make totalitarian forms of government far more common. I remember reading articles over the years that speculate on how technology shapes the governance landscape today, how the internet and mass-media make dictatorship harder or how mass-surveillance and the individual targeting it allows makes repressive states like China possible.

A few examples of techs that make state oppression harder/easier:

  • We currently have the tech to strap electrodes to a person’s throat and pick up subvocalised sounds (e.g: their internal monologue). The applications of this are obvious.
  • The internet makes communicating and organising much easier for anyone who dissents against the state. (Yes, censorship is rampant. Still, compare your chances of publishing heretical stories on Twitter vs in a major newspaper, or on WeChat vs the People’s Daily)
  • Facial recognition, mass surveillance and various forms of biometric tracking make it far cheaper in terms of man hours to track down and silence key figures who take part in protests/dissent without large collateral damage or the need for an army of secret policemen
  • More autonomous weapons making popular support less important for regime stability or military success

A more general pattern of thought I’ve always had is about technological determinism. To what extent is the shape of human society determined by our technological environment? Cannons make castle walls less important, defence becomes harder, centralisation and strong states ensue. Agriculture makes it easier to extort and control people. We go from roaming tribes to strongmen to the first states. It’s interesting to think how little control over our own future we may have as a species.

Of course there’s no good way to know how far technology has determined which forms of civilisation or organisation are competitive, how far it will determine them in the future, or how much space there is within the competitive limits imposed by a technological environment. We can just tell stories and argue by example, but that’s not a good way to arrive at the truth.

Against buying illegal drugs, even when they should be legal

Imagine you lived in 1760. You have a sweet tooth. You buy sugar to put in your tea. Sugar is in itself not intrinsically harmful or addictive. But the sugar industry is based largely on slavery and the sugar brand you buy is made by slaves. Are you doing something wrong?

I see a similar parallel to people who buy drugs today. The main argument for legalizing at least certain drugs such as weed is that they’re less addictive and harmful than already legal drugs such as tobacco and alcohol. I think that argument is sound. I still think that buying drugs from criminals is deeply immoral, because those drugs are often grown by slaves and the industry leads to violence and horror for both criminals and ordinary citizens.

My views on how far individuals morally ought to follow state laws are conflicted. But even in the absence of any law, I think buying a good is wrong when the act of purchase contributes to such evil.

TLDR: If you want to do drugs you should grow them yourself, not buy them.

2021 Year in Review

Previous Year: [[2020 In Review]]

N.B: This is my annual retrospective post. It contains no special insights and is 100% skippable.


A good year overall. My economic situation continues to improve. My social situation is somewhat improved. Intellectually, I did not write anywhere near as much as I wanted and I didn’t read much. Still, my good habits (RSS feed aggregation, podcasts, Pocket as a reading backlog) mean that I suspect my overall quantity of high quality reading has increased.


Total posts this year: 15 (+2 vs 2020)

Decent Posts

  • [[Why does tech debt exist]]
  • [[Avoiding problems is easier than fixing them]]
  • [[How far is technological progress deterministic]]
  • [[Fears and Thoughts on humanity moving towards a singleton]]
  • [[Against honouring allied bomber crews]]
  • [[80k podcast with Mushtaq Khan]]
  • [[Startup Idea – Children as a Service]]
  • [[Inefficient writing systems]]
  • [[Stupidity is a problem we should care about more]]
  • [[Ideas flow easily into empty vessels]]
  • [[Unrefined thoughts on some things rationalism is missing vs religions]]
  • [[Strength, not courage, is the second component of goodness]]
  • [[Inspired]]
  • [[Exploiting Crypto Prediction Markets for Fun and Profit]]
  • [[How my school gamed the stats]]

I’ve kept writing at a rate of at least one post a month. This is good but still I feel that both the quantity and quality of what I write is far below what I can achieve. I have dozens of decent ideas each quarter I never write up. In fact, I often write up ideas I think are less good while the "good" ideas languish on the backburner out of misguided perfectionism.

Main aim for this year re writing

Write more often but also write a bit more carelessly. It’s better to write a post expressing a good idea poorly than to not write the post at all.


Total Books Read This Year: 9 (+4 vs 2020)

I continue to read three serials: A Practical Guide To Evil, Pale and Delve.


In terms of other content, I have 67 sources in the smart category of my RSS feed collections (? vs 2020) and 21 high quality podcasts.

I’ve finished my econ study group and worked my way through Principles of Economics by Cowen and Tabarrok.

Main aim for this year re consuming info

I should do another study group. Mathematics sounds like a good next topic.

There are a few high quality sources that are paywalled. I have more money than I need. I should subscribe to:

  • The Diff
  • Razib Khan’s whatever
  • Private Eye
  • Astral Codex Ten
  • Dominic Cummings’ blog

$50 or so a month makes no difference to me financially and is a small price to pay for more knowledge. My avoidance of spending money is not rational, it’s a product of habits learned from when I was much poorer which are maladaptations in my present circumstances.


My economic situation continues to improve. I’ve gone from an annual income of 65k to one of roughly 110k. My net worth has gone from 54k or so to just north of 100k, largely due to saving but also due to strong investment performance. My savings rate continues to stay above 50% despite large one-time costs for buying a house/healthcare. Money compounds strongly: good decisions and bad decisions both have disproportionately large effects if made early in life. I’m thankful for my wealth and for having had the right sources of information and morals to make the correct decisions regarding it.

In the longer term, I don’t think I want software engineering to be what I do with my life. I miss philosophy and think I can make more of an impact there. I also look at a lot of arguments in the field and find them fairly weak, plus I love discussing it more than anything else. It’s worth trying out at some point. Along with other things like politics, a podcast etc… Still, that transition is years in the future.


My social life has improved a bit. At the beginning of the year I reached out to various people and scheduled one hour long monthly talks. These are going well and helping me keep alive/rekindle many old relationships. I have not met any new high-quality people, and that’s something I aim to work on a bit this year, especially once covid is over.

Why does tech debt exist

[[Software Engineering]]

(Raw and unedited. TODO: clean up a bit, use a better analogy. This could take half the space and say just as much)

Imagine you have a factory. In it workers stand on an assembly line and paint widgets. Different workers and teams paint widgets at different rates and in slightly different ways. Teams have only one output: painted widgets per week.

The factory owner wants to maximize production of widgets. From the top down, targets are set which reward people at every level for producing more widgets and punish them for producing fewer. What happens next? Simple. People are incentivised to maximize their production of widgets and so they do, to the best of their ability.

Now let’s change the scenario slightly. Imagine that teams use machines to paint the widgets. Each team has its own machine. A team can choose to use a special paint. This paint is easier to apply and will increase their rate of widget painting by 50% overnight. But the paint damages the machine and consequently will slow the team down by 3%, compounding, for each month it is used.

Would you expect teams to use the special paint?

The answer is probably not. Unless the incentive pressure is particularly harsh and the only concern is surviving the short term, any rational team should recognize that using the paint will make them worse off in the long term. It’s in their self-interest not to use it.

(Not enough slack)

Now let’s change the scenario. Rather than each team having their own painting machine, imagine there’s a single giant machine everyone in the factory shares. The same choice applies. Each team can choose to use a special paint. Using the special paint raises their productivity by 50% but slows down everyone by 3% compounding.

Would you expect teams to use the special paint?

The answer here is probably yes. It’s a classic collective action problem where a large private benefit and a socialised cost make it individually rational for teams to do something that is collectively disastrous. Absent some kind of formal or informal control structure, output-metric incentives will lead to people burning the commons.
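The arithmetic of the two scenarios can be sketched in a few lines of Python (using the made-up numbers from the story: a base rate of 100 widgets per team per month, +50% from the paint, and a 3% compounding slowdown per paint-using team per month):

```python
def own_machine(months, use_paint):
    """Cumulative output of one team that owns its machine."""
    total, speed = 0.0, 1.0
    for _ in range(months):
        total += 100 * speed * (1.5 if use_paint else 1.0)
        if use_paint:
            speed *= 0.97  # the damage hits only this team
    return total


def shared_machine(months, painters, teams=10):
    """Cumulative factory-wide output when all teams share one machine."""
    total, speed = 0.0, 1.0
    for _ in range(months):
        # Every team runs at the shared speed; painters get +50% on top.
        total += teams * 100 * speed + painters * 100 * speed * 0.5
        speed *= 0.97 ** painters  # every painter slows everyone down
    return total


# Private machine: the paint's monthly rate drops below baseline around
# month 15 and cumulative output falls behind by around month 30, so a
# team playing the long game declines it.
assert own_machine(36, use_paint=True) < own_machine(36, use_paint=False)

# Shared machine: if every team defects, factory output collapses
# relative to universal restraint -- yet each team still captured its
# private +50% while bearing only a fraction of the slowdown.
assert shared_machine(24, painters=10) < 0.3 * shared_machine(24, painters=0)
```

Each team’s defection is individually tempting for the same reason it is collectively ruinous: the 50% gain is private while the 3% slowdown is spread across everyone.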

This is why tech debt is such a problem in most low- to mid-tier companies. It’s illegible. Leadership pushes for more visible output (features/tickets). What they get is invisible tar which makes everything take longer and longer and longer, while they’re left clutching their heads and wondering why.

(This is a partial explanation; there are other factors at play too, from some tech debt being genuinely unavoidable to bad individual incentives. I should write another article about the various factors that keep tech debt in check in good firms)

Avoiding problems is easier than fixing them

I think a lot about character, virtue and how to be a better person. One observation is that it’s far easier to avoid having problems in the first place than it is to overcome them once you have them. This applies to many of the more mundane aspects of life:

  • debt
  • addictive substances
  • poor health
  • bad people

The more in debt you get, the harder it is to get out as more and more money is eaten up by interest.

Once addicted to a substance, it’s hard to give up the addiction. Often it’s impossible to completely overcome it. Alcoholics famously say that you never stop being an alcoholic.

Once you let your health degenerate or a health condition progress, it’s often exponentially harder and costlier to treat. It’s harder to treat late-stage cancer than early-stage cancer. Ditto for most diseases. It’s easier to lose weight if you’re a bit fat than if you’re too obese to walk. (Although in fairness weight loss is almost entirely down to diet)

The more you spend time with bad people, the more likely you are to be drawn into their problems/pathologies and to meet other bad people through them.

My general approach in life is to avoid these problems in the first place. I think one of the major causes of people making bad decisions with regard to various kinds of problems is conformity. Hence I think that by ignoring social consensus on something being "okay" and trying to decide for yourself if the risk tradeoff is justified, you can usually come to a better decision. This is especially true of alcohol, which is widely normalized but is actually a highly dangerous, highly addictive drug which causes health damage even when consumed in small quantities.

How far is technological progress deterministic?

Imagine an alternate 21st century. Say the world diverged around 1900. Assume the same rough overall rate of technological development. No golden age, nuclear war, collapse etc… How different would their technological landscape look to ours? Would they have discovered most of the same techs we have, or would they have vastly different technologies and progress in different areas?

This question is a subset of the more general question: how far is history chaotic vs path-dependent? Is our civilization like a river flowing through a canyon, one set path with only minor short-term diversions possible, or like water flowing over a plain where even small changes in initial conditions can carve out different channels?

Some thoughts on what makes a tech highly prevalent vs only existing in some possible worlds:

  • The "distance" or leap needed to reach a new tech from existing tech. Whether the core discoveries that make the tech possible are incremental and linked to existing knowledge or areas of study, or one-off, random insights. (I think the internet may be an example of the latter. Not sure though. Networks between devices were always going to be a thing. Maybe a global network of some kind would have arisen naturally even without the nuclear-war-proof distributed system we got)
  • Whether the tech is gated behind progress and understanding in many fields or requires progress in one field only
  • Whether there is a clear and pressing problem that the tech solves, or whether the uses of the tech only become apparent after it is developed, sometimes decades after
  • Whether a tech is politicized or not. (e.g: Eugenics/selective breeding in the west. We could breed super-geniuses; we don’t because it’s taboo to engage in selective breeding, even if it’s not coercive)

It’s clear that certain technologies would exist in most non-dark-age possible worlds. It’s clear that some subset of tech would not. I doubt MOBAs would have been discovered in other timelines, as their creation seems to be just so accidental.

Why does this matter?

  • It determines how much low-hanging technological fruit may be lying around, open for exploitation if we engage in creative institution design or some other kind of reform/experimentation
  • It determines how far we should expect to be able to predict what the tech of extra-terrestrial civilizations will look like. Often people think of ETs in terms of an overall level of tech development and how their level will compare to ours. A different view is that tech isn’t like a beachball but like a sea urchin. Different civs can have radically different levels of knowledge in different fields. Maybe there exist aliens with amazing materials science but no computers or AI of any kind. Maybe there exist civs with no mathematics but excellent biological spaceships made from pure intuition. (The more likely outcome by far is a dead universe or one filled with optimized AI-ish life expanding at the speed of light)