I suspect that AI will end humanity

Part 1 of the [[Will AI kill us all]] sequence

Recently I’ve come to the conclusion that AI is probably the greatest X-Risk humanity faces. I’ve also changed my mind quite a bit about timelines and now expect with something like 80% credence that everyone dies by 2100 from AI. Most of the 20% is uncertainty about the reliability of my own thought processes and worries about social contagion etc…

Thoughts on AI

  • why I think we all die
    • It will be much smarter than we are
      • train stop analogy
        • analogy
          • imagine a train going at full speed across a railway crossing a desert
          • the railway is 100km long
          • at some point, there is a roughly 200m long platform
          • the train has an automated braking system that randomly applies the brakes at some point during the journey
          • Q: how likely is it the train will stop alongside the platform?
          • A: not very likely at all (see the worked probability after this outline)
        • explanation
          • imagine a scale of intelligence going from 0 – 100
          • commonly we think of 0 as being a dumb human/child and 100 as being Einstein/John von Neumann etc…
          • this is incorrect. In reality 1 is a small cell, 3 is a mammal, 4 is a human, 10 is an intelligence so far beyond us we can’t comprehend it etc…
          • if the range from dumb human to smart human is very small as a % of the overall possible range, it’s unlikely AI will happen to reach its natural limit just in that range
      • no reason to believe human brain is magic natural limit
        • natural selection is highly inefficient vs intentionally designed solutions
        • for most things, we can outdo nature by several orders of magnitude e.g:
          • mach 7 hypersonic vehicle vs the fastest animal
          • titanium and other alloys vs the strongest tree/bone
          • nuclear weapons, guns, blades vs predators claws
          • vaccines, drugs vs natural immune systems
        • by default, we should probably assume that the minds we one day build will be much, much better than the best (human) minds which natural selection has built
    • it will probably be unaligned
      • mindspace is large
        • all agents have some kind of utility function
        • the space of possible utility functions is incredibly vast
        • the space of utility functions resembling anything we humans would recognise (even things we would recognise as bad human morality like e.g: fascism) is a tiny % of the overall space
        • absent very significant effort, the utility function an AI ends up having will be very, very alien and strange to us
      • alignment is hard
        • mesa optimizers
        • deceptive alignment
          • If you want an agent to do X it will do X while it’s weaker than you. Once it’s much stronger than you and no longer needs to care about what you want it will stop doing X and do Y instead. That also means that it’s hard to tell if an agent is actually aligned or just playing you.
        • interpretability
          • At the moment, it’s impossible to really know what an AI wants, what it values or what it’s thinking
          • hence, it seems unlikely we’ll know how aligned AIs are or whether they’re trying to deceive us
      • bad incentives/dynamics
        • commercial races
        • military races
        • alignment harder than application = we do application first
      • breaking point argument
        • everything that worked before breaks the moment you go from 80 IQ systems to 280 IQ systems
    • it will kill us all
      • it will want to kill us all
        • humans could destroy it/switch it off
        • humans are made of atoms which can be used for other things
      • it will be able to kill us all
        • If something is much smarter than you, it can outplay you at virtually every game you can conceive of. This includes manipulation, cyber-security, AI research, biotech etc…
  • a few standard counterarguments and why I think they don’t really make sense
    • AI will stop at human level or close to it
      • see above
    • we can put it in a box/airgapped system
      • this won’t happen in reality. Commercial and military systems are and will be fully networked
      • if it did happen:
        • the moment you interact with it, it can manipulate you into doing anything including letting it out
        • even if we think it’s airgapped, its understanding of physics, computers and signal processing will be far beyond ours so it may well be able to find a way around the air-gapping
    • it won’t hate us
      • see above
    • AIs won’t be agentic by default
      • I think this is the most likely objection
      • heavily depends on what ML paradigm is used for AI
      • I think there are strong incentives to make agentic AIs for both firms and govs as agentic systems are far more useful
    • people will realise it’s bad and start to regulate
      • no evidence of this happening ATM
      • don’t think this will happen given
        • huge commercial and military incentives to speed ahead
        • nothing bad happens until you hit the part of the slope where you go from below human to very beyond human AI quickly (AKA there’s no fire alarm for AGI)
      • don’t believe China will do it
      • not sure how a ban would work? Two factor model: AI = algo strength + amount of compute. Do you ban anyone having more than X GPUs in a data center? Ban cloud providers from providing GPUs for model training? Ban algo research in CS journals?
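
A rough worked version of the train analogy above, assuming the braking point is uniformly distributed along the 100km track:

$$P(\text{train stops alongside the platform}) = \frac{200\ \text{m}}{100{,}000\ \text{m}} = 0.002 = 0.2\%$$

By the same logic, if the dumb-human-to-smart-human band is a similarly tiny fraction of the full intelligence scale, it’s similarly unlikely that AI’s natural limit happens to fall inside it.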

Feb 24th 2023 Masked Dinner Party

I’m hosting a (dinner if not too many people) party.

What to expect:

  • a wide spectrum of competitive debaters, philosophy peeps and rationalist/ea aligned people
  • good food

Ground rules:

  • no drugs other than alcohol or tobacco
  • +1s are welcome (can be a friend, doesn’t have to be a partner)
  • bring and wear a mask of some kind (suggestion: don’t wear anything too uncomfortable like the horse mask in the pic above. You can get cool looking, comfortable and cheap masks from Amazon)

Where and when:

  • time: 24/02/2023 19:30
  • address: My House (message me if you don’t have it)

Fat Acceptance => is it ever okay to shame people?

I see lots of critiques of fat acceptance on YouTube. Some thoughts:

Two definitions of fat acceptance:

  1. people shouldn’t be shamed/discriminated against/stigmatised for being fat
  2. being fat isn’t bad for a person’s health

2 is obviously false. I’ll write a bit about 1.

Srdjan: Initial thoughts: Shame = making people feel bad for doing something. It’s generally wrong to intentionally make people feel bad. It’s additionally wrong to infringe on others’ autonomy by coercing them. If done at a strong enough intensity, I think shame can be coercive in the same way physical force can be. (e.g: imagine I have a shame ray gun that makes someone feel extremely intense shame when I shoot them. I pre-commit to shooting people who do X with the ray consistently for an hour to incentivise them not to do X. This feels coercive in a similar way that inflicting a strong negative sensation via whipping would be)

Mirror Srdjan: Why is it bad? Punishing murderers and rapists is fine not just for consequential reasons (less crime in the future) but also because they deserve to suffer as they have made others suffer (<== YMMV on this intuition). How is this different for obesity? Shaming the obese may well:

  • lead to less obese people = less suffering from disease and early death
  • be morally correct because the obese lead a decadent lifestyle

Srdjan: Hmmm. Okay, but something seems off here.

  • Punishing obese people seems different from punishing rapists and murderers. Obese people aren’t violating others’ rights or choosing to do immense harm to others. Obese people rather make decisions that harm themselves. Coercing A to stop them infringing on B’s rights is one thing. Coercing A because you think A’s actions are suboptimal for their own interests is a different thing.
  • The consequential point is a different one I’ll address later. To keep it short: I’m not sure about the factual claim that shaming obesity leads to less obesity. I’m also not sure I buy that (coercing A will increase A’s utility) => (coercing A is justified). I could be swayed if people could opt into being shamed e.g: by wearing a "shame me if I do X" bracelet but that’s not the case in reality. I could also maybe be swayed if more than 50% of obese people would rather be shamed than not shamed but again I don’t think this is true. (Counterargument: What if 50+% of all people would like to be shamed and looking only at currently obese people is unfair because it’s selecting for people on whom shaming doesn’t work?)

Mirror Srdjan: Does this logic apply to all shaming for deviation from social norms? Don’t you believe that there should be strong norms that discourage fathers from abandoning their families and children? Isn’t shaming a major way to enforce norms?

Srdjan: I’m deeply unsure here. I guess my thoughts are something like this:

  • Some things are so bad that we should use physical force to stop people doing them (e.g: murder, rape)
  • Other things are bad, but not bad enough to warrant overt coercion. Instead society should judge people doing them as bad and that bad reputation should have some effects. (e.g: A person who cheats on their girl/boyfriend)
  • Crucially, all these bad things are things that affect 3rd parties. Violence, breaking a contract, etc…
  • I’m still just not sure what level of utility gain to individual X, if any, justifies coercing an individual X. Still, I personally would rather live in a society with strong social norms, even enforced by shame, than without them. I’m not sure how to reconcile that. Maybe there are 3rd party effects from a lot of things like e.g: having kids and abandoning them. Maybe that’s a cop-out.

Initial thoughts on What We Owe The Future

I’ve been reading What We Owe The Future as part of an EA book group. Some tentative initial thoughts:

  • The book seems to conflate two versions of longtermism. Longtermism as the philosophical position that almost all value likely exists in the distant future and longtermism as the cause-area for EA.

Philosophical longtermism = the view that most moral value in our timeline exists in the (distant) future.

Thoughts on philosophical longtermism

  • You can optimize for world states and be indifferent to which specific people happen to come to exist in those worlds. This gets around any non-identity objections.
  • If you want to say we have obligations to future people specifically, rather than just obligations to optimize for better future world-states, you have to defend a bunch of really weird claims. (e.g: is not having sex at a precise moment in time immoral because you deny a specific future person the right to life?)
  • My guess is the book will go for a world state optimisation argument. If it does, longtermism as a conclusion is pretty self-evident and shouldn’t require any large argumentative leaps.
  • I think the analogy between spatial location being morally irrelevant and temporal location being morally irrelevant is highly interesting. I’ve given it a lot of thought myself and have come to conclusions that are similar and potentially more radical (e.g: whether a mind is or will ever be physically instantiated is not a morally relevant factor)

Thoughts on longtermism as a cause-area

  • debating 101: when someone is saying something that seems a) unclear or b) so obviously true that disagreeing with it would be stupid, you should be sceptical and try to pin down their position. What does longtermism as a cause area actually mean? Does it mean x-risk? But we already care a great deal about x-risk. Does it mean trying to influence the far, far future? But then isn’t tractability a huge concern? How would a band leader in Africa 200k years ago have been able to predict the impact of their actions on today?
  • I worry quite a bit about corruption and staying honest. Most charities are highly inefficient because the charity sector, unlike for-profits, does not have meaningful natural selection for effectiveness. EA partially tries to alleviate this by funding effective charities and improving collective epistemic norms. Don’t "longtermist" charities with impossible to gauge impacts inevitably mean funding and prestige distribution just reverts to the standard model of effectiveness not mattering at all? Won’t that mean that, even if longtermism were hypothetically viable, our selection mechanism would be so broken that we’d pretty much only get useless charities?

Why are there so many devastating objections to modern ethical theories?

Egalitarians hold that equality is good in and of itself. I used to think that egalitarianism wasn’t just wrong, but outright crazy and morally unintuitive to almost everyone. Why? Consider the following world:

  • 10 billionaires
  • 100 millionaires
  • 1’000 normal people with $10’000
  • 1 homeless person

If you could press a button to reduce the income/wealth of everyone in that society to the level of the homeless person, would you do it? The answer seems clearly to be no. Yet the "everyone is homeless" world would be more equal. Hence egalitarianism is wrong, right?

Here’s the problem: the same kinds of objections apply to all popular moral systems:

  • Deontology: the "would you punch 1 innocent person in the face to stop 100’000’000’000’000’000’000’000 people from being horrifically tortured to death?" objection.
  • Utilitarianism: "A child is dying of terminal cancer alone in the woods. Before it dies we use a star-trek teleporter to beam it from the woods to a pedophiles’ group cell in prison. They spend an hour gangraping the child before it dies. Assuming their pleasure > the child’s pain and there are no other societal effects, is this okay?"

Looking at egalitarianism with fresher eyes, I think that the real problem with egalitarianism as talked about in philosophy is the same problem deontology and utilitarianism face. The core issue is that humans value a variety of moral goods: equality, fairness, desert, procedural constraints, outcomes etc…. Any singular theory of the good which is based on a singular part of the moral puzzle will always have many, many cases where it gives deeply morally counterintuitive results.

I’ve written about the problems with singular conceptions of the good before, but the more I consider it the more I’m coming to see it as being the core issue behind the serious issues most modern ethical theories have.

Draft: Use an RSS feed

There are many parts of being smart. One of the most important parts is consuming and processing what others say. Most of our ideas and thoughts aren’t original. They’re reflections of, responses to or minor refinements of what others have thought. Even the ideas we have more often than not spring from debates, topics and thoughts we’ve interacted with. The point here is that your information diet matters and it matters a lot.

What problems do people typically have with their information diet? I think there are two:

  • Not consuming as much high quality information as they would want
  • Consuming too much low quality information

Missing out on high quality information is bad because you miss out on important ideas, arguments or insights you could otherwise have had. Consuming low quality information is often bad because it wastes time but also because it is usually optimized to be addictive and, in the case of things like politics or clickbait news, will actively corrupt you into being more tribal.

(I’m intentionally not digging more into what makes a source "high quality" or "low quality". That’s a whole other rabbit hole and not super relevant to the point of this article)

If you want to optimize your information diet, there are many different things you can do. I think one of the most important is to look critically at the method through which you subscribe to and receive information. I think there are a few such methods people commonly use:

  • A social media feed (facebook, twitter etc…)
  • A subscription to (or just regularly checking) a certain newspaper website, magazine or other media source
  • Email subscriptions
  • Just randomly clicking around to sites/blogs they remember not checking for a while

There are a few problems with these methods.

The problem with email is that it doesn’t scale. It’s easy to subscribe to 10 or so newsletters or blogs but once you’re subscribed to dozens or hundreds your inbox will be flooded.

The problem with randomly clicking around is that

  • you forget sources over time
  • you can be influenced by the addictiveness of sources. The more addictive/seductive a source is, the more you’re likely to remember it and navigate to it.
  • it’s high effort and that means you typically won’t do it that much.

The problems with traditional media are:

  • You’re relying on a single source with its own political biases and gatekeeping
  • Most traditional media caters to normal people. Normal people are tribal, irrational, and generally pretty stupid. Hence most media is of low quality. Even high-brow sources like the Economist don’t compare well to blogs like SSC or Scholars Stage

The problem with social media is:

  • It’s optimized to be as addictive as possible and will alter your feed to show you the things you are most likely to click on.
  • It won’t show you everything you subscribe to. Only a filtered subset of it.
  • It will often reinforce filter bubbles by filtering out content it thinks you won’t like/engage with and filtering in content you will like.
  • There’s political bias/censorship.
  • Most good blogs and news sources aren’t on social media.

Still, bad is relative and unless there’s a better alternative, these criticisms mean nothing. I think there is an obvious alternative: RSS. RSS is a standard for publishing changes to a resource. Almost all websites you’re likely to consume content from use it. Using RSS, you can pull changes/updates from a website and essentially get a site specific newsfeed. Using an RSS reader, you can essentially subscribe to any website/blog/youtube channel and organize those streams of content into one or many newsfeeds. You can then interact with those streams in a simple, low effort way much as you would with social media.
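
For the technically inclined, here is a minimal sketch of what an RSS reader does under the hood. It assumes Python with the feedparser library, and the feed URLs are just illustrative placeholders, not a recommended list:

```python
import feedparser  # third-party library: pip install feedparser

# Example feed URLs (swap in whatever blogs/channels you actually follow)
FEEDS = [
    "https://slatestarcodex.com/feed/",   # many WordPress blogs expose /feed/
    "https://example.com/blog/feed.xml",  # placeholder for any other source
]

for url in FEEDS:
    feed = feedparser.parse(url)          # fetch the feed and parse its XML
    print(feed.feed.get("title", url))    # the source's name, if it provides one
    for entry in feed.entries[:5]:        # the five most recent items
        print("  -", entry.get("title", "(untitled)"), entry.get("link", ""))
```

A reader app like Feedly essentially just runs this kind of polling for hundreds of feeds on a schedule and presents the merged results as a timeline.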

This is what my RSS feed app looks like at the moment: ![[FeedlyScreenshot.png]]

I can have hundreds of sources, most of them high-quality blogs that publish less than once a month. No one filters those sources. No one censors them. No one tries to make them more addictive.

I can segment it, having one feed for high quality smart info and another feed for tribal partisan stuff I dip into once a month to get a feel for the pulse of the political climate.

I can dip into it whenever I need stimulation/am bored. Rather than a bus ride wasted on scrolling facebook, I instead scroll through my smart feed, pick an article and end up usually having spent my time well and enriched my mind.

It’s amazing and brilliant and more people should do it. If you don’t, I strongly recommend trying it.

Counterfactual Morality

When I wrote about allied bomber crews in WW2, I said that they were immoral partly because had they been on the wrong side of the war, they would still have done the same things. I think the idea of judging people by their counterfactual moral actions is one which I’ve internalized and held for years but never directly named and described so here goes.

Often when we want to judge a person’s moral character, we look at either their actions or their intentions.

Looking at actions alone is initially appealing but has a core problem. Consider two people, Bill the psycho and Joe the normal person. Bill enjoys raping, torturing and hurting others at least some of the time. He has no regard for morals as we understand them and just acts to benefit himself. Joe is a normal person. Imagine both Bill and Joe are placed in an identical situation. They are at work. A coworker leaves their wallet on the table while going to the bathroom. Neither Bill nor Joe steals the wallet. Joe doesn’t steal it because, even were he sure he could get away with it, he won’t do something he considers morally wrong, in this case stealing. Bill doesn’t steal it because the small chance of being caught or even suspected outweighs the benefit he would get from the $50 or so he can expect to find inside. Were the calculus of self-interest different, he would steal without hesitation. This scenario illustrates the main problem with judging people by their actions: it doesn’t take intent into account. When judging whether a person is good, not whether they are a force for good, intent matters. Stabbing a person in the heart with a knife because you hate them and stabbing a person with a knife by accident while performing lifesaving surgery are morally different, and that’s not a difference looking at acts alone can account for.

Now, this doesn’t mean that judging by actions is clearly inferior. There are reasons why looking at actions can be useful:

  • practically, intent is inaccessible while actions are observable
  • the elephant in the brain hypothesis: humans self-deceive. Evolution has optimized us to act in a way that is best for us while generating plausible stories about why our actions are actually good/virtuous/moral in order to impress our social groups.
  • the "is a person good" vs "is a person a force for good" distinction I wrote about in being good vs doing good

The naive approach to solving this kind of problem is simple: take intent into account. The problem with this is that a person’s intent isn’t the only thing we care about. Consider the following cases.

Case 1. C the Coward and H the Hero are two people. Both believe that raping people is bad. Under normal circumstances both wouldn’t rape (action) and both would have the same primary reason for not raping (they think it’s wrong). If a warlord captured them and tried to force them to rape, Coward would rather rape than have a single finger broken by the warlord’s men. Hero would rather be tortured to death than rape. (Assume they have equal sensitivity to pain/self-control and so are paying the same "cost" for defiance). In normal life H and C have the same moral beliefs/intentions ("rape is bad and I won’t do it") and the same action (not raping). Yet when push comes to shove only H is actually willing to live by their morals.

Case 2. M the Midwit and F the Free-Thinker are two people. While living in a moderate liberal democracy they both have a fairly standard set of civilizational middle-class beliefs including the belief that killing/imprisoning/making people destitute because of their ethnicity/religion/class/politics is wrong. The difference is that M just holds these beliefs because that’s what everyone around them believes whereas F holds them as a result of consideration of the various reasons for and against tolerance. Time goes on and a new illiberal party/social movement becomes prominent and then dominant. In the new environment most people are exposed to arguments for intolerance. M becomes intolerant, matching their new environmental norm. F does not. (Note that this is different from the "would you believe in slavery if you were born in ancient Greece" argument. It’s about people who have been exposed to anti-slavery worldviews/arguments, not about reasoning them out from scratch)

I think both the cases above illustrate a different trait we care about when assessing whether someone is a good person. The specific traits in the examples are courage and wisdom but more generally speaking let’s call what we care about having environmentally-independent morality. Many people only have morals when the costs of doing so are low or negative. They don’t steal when stealing is not in their interest anyway. They won’t lynch other races when lynching is socially unacceptable and likely punishable. These people don’t actually have morals independent of their environment, rather they’re just reflections of whatever the prevailing game-theoretic equilibrium, social norm or epistemic environment they happen to be in.

There’s a lot to work out here.

  • Aren’t all people a product of their environment?
    • Yes but to different extents
  • Isn’t determinism true, free will an illusion and hence there’s no actual difference in how far people have "environmentally independent morality"?
    • Yes on the object level but on the messy level of practical abstractions we make to deal with our incomplete information, env-independent morality is still a useful concept in the same way as saying someone is unpredictable/erratic is useful even though we realize that at the level of atoms, an "erratic" person is just as predictable as everyone else
  • Can’t you include env-independence under intent or actions? Either as firmness of intent or as the range of environments in which a person would act in a certain way? (e.g: don’t just look at how a person acts now, look at how they will act in the weighted average of all future timelines)?
    • yes
  • Isn’t it a bit quick to assume that cowardice/midwittery are morally blameworthy? Not everyone has a high IQ/innate courage.
    • Being morally blameworthy is one thing. Being a good or bad person as far as practical judgements go is a different thing. Consider the case of X who, due to a brain defect beyond their control, is absolutely sadistic and evil. Even if you believe they’re not morally blameworthy, you probably still think they’re evil in the sense of practically treating them differently.
    • This is a tricky question I’ll write about in the future

I guess overall my message here is fairly simple. When judging people, whether for work, friendship or love, their moral character is probably the most important/heavily weighted trait. When judging that trait, you shouldn’t ask yourself only if that person is good in the present. You should ask yourself how far they would remain good where being good had real costs. Hmmm. As always, a long-winded article leads to a simple, obvious truth that could be said in one sentence. Still, sometimes you have to walk the road to make sure the destination is correct.

Against personal belief in religion

I just finished listening to the Lunar Society podcast episode with [[Charles Murray]], author of The Bell Curve. Near the end of the podcast, there was a discussion on religion. I think it’s worth writing down my thoughts on religion generally.

I went through a phase when I was 6 or so where I transitioned from being religious to being strongly anti-religious. When I was young I felt scared at night. Praying to god helped me not feel afraid. It was as if the prayer protected me from the darkness. At some point a switch flipped and I asked the question: what if the thing in the darkness wasn’t the devil, what if it was god. What if they were one and the same. What if the force in the darkness was the same force that purported to be in the light. I didn’t know at the time but this kind of idea had a long history (Gnosticism).

When I was older I read atheist books a bit. Atheism was part of my identity. (This was a mistake; as always, keep your identity small). I also read or was exposed to parts of the Bible and Quran, some by myself and some through religious education classes in school. The more I read the clearer it became that:

  • Much of what was written in holy books was clearly false
  • Much of what was written was evil

I think both of these problems are independently enough to make me non-religious, but I think the second objection is especially strong. Not everyone has good epistemic norms or knowledge of the facts of history, hence not everyone is capable of spotting the inaccuracies in holy books. Everyone should know that slavery, killing innocents, rape and genocide are bad and any being which rewards and accepts them is not worthy of worship.

I can accept that people believing an evil thing may be socially beneficial. Maybe certain beliefs/norms are better than the alternatives, the things that fill the god shaped hole in people’s minds. Maybe they’re powerful social coordination mechanisms. Still, it being good for most people to believe X does not mean I want to believe it.

A side note here: the obvious problem here is that there are [[Two Definitions of Religion]]. What most modern western religious people believe is very far from what the holy books say. A fair objection to my points above is something along the lines of "Yes the Bible/Quran/Torah is evil but the set of beliefs most self-professed Christians/Jews/Muslims you meet do not include murdering homosexuals or men’s dominion over women". I find this somewhat convincing but I guess that my response is along the lines of:

  • Many of the parts people do believe (e.g: Hell in which sinners burn for eternity) are still sufficiently evil to render the whole evil
  • If you read Mein Kampf and cherry pick all the parts without anti-semitism, racism, calls to violence etc… at some point you’ve gone sufficiently far from the source material that it no longer makes sense to use the same label.

Still, I’m not sure I’m correct. Yes religion as it says in the books is evil. Still if people have religious beliefs that lack the evil parts, (e.g: Quakers, Mormons, most Christians I’ve met) then it seems wrong to judge those beliefs/people more harshly simply because they draw their intellectual lineage from partially dark roots.

Hamilton, morally compromised art, rising above revulsion

A few months ago I started listening to the Hamilton soundtrack. I thought it was brilliant. It took me through his life, a whole person and his world. From beginning to end, virtues and flaws and through the birth of a nation. The last song brought me to tears, which hasn’t happened before.

A few weeks later I noticed that the actors in Hamilton were almost all non-white. This was strange given America is mostly white and actors/broadway even more so. I did some digging and unsurprisingly it turns out this was due to an explicit and open decision to bar whites from pretty much all major roles in the musical.

When I learned that, I couldn’t enjoy the songs any more. The songs hadn’t changed and neither had the characters, but I knew that every actor who took those roles did so knowing whites couldn’t apply. I knew the anti-white racism which is so accepted and normalised was present here too. It soured my feelings on the music.

This reaction was unconscious. Or, I was conscious of it but didn’t choose to have it. I spent a while thinking about whether the reaction was right. I decided that it wasn’t. I’ve read books by dictators, mass murderers and war criminals without a single emotion. I’ve read books advocating for evil wholesale. Why was I having a strong reaction to Hamilton but to nothing else? The answer isn’t that I have a good reason to think Hamilton is worse than Kissinger’s memoirs, it’s that it’s an issue which strikes closer to heart because I’ve lived surrounded by it whereas I haven’t experienced Cambodia or Mao’s China. In other words, my reaction was irrational and hence it was wrong.

I thought a bit more about it and there is a wide range of reasons to teach yourself not only to listen to/watch/play works by people you may find morally reprehensible, but to do so without a negative emotional reaction or tint. Those are:

  • Even evil works written by evil people (e.g: Triumph of the Will) can contain insights and interesting ideas. Mao was evil but his strategy for Guerrilla warfare is interesting.
  • Bad or morally flawed people can still write great/good works. A carpenter who is a rapist makes furniture that is still fine to use.
  • Not reading work you consider to be immoral or by immoral people creates a strong filter bubble where you never step outside of your own worldview. This is bad. Of the people you think are immoral, most would think you are immoral. Majorities of humans throughout history held beliefs we today think are self-evidently reprehensible. If you want to be a good person, you should assume a position of extreme scepticism regarding your ability to come to the correct moral conclusions. Instead of shutting out bad people/ideas, you should intentionally seek them out and engage with them until you can pass the ideological Turing test both internally and externally.

Feelings of revulsion towards people you disagree with, even the worst people such as war criminals, inhibit clarity and good judgement. They’re feelings I have long avoided and want to continue to not have.

It’s always good to look at the other side of an argument. The good arguments for not consuming art by "bad people" are that:

  • you shouldn’t give money to bad people
  • you shouldn’t send a signal to the market/society to make more bad art
  • it has a morally corrosive effect on you

I don’t really find any of these arguments credible.

In terms of giving money and sending signals, you can consume art without paying for it which solves the problem. If it’s status signalling we’re worried about you can also just not tell people you’ve watched it. (There’s a deeper, more fundamental objection here about not punishing people for having non-conforming moral beliefs but I’ll leave that to another article).

In terms of moral corrosion, I agree that watching 10’000 hours of Nazi propaganda makes you more likely to be a Nazi. I still think that assuming that your current moral outlook is fundamentally correct and that your main objective should be to retain it is also pretty unjustified. I guess my opinion here is that listening/watching material has conscious and subconscious effects. The subconscious effects are what we are worried about because they lead to a kind of brainwashing. I think the solution to this is to consume a balance of material expressing different ideologies, people, world views etc… Choosing to consume only things that espouse your current worldview is no different than watching 10k hours of propaganda; you’re still brainwashing yourself, it just so happens that the brainwashing is directed at something you already believe.

Most people can’t say No

I think that many people don’t say "No" enough. I think they do this not because it’s the optimal strategy, but because they want to avoid discomfort. I think this behaviour is damaging, often morally wrong and almost always symptomatic of deeper character problems.

What do I mean when I say that many/most people are reluctant to say "No"? A few examples I’ve seen regularly:

  • Not saying "No" to hanging out and instead generating excuses
  • Not saying "No" when asked if you enjoyed a food/activity/film/book
  • Not saying "No" when asked to do overtime
  • Not saying "No" when a friend/family member asks for help (e.g: money, staying over)

Why is not saying no a problem? I think there are two main reasons: dishonesty and doing things you shouldn’t/don’t want to do.

On dishonesty. Most white lie "No"s do not qualify as lying because the person being told the white lie knows it is one. Still, there’s a difference between lying (an intentional attempt to mislead) and being dishonest (saying things you know to be untrue). White lies are not lies but they are dishonest. That’s less bad but still bad.

Why is dishonesty bad? I’m not sure. I can imagine that on a case by case basis maybe some degree of dishonesty is good. Telling people you’re happy to see them. Smiling when you don’t feel happy. Etc… Still, I feel an instinctive dislike. I guess that’s for two reasons:

  • It’s still manipulating others, which is bad
  • I think there’s something dangerous about normalising dishonesty for yourself, about habituating yourself to it. I think the natural state of human beings is one of preference falsification, conformity and dishonesty. I think it’s good to have strong norms against these kind of behaviours and easy to slip and fall away from those norms and regress into a natural state.

Other than dishonesty, I think not saying no is a problem because it also leads to a lack of assertiveness. Most people are highly agreeable. Hence for most people it is good to train disagreeableness, at least to the extent that they can become able to go against others or groups when it is warranted. A simple maxim for me has always been "If you can’t say no to a cinema trip, you wouldn’t say no to driving the train to Auschwitz". It’s extreme and sounds silly but I think it’s true nonetheless.