1. 1. Review
  2. 2. Short overview
  3. 3. Highlights
    1. 3.1. Book 1: Map and Territory
      1. 3.1.0.1. What do I mean by Rationality?
      2. 3.1.0.2. It is rational to feel.
      3. 3.1.0.3. The ancient tree parable
      4. 3.1.0.4. Belief in belief
      5. 3.1.0.5. Aumann’s Agreement theorem:
      6. 3.1.0.6. Occam’s Razor
      7. 3.1.0.7. Absence of evidence is evidence of absence
      8. 3.1.0.8. Conservation of Expected Evidence
      9. 3.1.0.9. Fake explanations
      10. 3.1.0.10. Semantic stopsigns - why ask the next question in one case (science) and not the other (religion)?
      11. 3.1.0.11. Mysterious answers to mysterious questions
      12. 3.1.0.12. Lawful uncertainty - if 70% of all cards are blue, you should guess that the next card is blue, not match probabilities in your guesses.
      13. 3.1.0.13. Terminator is not an illustrative historical case from another planet
      14. 3.1.0.14. Interlude
  4. 3.2. Book 2: How To Actually Change Your Mind
    1. 3.2.0.1. The proper use of humility
    2. 3.2.0.2. The best is the enemy of good
    3. 3.2.0.3. The fallacy of grey
    4. 3.2.0.4. Infinite certainty
    5. 3.2.0.5. Your rationality is my business
    6. 3.2.0.6. Correspondence bias - feeling normal
    7. 3.2.0.7. Fiction vs non-fiction
    8. 3.2.0.8. The bottom line
    9. 3.2.0.9. Rationalization vs rationality
    10. 3.2.0.10. Specify the argument in advance
    11. 3.2.0.11. Motivated skepticism
    12. 3.2.0.12. Motivated stopping and motivated continuation
    13. 3.2.0.13. An important thing for young businesses and new-minted consultants to keep in mind:
    14. 3.2.0.14. Dark side - lies and opinions
    15. 3.2.0.15. Moore’s paradox:
    16. 3.2.0.16. Do we believe everything we’re told?
    17. 3.2.0.17. The “Outside the box” box - lovely chapter!
    18. 3.2.0.18. Tell about future (2019) to the people from the past (1901)
  5. 3.2.1. Deep
    1. 3.2.1.1. To sound deep
    2. 3.2.1.2. To seem deep
    3. 3.2.1.3. To be deep
    4. 3.2.1.4. We change our mind less often than we think
    5. 3.2.1.5. Hold off on proposing solutions
    6. 3.2.1.6. Genetic heuristic - rules of thumb:
    7. 3.2.1.7. Pressure
    8. 3.2.1.8. Cheap holiday shopping
    9. 3.2.1.9. Avoid Happy Death Spiral by:
    10. 3.2.1.10. Avoid Happy Death Spiral not by:
    11. 3.2.1.11. Great ideas
    12. 3.2.1.12. The challenge of pessimism
    13. 3.2.1.13. Spirals of hate
    14. 3.2.1.14. Psychological aspects
    15. 3.2.1.15. Oops
    16. 3.2.1.16. Human beings make mistakes
    17. 3.2.1.17. To avoid professing doubts, remember:
    18. 3.2.1.18. You can face reality:
    19. 3.2.1.19. On shifting your belief probabilities
    20. 3.2.1.20. No escape from rationality laws
    21. 3.2.1.21. Line of retreat
  • 3.3. Book 3: The Machine In The Ghost
    1. 3.3.0.1. Evolution
    2. 3.3.0.2. Evolutionary biology
    3. 3.3.0.3. real-life lesson from AI
    4. 3.3.0.4. Instrumental vs terminal values
    5. 3.3.0.5. Wishes
    6. 3.3.0.6. What is optimism?
    7. 3.3.0.7. Lost purposes
  • 3.3.1. About words
    1. 3.3.1.1. Logic does not help you, it just is
    2. 3.3.1.2. Size of the map and territory
    3. 3.3.1.3. On dictionaries
    4. 3.3.1.4. Taboo your words
    5. 3.3.1.5. Word compression
    6. 3.3.1.6. Definition of a word
    7. 3.3.1.7. On categories
    8. 3.3.1.8. Hidden variables in questions
    9. 3.3.1.9. 37 ways that words can be wrong
  • 3.3.2. Bayes theorem
  • 3.4. Book 4: Mere Reality
    1. 3.4.0.1. Soul
    2. 3.4.0.2. On really dissolving questions
    3. 3.4.0.3. Confusion exists in the map, not the territory
    4. 3.4.0.4. Probability is in the mind
    5. 3.4.0.5. Being true and reliable
    6. 3.4.0.6. Think like reality
    7. 3.4.0.7. Reductionism
    8. 3.4.0.8. Explaining vs explaining away
    9. 3.4.0.9. Taking joy in the merely mundane reality
    10. 3.4.0.10. Physicists vs rationalists
    11. 3.4.0.11. Quantum explanations
    12. 3.4.0.12. Falsifiable and testable (decoherence)
  • 3.4.1. Occam’s Razor
    1. 3.4.1.1. About quantum stuff - Quantum non-realism
    2. 3.4.1.2. How science is supposed to work
    3. 3.4.1.3. Science does not trust your rationality
    4. 3.4.1.4. About the speed of science:
    5. 3.4.1.5. Book recommendations for a careful reasoning example:
    6. 3.4.1.6. Eliezer’s childhood role model
    7. 3.4.1.7. Interlude - A technical explanation of technical explanation
  • 3.5. Book 5: Mere Goodness
    1. 3.5.0.1. Detached level fallacy
    2. 3.5.0.2. Recursive justification
    3. 3.5.0.3. Metaethics:
    4. 3.5.0.4. Value is fragile:
    5. 3.5.0.5. The moral about scope insensitivity:
    6. 3.5.0.6. Shut up and multiply:
    7. 3.5.0.7. Out there to win:
    8. 3.5.0.8. The 12 virtues of rationality:
  • 3.6. Book 6: Becoming Stronger
    1. 3.6.0.1. Useful Japanese terms
    2. 3.6.0.2. Perseverance:
    3. 3.6.0.3. On doing the impossible:
    4. 3.6.0.4. Bystander apathy:
    5. 3.6.0.5. Sins:
  • 3.7. Extra reading materials
    1. 3.7.1. For fun procrastination
    2. 3.7.2. To get a truly alien feeling
    3. 3.7.3. To read from Eliezer’s list
    Rationality - from AI to Zombies by Eliezer Yudkowsky

    Review

    Rationality: From AI to Zombies is a very fitting name for this book (it really does talk about both AI and zombies, and they are related!). Actually, this is not one book but six books compiled under one roof. Each of the six talks about a different topic/aspect of Bayesian rationality at an ever-increasing difficulty level, and the series ends with a book of motivation and some more personal notes from the author.

    Before I enter the realm of the books, a few notes about the author.
    Eliezer Yudkowsky is a Jew with a physicist father and a physician mother (I think). This guy is super smart and works on making friendly AI. The theoretical questions are also included in Rationality and give a very fresh perspective on what the real problems in AI are and how you should approach the field. He became a rationalist because the people around him in childhood, his parents included, could not really answer important questions, which made him an atheist quite early on. Questioning and checking everything from first principles has led him to where he is now.

    This mega-book consists of essays from a blog, grouped by topic and further into books. Rationality has left me with a head full of information that would take more time to process than I left for it while reading. The series started great - the first book, about biases, was super fun, easygoing and very informative, so I kept going in no time. The second and third books were not that easy anymore, but still doable. Then I got stuck on the fourth book - the one about science etc. It was too much for me at the time, so I struggled and then stopped for a while. After some light literature, I returned and finished the series easily.

    For me, the most interesting parts were about religious questions (because this would help in discussions with religious people) and about science and quantum physics (a nice explanation of quantum stuff from a different perspective than usual). After some books, and especially this one, I am super sad I did not read more about the philosophy of science before my studies, or at least during them. Most probably I would be doing something else right now if I had had a framework for what science actually is.

    This book has helped me start to set up a framework of what is right, what is wrong, and how you should and should not think and reason. This is not just a book about a topic. It’s a workbook for everyone who wants to lead a more fulfilling, rational and cause-based life. It has changed my inner world and neurons so much (hopefully for the better!) that I hope it will have a lasting effect.

    BTW, some of the essays are short stories about a Bayesian rationality school/temple that lighten the heavy load a bit. And at least once, one of these short stories totally saved my day (after a bit of a mental breakdown and a big fight with Andrea in Brussels, at the cafeteria terrace). So, thank you Eliezer!


    Short overview

    Book 1: Map and Territory
    What is a belief, and what makes some beliefs work better than others? These four sequences explain the Bayesian notions of rationality, belief, and evidence. A running theme: the things we call “explanations” or “theories” may not always function like maps for navigating the world. As a result, we risk mixing up our mental maps with the other objects in our toolbox.

    Book 2: How to Actually Change Your Mind
    This truth thing seems pretty handy. Why, then, do we keep jumping to conclusions, digging our heels in, and recapitulating the same mistakes? Why are we so bad at acquiring accurate beliefs, and how can we do better? These seven sequences discuss motivated reasoning and confirmation bias, with a special focus on hard-to-spot species of self-deception and the trap of “using arguments as soldiers.”

    Book 3: The Machine in the Ghost
    Why haven’t we evolved to be more rational? Even taking into account resource constraints, it seems like we could be getting a lot more epistemic bang for our evidential buck. To get a realistic picture of how and why our minds execute their biological functions, we need to crack open the hood and see how evolution works, and how our brains work, with more precision. These three sequences illustrate how even philosophers and
    scientists can be led astray when they rely on intuitive, non-technical evolutionary or psychological accounts. By locating our minds within a larger space of goal-directed systems, we can identify some of the peculiarities of human reasoning and appreciate how such systems can “lose their purpose.”

    Book 4: Mere Reality
    What kind of world do we live in? What is our place in that world? Building on the previous sequences’ examples of how evolutionary and cognitive models work, these six sequences explore the nature of mind and the character of physical law. In addition to applying and generalizing past lessons on scientific mysteries and parsimony, these essays raise new questions about the role science should play in individual rationality.

    Book 5: Mere Goodness
    What makes something valuable—morally, or aesthetically, or prudentially? These three sequences ask how we can justify, revise, and naturalize our values and desires. The aim will be to find a way to understand our goals without compromising our efforts to actually achieve them. Here the biggest challenge is knowing when to trust your messy, complicated case-by-case impulses about what’s right and wrong, and when to replace them with simple exceptionless principles.

    Book 6: Becoming Stronger
    How can individuals and communities put all this into practice? These three sequences begin with an autobiographical account of Yudkowsky’s own biggest philosophical blunders, with advice on how he thinks others might do better. The book closes with recommendations for developing evidence-based applied rationality curricula, and for forming groups and institutions to support interested students, educators, researchers, and
    friends.

    Highlights

    Book 1: Map and Territory

    The fish trap exists because of the fish; once you’ve gotten the fish, you can forget the trap. => Carry with you what you can use, so long as it continues to have use; discard the rest.

    What do I mean by Rationality?

    Epistemic rationality: systematically improving the accuracy of your beliefs.
    Instrumental rationality: systematically achieving your values.

    It is rational to feel.

    That which can be destroyed by the truth should be.
    That which the truth nourishes should thrive.

    The ancient tree parable

    If a tree falls in a forest and no one hears it, does it make a sound?
    One says, Yes, it does, for it makes vibrations in the air.
    Another says, No, it doesn’t, for there is no auditory processing in any brain.

    Belief in belief

    The difference between believing a religion and believing in the belief in a religion because it is comfortable/easy for you right now.

    Aumann’s Agreement theorem:

    No two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong.

    Occam’s Razor

    The simplest explanation that fits the facts.
    Measure simplicity with Solomonoff induction - the length of the shortest computer program that produces that description as its output.
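
    A minimal sketch of the idea (my own illustration, not from the book): the true shortest-program length is uncomputable, so as a crude stand-in we can compare zlib-compressed sizes, which at least shows that lawful data has a much shorter description than patternless data.

    ```python
    import random
    import zlib

    # Crude proxy for Solomonoff's measure: the real "shortest program"
    # length is uncomputable, so substitute the compressed byte count.
    def description_length(data: str) -> int:
        return len(zlib.compress(data.encode("utf-8")))

    regular = "ab" * 500  # a highly lawful string of 1000 characters
    random.seed(0)
    noisy = "".join(random.choice("ab") for _ in range(1000))  # patternless

    print(description_length(regular))  # small: a short program regenerates it
    print(description_length(noisy))    # large: nearly incompressible
    ```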

    Absence of evidence is evidence of absence

    Your strength as a rationalist is your ability to be more confused by fiction than by reality; if you are equally good at explaining any outcome, you have zero knowledge. The strength of a model is not what it can explain, but what it can’t, for only prohibitions constrain anticipation.
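
    A toy Bayesian update (the numbers are mine, purely illustrative) makes the symmetry concrete: if observing the evidence would raise your belief in the hypothesis, then failing to observe it must lower your belief.

    ```python
    p_h = 0.5        # prior P(H)
    p_e_h = 0.9      # P(E | H): H strongly predicts the evidence E
    p_e_not_h = 0.3  # P(E | not-H)

    p_e = p_e_h * p_h + p_e_not_h * (1 - p_h)  # P(E) = 0.6
    print(p_e_h * p_h / p_e)                   # P(H | E)     = 0.75
    print((1 - p_e_h) * p_h / (1 - p_e))       # P(H | not-E) = 0.125
    # E raises the belief above 0.5, so its absence must drag it below 0.5.
    ```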

    Conservation of Expected Evidence

    You can only ever seek evidence to test a theory, not to confirm it.
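
    In symbols this is P(H) = P(H|E)·P(E) + P(H|¬E)·P(¬E): your expected posterior, averaged over the possible observations, is already your prior. A quick check with made-up numbers:

    ```python
    p_h = 0.3        # prior on hypothesis H
    p_e_h = 0.8      # P(E | H)
    p_e_not_h = 0.2  # P(E | not-H)

    p_e = p_e_h * p_h + p_e_not_h * (1 - p_h)
    posterior_e = p_e_h * p_h / p_e                  # belief after seeing E
    posterior_not_e = (1 - p_e_h) * p_h / (1 - p_e)  # belief after seeing not-E

    expected = posterior_e * p_e + posterior_not_e * (1 - p_e)
    print(expected)  # 0.3 - exactly the prior, whatever numbers you pick
    ```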

    Fake explanations

    If you are equally good at explaining any outcome, you have zero knowledge.

    Semantic stopsigns - why ask the next question in one case (science) and not the other (religion)?

    Where did God come from? Saying “God is uncaused” or “God created Himself” leaves us in exactly the same position as “Time began with the Big Bang.” We just ask why the whole metasystem exists in the first place, or why some events but not others are allowed to be uncaused.

    What distinguishes a semantic stopsign is the failure to consider the obvious next question.

    Mysterious answers to mysterious questions

    Mystery is a property of questions, not answers

    Lawful uncertainty - if 70% of all cards are blue, you should guess that the next card is blue, not match probabilities in your guesses.

    It is a counterintuitive idea that, given incomplete information, the optimal betting strategy does not resemble a typical sequence of cards.
    It is a counterintuitive idea that the optimal strategy is to behave lawfully, even in an environment that has random elements.
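
    A quick simulation (my own toy setup) shows the gap: always guessing the majority color is right about 70% of the time, while probability matching is right only about 0.7² + 0.3² = 58% of the time.

    ```python
    import random

    random.seed(1)
    trials = 100_000

    def accuracy(strategy) -> float:
        hits = 0
        for _ in range(trials):
            card = "blue" if random.random() < 0.7 else "red"  # 70% blue deck
            hits += strategy() == card
        return hits / trials

    def always_blue() -> str:
        return "blue"

    def matcher() -> str:
        # Probability matching: guess blue 70% of the time, red 30%.
        return "blue" if random.random() < 0.7 else "red"

    print(accuracy(always_blue))  # ~0.70
    print(accuracy(matcher))      # ~0.58
    ```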

    Terminator is not an illustrative historical case from another planet

    Remember how, century after century, the world changed in ways you did not guess.
    Maybe then you will be less shocked by what happens next.

    Interlude

    If you find yourself stumped by deep and meaningful questions, remember that if you know exactly how a system works, and could build one yourself out of buckets and pebbles, it should not be a mystery to you.


    Book 2: How To Actually Change Your Mind

    Or in other words - how to get rid of the big, basic problems: excuses, rationalizations, doublethinking, death spirals and not letting go.

    The proper use of humility

    If you ask someone to be more humble, by default they’ll associate the word with social modesty - which is an intuitive, everyday, ancestrally relevant concept.

    Humility is a virtue that is often misunderstood. This doesn’t mean we should discard the concept of humility, but we should be careful using it. It may help to look at the actions recommended by a humble line of thinking, and ask: Does acting this way make you stronger or weaker?

    To be humble is to take specific actions in anticipation of your own errors. To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty.

    BTW, you’d still double-check your calculations if you were wise.

    The best is the enemy of good

    Beware when you find yourself arguing that a policy is defensible rather than optimal; or that it has some benefit compared to the null action, rather than the best benefit of any action.

    The fallacy of grey

    The Sophisticate: The world isn’t black and white. No one does pure good or pure bad. It’s all gray. Therefore, no one is better than anyone else.
    The Zetet: Knowing only gray, you conclude that all grays are the same shade. You mock the simplicity of the two-color view, yet you replace it with a one-color view.

    Infinite certainty

    I am not totally sure I have to be always unsure.

    Your rationality is my business

    What business is it of mine, if someone else chooses to believe what is pleasant rather than what is true? Can’t we each choose for ourselves whether to care about the truth?

    A snappy comeback: Why do you care whether I care whether someone else cares about the truth?

    An actual answer: I believe that it is right and proper for me, as a human being, to have an interest in the future, and what human civilization becomes in the future. One of those interests is the human pursuit of truth, which has strengthened slowly over the generations. I wish to strengthen that pursuit further in this generation. That is a wish of mine, for the future. For we are all of us players upon that vast gameboard, whether we accept the responsibility or not.

    Correspondence bias - feeling normal

    We attribute our own actions to our situations, seeing our behaviours as perfectly normal responses to experience. But when someone else kicks a vending machine, we don’t see their past history trailing behind them in the air. We just see the kick, for no reason we know about, and we think this must be a naturally angry person - since they lashed out without any provocation.

    Most people see themselves as perfectly normal, from the inside. Even people you hate, people who do terrible things.

    Fiction vs non-fiction

    Nonfiction conveys knowledge, fiction conveys experience. Medical science can extrapolate what would happen to a human unprotected in a vacuum. Fiction can make you live through it.

    The bottom line

    (Context - there are two sealed boxes up for auction: box A and box B - one of them contains a valuable diamond. There are a lot of signs and portents indicating whether a box contains a diamond; but I have no sign which I know to be perfectly reliable.)

    The handwriting of the curious inquirer is entangled with the signs and portents and the contents of the boxes, whereas the handwriting of the clever arguer is evidence only of which owner paid the higher bid. There is a great difference in the indications of ink, though one who foolishly read aloud the ink-shapes might think the English words sounded similar.

    Rationalization vs rationality

    Rationality is the forward flow that gathers evidence, weighs it, and outputs a conclusion.
    Rationalization is a backward flow from conclusion to selected evidence.
    If you genuinely don’t know where you are going, you will probably feel quite curious about it. Curiosity is the first virtue, without which your questioning will be purposeless and your skills without direction.

    Specify the argument in advance

    It does no good to rehearse supporting arguments, because you have already taken those into account.
    You defeated yourself the instant you specified your argument’s conclusion in advance.

    Motivated skepticism

    Spending one hour discussing the problem, with your mind carefully cleared of all conclusions, is different from waiting ten years on another $20 million study.

    Motivated stopping and motivated continuation

    The moral is that the decision to terminate a search procedure is, like the search procedure itself, subject to bias and hidden motives.

    You should suspect motivated stopping when you close off search, after coming to a comfortable conclusion, and yet there’s a lot of fast cheap evidence you haven’t gathered yet - there are websites you could visit, there are counter-counterarguments you could consider, or you haven’t closed your eyes for five minutes by the clock trying to think of a better option.

    You should suspect motivated continuation when some evidence is leaning in a way you don’t like, but you decide that more evidence is needed - expensive evidence that you know you can’t gather anytime soon, as opposed to something you’re going to look up on Google in thirty minutes - before you’ll have to do anything uncomfortable.

    An important thing for young businesses and new-minted consultants to keep in mind:

    The reason your failed prospects give for rejecting you may not be what makes the real difference; you should ponder that carefully before spending huge efforts. If the venture capitalist says If only your sales were growing a little faster!, or if the potential customer says It seems good, but you don’t have feature X, that may not be the true reason for rejection. Fixing it may, or may not, change anything.

    Dark side - lies and opinions

    Promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires.

    Once you tell a lie, the truth is your enemy; and every truth connected to that truth, and every ally of truth in general; all of these you must oppose, to protect the lie. Whether you’re lying to others, or to yourself.

    Everyone has a right to their own opinion. When you think about it, where was that proverb generated? Is it something that someone would say in the course of protecting a truth, or in the course of protecting from the truth?

    Moore’s paradox:

    It’s raining outside, but I don’t believe that it is.

    It’s not as if people are trained to recognize when they believe something. It’s not like they’re ever taught in high school: What it feels like to actually believe something - to have that statement in your belief pool - is that it just seems like the way the world is. You should recognize this feeling, which is actual (unquoted) belief, and distinguish it from having good feelings about a belief that you recognize as a belief (which means that it’s in quote marks).

    Do we believe everything we’re told?

    Descartes thought that we would first comprehend what the proposition meant, then consider the proposition, and finally accept or reject it.
    Spinoza (Descartes’s rival) suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration.

    Philosophy for the last few centuries went with Descartes, but modern experiments go with Spinoza:
    (an experiment where subjects were kept busy while reading false statements produced completely different outcomes compared with a not-busy state) = we should be more careful when we expose ourselves to unreliable information, especially if we’re doing something else at the time. Be careful when you glance at that newspaper in the supermarket.

    The “Outside the box” box - lovely chapter!

    Whenever someone exhorts you to think outside the box, they usually, for your convenience, point out exactly where outside the box is located.
    In venture capital there’s a direct economic motive not to follow the herd - either someone else is also developing the product, or someone else is bidding too much for the startup. Steve Jurvetson once told me that at Draper Fisher Jurvetson (a venture capital firm), only two partners need to agree in order to fund any startup up to $1.5 million. And if all the partners agree that something sounds like a good idea, they won’t do it.

    Tell about future (2019) to the people from the past (1901)

    • In the future, there will be a superconnected global network of billions of adding machines, each one of which has more power than all pre-1901 adding machines put together. One of the primary uses of this network will be to transport moving pictures of lesbian sex by pretending they are made out of numbers.
    • Your grandchildren will think it is not just foolish, but evil, to say that someone should not be President of the United States because she is black.

    Deep

    To sound deep

    If you want to sound deep, you can never say anything that is more than a single step of inferential distance away from your listener’s current mental state.

    To seem deep

    Study nonstandard philosophies. Seek out discussions on topics that will give you a chance to appear deep. Do your philosophical thinking in advance, so you can concentrate on explaining well. Above all, practice staying within the one-inferential-step bound.

    To be deep

    Think for yourself about wise or important or emotionally fraught topics. Thinking for yourself isn’t the same as coming up with an unusual answer. It does mean seeing for yourself, rather than letting your brain complete the pattern. If you don’t stop at the first answer, and cast out replies that seem vaguely unsatisfactory, in time your thoughts will form a coherent whole, flowing from the single source of yourself, rather than being fragmentary repetitions of other people’s conclusions.

    We change our mind less often than we think

    Once your belief is fixed, no amount of argument will alter the truth-value; once your decision is fixed, no amount of argument will alter the consequences.

    Hold off on proposing solutions

    Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any.

    Genetic heuristic - rules of thumb:

    • Be suspicious of genetic accusations against beliefs that you dislike, especially if the proponent claims justifications beyond the simple authority of a speaker. Flight is a religious idea, so the Wright brothers must be liars is one of the classically given examples.
    • By the same token, don’t think you can get good information about a technical issue just by sagely psychoanalyzing the personalities involved and their flawed motives. If technical arguments exist, they get priority.
    • When new suspicion is cast on one of your fundamental sources, you really should doubt all the branches and leaves that grew from that root. You are not licensed to reject them outright as conclusions, because reversed stupidity is not intelligence, but…
    • Be extremely suspicious if you find that you still believe the early suggestions of a source you later rejected.

    Pressure

    Time pressure greatly increases the inverse relationship between perceived risk and perceived benefit, consistent with the general finding that time pressure, poor information, or distraction all increase the dominance of perceptual heuristics over analytic deliberation. (You don’t think clearly under these circumstances.)

    Cheap holiday shopping

    The cheaper the class of objects, the more expensive a particular object will appear, given that you spend a fixed amount. Which is more memorable, a $25 shirt or a $25 candle?

    ^ gives a whole new meaning to the Japanese custom of buying $50 melons, doesn’t it? You look at that and shake your head and say “What is it with the Japanese?” And yet they get to be perceived as incredibly generous, spendthrift even, while spending only $50. You could spend $200 on a fancy dinner and not appear as wealthy as you can by spending $50 on a melon. If only there were a custom of gifting $25 toothpicks or $10 dust specks; they could get away with spending even less.

    Avoid Happy Death Spiral by:

    • splitting the Great Idea into parts;
    • treating every additional detail as burdensome;
    • thinking about the specifics of the causal chain instead of the good or bad feelings;
    • not rehearsing evidence;
    • not adding happiness from claims that you can’t prove are wrong.

    Avoid Happy Death Spiral not by:

    • refusing to admire anything too much;
    • conducting a biased search for negative points until you feel unhappy again;
    • forcibly shoving an idea into a safe box.

    Great ideas

    There is never an Idea so true that it’s wrong to criticize any argument that supports it. Never.

    The challenge of pessimism

    It’s really hard to aim low enough that you’re pleasantly surprised around as often and as much as you’re unpleasantly surprised.

    Spirals of hate

    Yes, it matters that the 9/11 hijackers weren’t cowards. Not just for understanding the enemy’s realistic psychology. There is simply too much damage done by spirals of hate. It is just too dangerous for there to be any target in the world, whether it be the Jews or Adolf Hitler, about whom saying negative things trumps saying accurate things.

    Psychological aspects

    I do mean to point out a deep psychological difference between seeing your grand cause in life as protecting, guarding, preserving (defend), versus discovering, creating, improving (up).

    Oops

    Not every change is an improvement, but every improvement is necessarily a change. If we only admit small local errors, we will only make small local changes. The motivation for a big change comes from acknowledging a big mistake.
    Do not indulge in drama and become proud of admitting errors. It is superior to get it right the first time.
    Better to swallow the entire bitter pill in one terrible gulp.

    Human beings make mistakes

    Human beings make mistakes, and not all of them are disguised successes. Human beings make mistakes; it happens, that’s all. Say oops, and get on with your life.

    To avoid professing doubts, remember:

    A rational doubt exists to destroy its target belief, and if it does not destroy its target it dies unfulfilled.
    A rational doubt arises from some specific reason the belief might be wrong.
    An unresolved doubt is a null-op.
    An uninvestigated doubt might as well not exist.
    You should not be proud of mere doubting, although you can justly be proud when you have just finished tearing a cherished belief to shreds.
    Though it may take courage to face your doubts, never forget that to an ideal mind doubt would not be scary in the first place.

    You can face reality:

    What is true is already so.
    Owning up to it doesn’t make it worse.
    Not being open about it doesn’t make it go away.
    And because it’s true, it is what is there to be interacted with.
    Anything untrue isn’t there to be lived.
    People can stand what is true,
    for they are already enduring it.

    On shifting your belief probabilities

    In the microprocess of inquiry, your belief should always be evenly poised to shift in either direction. Not every point may suffice to blow the issue wide open - to shift belief from 70% to 30% probability - but if your current belief is 70%, you should be as ready to drop it to 69% as to raise it to 71%.

    No escape from rationality laws

    No one can revoke the law that you need evidence to generate accurate beliefs. Not even a vote of the whole human species can obtain mercy in the court of Nature.

    Line of retreat

    “Don’t raise the pressure, lower the wall.”
    “The thought you cannot think controls you more than thoughts you speak aloud”


    Book 3: The Machine In The Ghost

    how minds work

    Evolution

    Evolution does not explain the origin of life; evolutionary biology is not supposed to explain the first replicator, because the first replicator does not come from another replicator. Evolution describes statistical trends in replication. The first replicator wasn’t a statistical trend, it was pure accident. The notion that evolution should explain the origin of life is a pure strawman - more creationist misrepresentation.

    Evolutionary biology

    I often recommend evolutionary biology to friends just because the modern field tries to train its students against rationalization, error calling forth correction. It’s good training for any thinker, but it is especially important if you want to think clearly about other weird mindish processes that do not work like you.

    real-life lesson from AI

    When the basic problem is your ignorance, clever strategies for bypassing your ignorance lead to shooting yourself in the foot.

    Instrumental vs terminal values

    Instrumental values - desirable strictly conditional on their anticipated consequences. (I want to administer penicillin to my sister not because a penicillin-filled sister is an intrinsic good, but in anticipation of penicillin curing her flesh-eating pneumonia.)
    Terminal values - desirable without conditioning on other consequences: “I want to save my sister’s life” has nothing to do with your anticipating whether she’ll get injected with penicillin after that.

    If you say that you want to ban guns in order to reduce crime, it may take a moment to realize that “reducing crime” isn’t a terminal value, it’s a superior instrumental value with links to terminal values for human lives and human happinesses. And then the one who advocates gun rights may have links to the superior instrumental value of “reducing crime” plus a link to a value for “freedom”, which might be a terminal value unto them, or another instrumental value …

    We don’t know what our own values are, or where they came from, and can’t find out except by undertaking error-prone projects of cognitive archaeology.

    Wishes

    Wishes are leaky generalizations, derived from the huge but finite structure that is your entire morality; only by including this entire structure can you plug all the leaks.

    What is optimism?

    It is ranking the possibilities by your own preference ordering, selecting an outcome high in that preference ordering, and somehow that outcome ends up as your prediction. What kind of elaborate rationalizations were generated along the way is probably not so relevant as one might fondly believe; look at the cognitive history and it’s optimism in, optimism out. But Nature, or whatever other process is under discussion, is not actually, causally choosing between outcomes by ranking them in your preference ordering and picking them high. So the brain fails to synchronize with the environment, and the prediction fails to match reality.

    Lost purposes

    “With every new link that intervenes between the action and its consequence, intention has one more chance to go astray. With every intervening link, information is lost, incentive is lost.” (example - the education system)

    About words

    Logic does not help you, it just is

    Logic never dictates any empirical question; it never settles any real-world query which could, by any stretch of the imagination, go either way.

    Size of the map and territory

    Definitions can’t be expected to exactly match the empirical structure of thingspace in any event, because the map is smaller and much less complicated than the territory.

    On dictionaries

    Dictionaries are mere histories of past usage; if you treat them as supreme arbiters of meaning, it binds you to the wisdom of the past, forbidding you to do better.

    Taboo your words

    When you find yourself in philosophical difficulties, the first line of defense is not to define your problematic terms, but to see whether you can think without using those terms at all. Don’t use a single handle.

    Word compression

    Where you see a single confusing thing, with protean and self-contradictory attributes, it is a good guess that your map is cramming too much into one point - you need to pry it apart and allocate some new buckets.

    Definition of a word

    Probabilistic inference does not rely on dictionary definitions or common usage; it relies on the universe containing empirical clusters of similar things.
    Wondering how to define a word means you’re looking at the problem the wrong way - searching for the mysterious essence of what is, in fact, a communication signal.

    On categories

    The way to carve reality at its joints, is to draw simple boundaries around concentrations of unusually high probability density in Thingspace.

    Hidden variables in questions

    If you have a question with a hidden variable, that evaluates to different expressions in different contexts, it feels like reality itself is unstable - what your mind’s eye sees, shifts around depending on where it looks.

    37 ways that words can be wrong

    1. A word fails to connect to reality in the first place.
    2. Your argument, if it worked, could coerce reality to go a different way by choosing a different word definition.
    3. You try to establish any sort of empirical proposition as being true “by definition”.
    4. You unconsciously slap the conventional label on something, without actually using the verbal definition you just gave.
    5. The act of labeling something with a word disguises a challengeable inductive inference you are making.
    6. You try to define a word using words, in turn defined with even more abstract words, without being able to point to an example.
    7. The extension doesn’t match the intension.
    8. Your verbal definition doesn’t capture more than a tiny fraction of the category’s shared characteristics, but you try to reason as if it does. (featherless biped “with broad nails”)
    9. You try to treat category membership as all-or-nothing, ignoring the existence of more and less typical subclusters.
    10. A verbal definition works well enough in practice to point out the intended cluster of similar things, but you nitpick exceptions. (humans with 9 fingers)
    11. You ask whether something “is” or “is not” a category member but can’t name the question you really want answered.
    12. You treat intuitively perceived hierarchical categories like the only correct way to parse the world, without realizing that other forms of statistical inference are possible even though your brain doesn’t use them.
    13. You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain.
    14. You argue about a category membership (is it a blegg?) even after screening off all questions that could possibly depend on a category-based inference.
    15. You let an argument slide into being about definitions, even though it isn’t what you originally wanted to argue about.
    16. You think a word has a meaning, as a property of the word itself; rather than there being a label that your brain associates to a particular concept.
    17. You argue over the meanings of a word, even after all sides understand perfectly well what the other sides are trying to say.
    18. You pull out a dictionary in the middle of an empirical or moral argument. (Dictionary editors are historians of usage, not legislators of language.)
    19. You pull out a dictionary in the middle of any argument ever.
    20. You defy common usage without a reason, making it gratuitously hard for others to understand you.
    21. You use complex renamings to create the illusion of inference.
    22. You get into arguments that you could avoid if you just didn’t use the word.
    23. The existence of a neat little word prevents you from seeing the details of the thing you’re trying to think about.
    24. You have only one word, but there are two or more different things-in-reality, so that all the facts about them get dumped into a single undifferentiated mental bucket.
    25. You see patterns where none exist, harvesting other characteristics from your definitions even when there is no similarity along that dimension.
    26. You try to sneak in the connotations of a word, by arguing from a definition that doesn’t include the connotations.
    27. You claim “X, by definition, is a Y!” On such occasions you’re almost certainly trying to sneak in a connotation of Y that wasn’t in your given definition.
    28. You claim “Ps, by definition, are Qs!”
    29. You try to establish membership in an empirical cluster “by definition”.
    30. Your definition draws a boundary around things that don’t really belong together.
    31. You use a short word for something that you won’t need to describe often. This can result in inefficient thinking, or even misapplications of Occam’s Razor, if your mind thinks that short sentences sound “simpler”.
    32. You draw your boundary around a volume of space where there is no greater-than-usual density, meaning that the associated word does not correspond to any performable Bayesian inferences.
    33. You draw an unsimple boundary without any reason to do so.
    34. You use categorization to make inferences about properties that don’t have the appropriate empirical structure, namely, conditional independence given knowledge of the class, to be well-approximated by Naive Bayes.
    35. You think that words are like tiny little LISP symbols in your mind, rather than words being labels that act as handles to direct complex mental paintbrushes that can paint detailed pictures in your sensory workspace.
    36. You use a word that has different meanings in different places as though it meant the same thing on each occasion, possibly creating the illusion of something protean and shifting.
    37. You think that definitions can’t be wrong, or that “I can define a word any way I like!”

    Bayes theorem

    P(A|X) = P(X|A)·P(A) / (P(X|A)·P(A) + P(X|¬A)·P(¬A))

    P(Q|P) really means P(Q,P|P); it’s just stupid to specify the extra P all the time. The property you are investigating is Q - even though you’re looking at the size of the group (Q,P) within group P.
    P(Q|P) means “if P has probability 1, what is the probability of Q?”

    The symmetry (of both sides of the theorem) arises because the elementary causal relations are generally implications from facts to observations (e.g., from breast cancer to positive mammography), while the elementary steps in reasoning are generally implications from observations to facts (e.g., from positive mammography to breast cancer).
    Implication is written from right to left, so we write P(cancer|positive) on the left side of the equation. The right side describes the elementary causal steps.
    Rational inference on the left end, physical causality on the right end; an equation with mind on one side and reality on the other.
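
    Plugging in the mammography example from the essay (1% of women have breast cancer; the test catches 80% of cancers but also flags 9.6% of healthy women):

    ```python
    p_cancer = 0.01        # P(A): prior probability of cancer
    p_pos_cancer = 0.8     # P(X|A): test sensitivity
    p_pos_healthy = 0.096  # P(X|not-A): false-positive rate

    p_pos = p_pos_cancer * p_cancer + p_pos_healthy * (1 - p_cancer)
    print(p_pos_cancer * p_cancer / p_pos)  # P(A|X) ~= 0.078
    # A positive test raises the probability from 1% to only ~7.8%,
    # because false positives vastly outnumber true positives.
    ```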


    Book 4: Mere Reality

    natural world on its own right

    Soul

    Giulio Giorello: “Yes, we have a soul, but it’s made of lots of tin robots.” (Humans are built out of inhuman parts. The world of atoms looks nothing like the world as we ordinarily think of it.)

    On really dissolving questions

    If there is any lingering feeling of a remaining unanswered question, or of having been fast-talked into something, then this is a sign that you have not dissolved the question. A vague dissatisfaction should be as much warning as a shout. Really dissolving the question doesn’t leave anything behind.

    Confusion exists in the map, not the territory

    Unanswerable questions do not mark places where magic enters the universe. They mark places where your mind runs skew of reality.

    Probability is in the mind

    Jaynes was of the opinion that probabilities were in the mind, not in the environment - that probabilities express ignorance, states of partial information; and if I am ignorant of a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon.

    Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory. Ignorance is in the mind.

    Being true and reliable

    Truth involves comparing an internal belief to an external fact. Saying a belief is true compares it to reality.

    “Show your arguments are globally reliable by virtue of each step being locally reliable; don’t just compare the argument’s conclusions to your intuitions.”

    Think like reality

    Calling reality weird keeps you inside a viewpoint already proven erroneous. Probability theory tells us that surprise is the measure of a poor hypothesis; if a model is consistently stupid - consistently hits on events the model assigns tiny probabilities - then it’s time to discard that model. A good model makes reality look normal, not weird; a good model assigns high probability to that which is actually the case.
    Intuition is only a model by another name: poor intuitions are shocked by reality; good intuitions make reality feel natural.

    Reductionism

    Think laws, not tools. Needing to calculate approximations to a law doesn’t change the law. Planes are still atoms; they aren’t governed by special exceptions in Nature for aerodynamic calculations. The approximation exists in the map, not in the territory.

    Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

    The reductionist thesis (as I formulate it) is that human minds, for reasons of efficiency, use a multi-level map in which we separately think about things like atoms and quarks, hands and fingers, or heat and kinetic energy. Reality itself, on the other hand, is single-level in the sense that it does not seem to contain atoms as separate, additional, causally efficacious entities over and above quarks.

    Explaining vs explaining away

    “That which can be destroyed with mere truth, should be.”
    If rainbows are not fundamental, it does not mean they don’t exist.

    Taking joy in the merely mundane reality

    If I’m going to be happy anywhere,
    Or achieve greatness anywhere,
    Or learn true secrets anywhere,
    Or save the world anywhere,
    Or feel strongly anywhere,
    Or help people anywhere,
    I may as well do it in reality

    Physicists vs rationalists

    When physicists grow up, they learn to play a new game called Follow the negentropy - which is really the same game they were playing all along; only the rules are mathier, the game is more useful, and the principles are harder to wrap your mind around conceptually. (negentropy - energy is conserved and only changes its form)

    When rationalists grow up, they learn the game called Follow the improbability, the grownup version of “How do you know?” The rule of the rationalist’s game is that every improbable-seeming belief needs an equivalent amount of evidence to justify it.

    Both games have amazingly similar rules.

    Quantum explanations

    “Confusion exists in our models of world not in the world itself.”

    A configuration says “a photon here, a photon there,” not “this photon here that photon there.” You can’t factorize the physics of our universe to be about particles with individual identities.
    Part of the reason why humans have trouble coming to grips with the perfectly normal quantum physics, is that humans bizarrely keep trying to factor reality into a sum of individual real billiard balls.

    Falsifiable and testable (decoherence)

    Falsifiability is something you evaluate by looking at a single hypothesis, asking, “How narrowly does it concentrate its probability distribution over possible outcomes? How narrowly does it tell me what to expect? Can it explain some possible outcomes much better than others?”

    It is usually not possible to apply formal probability theory in real life, any more than you can predict the winner of a tennis match using quantum field theory. But probability theory can serve as a guide to practice, and this is what it says: Reject useless complications in general, not just when they are new.

    Occam’s Razor

    “more complicated propositions require more evidence to believe, more complicated propositions also ought to require more work to raise to attention.”

    About quantum stuff - Quantum non-realism

    Egan’s Law: It all adds up to normality.
    Apples didn’t stop falling when Einstein disproved Newton’s theory of gravity.

    “How very hard it is to stay in a state of confessed confusion, without making up a story that gives you closure - how hard it is to avoid manipulating your ignorance as if it were definite knowledge that you possessed.”

    How science is supposed to work

    The ideal of how things are supposed to work in science, to which all good scientists aspire (not how science works overall):
    The tradition handed down through the generations says that a new physics theory comes up with new experimental predictions that distinguish it from the old theory. You perform the test, and the new theory is confirmed or falsified. If it’s confirmed, you hold a huge celebration, call the newspapers, and hand out Nobel Prizes for everyone; any doddering old emeritus professors who refuse to convert are quietly humored. If the theory is disconfirmed, the lead proponent publicly recants, and gains a reputation for honesty.

    Science does not trust your rationality

    “The final upshot is that Science is not easily reconciled with probability theory. If you do a probability-theoretic calculation correctly, you’re going to get the rational answer.” Science doesn’t trust your rationality, and it doesn’t rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.

    About the speed of science:

    The method of science is to amass such an enormous mountain of evidence that even scientists cannot ignore it.
    “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” /Max Planck

    Book recommendations for a careful reasoning example:

    “I recommend to all aspiring rationalists that they study evolutionary psychology simply to get a glimpse of what careful reasoning looks like. See particularly Tooby and Cosmides’s “The Psychological Foundations of Culture””

    Eliezer’s childhood role model

    The most important role models are dreams: they come from within ourselves. To dream of anything less than what you conceive to be perfection is to draw on less than the full power of the part of yourself that dreams.

    Interlude - A technical explanation of technical explanation

    “If you are ignorant, confess your ignorance; if you are confident, confess your confidence. We penalize you for being confident and wrong, but we also reward you for being confident and right.” That’s the virtue of a proper scoring rule.
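
    A minimal sketch of why the logarithmic score is “proper” (toy numbers of my own): your expected score is maximized exactly when the probability you report is the probability you actually believe, so neither overclaiming nor underclaiming confidence pays.

    ```python
    import math

    true_p = 0.8  # the probability you actually assign to rain tomorrow

    def expected_log_score(reported: float) -> float:
        # Expected log score under your true belief about the outcome.
        return true_p * math.log(reported) + (1 - true_p) * math.log(1 - reported)

    for r in (0.5, 0.7, 0.8, 0.9, 0.99):
        print(r, round(expected_log_score(r), 4))
    # The expectation peaks at reported = 0.8: honest reporting is optimal.
    ```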

    “The way human psychology seems to work is that first we see something happen, and then we try to argue that it matches whatever hypothesis we had in mind beforehand. Rather than conserved probability mass, to distribute over advance predictions, we have a feeling of compatibility - the degree to which the explanation and the event seem to fit.”

    This is a technical explanation because “it enables you to calculate exactly how technical an explanation is.”

    A hypothesis is something that controls your anticipations, the probabilities you assign to future experiences.

    If a chain of reasoning doesn’t make me nervous, in advance, about waking up with a tentacle, then that reasoning would be a poor explanation if the event did happen.

    Since the beginning
    Not one unusual thing
    Has ever happened.


    Book 5: Mere Goodness

    About the Value theory

    Detached level fallacy

    “So, the next time you see someone talking about how they’re going to raise an AI within a loving family, or in an environment suffused with liberal democratic values, just think of a control lever, pried off the bridge.”

    Recursive justification

    (“Why do you think the sun will rise today?” - “Because it rose yesterday and all the days before.”)
    “When you ask strange beings (whose priors have never yet worked) why they keep using priors that never seem to work in real life... they reply, ‘Because it’s never worked for us before!’”
    The lesson you might derive from this is “Don’t be born with a stupid prior!”

    The technique of rejecting beliefs that have no justification is in general an extremely important one. The fundamental question of rationality is “Why do you believe what you believe?” I don’t even want to say something that sounds like it might allow a single exception to the rule that everything needs justification.

    Rationalists want to get as close to the truth as we can possibly manage. So, at the end of the day, I embrace the principle: “Question your brain, question your intuitions, question your principles of rationality, using the full current force of your mind, and doing the best you can do at every point.”

    Metaethics:

    “Killing people is wrong” - that’s morality;
    “Killing people is wrong because God prohibited it” - that’s metaethics.
    And if your metaethics goes wrong, you should keep a line of retreat (so that from “God prohibited killing” you don’t end up at “so killing is OK”).
    The most important line of retreat is: If our metaethic stops telling you to save lives, you can just drag the kid off the train tracks anyway.

    Value is fragile:

    “Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals will contain almost nothing of worth.” It would end up dull and pointless.

    The moral about scope insensitivity:

    “If you want to be an effective altruist, you have to think it through with the part of your brain that processes those unexciting inky zeroes on paper, not just the part that gets real worked up about that poor struggling oil-soaked bird.”

    Shut up and multiply:

    “When music is concerned, I care about the journey. When lives are at stake, I shut up and multiply.”

    Out there to win:

    Rational agents should WIN. Don’t lose reasonably - WIN.
    “The point is not to have an elegant theory of winning - the point is to win; elegance is a side effect.”

    The 12 virtues of rationality:

    curiosity, relinquishment, lightness, evenness, argument, empiricism, simplicity, humility, perfectionism, precision, scholarship, and the void.


    Book 6: Becoming Stronger

    Motivation and Personal Experience

    In the art of rationality it is far more efficient to admit one huge mistake, than to admit lots of little mistakes.

    cracking = good, repair = bad.

    Useful Japanese terms

    • Tsuyoku Naritai - I want to become stronger!
    • Betsukai - make an extraordinary effort.

    Perseverance:

    To do things that are very difficult (or “impossible”),
    First you have to not run away. That takes seconds.
    Then you have to work. That takes hours.
    Then you have to stick at it. That takes years.

    On doing the impossible:

    Most of the people all of the time, and all of the people most of the time, should stick to the possible.

    Shut up and calculate. And don’t try - be a betsukai (be the best version of yourself) and win.

    Bystander apathy:

    “If you’re ever in emergency need of help, point to one single bystander and ask them for help - making it very clear to whom you’re referring.”

    Sins:

    The 3 greatest sins of an aspiring rationalist are: underconfidence, coordinating groups, and defeating akrasia?


    Extra reading materials

    For fun procrastination

    • computer stupidities site and programming subtopic

    To get a truly alien feeling

    • Vance - City of the Chasch,
    • Niven and Pournelle - The Mote in God’s Eye

    To read from Eliezer’s list

    • George Orwell: Politics and the English Language
    • Eliezer Yudkowsky: Harry Potter and the Methods of Rationality
    • Center for Applied Rationality