Pen exercise: Diseases of the Scientific Discipline

Note: this was an essay I wrote as homework for one of my experimental physics classes (in Institute X, they bend over backwards to drill scientific integrity into students).

Constraint: react to On Being a Scientist: A Guide to Responsible Conduct in Research, by various authors.

First and foremost, the job of a scientist is to uncover the truth. His measure is the clarity and novelty of his thought. If he is exacting and thorough, lining every table, checking every decimal point, but in the end writes only echoes of his teachers, then he is no greater for it. If he becomes head of his department by a curious doggedness but in doing so neglects to pursue new ideas, then he is no closer to the business of science than pedestrians on the street. A scientist should keep his identity small: in doing so he avoids the rituals and inauthentic behaviour to which science is no less immune.

The Atomic Age thrust science into the public’s view. No longer were scientists seen alone in their labs, concocting various mechanisms to demonstrate in cramped classrooms. We started writing letters to governments, affecting foreign and economic policies. We began dominating intellectual circles for better or for worse: in the 1920s it was customary to apply “relativity” to the world’s moral dilemmas [insert citation here]. This narrative is not new, but herein lies an important distinction: all science is a set of tools. Whether these tools are used for nefarious goals or for the betterment of humanity, they remain truthful knowledge. The responsibility lies with the person who finds their use. It is because of this that we must distinguish the scientist from the human: we hold ourselves culpable for the consequences of our research because we are human, not because we sought dangerous knowledge that must not be sought.

To this end, I shall call ‘scientist’ him who seeks knowledge for knowledge’s sake and ‘science-practitioner’ the scientist with human faults and foibles. The science-practitioner in today’s world faces two main challenges: to collect citations and to secure funding. Immediately this brings us to various conflicts of interest that may hamper the progress of scientific research. The first is the maligned focus on credit. We are extremely protective of our ideas. It is as if the conception of an idea in one’s mind (particularly if one has reason to claim priority) leaves the same psychological imprint as finding a coin in the street. This is evident in the well-known rivalry between Isaac Newton and Gottfried Leibniz over the discovery of calculus, where what could have been a fruitful correspondence turned into a twenty-year, state-backed superiority contest no better than the red-team, blue-team bickering often seen in sports or politics. The scientist, in his pursuit of knowledge, must learn to avoid this pitfall lest this primal need for social status overwhelm his path to his original goal.

This issue of credit is so important that it warrants a longer discussion. Aside from priority, scientists also bicker about authorship. Great importance is placed on the relative order of authors on a paper. First authorship implies the bulk of the intellectual contribution; a project is, after all, never equally divided. More issues arise when advisors and researchers in higher administrative positions demand inclusion in the list of authors of a study, regardless of their intellectual contribution to it. Thus, a scientist is never alone. In his pursuit of knowledge, he is haunted by the spectre of his heroes, he must walk briskly to keep pace with his highly competitive peers, he must let students trail behind him to carry on when he falls by the wayside, and he must carry the ever-increasing burden of administrative responsibilities on his back.

A more subtle issue of authorship arises when different parts of a paper are written by different authors. A broth is spoiled by many cooks, so why should a paper be any different? The coherence of such a multi-authored paper is rarely apparent, and problems arise when one of the authors is involved in an academic scandal. The case of Jan Hendrik Schön, an experimental physicist who was found to have fabricated semiconductor data, is now a classic warning to budding scientists. The larger question, however, is the culpability of his co-authors. Are they as guilty as the fraudulent physicist for having signed off on the paper for publication? What about the people who peer-reviewed his papers? How apt is it to fault them for Schön’s career having lasted as long as it did?

A naive attempt to cut the Gordian knot would be to assign culpability to all those involved. “Everyone gets the whip” is a common sentiment among those who would like to get on with their lives after scandals such as these. However, a brief pause leads one to conclude that this undermines the foundation of trust that the scientific community places in its members. The pure scientist demands empiricism in everything, but practicality compels the science-practitioner to leave replication to others in their respective fields and focus on extracting the pieces of information relevant to his problem. To say that we must check and double-check our co-authors for fraudulent behaviour undermines this web of trust and wastes precious hours that could have gone to research.

Of course, it is lunacy to suggest that checks and balances be done away with. That web of trust only works if it can be trusted (and this is not a trivial tautology). What I am advocating, rather, is for the community to spend some of its research-hours on building automated verification systems. Machine learning has advanced steadily in recent years. Everywhere, we are experiencing the fruits of breakthroughs in recognition systems, from self-driving cars to a cleaner inbox. There is no physical law that prohibits spam classifiers from being aimed at scientific papers instead. Perhaps it would even be possible to automatically rate the credibility of a researcher and flag possible conflicts of interest. Science may be methodical, but it need not be manual.
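To make the spam-classifier analogy concrete, here is a toy sketch in Python of the kind of naive Bayes text classifier such a verification system might start from. Everything here (the training snippets, the labels, the function names) is invented for illustration; a real system would need far more data and care:

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs. Returns per-label word counts and label tallies."""
    counts, totals = {}, Counter()
    for text, label in docs:
        counts.setdefault(label, Counter()).update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Naive Bayes with add-one smoothing; returns the most likely label."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        # log prior + sum of log likelihoods for each word in the query
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(c.values())
        for w in text.lower().split():
            score += math.log((c[w] + 1) / (n + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical training data: snippets from flagged vs. unremarkable papers.
docs = [
    ("results fabricated duplicate figures", "suspect"),
    ("identical error bars across samples", "suspect"),
    ("we measured conductance across temperature", "ok"),
    ("data deposited in public repository", "ok"),
]
counts, totals = train(docs)
print(classify("duplicate error bars across figures", counts, totals))  # → suspect
```

The same machinery that sorts spam from ham sorts any two piles of text, given labelled examples; the hard part is curating the labels, not the algorithm.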

This brings me to my next point. Science, as a profession that defines itself in how it prods and tests the borders of human knowledge, is strangely resistant to alternative systems. There is a huge pressure to converge on professional standards detailing and constraining the various aspects of one’s work. Strange, that we extol empiricism in everything but our own practice, that we so easily peer outside the public window of discourse yet fail to see more efficient research processes. Science may pride itself on being methodical, but it need not be slow.

Consider how a random civilisation would develop its scientific community. Would you imagine that it would start with lone individuals speaking out against the mores of society, like our Grecian philosophers of old? Would you imagine mathematics intertwined with the practicalities of business and war and the economy, until its parts are distilled one by one? A Galileo perhaps, waging a public battle against old institutions. Then guilds and universities. Then the Industrial Revolution. If so, then one must work to broaden one’s horizons. It is a fallacy to suppose that societies will converge to our own, an implicit assumption of the superiority of one’s culture. If not, then you understand: there is a vast number of paths our science could have taken to get to this point. Therefore, there is also a vast number of scientific processes that could have gained a foothold by the arrival of global communication. Our science is the conglomeration of different ways of expressing empiricism, some more affected by extenuating socioeconomic goals than others. It is therefore a curious and frightening prospect that our science was grown, not designed.

The second aspect of a science-practitioner’s career centers on funding. Money is the prime mover. It directs the actions of humans much more than it is polite to admit. Its main use to the science-practitioner is in the procurement of devices necessary to conduct research. Computer systems, laboratory equipment, technicians and operating crew for heavy apparatuses: the list goes on and on. Money buys tools for the toolmaker, and as tools are said to multiply forces, so do they expand the range of phenomena within reach. A scientist without tools is left to use only his mind, and the mind can only carry so much.

What compels science-practitioners to spend inordinate hours writing grant proposals? The production of tools for toolmakers is an economy unto itself. There is a huge variety of laboratory equipment accessible to research institutions, if they have the money. There is always a drive to purchase better and better equipment, and this is not entirely unreasonable: all the eyes in the world could never have guessed the existence of microbes without having seen one. As the phenomena we investigate grow more exotic, so must our sensory capabilities expand.

By virtue of interacting with the economy, however, this procurement process gains its own incentives. A pure scientist will whittle away everything he has to spend on furthering his research. A science-practitioner, by having to exist in reality, must treat his money as a resource and strategically place his bets on lines of study that to him would prove most fruitful. Immediately, this wrests control from the scientist to pursue his own research directions and gives it to the ebbs and flows of the economy.

There is no problem with this picture: usually we do not have perfect knowledge of what we must know (if we did, it would not be called research). Accounting for this foible more than makes up for the efficiency we would otherwise gain from giving complete control to the scientist.


A (long) personal account of a Bad Kid

So it turns out I have ADHD.

The Diagnostic and Statistical Manual of Mental Disorders, 5th Edition, or simply DSM-5, splits ADHD into two presentations: the inattentive kind (ADHD-PI) and the hyperactive-impulsive kind (ADHD-PH). If you have symptoms of both, you get the combined version, ADHD-C. I think everyone knows at this point what sorts of traits psychiatrists look for when they diagnose people with ADHD, so let’s move on and simply note that: a) indeed, ADHD is a childhood-onset disorder and that, b) even though symptoms of hyperactivity tend to disappear in adulthood, most of the internal symptoms like inattentiveness and inability to stay on task remain (Faraone, 2006).


When I was eight years old, I got lost in a citadel.

Fig. 1: Should have a sign saying: “Dangers ahead, crossers beware!”

In this particular one (it’s a very famous landmark in Country X), there’s a bridge that goes from the park area to the place where they keep the jail cells. It was my first big field trip. I was an excitable kid. So when a shady guy bequeathed to me the sacred knowledge of where our national hero’s unguarded jail cell was, I trotted along the big walls of the fort like the carefree, idiot child I was. In doing so, I stretched what should have been a short lunch break into more than two hours, forcing a handful of my classmates’ parents to look for me as I crawled on the ground, crying and wet under the downpour. And do you know what the best part was? This wasn’t the first time they had to1.

Fig. 2: The famous guy’s jail cell, which I supposedly found unguarded. In retrospect, that might be a false memory.


ADHD is both overdiagnosed and underdiagnosed. How come? Well, suppose it’s breast cancer we’re talking about instead, and we invent a mammogram that’s 99% accurate: given 100 women with breast cancer (let’s pretend men don’t have breasts for the moment), it will ding! positive for 99 of them on average and fail to detect the remaining unlucky person. Suppose also that, a priori, 1 out of 100 women has breast cancer. Unfortunately, our Mammogram-3000 also happens to incorrectly diagnose non-cancerous women a measly 6.4% of the time (that is, 64 out of 1000 women without breast cancer will also get a positive diagnosis).

If you’re a random, responsible adult female and you get a positive result, what are the odds that you actually have breast cancer?














The answer is 13.5%.
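For the sceptical, the arithmetic behind that number is a one-liner of Bayes’ theorem, using exactly the figures stated above:

```python
# Bayes' theorem check for the Mammogram-3000 example.
prior = 0.01             # P(cancer): 1 in 100 women
sensitivity = 0.99       # P(positive | cancer)
false_positive = 0.064   # P(positive | no cancer)

# P(cancer | positive) = P(pos | cancer) P(cancer) / P(pos)
p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive
print(round(posterior * 100, 1))  # → 13.5
```

The positive result only multiplies your 1% starting odds by about 13.5x, because the false positives from the 99 cancer-free women swamp the true positives from the 1 woman with cancer.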

Most people who should be on Adderall aren’t, and most people who are shouldn’t be. The Conners Continuous Performance Test is one of the most frequently used ADHD tests for children. In analogy to our Mammogram-3000, it has a “sensitivity of 75% and a specificity of 73%”, meaning 75% of people with ADHD test positive, whilst (100 - 73)% = 27% of those without ADHD also test positive (Strauss et al., 2006). A 2007 meta-analysis by Polanczyk et al. puts worldwide ADHD prevalence at 5.29%. If you then imagine 10 000 children, a priori 529 of them will have ADHD and 10 000 - 529 = 9471 will not. Thus:

  • 529 * 75% ~ 397 children with a positive diagnosis and actual ADHD
  • 529 * (100 - 75)% ~ 132 children with a negative diagnosis and actual ADHD
  • 9471 * 27% ~ 2557 children with a positive diagnosis without actual ADHD
  • 9471 * (100 - 27)% ~ 6914 children WHOSE LIVES ARE FINE AND HAPPY

Fig. 3: A cake made with blood, sweat, and tears.

Hence, we always get a lot more of the hard blue, positive-result-but-without-ADHD children (overdiagnosis) and a couple of light orange, negative-result-but-with-ADHD children (underdiagnosis)2.
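The four counts above can be reproduced mechanically from the cited sensitivity, specificity, and prevalence; a quick sketch:

```python
# Reproducing the Conners CPT breakdown from the bullet list above.
n = 10_000
prevalence = 0.0529                 # Polanczyk et al. (2007)
sensitivity, specificity = 0.75, 0.73  # Strauss et al. (2006)

with_adhd = round(n * prevalence)   # 529 children
without_adhd = n - with_adhd        # 9471 children

true_pos = round(with_adhd * sensitivity)            # ~397: positive, has ADHD
false_neg = with_adhd - true_pos                     # ~132: negative, has ADHD
false_pos = round(without_adhd * (1 - specificity))  # ~2557: positive, no ADHD
true_neg = without_adhd - false_pos                  # ~6914: fine and happy
print(true_pos, false_neg, false_pos, true_neg)  # → 397 132 2557 6914
```

With a modest 5.29% base rate, the 27% false-positive rate alone manufactures six times more positive diagnoses than there are actual cases caught.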


There are geniuses even in psychology. Karl John Friston is a British neuroscientist who, as a boy, collected drawings of aquatic fauna and flora. When he was 10, he designed “a self-righting robot involving mercury levels and feedback actuators that would enable a little robot table to traverse uneven surfaces”. When he was in high school, he derived the Schrödinger wave equation from scratch, and by the time he shifted from physics to medicine he had managed to fit the entirety of undergraduate quantum mechanics on a single page. “But why do all this?” you ask. Because of an extreme obsession with parsimony. He collected drawings inasmuch as they would help him explain how the shapes of living things come to be. He designed a robot in a naive foray into self-sustaining control systems. He tried to pare down undergraduate physics to its essential core. And now, his obsessive drive to integrate and simplify has given us mortals a supposed explanation of thinking, perceiving, acting, and maintaining one’s body.

Fig. 4: This could pass for a page torn from the Voynich Manuscript.

Predictive coding is NOT Friston’s principle3. Predictive coding is a theory of the brain claiming that, insofar as the brain responds to inputs from the senses, it also tries to predict the inputs it will get (as a sort of efficiency-improving mechanism). It turns out that predictive coding offers a partial answer as to why ADHD and Autism Spectrum Disorder (ASD) exist. A 2015 brain-imaging study by Gonzalez-Gadea et al. found that:

“…children with ASD showed reduced superior frontal cortex (FC) responses to unexpected events but increased dorsolateral prefrontal cortex (PFC) activation to expected events. In contrast, children with ADHD exhibited reduced cortical responses in superior FC to expected events but strong PFC activation to unexpected events.”

In over-simplified terms, this suggests that the brains of people with ASD systematically overpredict from prior experience in unfamiliar situations (making them uncomfortable with changes in their routine) while those with ADHD systematically underpredict from them (making them susceptible to distraction).


This post has gotten too long so I’ll end with the more personal and more emotional aspects of ADHD that people don’t really talk about much4. The first is that we have an interest-based brain.

An interest-based brain stands in contrast to a normal person’s priority-based brain. We do things based on what is interesting, not what needs to be prioritised. And the trouble with that is we don’t really have much of a choice in what we’ll find interesting. None. Nada. You’re probably thinking, “why can’t you find ways to make your work interesting?” And that question is the ADHD-version of asking a depressed person, “why not find ways to be happy?” Think about it: if we could, then WE WOULDN’T HAVE TO DRINK PITCHERS OF COFFEE TO FINISH PAPERS AND PROBLEM SETS AND PSYCH CLINICS WOULD JUST CLOSE DOWN AND EVERYONE WOULD BE HAPPY FOREVER AND EVER. Either that, or you condemn us to moral deficiency.

The second one is emotional hyperarousal. People like me have a permanent x4 multiplier to their thoughts and emotions. Tell me “you reek!” and, like following one hyperlink after another, I’ll hear that as “Crap, is that why you sat opposite me the other day?” then as “Crap, is that why no one’s been inviting me recently?” and then as “Crap, have people just been tolerating my presence since high school?” in two seconds flat. But as absurd as our emotions can get, they are just as fleeting. This is the cause of all our sleepless nights (one thought leading to four and so on is how I count sheep), our impulsive flings, our reckless abandon (for some, particularly when it comes to drinking).

There is a particular emotion that holds a special place in our hearts, an emotion so intense that it sometimes forces me to take a walk around the Acad Oval even at 3 AM. Rejection Sensitivity Dysphoria (RSD), the final prong of our trident, is very pronounced in people with ADHD. As many as 98-99% of adolescents with ADHD claim to have it, and it sucks that even therapy can’t help with it. RSD is an extreme sensitivity to criticism, teasing, and the perception of failure (for me, the last one dominates).

I’ll be frank. All my life, all the adults around me have been telling me that I can achieve so much, that I can be whatever I want, that all those aptitude tests mean something, that I can have perfect grades, that I can become a billionaire, the next Newton5, etc. IF ONLY I CAN GET MY SHIT TOGETHER. Well, I tried to conduct my life according to your visions in one way or another and now I’m here, two years too long in college and barely hanging onto a company I started with good friends. I’m tired of this decadence of “potential”. I can’t reach your measuring sticks. And by god, I now know why.


Right now, my psych prescribed me 40 mg of Strattera a day (generic name, atomoxetine) which would hopefully let me go from either-zero-or-eight-hours-of-focus mode to a much saner attention profile. This would finally enable me to follow schedules and stick to deadlines and perhaps sit down and actually do homework for once. The trouble is, it costs $3.84 PER FRICKIN’ PILL in Country X and I don’t know from which hand of Baal I’m going to get that kind of money. Does anyone else know where I can get a cheaper variant? I know it can go as low as $0.77 (Php 41.10) per pill in the US so maybe it is possible to buy it in bulk there? Feel free to ask me questions (or give me advice) even if we haven’t talked for 77 years or if you accidentally put gum in my hair in third grade. Don’t worry,

I promise I won’t be the Bad Kid anymore.


  • Faraone, S. V., Biederman, J., & Mick, E. (2006). The age-dependent decline of attention deficit hyperactivity disorder: a meta-analysis of follow-up studies. Psychological medicine, 36(2), 159-165.

  • Friston, K. (2018). Am I autistic? An intellectual autobiography. ALIUS Bulletin, 2, 45-52.

  • Gonzalez-Gadea, M. L., Chennu, S., Bekinschtein, T. A., et al. (2015). Predictive coding in autism spectrum disorder and attention deficit hyperactivity disorder. Journal of Neurophysiology, 114(5), 2625-2636. doi:10.1152/jn.00543.2015.

  • Polanczyk, G., De Lima, M., Horta, B., Biederman, J., & Rohde, L. A. (2007). The worldwide prevalence of ADHD: A systematic review and metaregression analysis. The American Journal of Psychiatry, 164(6), 942-948. doi:10.1176/appi.ajp.164.6.942.

  • Strauss, E., Sherman, E. M., & Spreen, O. (2006). A compendium of neuropsychological tests (3rd ed.). Oxford University Press.





  1. In first grade, I got scolded for refusing to shut up during an exam. As punishment, I was told I’d have to take my next set of exams in the other class. Kid me thought, “hell, I dun know dem folks” and instead happily trotted along the hallways of my school and into the high school building, where I forced 40-something-year-old adults to play hide-and-seek. I lost, unfortunately. 
  2. See Scott Alexander’s Joint Over- and Underdiagnosis for a clearer argument. 
  3. Friston’s free energy principle is a lower-level explanation of predictive coding, and is summarised by Scott Alexander briefly as this: “The brain tries to minimize its free energy with respect to the world, ie minimize the difference between its models and reality. Sometimes it does that by updating its models of the world. Other times it does that by changing the world to better match its models.” 
  4. I’m just rehashing Dr. Dodson’s argument in this article. 
  5. Which is a classic case of not being able to distinguish levels above your own

Voices in my head: Looney Tunes (with stock drum loops)

Earlier this year, I wrote a piece based on the Looney Tunes intro tune. I neglected to mention that the reason I did so was to explore a particular musical idea I haven’t heard very much: sticking out-of-sync rhythms into tonal harmonies.

I’m not talking about bebop, nor EDM. It’s…a bit difficult to explain, so lemme fumble around for a bit trying to do so.

Fig. 1: This guy almost gets it.

When people think of polyphonic music, they usually think multiple independent melodies that are sounded together à la Bach1. But polyphony as a property can be abstracted away from that. We can think of “polyphony” in rhythms (i.e., polyrhythms) and while we’re at it, polyphonic polyrhythms2, which is what I’m trying to explore here.

In brief, once you have the idea of “independent things sounded together”, you can then see what sorts of hijinks you can do by sounding together different things following different ideas. Hence, this piece:


  1. And by Baroque standards, the measure of how great your music sounds is how much cleverness you can get away with while still having those independent melodies harmonise. 
  2. I just made this up. 

Review: The Useful Idea of Truth, by Eliezer Yudkowsky

Note: This is Part 1 of X of a review of Eliezer Yudkowsky’s Highly Advanced Epistemology 101 for Beginners sequence. See Part 2 here.


This sequence is Eliezer’s second attempt at The Great Canon of Modern Rationality. He took the insight-porn-style of the original and recast it into a sleeker (and quite frankly, more useful) theory-then-exercise format. Not unlike the original, however, he begins by digging beneath the idea of truth:

The Sally-Anne False-Belief task is an experiment used to tell whether a child understands the difference between belief and reality. It goes as follows:

  1. The child sees Sally hide a marble inside a covered basket, as Anne looks on.
  2. Sally leaves the room, and Anne takes the marble out of the basket and hides it inside a lidded box.
  3. Anne leaves the room, and Sally returns.
  4. The experimenter asks the child where Sally will look for her marble.

Immediately, he cleaves his epistemology into two things: beliefs and reality. The former, a thing in mindspace. The latter, a thing in–well, the thing where all things are (for now). He then goes on to emphasise what it means for beliefs to have a truth-condition, or whether or not one’s belief corresponds to reality in the Tarskian “The sentence ‘X’ is true iff X” kind of way.

The first thing I’ve noticed here is just how much the words “belief” and “reality” sweep under the rug. If you’re a pure reductionist, this is the time to feel cheated. A grand epistemology that does not reduce to atoms or quantum fields? Pshaw. But by treating these two concepts as more or less primitives, Eliezer avoids making a substantial amount of nontrivial science a prerequisite to his worldview. If you’re trying to build an AI from scratch, or maybe just trying to describe an epistemology that can be useful to intelligences in general, then you had better start by assuming the minimum set of possible tools available to them.

To use a Yudkowskian epistemology, therefore, you need three objects:

  • things
  • a way of describing things
  • a way of verifying your description

This broth is beginning to smell delicious. What happens if we squint our eyes a bit and let our imaginations run wild?

  • events
  • probability distributions
  • Bayes’ theorem

Whew. But we’re getting ahead of ourselves.

Can we do interesting things already with what we have? Absolutely! We can compare beliefs with each other like we compare sentences. For instance:

“The marble is inside the basket.”


“The marble is inside the box.”

are saying two different things. Likewise, even though (human) beliefs are just particular patterns of neural structures and/or activity in a particular brain1, and even though you need those patterns to undergo complex neural gymnastics before something resembling interpretation can be done on them, in most (working) brains they resolve in a similar enough manner to sentences that we can assign both content2 and truth-condition to them as properties. And it turns out that what we’re really supposed to be interested in is the second one.

A fistful of monads

We’re in a bit of a bind here. We can compare beliefs with beliefs, sure. But how do we let beliefs and reality interact? Eliezer constructs a tower of beliefs by allowing belief in belief, i.e., beliefs that behave similarly to the sentence “I believe that I should have faith in X.”, and recursive descriptions thereof. Then he promptly collapses this potentially problematic edifice by asserting that

…saying ‘I believe the sky is blue, and that’s true!’ typically conveys the same information as ‘I believe the sky is blue’ or just saying ‘The sky is blue’…

But this collapse of Belief of Belief of Belief…of X to Belief of X ends there. We don’t have a way to get X out. The belief that “The sky is blue.” and whether or not the sky is actually blue are still different things, and our epistemology so far can only say something about the former. We’re stuck in a monad3.

How do we unpack X from Belief of X? By evaluating the latter’s truth-condition.

The truth of a belief, for Eliezer, is what you end up with when a chain of causes and effects in reality extends all the way down to it (which we mentioned was a pattern in the brain, and therefore very much a thing in reality). It is the result of the process:

Sun emits photon -> photon hits shoelace -> shoelace reemits photon -> photon hits retina -> cone/rod gets activated -> neural impulse travels down optical pathway -> impulse reaches visual center -> visual center activates object recognition center -> you recognise the shoelace => Belief in Shoelace

What is the problem here? Well, what happens when you’re crazy? When you fail at the very last step of that process? Eliezer isn’t stupid. He has dealt with this sort of thing before. And I think his point is that, indeed, not all brains are created equal. If your brain churns out the wrong interpretation of this causality-based information we call “truth”, you are going to have a bad time to the extent that your interpretation is wrong. And by “bad time”, I mean you will have certain beliefs that will cause you to act in a certain way which will most likely not produce the effects you believed would happen (such as expecting to find food in your fridge and finding none).

Apollonian forces

The reply I gave…was that my beliefs determine my experimental predictions, but only reality gets to determine my experimental results.

Yes, perfect timing.

It’s time to introduce more concepts.

Let’s introduce the concept of anticipation. To anticipate something is to predict or believe that that something will be the result of some interaction of things in reality. What is a result? Like allowable moves on a chessboard, a particular state of reality. Now, one can have beliefs about results4, so let’s say to anticipate a result means to believe that reality will assume that particular state at a particular point in the future5. And while we’re at it, let’s call the set of all beliefs of a particular brain a map of reality. We can then imagine how anticipation helps us iteratively improve our maps, in a process that goes something like this:

  1. Take a functional brain.
  2. Let it draw a map of reality using whatever cognitive algorithms it’s supposed to have.
  3. Have it observe its environment using its senses.
  4. Is the map true according to step (3)? If yes, then you’re done.
  5. If not, how far away is the map from reality? Which steps in the cause-and-effect chain are missing or are dubious? Adjust accordingly and go to step (2).
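If you squint, this loop is just iterated Bayesian updating, the events/probability-distributions/Bayes’-theorem triple from earlier. A minimal sketch, with an invented 80%-reliable “marble sensor” standing in for the senses:

```python
# Iteratively refine a "map" (here, a single probability) against observations.
# The setup is made up: a sensor that reports the marble's true location
# correctly 80% of the time.
def update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayesian step: fold a single observation into the map."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

belief = 0.5  # initial map: no idea whether the marble is in the basket
for observation in ["basket", "basket", "box", "basket"]:
    if observation == "basket":
        belief = update(belief, 0.8, 0.2)  # sensor says basket
    else:
        belief = update(belief, 0.2, 0.8)  # sensor says box
print(round(belief, 3))  # → 0.941
```

Step (5) of the recipe above is what the `update` call does in miniature: the gap between map and observation decides how far the probability moves, and the loop repeats until the map stops being surprised.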

Okay, so actually I cheated a bit here, in step (3). Sensory information isn’t the only sort of information you can use to establish the truth of your beliefs (or in particular, any step in its cause-and-effect chain). For instance, we cannot directly observe photons being emitted from the Sun (otherwise it wouldn’t hit our worn-out shoelace) nor can we feel each of our individual neurons firing, yet we consider the chain of causality we outlined above as plausible. Why is this?

Because truth as a rule is a generalisable property of maps.

What do I mean by this? Extracting truth from reality is amenable to inquiry. We can imagine our truth-extracting tools as a separate discipline not unlike science or natural philosophy before that or metaphysics before that or even mere campfire tales of gods and their erratic whims before that as well. By saying that truth generalises over maps, we say that certain techniques are better at extracting truth from reality than others, better at improving the accuracy of our own maps. We have already seen one such technique, namely, look out the window and see for yourself. But there are other techniques, such as find patterns and calculate their consequences, assuming they hold for all relevant instances. This latter technique is what justifies our belief that photons are emitted by the sun, because we know what the sun is, and we know that Sun-stuff (which is similar to certain stuff we can make back home) emits photons.

If a tree falls in a forest

Eliezer ends his post with a preview of the next in the sequence. Suppose you are writing papers for a class on esoteric English literature. You are tasked to identify whether a particular author named Elaine is “post-utopian”, which your professor defined in a previous lecture as an author whose writing contains elements of “colonial alienation”. How would you do it?

Fig. 1: Why post-utopia sucks: because they still use XML as a generic data representation. Credits to the illustrator of The Useful Idea of Truth.

If we use the Tarskian schema mentioned at the start of this piece, we get:

The sentence “Elaine is post-utopian.” is true iff Elaine is post-utopian.

So we unpack from the definition. We look for elements of “colonial alienation” in Elaine’s work. We sample a few literature professors and ask if they consider Elaine to be post-utopian. But the thing is, literary theory is rife with alternative interpretations and arguable definitions and a pernicious subjectivism in which everyone is entitled to believe what they want. So whither the truth of Elaine’s post-utopianism?

The danger of using words willy-nilly is that it can produce what Eliezer calls floating beliefs. These are beliefs that, while having a chain of cause-and-effect to back them up, participate in very few (if any) of the cause-and-effect chains of other beliefs6. Perhaps there was one person back in the day who knew what post-utopianism was, but now she’s dead and her students just memorised who the post-utopian authors are to pass their exams, and their students, and their students’ students, until the cause-and-effect chain settled unto your professor.

Can post-utopianism be true? Sure, but it’s sure as hell impossible now for your professor to anticipate any state of the world that would cleave the set of all authors into post-utopians and non-post-utopians.

Some of you might think: “But he can! Just imagine atoms in the universe going one way if Elaine were post-utopian, and another way if she were not.”

But under this rule7,

Then the theory of quantum mechanics would be meaningless a priori, because there’s no way to arrange atoms to make the theory of quantum mechanics true.

And when we discovered that the universe was not made of atoms, but rather quantum fields, all meaningful statements everywhere would have been revealed as false – since there’d be no atoms arranged to fulfill their truth-conditions.

Eliezer brands this as a particular instance of verificationism: the idea that we can evaluate the truth of our beliefs only through our senses (which interact with matter), and that only beliefs so verifiable are meaningful.

Before we pick up on this point of what things mean, we’ll take a detour in the next post to extirpate the sophistry that has built up over the years around the word “rational”, much as we’ve done for the word “truth” here.

  1. Which are almost but not quite specific to that brain. See grandmother cell and its alternatives.
  2. One way of carving up this notion is by distinguishing between references and their referents. I’ll refer you to the Stanford Encyclopedia of Philosophy on this point. 
  3. In programming, a subroutine is pure if it satisfies two properties: a) it produces no observable side effects when run, and b) its result depends only on its arguments at the point of evaluation. However, a program with no side effects cannot affect the real world much, so what we can do is put the impure parts in boxes which we can query with questions like “if your contents happen to have side effects, what would be the result of their evaluation?”, keeping the rest of our code pure. We call these boxes monads. See LINK
  4. Remember that beliefs live in the patterns inside your head and that you may only compare beliefs directly. 
  5. I am having trouble formulating this without invoking the concept of time. 
  6. Actually, Eliezer is more forceful here. He considers floating beliefs as entirely disconnected from your web of beliefs.
  7. This is another useful truth-extraction technology: avoid proving too much.
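
The pure/impure distinction in footnote 3 can be sketched in a few lines of code. This is only an illustrative sketch (the function names are mine, not from any library): a pure function’s result is fixed by its arguments, while an impure one either produces an observable effect or consults hidden state.

```python
import datetime

def add(a, b):
    # Pure: no side effects, and the result depends only on the arguments.
    return a + b

def log_add(a, b):
    # Impure: printing is an observable side effect,
    # even though the return value is the same as add's.
    print(f"adding {a} and {b}")
    return a + b

def seconds_since_midnight():
    # Impure: the result depends on hidden state (the system clock),
    # not on any argument, so two calls may disagree.
    now = datetime.datetime.now()
    return now.hour * 3600 + now.minute * 60 + now.second
```

Calling `add(2, 3)` a thousand times always yields 5 and changes nothing else; `seconds_since_midnight()` gives a different answer each second, which is exactly why footnote 3 wants such code quarantined from the pure parts.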

The Land after Time

Note: I’m currently publishing pieces from my vault. This was supposed to be published on 13 January 2017.


I had an enlightening talk once with The Giving Tree1 about his plan to start a VR-focused organisation in University X-1. In X-1, you have to do a couple of things before you can start your own club:

  • fill up an information sheet containing various bureaucratic necessities, including a mission/vision statement
  • convince a professor to do your laundry, or at least to attend and vouch for your events
  • write down an account of whatever amount of money you don’t have since you’re just starting out
  • list down at least 15 people whom you’d suckered into joining
  • register online and at the Securities and Exchange Commission (yes, you have to register as a nonprofit first)
  • plagiarise the constitution of the United States

I’m particularly interested in the last bit because of my work under Organisation X as one of the folks who get to decide on matters of membership. Organisations in University X-1 have the tedious but often necessary tradition of forcing potential members to memorise their constitutions. In a world where memorisation is seen as a Really Bad Thing, this practice has probably saved more organisations from cultural decay than fluoride in toothpaste has from tooth decay.


University X-1
an offshoot of University X around 80 km away