A (long) personal account of a Bad Kid

So it turns out I have ADHD.

The Diagnostic and Statistical Manual of Mental Disorders, 5th Edition, or simply DSM-5, splits ADHD along two axes: the inattentive kind (ADHD-PI) and the hyperactive-impulsive kind (ADHD-PH). If you have symptoms of both, you get the combined presentation, ADHD-C. I think everyone knows at this point what sorts of traits psychiatrists look for when they diagnose people with ADHD, so let’s move on and simply note that: a) ADHD is indeed a childhood-onset disorder and that b) even though symptoms of hyperactivity tend to disappear in adulthood, most of the internal symptoms, like inattentiveness and inability to stay on task, remain.


When I was eight years old, I got lost in a citadel.

Fig. 1: Should have a sign saying: “Dangers ahead, crossers beware!”

In this particular one (it’s a very famous landmark in Country X), there’s this bridge that goes from the park area to the place where they keep the jail cells. It was my first big field trip. I was an excitable kid. So when a shady guy bequeathed to me the sacred knowledge of where our national hero’s unguarded jail cell was, I trotted along the big walls of the fort like the carefree, idiot child I was. In doing so, I stretched what should have been a short lunch break into more than two hours, forcing a handful of my classmates’ parents to look for me as I crawled on the ground, crying and wet under the downpour. And do you know what the best part was? This wasn’t the first time they had to1.

Fig. 2: The famous guy’s jail cell which I supposedly found unguarded. In retrospect, it might have been a false memory that I did.

EDIT (2018/11/26): I visited the place again. Yep, all of it was true, down to the moss-covered walkways and ruins and confusing turns. Even the chains protecting Famous Guy’s cell from idiots like me were still there, albeit repainted gray and probably replaced many times since.


ADHD is both overdiagnosed and underdiagnosed. How come? Well, suppose it’s breast cancer we’re talking about instead and we invent a mammogram that’s 99% accurate: given 100 women with breast cancer (let’s pretend men don’t have breasts for the moment) it will ding! positive for 99 of them on average and fail to detect the remaining unlucky person. Suppose also that, a priori, 1 out of 100 of all women have breast cancer. Unfortunately, our Mammogram-3000 also happens to incorrectly flag non-cancerous women at a rate of 6.4% (that is, 64 out of 1000 women without breast cancer will also get a positive diagnosis).

If you’re a random, responsible adult female and you get a positive result, what are the odds that you actually have breast cancer?

The answer is 13.5%.
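To see where 13.5% comes from, here’s a quick sanity check in Python, plugging the numbers from the setup above into Bayes’ theorem:

```python
# Bayes' theorem check for the Mammogram-3000 numbers above.
p_cancer = 0.01         # prior: 1 in 100 women has breast cancer
sensitivity = 0.99      # P(positive | cancer)
false_positive = 0.064  # P(positive | no cancer)

# P(cancer | positive) = P(pos | cancer) * P(cancer) / P(pos)
p_positive = sensitivity * p_cancer + false_positive * (1 - p_cancer)
posterior = sensitivity * p_cancer / p_positive
print(f"{posterior:.1%}")  # → 13.5%
```

The intuition: because the disease is rare, the 6.4% of healthy women who get false positives vastly outnumber the sick women who get true positives.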

Most people who should be on Adderall aren’t and most people who are shouldn’t be. The Conners Continuous Performance Test is one of the most frequently used ADHD tests for children. In analogy to our Mammogram-3000, it has a “sensitivity of 75% and a specificity of 73%”, meaning 75% of children with ADHD are correctly diagnosed, whilst (100 − 73)% = 27% of those without ADHD are also diagnosed (Strauss et al., 2006). A 2007 meta-analysis by Polanczyk et al. puts worldwide ADHD prevalence at 5.29%. If you then imagine 10 000 children, a priori 529 of them will have ADHD and 10 000 − 529 = 9471 will not. Thus:

  • 529 * 75% ~ 397 children with a positive diagnosis and actual ADHD
  • 529 * (100 - 75)% ~ 132 children with a negative diagnosis and actual ADHD
  • 9471 * 27% ~ 2557 children with a positive diagnosis without actual ADHD
  • 9471 * (100 - 27)% ~ 6914 children WHOSE LIVES ARE FINE AND HAPPY

Fig. 3: A cake made with blood, sweat, and tears.

Hence, we always get a lot more of the hard blue, positive-result-but-without-ADHD children (overdiagnosis) and a couple of light orange, negative-result-but-with-ADHD children (underdiagnosis)2.
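The four counts above fall out of the same base-rate arithmetic; here is a short check in Python (all figures are the ones quoted in the post):

```python
# Reproducing the Conners CPT counts from the post.
n = 10_000
prevalence = 0.0529  # Polanczyk et al. (2007)
sensitivity = 0.75   # P(positive | ADHD)
specificity = 0.73   # P(negative | no ADHD)

with_adhd = round(n * prevalence)     # 529
without_adhd = n - with_adhd          # 9471

true_positives = round(with_adhd * sensitivity)           # ~397
false_negatives = with_adhd - true_positives              # ~132
false_positives = round(without_adhd * (1 - specificity)) # ~2557
true_negatives = without_adhd - false_positives           # ~6914
```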


There are geniuses even in psychology. Karl John Friston is a British neuroscientist who, as a boy, collected drawings of aquatic fauna and flora. When he was 10, he designed “a self-righting robot involving mercury levels and feedback actuators that would enable a little robot table to traverse uneven surfaces”. When he was in high school, he derived the Schrödinger wave equation from scratch, and by the time he shifted from medicine to physics he had managed to fit the entirety of undergraduate quantum mechanics on a single page.

“But why do all this?” you ask. Because of an extreme obsession with parsimony. He collected drawings insofar as they would help him explain how the shapes of living things come to be. He designed a robot in a naive foray into self-sustaining control systems. He tried to pare down undergraduate physics to its essential core. And now, his obsessive drive to integrate and simplify has given us mortals a supposed explanation of thinking, perceiving, acting, and maintaining one’s body.

Fig. 4: This could pass for a page torn from the Voynich Manuscript.

Predictive coding is NOT Friston’s principle3. Predictive coding is a theory claiming that the brain does not merely respond to inputs from the senses but also tries to predict the inputs it will get (as a sort of efficiency-improving mechanism). It turns out that predictive coding offers us a partial answer as to why ADHD and Autism Spectrum Disorder (ASD) exist. A 2015 brain imaging study by Gonzalez-Gadea et al. found that:

“…children with ASD showed reduced superior frontal cortex (FC) responses to unexpected events but increased dorsolateral prefrontal cortex (PFC) activation to expected events. In contrast, children with ADHD exhibited reduced cortical responses in superior FC to expected events but strong PFC activation to unexpected events.”

In over-simplified terms, this suggests that the brains of people with ASD systematically overpredict from prior experience in unfamiliar situations (making them uncomfortable with changes in their routine) while those with ADHD systematically underpredict from them (making them susceptible to distraction).


This post has gotten too long so I’ll end with the more personal and more emotional aspects of ADHD that people don’t really talk about much4. The first is that we have an interest-based brain.

An interest-based brain stands in contrast to a normal person’s priority-based brain. We do things based on what is interesting, not what needs to be prioritised. And the trouble with that is we don’t really have much of a choice in what we’ll find interesting. None. Nada. You’re probably thinking, “why can’t you find ways to make your work interesting?” And that question is the ADHD version of asking a depressed person, “why not find ways to be happy?” Think about it: if we could, then WE WOULDN’T HAVE TO DRINK PITCHERS OF COFFEE TO FINISH PAPERS AND PROBLEM SETS, AND PSYCH CLINICS WOULD JUST CLOSE DOWN AND EVERYONE WOULD BE HAPPY FOREVER AND EVER. Either that, or you condemn us to moral deficiency.

The second one is emotional hyperarousal. People like me have a permanent x4 multiplier on their thoughts and emotions. Tell me “you reek!” and, like following one hyperlink after another, I’ll hear that as “Crap, is that why you sat opposite me the other day?”, then as “Crap, is that why no one’s been inviting me recently?”, and then as “Crap, have people just been tolerating my presence since high school?” in two seconds flat. But as absurd as our emotions can get, so too is how quickly they pass. This is the cause of all our sleepless nights (one thought leading to four and so on is how I count sheep), our impulsive flings, our reckless abandon (for some, particularly when it comes to drinking).

There is a particular emotion that holds a special place in our hearts, an emotion so intense that it sometimes forces me to take a walk around campus even at 3 AM. Rejection Sensitivity Dysphoria (RSD), the final prong of our trident, is very pronounced in people with ADHD. As many as 98-99% of adolescents with ADHD claim to have it, and it sucks that even therapy can’t help with it. RSD is an extreme sensitivity to criticism, teasing, and the perception of failure (for me, the last one dominates).

Note: Scott has debunked RSD as a sine qua non symptom of ADHD. I will never cite something without citations again.

I’ll be frank. All my life, all the adults around me have been telling me that I can achieve so much, that I can be whatever I want, that all those aptitude tests mean something, that I can have perfect grades, that I can become a billionaire, the next Newton5, etc. IF ONLY I CAN GET MY SHIT TOGETHER. Well, I tried to conduct my life according to your visions in one way or another and now I’m here, two years too long in college and barely hanging onto a company I started with good friends. I’m tired of this decadence of “potential”. I can’t reach your measuring sticks. And by god, I now know why.


Right now, my psych has prescribed me 40 mg of Strattera a day (generic name: atomoxetine), which would hopefully take me from either-zero-or-eight-hours-of-focus mode to a much saner attention profile. This would finally enable me to follow schedules and stick to deadlines and perhaps sit down and actually do homework for once. The trouble is, it costs $3.84 PER FRICKIN’ PILL in Country X and I don’t know from which hand of Baal I’m going to get that kind of money. Does anyone else know where I can get a cheaper variant? I know it can go as low as $0.77 (Php 41.10) per pill in the US, so maybe it is possible to buy it in bulk there? Feel free to ask me questions (or give me advice) even if we haven’t talked for 77 years or if you accidentally put gum in my hair in third grade. Don’t worry,

I promise I won’t be a Bad Kid anymore.


  • Faraone, S. V., Biederman, J., & Mick, E. (2006). The age-dependent decline of attention deficit hyperactivity disorder: a meta-analysis of follow-up studies. Psychological medicine, 36(2), 159-165.
  • Friston, K. (2018). Am I autistic? An intellectual autobiography. ALIUS Bulletin, 2, 45-52.
  • Gonzalez-Gadea ML, Chennu S, Bekinschtein TA, et al. Predictive coding in autism spectrum disorder and attention deficit hyperactivity disorder. Journal of Neurophysiology. 2015;114(5):2625-2636. doi:10.1152/jn.00543.2015.
  • Polanczyk, Guilherme & De Lima, Mauricio & Horta, Bernardo & Biederman, Joseph & Augusto Rohde, Luis. (2007). The Worldwide Prevalence of ADHD: A Systematic Review and Metaregression Analysis. The American journal of psychiatry. 164. 942-8. 10.1176/appi.ajp.164.6.942.
  • Strauss, E., Sherman, E. M., & Spreen, O. (2006). A compendium of neuropsychological tests. Print.


Country X
where I live

  1. In first grade, I got scolded for refusing to shut up during an exam. As punishment, I was told I’d have to take my next set of exams in the other class. Kid me thought, “hell, I dun know dem folks” and instead of sitting my exams I happily trotted along the hallways of my school and into the high school building, where I forced 40-something-year-old adults to play hide-and-seek to bring me back. I lost, unfortunately.
  2. See Scott Alexander’s Joint Over- and Underdiagnosis for a clearer argument.
  3. Friston’s free energy principle is a lower-level explanation of predictive coding, and is summarised by Scott Alexander briefly as this: “The brain tries to minimize its free energy with respect to the world, ie minimize the difference between its models and reality. Sometimes it does that by updating its models of the world. Other times it does that by changing the world to better match its models.”
  4. I’m just rehashing Dr. Dodson’s argument in this article.
  5. Which is a classic case of not being able to distinguish levels above your own.

Voices in my head: Looney Tunes (with stock drum loops)

Earlier this year, I wrote a piece based on the Looney Tunes intro tune. I neglected to mention that the reason I did so was to explore a particular musical idea I haven’t heard very much: sticking out-of-sync rhythms into tonal harmonies.

I’m not talking about bebop, nor EDM. It’s…a bit difficult to explain, so lemme fumble around for a bit trying to do so.

Fig. 1: This guy almost gets it.

When people think of polyphonic music, they usually think multiple independent melodies that are sounded together à la Bach1. But polyphony as a property can be abstracted away from that. We can think of “polyphony” in rhythms (i.e., polyrhythms) and while we’re at it, polyphonic polyrhythms2, which is what I’m trying to explore here.

In brief, once you have the idea of “independent things sounded together”, you can then see what sorts of hijinks you can do by sounding together different things following different ideas. Hence, this piece:


  1. And by Baroque standards, the degree to which your music sounds great is how much cleverness you can get away with while still having those independent melodies harmonise. 
  2. I just made this up. 

Review: The Useful Idea of Truth, by Eliezer Yudkowsky

Note: This is Part 1 of X of a review of Eliezer Yudkowsky’s Highly Advanced Epistemology 101 for Beginners sequence. See Part 2 here.


This sequence is Eliezer’s second attempt at The Great Canon of Modern Rationality. He took the insight-porn-style of the original and recast it into a sleeker (and quite frankly, more useful) theory-then-exercise format. Not unlike the original, however, he begins by digging beneath the idea of truth:

The Sally-Anne False-Belief task is an experiment used to tell whether a child understands the difference between belief and reality. It goes as follows:

  1. The child sees Sally hide a marble inside a covered basket, as Anne looks on.
  2. Sally leaves the room, and Anne takes the marble out of the basket and hides it inside a lidded box.
  3. Anne leaves the room, and Sally returns.
  4. The experimenter asks the child where Sally will look for her marble.

Immediately, he cleaves his epistemology into two things: beliefs and reality. The former, a thing in mindspace. The latter, a thing in, well, the thing where all things are (for now). He then goes on to emphasise what it means for beliefs to have a truth-condition, i.e., whether or not one’s belief corresponds to reality in the Tarskian “The sentence ‘X’ is true iff X” kind of way.

The first thing I’ve noticed here is just how much the words “belief” and “reality” sweep under the rug. If you’re a pure reductionist, this is the time to feel cheated. A grand epistemology that does not reduce to atoms or quantum fields? Pshaw. But by treating these two concepts as more or less primitives, Eliezer avoids making a substantial amount of nontrivial science a prerequisite to his worldview. If you’re trying to build an AI from scratch, or maybe just trying to describe an epistemology that can be useful to intelligences in general, then you had better start by assuming the minimum set of possible tools available to them.

To use a Yudkowskian epistemology, therefore, you need three objects:

  • things
  • a way of describing things
  • a way of verifying your description

This broth is beginning to smell delicious. What happens if we squint our eyes a bit and let our imaginations run wild?

  • events
  • probability distributions
  • Bayes’ theorem

Whew. But we’re getting ahead of ourselves.

Can we do interesting things already with what we have? Absolutely! We can compare beliefs with each other like we compare sentences. For instance:

“The marble is inside the basket.”


“The marble is inside the box.”

are saying two different things. Likewise, even though (human) beliefs are just particular patterns of neural structures and/or activity in a particular brain1, and even though you need those patterns to undergo complex neural gymnastics before something resembling interpretation can be done on them, in most (working) brains they resolve in a similar enough manner to sentences that we can assign both content2 and truth-condition to them as properties. And it turns out that what we’re really supposed to be interested in is the second one.

A fistful of monads

We’re in a bit of a bind here. We can compare beliefs with beliefs, sure. But how do we let beliefs and reality interact? Eliezer constructs a tower of beliefs by allowing belief in belief, i.e., beliefs that behave similarly to the sentence “I believe that I should have faith in X.”, and recursive descriptions thereof. Then he promptly collapses this potentially problematic edifice by asserting that

…saying ‘I believe the sky is blue, and that’s true!’ typically conveys the same information as ‘I believe the sky is blue’ or just saying ‘The sky is blue’…

But this collapse of Belief of Belief of Belief…of X to Belief of X ends there. We don’t have a way to get X out. The belief that “The sky is blue.” and whether or not the sky is actually blue are still different things, and our epistemology so far can only say something about the former. We’re stuck in a monad3.

How do we unpack X from Belief of X? By evaluating the latter’s truth-condition.

The truth of a belief, for Eliezer, is what you end up with when a chain of causes and effects in reality extends all the way down to it (which we mentioned was a pattern in the brain, and therefore very much a thing in reality). It is the result of the process:

Sun emits photon -> photon hits shoelace -> shoelace reemits photon -> photon hits retina -> cone/rod gets activated -> neural impulse travels down optical pathway -> impulse reaches visual center -> visual center activates object recognition center -> you recognise the shoelace => Belief in Shoelace

What is the problem here? Well, what happens when you’re crazy? When you fail at the very last step of that process? Eliezer isn’t stupid. He has dealt with this sort of thing before. And I think his point is that, indeed, not all brains are created equal. If your brain churns out the wrong interpretation of this causality-based information we call “truth”, you are going to have a bad time to the extent that your interpretation is wrong. And by “bad time”, I mean you will have certain beliefs that will cause you to act in a certain way which will most likely not produce the effects you believed would happen (such as expecting to find food in your fridge and finding none).

Apollonian forces

The reply I gave…was that my beliefs determine my experimental predictions, but only reality gets to determine my experimental results.

Yes, perfect timing.

It’s time to introduce more concepts.

Let’s introduce the concept of anticipation. To anticipate something is to predict or believe that that something will be the result of some interaction of things in reality. What is a result? Like allowable moves on a chessboard, a particular state of reality. Now, one can have beliefs about results4, so let’s say to anticipate a result means to believe that reality will assume that particular state at a particular point in the future5. And while we’re at it, let’s call the set of all beliefs of a particular brain a map of reality. We can then imagine how anticipation helps us iteratively improve our maps, in a process that goes something like this:

  1. Take a functional brain.
  2. Let it draw a map of reality using whatever cognitive algorithms it’s supposed to have.
  3. Have it observe its environment using its senses.
  4. Is the map true according to step (3)? If yes, then you’re done.
  5. If not, how far away is the map from reality? Which steps in the cause-and-effect chain are missing or are dubious? Adjust accordingly and go to step (2).
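For the programmatically inclined, the five steps above can be caricatured in a few lines of Python. Everything here is a stand-in I made up for illustration: “reality” and the brain’s map are toy dicts of propositions, and the senses are assumed perfectly reliable.

```python
# A toy rendering of the map-updating loop. Not a model of actual cognition.
reality = {"sky is blue": True, "marble in basket": False}

def draw_map(priors):
    # Step 2: the brain draws a map from whatever priors it has.
    return dict(priors)

def observe(world):
    # Step 3: the senses report on the world (perfectly, in this toy).
    return dict(world)

# Step 1-2: a functional brain starts with a map that is wrong about the marble.
belief_map = draw_map({"sky is blue": True, "marble in basket": True})

while belief_map != observe(reality):        # step 4: is the map true?
    observation = observe(reality)
    # Step 5: find the dubious entries, adjust, and loop back.
    for claim in belief_map:
        if belief_map[claim] != observation[claim]:
            belief_map[claim] = observation[claim]

assert belief_map == reality  # the map now matches the territory
```

With perfectly reliable senses the loop converges in one pass; the interesting cases, as the next paragraph notes, are the ones where observation alone doesn’t settle the matter.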

Okay, so actually I cheated a bit here, in step (3). Sensory information isn’t the only sort of information you can use to establish the truth of your beliefs (or in particular, any step in its cause-and-effect chain). For instance, we cannot directly observe photons being emitted from the Sun (otherwise it wouldn’t hit our worn-out shoelace) nor can we feel each of our individual neurons firing, yet we consider the chain of causality we outlined above as plausible. Why is this?

Because truth as a rule is a generalisable property of maps.

What do I mean by this? Extracting truth from reality is amenable to inquiry. We can imagine our truth-extracting tools as a separate discipline, not unlike science, or natural philosophy before that, or metaphysics before that, or even mere campfire tales of gods and their erratic whims before those. By saying that truth generalises over maps, we say that certain techniques are better at extracting truth from reality than others, better at improving the accuracy of our own maps. We have already seen one such technique, namely, look out the window and see for yourself. But there are other techniques, such as find patterns and calculate their consequences, assuming they hold for all relevant instances. This latter technique is what justifies our belief that photons are emitted by the sun, because we know what the sun is, and we know that Sun-stuff (which is similar to certain stuff we can make back home) emits photons.

If a tree falls in a forest

Eliezer ends his post with a preview of the next in the sequence. Suppose you are writing papers for a class on esoteric English literature. You are tasked to identify whether a particular author named Elaine is “post-utopian”, which your professor defined in a previous lecture as an author whose writing contains elements of “colonial alienation”. How would you do it?

Fig. 1: Why post-utopia sucks: because they still use XML as a generic data representation. Credits to the illustrator of The Useful Idea of Truth.

If we use the Tarskian schema mentioned at the start of this piece, we get:

The sentence “Elaine is post-utopian.” is true iff Elaine is post-utopian.

So we unpack from the definition. We look for elements of “colonial alienation” in Elaine’s work. We sample a few literature professors and ask if they consider Elaine to be post-utopian. But the thing is, literary theory is rife with alternative interpretations, arguable definitions, and a pernicious subjectivism under which everyone is entitled to believe what they want. So whither the truth of Elaine’s post-utopianism?

The danger of using words willy-nilly is that it can produce what Eliezer calls floating beliefs. These are beliefs that, while having a chain of cause-and-effect to back them up, participate in very few (if any) of the cause-and-effect chains of other beliefs6. Perhaps there was one person back in the day who knew what post-utopianism was, but now she’s dead and her students just memorised who the post-utopian authors are to pass their exams, and their students, and their students’ students, until the cause-and-effect chain settled onto your professor.

Can post-utopianism be true? Sure, but it sure as hell is impossible now for your professor to anticipate any state of the world that can cleave the set of all authors into post-utopians and not-post-utopians.

Some of you might think: “But he can! Just imagine atoms in the universe going one way if Elaine were post-utopian, and another way if she were not.”

But under this rule7,

Then the theory of quantum mechanics would be meaningless a priori, because there’s no way to arrange atoms to make the theory of quantum mechanics true.

And when we discovered that the universe was not made of atoms, but rather quantum fields, all meaningful statements everywhere would have been revealed as false – since there’d be no atoms arranged to fulfill their truth-conditions.

Eliezer brands this as a particular instance of verificationism, the idea that we can only evaluate the truth of our beliefs using our senses (which interact with matter), and only those verifiable as such are meaningful.

Before we pick up on this point of what things mean, we’ll take a detour in the next post to extirpate the sophistry surrounding the word “rational” that has built up over the years, in a way similar to what we’ve done for the word “truth” here.

  1. Which are almost but not quite specific to that brain. See grandmother cell and alternatives. 
  2. One way of carving up this notion is by distinguishing between references and their referents. I’ll refer you to the Stanford Encyclopedia of Philosophy on this point. 
  3. In programming, subroutines can be identified as pure if they satisfy two properties: a) they don’t produce observable side effects when run, and b) their result depends only on their arguments at the point of evaluation. However, a program with no side effects cannot really affect the real world much, so what we can do is put them in boxes which we can query with questions like “If your contents happen to have side effects, what would be the result of their evaluation?” to avoid impurity in the rest of our code. We call these boxes monads. See LINK
  4. Remember that beliefs live in the patterns inside your head and that you may only compare beliefs directly. 
  5. I am having trouble formulating this without invoking the concept of time. 
  6. Actually, Eliezer is more forceful here. He considers floating beliefs as entirely disconnected from your web of beliefs. 
  7. This is another useful truth-extraction technology: avoid proving too much
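Since footnote 3 is doing a lot of work, here is a minimal sketch of its “box” idea in Python. This is a toy IO-like monad, not any particular library’s API; the names `IO`, `bind`, `run`, and `pure` are invented for illustration.

```python
# A toy IO monad: the box holds a *description* of a computation,
# and nothing actually happens until we explicitly ask it to run.
class IO:
    def __init__(self, thunk):
        self.thunk = thunk  # a zero-argument function: the deferred computation

    def bind(self, f):
        # Chain a function on the eventual result, still without running anything.
        # f takes a plain value and must return another IO box.
        return IO(lambda: f(self.thunk()).thunk())

    def run(self):
        # The single boundary where evaluation (and any impurity) happens.
        return self.thunk()

def pure(x):
    # Wrap an ordinary value in a box.
    return IO(lambda: x)

program = pure(2).bind(lambda x: pure(x * 3)).bind(lambda x: pure(x + 1))
# `program` is just a value we can pass around; nothing has been evaluated yet.
assert program.run() == 7
```

The rest of the code only ever manipulates boxes via `bind`, so impurity stays quarantined behind the one `run` call, which is the point the footnote is gesturing at.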

The Land after Time

Note: I’m currently publishing pieces from my vault. This was supposed to be published on 2017 January 13.


I had an enlightening talk once with The Giving Tree1 about his plan to start a VR-focused organisation in University X-1. In X-1, you have to do a couple of things before you can start your own club:

  • fill up an information sheet containing various bureaucratic necessities, including a mission/vision statement
  • convince a professor to do your laundry, or at least to attend and vouch for your events
  • write down an account of whatever amount of money you don’t have since you’re just starting out
  • list down at least 15 people whom you’d suckered into joining
  • register online and at the Securities and Exchange Commission (yes, you have to register as a nonprofit first)
  • plagiarise the constitution of the United States

I’m particularly interested in the last bit because of my work under Organisation X as one of the folks who get to decide on matters of membership. Organisations in X-1 have the tedious but often necessary tradition of forcing potential members to memorise their constitutions. In a world where memorisation is seen as a Really Bad Thing, this practice has probably saved more organisations from cultural decay than fluoride in toothpaste has from tooth decay.


University X-1
an offshoot of University X around 80 km away

Dimes, demons and desperation

Note: I’m currently publishing pieces from my vault. This was supposed to be published on 2016 July 31 when I was on the fence about dropping out. In the end, I didn’t.


Buzzword sandwich
1) n. a carbohydrate-rich environment composed of “virtual reality”, “startups”, and a bunch of empty space in between
2) n. where I currently find myself


It’s been three days since I left academia for money and I’ve already seen a lot of improvement in my skin tone. Doctors hate me now! Kidding aside, it’s my first time in this dog-eat-dog industry and my team and I are definitely not past the Great Filter for Startups yet1. I have been a reader of HN for more than half a decade now, so I’m kind of cheating here, but as with all fields, the gulf between theory and practice can go on for miles.

The Highs

Everything I do in the office feels like productive work. From distributing shares (in a three-person team) to cleaning the all-powerful whiteboard, every little thing hides a rush rarely seen in timid environments like, say, in front of a chalkboard. Everything is new and learning feels like doing something, but it is difficult to ignore the gnawing feeling that it’s all a temporary farce pulling me away from the Important Stuff, like actually creating value. Still, it ain’t so bad to enjoy oneself every once in a while.

Some observations:

  • When what you do is measured in $$$, you tend to feel the loss of your time (e.g., coding something you did not know had already been done) more painfully.
  • It is easier to be honest with yourself about what you can and cannot do. Yes, this means you have to face the latter more often but it also makes it easier to extend the former. I.e., it’s a Double-edged Sword.
  • You will have ideas that sound good. Some of them you will be able to test and some will even pan out. But you will only be able to give birth to a tiny fraction of them. Thinking is much, much faster than making.

Confessions of a green coder

Prior to this, I’ve never been part of a large coding project. And when you’re as bad as me upon entering this rat race, you are going to want to catch up pretty soon. If you’re like me, the best way to do that is to try and jump a couple of levels up the Dreyfus model2: from novice to competent in one fell swoop. How do you do it?

[insert “unity best coding practices” Google search GIF here]

The problem with this approach is that it’s a textbook example of cargo-culting. Mr. Feynman really did us a service by giving this thing a name. It’s a game of pretend, of blindly copying the advice of many years of software engineering experience distilled into a couple of lines on Stack Overflow. I don’t even know at this point what singletons can help me with, but it sounds like a good idea so let’s do it, that kind of thinking.

When you’re googling “best practices” for everything, you tend to climb up the abstraction elevator3 without knowing which floor you really want to go to. And this freezes you because there are so many possible abstractions that you can decorate your code with and you don’t really have the intuition to decide which one to use, let alone if you ought to use an abstraction at all. The net effect of this is that you go farther and farther away from what’s important, i.e., creating something that works.

That’s Lesson #1. The razor of all philosophical razors. The way of the void. If you learn anything from shoveling the sludge of software engineering advice, let this be it.


The hackathon is probably the most overdone team-building activity for tech companies out there. So it’s a no-brainer for us4 to join one.

The first few hours of a hackathon will probably feel the most productive. My team and I set up this sticky-note-like system very roughly inspired by Jared’s Scrum implementation in Silicon Valley5. We had six boards and a flow that went like this:

1) Notes and reminders, mostly used for sharing IDs and the idea description
2) Milestones, which in retrospect was full of fluff
3) Things to implement, a feature request board
4) Bugs to squash, self-explanatory
5) Tasks, which had features or bugs that were being currently worked on
6) Completed, which had to be beautiful by the end

(If you were wondering, no, we didn’t bring six physical boards with us. We used Trello.)

The way we allocated tasks was to assign colors to everyone, make them colorise the notes under Things to implement or Bugs to squash using their assigned color, and move said tasks under Tasks. Hence the quip about beauty. You might be wondering though, why color? Color is evil. It’s not C-f or C-s searchable and it reminds us of those people who used six to n highlighters when taking notes back in uni. One of the first principles you must learn in code sprints like this is:

Always pick the simplest thing that works.

Hold your applause. This is a standard, internet-tested, Good Thing slogan so it’s hard to actually pause and think about it before moving on. All general advice is mostly useless and this one’s no exception. So let’s make it more specific. What do we mean by “simple”?

People brandish “simple” as if it’s immediately obvious to everyone what the simple thing to do is. Most of the time, you describe something as simple only after the fact, and only if you understand enough of the underlying principles for it to appear simple to you. In other words, you have to be ready for simple. So suppose you did your homework and know all the relevant bits. Here’s an expanded view of how we decided the color issue:

We have a set of tasks. We need a quick way of showing who’s responsible for what. We could go with colors or labels. For sets of a few well-defined elements, it is quicker and easier to distinguish by color than by label. Hence, color it is.

Decisions like these happen at the speed of thought. Sometimes you think of two or three approaches, sometimes none. But the process is clear: your purpose should tell you what’s simple. And in a hackathon, your purpose is to quickly make something that works. So trust your gut and choose — technical debt be damned!

Two bottles of wine

It’s a sad world we live in that people cheat during hackathons6. It’s even sadder that my team and I did too. Why?

  • We brought a VR headset, thereby cashing in on a trend.
  • We made some of our models beforehand.

I might be naive, but the second one is particularly insidious. Hackathons are where you flex your fast-twitch mental muscles, prototyping as fast as you can. So I was stuck between arguing down our only artist, who insisted on making his models beforehand, and accepting that the primary reason we joined was to improve our teamwork. I chose the latter. Sorry.

It was a classic case of “Everyone is doing it, so why shouldn’t we?” Now compare “No one else is doing it but us.” and you’d be disappointed at the level of doublethink people are willing to sustain to stay optimistic in a startup.

I have to share some of the guilt here, however. I dabble in the Dark Arts. If I believe that a point must be made, that it is of absolute importance that my team takes to heart what I am saying, I am willing, despite all considerations of integrity, to use fallacious arguments to get there. My bullshit reason, so that I don’t think about it too much? I don’t actively promote rationality in my team because being only-sorta-rational is at odds with being in a startup. There’s a region in the range of rational ability where you’d balk at the level of risk you’re taking if you’re in one. So you have to embrace it fully or it will be your death.

There, I said it. Sorry.

Note: After some thought, I realised it isn’t right to feel this way. Cheating put me in a really bad mood, I caught a really nasty case of cognitive dissonance and it bled out here. In the end, I think it didn’t really matter that I didn’t stick to my guns (partly because we only came in third place anyway) but I still kind of wish I didn’t make as big a moral concession.



Dark Arts
exploiting the wonky ways in which the human mind works for one’s benefit
Double-edged Sword
the phenomenon where seeming disadvantages become advantages in a different situation, and vice versa
Hacker News

  1. XX% of startups fail in their first year, according to YY. 
  2. A discredited model of skill acquisition, used here for illustrative purposes. See Wikipedia
  3. I like Joel Spolsky’s name for this thing: Architecture Astronomy
  4. By the way, I co-founded a VR company with Chromo and WIB. See this post to know more about them. 
  5. Silicon Valley is a TV series about, well, the Silicon Valley lifestyle. From what I hear, there’s a lot of truth in it. 
  6. Probably because of the prizes.