Voices in my head: Looney Tunes (with stock drum loops)

Earlier this year, I wrote a piece based on the Looney Tunes intro tune. I neglected to mention that the reason I did so was to explore a particular musical idea I haven’t heard very much: sticking out-of-sync rhythms into tonal harmonies.

I’m not talking about bebop, nor EDM. It’s…a bit difficult to explain, so lemme fumble around for a bit trying to do so.

Fig. 1: This guy almost gets it.

When people think of polyphonic music, they usually think of multiple independent melodies sounded together à la Bach[1]. But polyphony as a property can be abstracted away from that. We can think of “polyphony” in rhythms (i.e., polyrhythms) and, while we’re at it, polyphonic polyrhythms[2], which is what I’m trying to explore here.

In brief, once you have the idea of “independent things sounded together”, you can then see what sorts of hijinks you can do by sounding together different things following different ideas. Hence, this piece:

 


  1. And by Baroque standards, the degree to which your music sounds great is measured by how clever you can get while still having those independent melodies harmonise. 
  2. I just made this up. 

Review: The Useful Idea of Truth, by Eliezer Yudkowsky

Note: This is Part 1 of X of a review of Eliezer Yudkowsky’s Highly Advanced Epistemology 101 for Beginners sequence. See Part 2 here.


 

This sequence is Eliezer’s second attempt at The Great Canon of Modern Rationality. He took the insight-porn style of the original and recast it into a sleeker (and, quite frankly, more useful) theory-then-exercise format. Not unlike the original, however, he begins by digging beneath the idea of truth:

The Sally-Anne False-Belief task is an experiment used to tell whether a child understands the difference between belief and reality. It goes as follows:

  1. The child sees Sally hide a marble inside a covered basket, as Anne looks on.
  2. Sally leaves the room, and Anne takes the marble out of the basket and hides it inside a lidded box.
  3. Anne leaves the room, and Sally returns.
  4. The experimenter asks the child where Sally will look for her marble.

Immediately, he cleaves his epistemology into two things: beliefs and reality. The former, a thing in mindspace. The latter, a thing in, well, the thing where all things are (for now). He then goes on to emphasise what it means for beliefs to have a truth-condition: whether or not one’s belief corresponds to reality in the Tarskian “The sentence ‘X’ is true iff X” kind of way.

The first thing I’ve noticed here is just how much the words “belief” and “reality” sweep under the rug. If you’re a pure reductionist, this is the time to feel cheated. A grand epistemology that does not reduce to atoms or quantum fields? Pshaw. But by treating these two concepts as more or less primitives, Eliezer avoids making a substantial amount of nontrivial science a prerequisite to his worldview. If you’re trying to build an AI from scratch, or maybe just trying to describe an epistemology that can be useful to intelligences in general, then you had better start by assuming the minimum set of possible tools available to them.

To use a Yudkowskian epistemology, therefore, you need three objects:

  • things
  • a way of describing things
  • a way of verifying your description

This broth is beginning to smell delicious. What happens if we squint our eyes a bit and let our imaginations run wild?

  • events
  • probability distributions
  • Bayes’ theorem

Whew. But we’re getting ahead of ourselves.
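
If you want to peek ahead anyway, here is a minimal sketch of how the three slots line up, using the marble story from earlier. This is entirely my own illustration; all the names and numbers are invented:

```python
# The three objects, recast probabilistically. Toy numbers throughout.

# Things: the possible states of reality (the events).
STATES = ["marble in basket", "marble in box"]

# A way of describing things: a probability distribution.
# Sally's map of reality, before she looks anywhere.
prior = {"marble in basket": 0.9, "marble in box": 0.1}

# A way of verifying your description: Bayes' theorem.
# posterior(s) = likelihood(s) * prior(s) / normalising constant
def bayes_update(prior, likelihood):
    unnormalised = {s: likelihood[s] * p for s, p in prior.items()}
    total = sum(unnormalised.values())
    return {s: p / total for s, p in unnormalised.items()}

# Sally lifts the basket's cover and sees no marble: an observation
# that is very unlikely if the marble were actually in the basket.
likelihood = {"marble in basket": 0.01, "marble in box": 0.99}

print(bayes_update(prior, likelihood))
# {'marble in basket': 0.083..., 'marble in box': 0.916...}
```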

Can we do interesting things already with what we have? Absolutely! We can compare beliefs with each other like we compare sentences. For instance:

“The marble is inside the basket.”

and

“The marble is inside the box.”

are saying two different things. Likewise, even though (human) beliefs are just particular patterns of neural structures and/or activity in a particular brain[1], and even though you need those patterns to undergo complex neural gymnastics before something resembling interpretation can be done on them, in most (working) brains they resolve in a similar enough manner to sentences that we can assign both content[2] and truth-condition to them as properties. And it turns out that what we’re really supposed to be interested in is the second one.

A fistful of monads

We’re in a bit of a bind here. We can compare beliefs with beliefs, sure. But how do we let beliefs and reality interact? Eliezer constructs a tower of beliefs by allowing belief in belief or beliefs that behave similar to the sentence “I believe that I should have faith in X.” and recursive descriptions thereof. Then he promptly collapses this potentially problematic edifice by asserting that

…saying ‘I believe the sky is blue, and that’s true!’ typically conveys the same information as ‘I believe the sky is blue’ or just saying ‘The sky is blue’…

But this collapse of Belief of Belief of Belief…of X to Belief of X ends there. We don’t have a way to get X out. The belief that “The sky is blue.” and whether or not the sky is actually blue are still different things, and our epistemology so far can only say something about the former. We’re stuck in a monad[3].

How do we unpack X from Belief of X? By evaluating the latter’s truth-condition.
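
To make the metaphor concrete, here is a toy Belief box in Python, a minimal sketch of footnote 3’s idea. Everything in it (the class, its methods, the set-of-true-propositions stand-in for reality) is my own illustration, not anything from the sequence: you can chain beliefs into beliefs all day without leaving the box, and the only exit is an evaluation step that consults reality.

```python
# A toy Belief monad. Entirely illustrative: the names and the
# set-of-propositions "reality" are my own assumptions.

class Belief:
    """Wraps a description of reality without asserting it."""

    def __init__(self, proposition):
        self.proposition = proposition

    def bind(self, f):
        """Chain a belief-producing function. The result is still a
        Belief: 'I believe that I believe X' collapses to
        'I believe X', but X itself never escapes the box."""
        return f(self.proposition)

    def evaluate(self, reality):
        """The only exit from the box: check the truth-condition
        against reality (here, just a set of true propositions)."""
        return self.proposition in reality

# Stacking beliefs on beliefs gets us no closer to the sky itself...
belief = Belief("the sky is blue").bind(lambda p: Belief(p))

# ...only evaluation against reality unpacks X from Belief of X.
reality = {"the sky is blue"}
print(belief.evaluate(reality))  # True
```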

The truth of a belief, for Eliezer, is what you end up with when a chain of causes and effects in reality extends all the way down to it (which we mentioned was a pattern in the brain, and therefore very much a thing in reality). It is the result of the process:

Sun emits photon -> photon hits shoelace -> shoelace reemits photon -> photon hits retina -> cone/rod gets activated -> neural impulse travels down optical pathway -> impulse reaches visual center -> visual center activates object recognition center -> you recognise the shoelace => Belief in Shoelace

What is the problem here? Well, what happens when you’re crazy? When you fail at the very last step of that process? Eliezer isn’t stupid. He has dealt with this sort of thing before. And I think his point is that, indeed, not all brains are created equal. If your brain churns out the wrong interpretation of this causality-based information we call “truth”, you are going to have a bad time to the extent that your interpretation is wrong. And by “bad time”, I mean you will have certain beliefs that will cause you to act in a certain way which will most likely not produce the effects you believed would happen (such as expecting to find food in your fridge and finding none).

Apollonian forces

The reply I gave…was that my beliefs determine my experimental predictions, but only reality gets to determine my experimental results.

Yes, perfect timing.

It’s time to introduce more concepts.

Let’s introduce the concept of anticipation. To anticipate something is to predict or believe that that something will be the result of some interaction of things in reality. What is a result? Like allowable moves on a chessboard, a particular state of reality. Now, one can have beliefs about results4, so let’s say to anticipate a result means to believe that reality will assume that particular state at a particular point in the future5. And while we’re at it, let’s call the set of all beliefs of a particular brain a map of reality. We can then imagine how anticipation helps us iteratively improve our maps, in a process that goes something like this:

  1. Take a functional brain.
  2. Let it draw a map of reality using whatever cognitive algorithms it’s supposed to have.
  3. Have it observe its environment using its senses.
  4. Is the map true according to step (3)? If yes, then you’re done.
  5. If not, how far away is the map from reality? Which steps in the cause-and-effect chain are missing or are dubious? Adjust accordingly and go to step (2).
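
As a code sketch, the loop might look like the following. The representation is all mine and deliberately crude: a map is a dict of believed propositions, and the senses of step (3) are an oracle that simply reports what reality holds, which, as the next paragraph admits, is cheating:

```python
# A toy rendering of the map-improvement loop. The dict-of-propositions
# representation and the oracle-like senses are my own simplifications.

def senses(reality, proposition):
    """Step 3: observe the environment. Here reality is just a dict
    mapping propositions to whether they actually hold."""
    return reality.get(proposition, False)

def improve_map(reality, beliefs, max_rounds=10):
    """Steps 2-5: redraw the map until it agrees with observation."""
    for _ in range(max_rounds):
        # Step 4: is the map true according to step 3?
        wrong = [p for p, held in beliefs.items()
                 if held != senses(reality, p)]
        if not wrong:
            return beliefs  # map matches territory; we're done
        # Step 5: adjust the dubious links and go around again.
        for p in wrong:
            beliefs[p] = senses(reality, p)
    return beliefs

reality = {"marble in basket": False, "marble in box": True}
sally = {"marble in basket": True, "marble in box": False}
print(improve_map(reality, sally))
# {'marble in basket': False, 'marble in box': True}
```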

Okay, so actually I cheated a bit here, in step (3). Sensory information isn’t the only sort of information you can use to establish the truth of your beliefs (or in particular, of any step in their cause-and-effect chains). For instance, we cannot directly observe photons being emitted from the Sun (otherwise they wouldn’t hit our worn-out shoelace), nor can we feel each of our individual neurons firing, yet we consider the chain of causality we outlined above as plausible. Why is this?

Because truth as a rule is a generalisable property of maps.

What do I mean by this? Extracting truth from reality is amenable to inquiry. We can imagine our truth-extracting tools as a separate discipline, not unlike science, or natural philosophy before that, or metaphysics before that, or even mere campfire tales of gods and their erratic whims before that as well. By saying that truth generalises over maps, we say that certain techniques are better at extracting truth from reality than others, better at improving the accuracy of our own maps. We have already seen one such technique, namely: look out the window and see for yourself. But there are other techniques, such as: find patterns and calculate their consequences, assuming they hold for all relevant instances. This latter technique is what justifies our belief that photons are emitted by the Sun, because we know what the Sun is, and we know that Sun-stuff (which is similar to certain stuff we can make back home) emits photons.

If a tree falls in a forest

Eliezer ends his post with a preview of the next in the sequence. Suppose you are writing papers for a class on esoteric English literature. You are tasked to identify whether a particular author named Elaine is “post-utopian”, which your professor defined in a previous lecture as an author whose writing contains elements of “colonial alienation”. How would you do it?

Fig. 1: Why post-utopia sucks: because they still use XML as a generic data representation. Credits to the illustrator of The Useful Idea of Truth.

If we use the Tarskian schema mentioned at the start of this piece, we get:

The sentence “Elaine is post-utopian.” is true iff Elaine is post-utopian.

So we unpack from the definition. We look for elements of “colonial alienation” in Elaine’s work. We sample a few literature professors and ask if they consider Elaine to be post-utopian. But the thing is, literary theory is rife with alternative interpretations, arguable definitions, and a pernicious subjectivism under which everyone is entitled to believe what they want. So whither the truth of Elaine’s post-utopianism?

The danger of using words willy-nilly is that it can produce what Eliezer calls floating beliefs. These are beliefs that, while having a chain of cause-and-effect to back them up, participate in very few (if any) of the cause-and-effect chains of other beliefs[6]. Perhaps there was one person back in the day who knew what post-utopianism was, but now she’s dead and her students just memorised who the post-utopian authors are to pass their exams, and their students, and their students’ students, until the cause-and-effect chain settled on your professor.

Can post-utopianism be true? Sure, but it’s sure as hell impossible now for your professor to anticipate any state of the world that can cleave the set of all authors into post-utopians and not-post-utopians.

Some of you might think: “But he can! Just imagine atoms in the universe going one way if Elaine were post-utopian, and another way if she were not.”

But under this rule[7],

Then the theory of quantum mechanics would be meaningless a priori, because there’s no way to arrange atoms to make the theory of quantum mechanics true.

And when we discovered that the universe was not made of atoms, but rather quantum fields, all meaningful statements everywhere would have been revealed as false – since there’d be no atoms arranged to fulfill their truth-conditions.

Eliezer brands this as a particular instance of verificationism: the idea that we can only evaluate the truth of our beliefs using our senses (which interact with matter), and that only beliefs verifiable as such are meaningful.

Before we pick up on this point of what things mean, we’ll take a detour in the next post to extirpate the sophistry surrounding the word “rational” that has built up over the years, in a way similar to what we’ve done for the word “truth” here.


  1. Which are almost but not quite specific to that brain. See grandmother cell and alternatives. 
  2. One way of carving up this notion is by distinguishing between references and their referents. I’ll refer you to the Stanford Encyclopedia of Philosophy on this point. 
  3. In programming, subroutines can be identified as pure if they satisfy two properties: a) they don’t produce observable side effects when run, and b) their result depends only on their arguments at the point of evaluation. However, a program with no side effects cannot really affect the real world much, so what we can do is put them in boxes which we can query with questions like “If your contents happen to have side effects, what would be the result of their evaluation?” to avoid impurity in the rest of our code. We call these boxes monads. See LINK
  4. Remember that beliefs live in the patterns inside your head and that you may only compare beliefs directly. 
  5. I am having trouble formulating this without invoking the concept of time. 
  6. Actually, Eliezer is more forceful here. He considers floating beliefs as entirely disconnected from your web of beliefs. 
  7. This is another useful truth-extraction technology: avoid proving too much. 

The Land after Time

Note: I’m currently publishing pieces from my vault. This was supposed to be published on 2017 January 13.


 

I had an enlightening talk once with The Giving Tree[1] about his plan to start a VR-focused organisation in University X-1. In X-1, you have to do a couple of things before you can start your own club:

  • fill up an information sheet containing various bureaucratic necessities, including a mission/vision statement
  • convince a professor to do your laundry, or at least to attend and vouch for your events
  • write down an account of whatever amount of money you don’t have since you’re just starting out
  • list down at least 15 people whom you’d suckered into joining
  • register online and at the Securities and Exchange Commission (yes, you have to register as a nonprofit first)
  • plagiarise the constitution of the United States

I’m particularly interested in the last bit because of my work under Organisation X as one of the folks who get to decide on matters of membership. Organisations in UP have the tedious but often necessary tradition of forcing potential members to memorise their constitutions. In a world where memorisation is seen as a Really Bad Thing, this practice has probably saved more organisations from cultural decay than fluoride in toothpaste has from tooth decay.


Definitions

University X-1
an offshoot of University X around 80 km away

Dimes, demons and desperation

Note: I’m currently publishing pieces from my vault. This was supposed to be published on 2016 July 31 when I was on the fence about dropping out. In the end, I didn’t.


 

Buzzword sandwich
1) n. a carbohydrate-rich environment composed of “virtual reality”, “startups”, and a bunch of empty space in between
2) n. where I currently find myself

 

It’s been three days since I left academia for money and I’ve already seen a lot of improvement in my skin tone. Doctors hate me now! Kidding aside, it’s my first time in this dog-eat-dog industry and my team and I are definitely not past the Great Filter for Startups yet 1. I have been a reader of HN for more than half a decade now so I’m kind of cheating here but as with all fields, the gulf between theory and practice can go on for miles.

The Highs

Everything I do in the office feels like productive work. From distributing shares (in a three-person team) to cleaning the all-powerful whiteboard, every little thing hides a rush rarely seen in timid environments like, say, in front of a chalkboard. Everything is new and learning feels like doing something, but it is difficult to ignore the gnawing feeling at the back of my mind that it’s all a temporary farce pulling me away from the Important Stuff, like actually creating value. Still, it ain’t so bad to enjoy oneself every once in a while.

Some observations:

  • When what you do is measured in $$$, you tend to feel the loss of your time (e.g., coding what you did not know had already been done) more painfully.
  • It is easier to be honest with yourself about what you can and cannot do. Yes, this means you have to face the latter more often but it also makes it easier to extend the former. I.e., it’s a Double-edged Sword.
  • You will have ideas that sound good. Some of them you will be able to test and some will even pan out. But you will only be able to give birth to a tiny fraction of them. Thinking is much, much faster than making.

Confessions of a green coder

Prior to this, I’ve never been part of a large coding project. And when you’re as bad as I was upon entering this rat race, you are going to want to catch up pretty soon. If you’re like me, the best way to do that is to try and jump a couple of levels up the Dreyfus model[2]: from novice to competent in one fell swoop. How do you do it?

[insert “unity best coding practices” Google search GIF here]

The problem with this approach is that it’s a textbook example of cargo-culting. Mr. Feynman has really done a lot for us by giving this thing a name, and it is what it is: a game of pretend, of blindly copying the advice of many years of software engineering experience distilled into a couple of lines on Stack Overflow. “I don’t even know at this point what singletons can help me with, but it sounds like a good idea, so let’s do it”: that kind of thing.

When you’re googling “best practices” for everything, you tend to climb up the abstraction elevator3 without knowing which floor you really want to go to. And this freezes you because there are so many possible abstractions that you can decorate your code with and you don’t really have the intuition to decide which one to use, let alone if you ought to use an abstraction at all. The net effect of this is that you go farther and farther away from what’s important, i.e., creating something that works.

That’s Lesson #1. The razor of all philosophical razors. The way of the void. If you learn anything from shoveling the sludge of software engineering advice, let this be it.

Chromesthesia

The hackathon is probably the most overdone team-building activity for tech companies out there. So it’s a no-brainer for us[4] to join one.

The first few hours of a hackathon will probably feel the most productive. My team and I set up this sticky-note-like system very roughly inspired by Jared’s Scrum implementation in Silicon Valley[5]. We had six boards and a flow that went like this:

1) Notes and reminders, mostly used for sharing IDs and the idea description
2) Milestones, which in retrospect was full of fluff
3) Things to implement, a feature request board
4) Bugs to squash, self-explanatory
5) Tasks, which had features or bugs that were being currently worked on
6) Completed, which had to be beautiful by the end

(If you were wondering, no, we didn’t bring six physical boards with us. We used Trello.)

The way we allocated tasks was to assign colors to everyone, make them colorise the notes under Things to implement or Bugs to squash using their assigned color, and move said tasks under Tasks. Hence the quip about beauty. You might be wondering though, why color? Color is evil. It’s not C-f or C-s searchable and it reminds us of those people who used six to n highlighters when taking notes back in uni. One of the first principles you must learn in code sprints like this is:

Always pick the simplest thing that works.

Hold your applause. This is a standard, internet-tested, Good Thing slogan so it’s hard to actually pause and think about it before moving on. All general advice is mostly useless and this one’s no exception. So let’s make it more specific. What do we mean by “simple”?

People brandish “simple” as if it’s immediately obvious to everyone what the simple thing to do is. Most of the time, you describe something as simple only after the fact, and only if you understand enough of the underlying principles for it to appear simple to you. In other words, you have to be ready for simple. So suppose you did your homework and know all the relevant bits. Here’s an expanded view of how we decided the color issue:

We have a set of tasks. We need a quick way of showing who’s responsible for what. We could go with colors or labels. For sets of a few well-defined elements, it is quicker and easier to distinguish by color than by label. Hence, color it is.

Decisions like these happen at the speed of thought. Sometimes you think of two or three approaches, sometimes none. But the process is clear: your purpose should tell you what’s simple. And in a hackathon, your purpose is to quickly make something that works. So trust your gut and choose — technical debt be damned!

Two bottles of wine

It’s a sad world we live in that people cheat during hackathons6. It’s even sadder that me and my team also did. Why?

  • We brought a VR headset, thereby cashing in on a trend.
  • We made some of our models beforehand.

I might be naive, but the second one is particularly insidious. Hackathons are where you flex your fast-twitch mental muscles, prototyping as fast as you can. So I was stuck between arguing our only artist away and accepting that the reason we joined was to improve our teamwork. I chose the latter. Sorry.

It was a classic case of “Everyone is doing it, so why shouldn’t we?” Now compare that with “No one else is doing it but us.” and you’d be disappointed at the level of doublethink people are willing to sustain to stay optimistic in a startup.

I’d have to share some of the guilt here however. I dabble in the Dark Arts. If I believe that a point must be made, that it is of absolute importance that my team takes to heart what I am saying, despite all considerations of integrity I am willing to use fallacious arguments to get there. My bullshit reason so that I don’t think about it too much? I don’t actively promote rationality in my team because being only-sorta-rational is at odds with being in a startup. There’s a region in the range of rational ability where you’d balk at the level of risk you’re taking if you’re in one. So you have to embrace it fully or it will be your death.

There, I said it. Sorry.

 


Definitions

Dark Arts
exploiting the wonky ways in which the human mind works for one’s benefit
Double-edged Sword
the phenomenon where seeming disadvantages become advantages in a different situation, and vice versa
HN
Hacker News

  1. XX% of startups fail in their first year, according to YY. 
  2. A discredited model of skill acquisition, used here for illustrative purposes. See Wikipedia. 
  3. I like Joel Spolsky’s name for this thing: Architecture Astronauts. 
  4. By the way, I co-founded a VR company with Chromo and WIB. See this post to know more about them. 
  5. Silicon Valley is a TV series about, well, the Silicon Valley lifestyle. From what I hear, there’s a lot of truth in it. 
  6. Probably because of the prizes. 

I have no idea what to say

Everyone who writes for a living will eventually write about themselves. It’s almost a natural law. Heck, someone else has probably thought of a name for it. (Hint: it starts with an ‘N’.)

So why this piece?

In my first post, I laid down certain laws. Rough guides on what I consider good writing. Indeed, they’re more of a summary of the writers whose work I’ve read and enjoyed so far, people such as:

a) Isaac Asimov
b) Paul Graham
c) hacker/preachers from the bygone era of Usenet
d) storytelling novelty accounts on Reddit

So whatever comes out of my mental mouth will probably sound like them. But how do I write? I do something I call channeling, or simulating the writing style of people after reading them.

There’s this idea that Michelangelo didn’t have to be taught how to sculpt, rather he had to be taught how NOT to. And I don’t know where I heard it from. It sounds like something Paul Graham would say, but cursory Googling has planted doubts1. What I can tell you is, I’m no Michelangelo. I don’t know of anything I’d have to be wrested away from. I don’t breath mathematics, nor programming, nor actually hacking stuff. And those are just about the only things I know I ought to care about.

Maybe there’s something here. I’ve recently gotten hold of the idea that whatever stuff I deal with, it’s the singular pursuit of it that makes me giddy with excitement. It’s like I’m tickled pink not by actual things but by the methods people use to achieve them. It’s why I want to build a computing monastery before I die (which is also partly supported by my having known The Promised Land through Steven Levy’s account of the TMRC in the post-war era).

I’m also no Scott Alexander. In raw verbal skill, I’m 2-3 standard deviations from the national mean (in a country whose average IQ is a standard deviation below the global mean).

I’m no windytan, whose tinkering with digital signals is one of the purest expressions of the hacker ethic.

I’m no Linus Torvalds who was hacking away at OSs before being allowed to drink.

So who am I? And what can I do?

I’m an imperfect citizen of the Universe.

I inhabit an imperfect body which almost shat itself to death before graduating from kindergarten. This event has had ramifications I am only beginning to witness.

My toolkit is imperfect. I wield problem-solving techniques like a caveman (who ought to have been rather good at what they did, or else I wouldn’t be here typing this, but you get the point). I can learn things quickly, but only until I’m good enough.

I have perfect standards though, in the sense that whatever I do must be perfect for sufficiently reasonable values of perfect.

This then is the crux of the matter: I reach for the sky on insect wings.


  1. Another round of cursory Googling suggests that this is in fact a Perlisism, as quoted by Peter Norvig in his most famous essay. 

Pen exercise: on curiosity as a driving force

Note: I wrote this when I had nothing better to do in a bus, so yes, the style is deliberate.


 

Constraint: write something while on a 3-hour long bus trip, using only your phone and Google Keep

Thinking is hard.

There are days when a simple graphing problem brings down my entire chain of thought. But then there are also days when I can pull back the curtains on Fully General Abstract Nonsense and have time for milk and biscuits. Why is this? A straightforward answer might be that I am just particularly sensitive to my environment, to the time of day, to temperature, etc. So let’s explore this notion first.

First, we consider the time of day. I usually think best during quiet moments in the wee hours of morning. It is during these hours that whole essays on code and craft or on the philosophical underpinnings of math fall quickly into place inside my head. Why is this so? Perhaps it’s the quiet, the freedom from social distractions. Distractions vary in severity. On one end of the spectrum we have the quick, bite-sized notifications on various devices and social networking sites about mundane happenstance. PMs on Reddit, notifs on Facebook, replies on Twitter, and e-mail are just some of the things that fall under this category. They are compellingly seductive for they follow a random reward structure similar to systems that promote gambling addiction. But their mechanism is strictly psychological and thus can be overcome via psychological techniques (or simply with enough exertion of willpower). On the other end of the spectrum are those that involve direct changes in one’s biochemistry like physical contact and binge eating. These are more pernicious and are much harder to deal with. My only recourse when struck by these temptations is to completely remove myself from the environment or circumstance in which they manifest. Nevertheless, the serene calm of very early morning is usually enough to tame these vices, if only for durations a bit longer than that in daylight.

Another limiting factor to my attaining focus is the atmosphere (in the literal sense). I simply cannot think well in a hot and humid environment, which is rather unfortunate, for I reside in a latitude where it most commonly occurs. This has been rectified somewhat by the increasing prevalence of air conditioning, but the unfortunate matter is, I can only count on this fact whenever my university is open. Which again is unfortunate, for my university strictly adheres to the antiquated rule of the Sabbath (sucks to live in a sectarian part of the world). Another particularly insidious barrier to my work is akrasia. Sometimes I simply cannot bring myself to work on what I have to do. However, through suffering this I have discovered two methods that work against it. First is habit formation. If the object of akrasia is a regular occurrence, then the task, however gargantuan, is easily amenable to honest effort at particular times of day. The other sharp tool I use is inspiration. The mere act of reading about the lives of my personal heroes is enough to make me “get off my ass”, so to speak. There is a limitation, though, for this technique falls flat during my off days, when my measure of myself is too meager.

So what have I learned in the perpetual struggle between my short-term desires and my long-term understanding? It is this: no amount of fiddling or finagling can make up for a lack of genuine interest. Interest is the force that builds bridges over mental crevasses and diverts powerful rivers around obstacles of will. When an activity invites curiosity, it requires very little effort on one’s part. This state we mistakenly confine to the minds of children is a cornerstone of the human condition. Without it, we are mere monkeys flinging poo at each other.

But why talk about all this? Are we not disciples of the Way, which teaches us that the only right way is the way that works? And is “curiosity” not an arbitrary and only occasionally useful designation in the space of human ideas? The fact that it is occasionally useful is precisely the reason, for it is this fuzzy trait that better people than I credit for the creation of my personal heroes. Peter Samson would not have stumbled upon the TX-0 were exploration of the unknown not one of his main impulses, nor would DEK have laboured to produce tomes on algorithms. Hence, it is worthwhile to explore its validity and explanatory power as the main cause of human intellectual achievement.

So what is curiosity? When and where does it arise? Is it merely an epiphenomenon of various underlying processes or a basic, self-contained impulse in itself? How does it fare as a causal agent of achievement in the face of reproductive instincts? I can immediately answer the last question. If reproductive instincts explain intellectual achievements better than curiosity does, why did the majority of intellectuals stay celibate or have few children? And why do so many of them die in the name of their ideas, for that matter? Perhaps this is a misguided issue, for instincts are at best an approximation of their purpose. It might be the case that curiosity arose as a byproduct of evolutionary pressures, but is not in and of itself immediately useful, unlike the instinct to hunt when hungry. Unfortunately, our evolutionary history is seldom available for us to decipher, so this might not obtain a resolution anytime soon.

There is yet another question lurking under this line of thought: why do other species seem to exhibit curiosity as well? A silverback gorilla stops to examine his reflection in a mirror (and at times becomes hostile towards it). A crow observes human traffic and learns to use it to its advantage. If curiosity is found in others but does not spur them to understand, does this not count as negative evidence towards curiosity as an explanation of intellectual achievement? It may be the case that curiosity is merely a necessary but insufficient condition. We achieve things because we are curious (and have opposable thumbs, and big brains, and cook, etc.). On an intraspecies level, however, not all humans are curious about the same things. But what exactly spurs our curiosity? Is it novelty? An intriguing or strange presentation?

Let’s suppose that curiosity is indeed required but insufficient to explain human achievement. What other dimensions can we add to our model to take it closer to the truth? Take single-mindedness, or the ability to concentrate one’s entire being on a single task. This could explain Gauss, whose mathematical treks were so inescapably deep and thorough that it is yet to be surpassed. But there are many…


Definitions

Pen exercise series
writing exercises, simple as that