Pen exercise: My Feelings on Grothendieck

Constraint: write about what you feel when you read excerpts about The God of Abstraction.


I’ve been a physics student for seven years now, and if you include my youthful incursions into the subject, even longer. In my journey I have seen a lot of clues, a lot of suggestions from a variety of sources: the motion and push and pull of things I can see, the way our experiments (or what I would like to think of as ‘penetration tests’ of reality) resolve themselves in particular ways, even the solutions we get from the idealised world in our problem sets. There’s a unity behind all those things, a vantage point from which everything becomes simple, or if not as simple as possible, then at least one from which everything has its own natural place.

Whereas The God of Abstraction has bequeathed us a unifying worldview for the discrete and the continuous (and indeed a proof-by-example that abstraction, or rather the right abstraction, can be used to resolve confusion), the situation in physics is not as fortunate. We live in only one of many possible realities, whereas mathematics is a vast ocean that extends infinitely in all directions and suggests an incomprehensible depth we can only sense in vague ways via computational complexity considerations.

Where then is the conception of reality that is simple, true, calculable, and inevitable? We express our models in austere equations, yet they stumble at even minute deviations from the idealised case. We attempt to explain reality by introducing small deviations from our ideal models, arranged in just such a way that we can ignore higher-order effects. Two days ago I was confronted with a humongous equation describing in full analytic glory the dynamics of a (2+2)-body system, by which I mean two Earth-Moon systems orbiting each other. This equation filled many pages, and I daresay no mortal can ever hope to understand it on its own terms. And this is okay? So long as it reduces to nearby ideal cases?

Models of equivalent explanatory power are interchangeable, hence we are free to choose whichever suits our human preconceptions best. If we cannot hold the most accurate map of reality in our minds (and we cannot, for the only truly accurate model is reality itself), then perhaps we can get away with a simpler model with simple boundaries and simple regions. And there are only very few things which are simple to us, conceptions which we can take for granted a priori by virtue of being a species forged in the dry African savannah. Barry Mazur gives us an incomplete list, with a particular emphasis on our mathematical cognitions (really, they aren’t as specialised and as particular to mathematics as they seem):

First, our physical intuition. Our understanding of causes and effects, of push and pull, of movements and their causes as we see them in the world. It is what enables us to take objects or phenomena out of reality and into our heads and run them as if in a computer simulation.

Second, our computational intuition. Processes, algorithms, counting, our counterargument to the Halting problem. It is what we use to anticipate the completion of processes and induct from laughably few examples (really, an impossibly huge number of examples obscured only by our unawareness of our own sensory data).

Third, our geometric intuition. To build from basic notions a specific thing, like a bird building its nest from humble twigs. The fuel that powers our engineering marvels and the metaphorical fuel that will (eventually) bring us to the stars.

Lastly, our algebraic intuition. Our capacity for abstraction, to sense the essence of things, or in the spirit of one of its greatest possessors, to see the vantage point from which things in consideration appear easy and most natural. This is our capacity to compress our models of reality and draw simple boundaries along its joins.

Four intuitions from a single cluster in mindspace. The task to win, to survive until the last black hole evaporates and perhaps a little longer.

Physics brands itself as the fundamental vantage point from which everything can be understood (in principle). Really, there is no ‘physics’, no ‘chemistry’, no ‘biology’, no ‘philosophy’. There is only reality, the vast swaths of which we are yet bereft of an explanation, let alone of control. To our children we shall seem blind and deaf. To our children’s children, a mere higher-order effect they can handwave away, impossibly small and weak, like motes of dust in a sandstorm. But this future is not guaranteed. It is but one of many, one where we do not fail in our inexorable march towards winning this race towards the natural chaos where all things eventually go (and perhaps a bit further).

So unity. We have so many toy models, submodels, barely compatible pockets of coherence in our best-possible-map-of-reality we call our ‘physics’. Rare are those who simplify and compress: the last great practitioner of this lost art was Maxwell, and even then he failed to utilise the simplest possible formalism available in his time. Now there is no one. A tradition in dire need of resurrection. A true resurrection, not the goings-about by textbook authors who seem more concerned with imitation and deference to Nobel Prize-winning authorities than with a genuine presentation (really, the charitable interpretation is that it’s difficult to write down our best-possible-model at all rather than a conscious attempt at copying).

But how will this resurrection be made possible? A clue is found in both the youth of the God of Epistemic Rationality and the God of Abstraction. The former arrived at and extended Quinean naturalism by himself, the latter reinvented Lebesgue integration by himself. It is therefore this brave and arduous act of independent cartography, of seeing what is out there and making the map yourself which produces the quality of thinking needed to even begin at the task of simplification and generalisation of our best-possible-map.

This then is our crossroads. Of the people who have the physiological capacity to think independently for extended periods of time, there are very few who get the corresponding (and, it seems, necessary) period of isolation in which they would be able to practice this capacity in full. The opportunity for a Lebesgue-integration moment is not available to all, especially at a time when it is most needed (i.e., during one’s formative years).

If one has missed this fertile period of one’s intellectual development, can one still compensate by undergoing such an activity at a later date? Perhaps, but it should be noted that the fertile ground available to a would-be cartographer recedes in proportion to one’s schooling, for it might happen that one exhausts cursory familiarity with all fields accessible to a beginner, thus forever tainting the independence of one’s work.

Still, I believe reality is sufficiently large that we are not yet, in these modern times, at the point where such an exhaustion of fields can take place. There will always be fertile ground for an aspiring rationalist-empiricist, and if that ground recedes beyond reach for everyone, it shall be our duty as a society to enforce a temporary ignorance in special institutions in order to allow such an important activity to those who shall need it.

Coddle our cartographers. They are our only hope for sanity in this vast and lonely universe.


Definitions

The God of Abstraction
Alexander Grothendieck
The God of Epistemic Rationality
Eliezer Yudkowsky, since E. T. Jaynes is dead

Pen exercise: Diseases of the Scientific Discipline

Note: this was an essay I wrote as homework for one of my experimental physics classes (in Institute X, they bend over backwards to drill scientific integrity into students).


Constraint: react to On Being a Scientist: A Guide to Responsible Conduct in Research, by various authors.

First and foremost, the job of a scientist is to uncover the truth. His measure is the clarity and novelty of his thought. If he is exacting and thorough, lining every table, checking every decimal point, but in the end writes only echoes of his teachers, then he is no greater for it. If he becomes head of his department by a curious doggedness but in doing so neglects to pursue new ideas, then he is no closer to the business of science than pedestrians on the street. A scientist should keep his identity small: in doing so he avoids the rituals and inauthentic behaviour to which science is not immune.

The Atomic Age thrust science into the public’s view. No longer were scientists seen alone in their labs, concocting various mechanisms to demonstrate in cramped classrooms. We started writing letters to governments, affecting foreign and economic policies. We began dominating intellectual circles for better or for worse: in the 1920s it was customary to apply “relativity” to the world’s moral dilemmas [insert citation here]. This narrative is not new, but herein lies an important distinction: all science is a set of tools. Whether these tools are used for nefarious goals or for the betterment of humanity, they remain truthful knowledge. The responsibility lies with the person who finds their use. It is because of this that we must distinguish the scientist from the human: we hold ourselves culpable for the consequences of our research because we are human, not because we sought dangerous knowledge that must not be sought.

To this end, I shall call ‘scientist’ one who seeks knowledge for knowledge’s sake and ‘science-practitioner’ the scientist with human faults and foibles. The science-practitioner in today’s world faces two main challenges: to collect citations, and to secure funding. Immediately this brings us to various conflicts of interest that may hamper the progress of scientific research. The first is the maligned focus on credit. We are extremely protective of our ideas. It is as if the conception of an idea in one’s mind (particularly if one has reason to claim priority) leaves us with the same psychological imprint as finding a coin in the street. This is evident in the well-known rivalry of Isaac Newton and Gottfried Leibniz over the discovery of calculus, where what could have been a fruitful correspondence turned into a 20-year-long state-backed superiority contest no better than the red-team, blue-team bickering often seen in sports or politics. The scientist, in his pursuit of knowledge, must learn to avoid this pitfall lest this primal need for social status overwhelm his path to his original goal.

This issue of credit is such an important point that it warrants a longer discussion. Aside from priority, scientists also bicker about authorship. A great importance is placed on the relative order of authors in a paper. First authorship implies the bulk of the intellectual contribution to the paper; a project is, after all, never equally divided. More issues arise when advisors and researchers with higher administrative positions demand inclusion in the list of authors of a study, regardless of their intellectual contribution to it. Thus, a scientist is never alone. In his pursuit of knowledge, he is haunted by the spectre of his heroes, he must walk briskly to keep pace with his highly competitive peers, he must let students trail behind him to carry on when he falls by the wayside, and he must carry the ever-increasing burden of administrative responsibilities on his back.

A more subtle issue of authorship arises when different parts of a paper are written by different authors. A broth is spoiled by many cooks, so why should a paper be any different? The coherence of such a multi-authored paper is rarely apparent, and problems arise when one of the authors is involved in an academic scandal. The case of Jan Hendrik Schön, an experimental physicist who was found to have fabricated semiconductor data, is now a classic warning to budding scientists. The larger question, however, is the culpability of his co-authors. Are they as guilty as the fraudulent physicist for having signed off on the paper for publication? What about the people who peer-reviewed his papers? How apt is it to fault them for letting Schön’s career last as long as it did?

A naive attempt to cut the Gordian knot would lead one to express culpability for all those involved. “Everyone gets the whip” is a common sentiment for those who would like to get on with their lives after scandals such as these. However, a brief pause would lead one to conclude that this undermines the foundation of trust that the scientific community puts on its members. The pure scientist demands empiricism in everything, but practicality compels the science-practitioner to leave the replication to others in their respective fields and focus on extracting the pieces of information relevant to his problem. To say that we must check and double-check our co-authors for fraudulent behaviour undermines this web of trust and wastes precious hours that could have gone to research.

Of course, it is lunacy to suggest that checks and balances be done away with. That web of trust only works if it can be trusted (and this is not a trivial tautology). What I am advocating, rather, is for the community to spend some of its research-hours on building automated verification systems. Machine learning has advanced steadily in recent years. Everywhere, we are experiencing the fruits of breakthroughs in recognition systems, from self-driving cars to a cleaner inbox. There is no physical law that prohibits spam classifiers from being aimed at scientific papers instead. Perhaps it would even be possible to automatically rate the credibility of a researcher and flag possible conflicts of interest. Science may be methodical, but it need not be manual.
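To make the spam-classifier analogy concrete, here is a toy sketch of the kind of text classifier involved: a multinomial naive Bayes model in pure Python. The labels and training snippets below are entirely invented for illustration; a real paper-verification system would obviously need far richer features than raw word counts.

```python
import math
from collections import Counter, defaultdict

def train(labeled_docs):
    """Train a multinomial naive Bayes model on (label, text) pairs."""
    word_counts = defaultdict(Counter)  # per-label word frequencies
    label_counts = Counter()            # per-label document counts
    vocab = set()
    for label, text in labeled_docs:
        label_counts[label] += 1
        words = text.lower().split()
        word_counts[label].update(words)
        vocab.update(words)
    return word_counts, label_counts, vocab

def classify(text, model):
    """Return the most probable label for a text under the trained model."""
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + sum of log likelihoods, with Laplace smoothing
        score = math.log(label_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            count = word_counts[label][word] + 1
            score += math.log(count / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical training data, invented purely for this sketch.
docs = [
    ("suspect", "results too clean identical noise duplicated figures"),
    ("suspect", "duplicated figures identical error bars no raw data"),
    ("credible", "raw data archived methods reproducible independent replication"),
    ("credible", "independent replication methods reproducible error analysis"),
]
model = train(docs)
print(classify("duplicated figures identical noise", model))  # -> suspect
print(classify("independent replication raw data", model))    # -> credible
```

This is the same mechanism behind classic spam filters: the model only tallies word frequencies per label, which is exactly why aiming it at fraud detection is physically unproblematic but practically naive without better features.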

This brings me to my next point. Science, as a profession that defines itself in how it prods and tests the borders of human knowledge, is strangely resistant to alternative systems. There is a huge pressure to converge on professional standards detailing and constraining the various aspects of one’s work. Strange, that we extol empiricism in everything but our own practice, that we so easily peer outside the public window of discourse yet fail to see more efficient research processes. Science may pride itself on being methodical, but it need not be slow.

Consider how a random civilisation would develop its scientific community. Would you imagine that it would start with lone individuals speaking out against the mores of society, like our Grecian philosophers of old? Would you imagine mathematics intertwined with the practicalities of business and war and the economy, until its parts are distilled one by one? A Galileo perhaps, waging a public battle against old institutions. Then guilds and universities. Then the Industrial Revolution. If so, then one must work to broaden one’s horizons. It is a fallacy to suppose that societies will converge to our own, an implicit assumption of the superiority of one’s culture. If not, then you understand: there is a vast number of paths our science could have taken to get to this point. Therefore, there is also a vast number of scientific processes that could have gained a foothold by the arrival of global communication. Our science is the conglomeration of different ways of expressing empiricism, some more affected by extenuating socioeconomic goals than others. It is therefore a curious and frightening prospect that our science was grown, not designed.

The second aspect of a science-practitioner’s career centers on funding. Money is the prime mover. It directs the actions of humans much more than it is polite to admit. Its main use to the science-practitioner is in the procurement of devices necessary to conduct research. Computer systems, laboratory equipment, technicians and operating crew for heavy apparatuses: the list goes on and on. Money buys tools for the toolmaker, and as tools are said to multiply forces, so do they expand the range of phenomena within reach. A scientist without tools is left to use only his mind, and the mind can only carry so much.

What compels science-practitioners to spend an inordinate amount of hours writing grant proposals? The production of tools for toolmakers is an economy unto itself. There is a huge variety of laboratory equipment accessible to research institutions, if they have the money. Always there is a drive to purchase better and better equipment, and this is not entirely unreasonable: all the eyes in the world could never have guessed the existence of microbes without having seen one for themselves. As the phenomena we investigate get more and more exotic, so must our sensory capabilities expand.

By virtue of interacting with the economy, however, this procurement process gains its own incentives. A pure scientist will whittle away everything he has to spend on furthering his research. A science-practitioner, by having to exist in reality, must treat his money as a resource and strategically place his bets on lines of study that to him would prove most fruitful. Immediately, this wrests control from the scientist to pursue his own research directions and gives it to the ebbs and flows of the economy.

There is no problem with this picture: usually we do not have perfect knowledge of what we must know (if we did, it would not be called research). This foible more than makes up for the efficiency we would otherwise gain from giving complete control to the scientist.

Pen exercise: On curiosity as a driving force

Note: I wrote this when I had nothing better to do in a bus, so yes, the style is deliberate. Oh and it ends abruptly (because I had come to my stop).


Constraint: write something while on a 3-hour long bus trip, using only your phone and Google Keep

Thinking is hard.

There are days when a simple graphing problem brings down my entire chain of thought. But then there are also days when I can pull back the curtains on Fully General Abstract Nonsense and have time for milk and biscuits. Why is this? A straightforward answer might be that I am just particularly sensitive to my environment, to the time of day, to temperature, etc. So let’s explore this notion first.

First, we consider the time of day. I usually think best during quiet moments in the wee hours of morning. It is during these hours that whole essays on code and craft or on the philosophical underpinnings of math fall quickly into place inside my head. Why is this so? Perhaps it’s the quiet, the freedom from social distractions.

Distractions vary in severity. On one end of the spectrum we have the quick, bite-sized notifications on various devices and social networking sites about mundane happenstance. PMs on Reddit, notifs on Facebook, replies on Twitter, and e-mail are just some of the things that fall under this category. They are compellingly seductive, for they follow a random reward structure similar to systems that promote gambling addiction. But their mechanism is strictly psychological and thus can be overcome via psychological techniques (or simply with enough exertion of willpower). On the other end of the spectrum are those that involve direct changes in one’s biochemistry, like physical contact and binge eating. These are more pernicious and are much harder to deal with. My only recourse when struck by these temptations is to completely remove myself from the environment or circumstance in which they manifest. Nevertheless, the serene calm of very early morning is usually enough to tame these vices, if only for durations a bit longer than in daylight.

Another limiting factor to my attaining focus is the atmosphere (in the literal sense). I simply cannot think well in a hot and humid environment, which is rather unfortunate, for I reside in a latitude where it most commonly occurs. This has been rectified somewhat by the increasing prevalence of air conditioning, but the unfortunate matter is, I can only count on this fact whenever my university is open. Which again is unfortunate, for my university strictly adheres to the antiquated rule of the Sabbath (sucks to live in a sectarian part of the world).

Another particularly insidious barrier to my work is akrasia. Sometimes I simply cannot bring myself to work on what I have to do. However, through suffering this I have discovered two methods that work against it. First is habit formation. If the object of akrasia is a regular occurrence, then the task, however gargantuan, is easily amenable to honest effort at particular times of day. The other sharp tool I use is inspiration. The mere act of reading about the lives of my personal heroes is enough to make me “get off my ass”, so to speak. There is a limitation though, for this technique falls flat during my off days when my measure of myself is too meager.

So what have I learned in the perpetual struggle between my short-term desires and my long-term understanding? It is this: no amount of fiddling or finagling can make up for a lack of genuine interest. It is the force that builds bridges over mental crevasses and diverts powerful rivers around obstacles of will. When an activity invites curiosity, it requires very little effort from one’s part. This state we mistakenly constrain to the mind of children is a cornerstone of the human condition. Without it, we are mere monkeys flinging poo at each other.

But why talk about all this? Are we not disciples of the Way, which teaches us that the only right way is the way that works? And is “curiosity” not an arbitrary and only occasionally useful designation in the space of human ideas? The fact that it is occasionally useful is precisely the reason, for it is this fuzzy trait that better people than I credit for the creation of my personal heroes. Peter Samson would not have stumbled upon the TX-0 were exploration of the unknown not one of his main impulses, nor would DEK belabor himself to produce tomes on algorithms. Hence, it is worthwhile to explore its validity and explanatory power as the main cause for human intellectual achievement.

So what is curiosity? When and where does it arise? Is it merely an epiphenomenon of various underlying processes or a basic, self-contained impulse in itself? How does it fare as a causal agent of achievement in the face of reproductive instincts? I can immediately answer the last question. If reproductive instincts better explain intellectual achievements than curiosity does, why are so many intellectuals celibate or childless? And why do so many of them die in the name of their ideas, for that matter? Perhaps this is a misguided issue, for instincts are at best an approximation of their purpose. It might be the case that curiosity arose as a byproduct of evolutionary pressures, but is not in and of itself immediately useful, unlike the instinct to hunt when hungry. Unfortunately, our evolutionary history is seldom available for us to decipher, so this might not see a resolution anytime soon.

There is yet another question lurking under this line of thought: why do other species seem to exhibit curiosity as well? A silverback gorilla stops to examine his reflection in a mirror (and at times becomes hostile towards it). A crow observes human traffic and learns to use it to its advantage. If curiosity is found in others but does not spur them to understand, does this not count as negative evidence against curiosity as an explanation of intellectual achievement? It may be the case that curiosity is merely a necessary but insufficient condition. We achieve things because we are curious (and have opposable thumbs, and big brains, and cook, etc.). On an intraspecies level, however, not all humans are curious about the same things. But what exactly spurs our curiosity? Is it novelty? An intriguing or strange presentation?

Let’s suppose that curiosity is indeed required but insufficient to explain human achievement. What other dimensions can we add to our model to take it closer to the truth? Take single-mindedness, or the ability to concentrate one’s entire being on a single task. This could explain Gauss, whose mathematical treks were so inescapably deep and thorough that they are yet to be surpassed. But there are many…


Definitions

Pen exercise series
writing exercises, simple as that