A Very Slow Takeoff...

Oh, 2020 started already?

Hey folks, sorry for the tardy start to 2020, a year in which I have far too much on my plate, and have had too much random zemblanity coming at me in the Life 🤬😖😬 Happens department.

Which means the projects where I have the murkiest intentions/commitments to myself, like this one, are getting the most derailed. I last sent out a newsletter/podcast on Dec 6, so it’s been like 7 weeks. I’m still not yet ready to get going with this year’s breaking smart (breaking slow?), and this is mostly just a quick Hello-2020 newsletter.

For the next few months, I’m going to slow this thing down to once a month, while I rebalance activities and commitments. For this week, no original content, but here’s some stuff from other channels you guys may like.

I’ll be back with new stuff hopefully in a few weeks, around February end. It’s been a while since I’ve sat down and brainstormed how to evolve this thing, and I need to do that too, since this thing has admittedly mission-creeped and drifted quite a bit over the years, not to mention gone into deep denial about Season 2, which has supposedly been in the works since 2015 and is now over 3 years late. It’s like a military contract project at this point. I know many of you would like to know if/when it’s ever coming out. I would too 🤡🥁.

But we’ll get this show back on the road properly at some point.

Inventing Time


Today I want to talk about time, which is a subject I’m researching quite a lot these days. In particular I want to talk about two of the most-quoted lines about time in technology conversations.

The first one is Alan Kay’s famous line: it is easier to invent the future than to predict it. Alan Kay is a famous computer scientist who was at Xerox PARC.

And the second line is from William Gibson, the pioneering cyberpunk science fiction writer, who is famous for the line: the future is already here, it is just unevenly distributed.

What I want to do in this episode is change your understanding of such lines from figurative to literal. The idea of the future being invented is not in the sense of specific things or events “contained” by the future, so to speak, or arriving from the future and “contained” in the present, but in the sense of time itself being something that is invented.

Let’s start with a few examples.

In the last few years I’ve experienced a few technologies from the unevenly distributed future, as I’m sure many of you working in technology have. And I want to talk about four in particular: riding in a Tesla, trying on an Oculus VR headset, making a cryptocurrency transaction, and trying on a Magic Leap AR headset.

So the interesting thing is, my reaction to these four experiences was different in each case.

On one end of the spectrum we have Magic Leap and crypto. Both of those things, when I tried them, were interesting, exciting, and stimulating; it was fun to try them. But neither felt like an inevitable part of the future, at least to me, subjectively speaking.

In terms of Alan Kay’s line, they were auditioning for the role of being part of the invented future, but they were not decisively part of it yet, at least as far as I’m concerned. And in terms of William Gibson’s line, they may or may not be part of the actual unevenly distributed future. They felt like they might equally well be part of a fork of the future we may not go down, like I imagine it felt to play a Betamax tape when it was still a competitor to VHS back in the day. That’s an important idea to recognize: there are technological options we discover, uncover, and develop, but don’t necessarily exercise; we don’t always go down the futures they create.

The Oculus headset, now that felt a little more substantial, like it was definitely part of the future being invented, but not necessarily an actual piece of the unevenly distributed future that I was experiencing in the present. Something like it seems inevitable; it rhymes with something from the future, but perhaps what we will actually see in the future is not that exact kind of thing. You can think of it as the future in a beta-test form, or at least that’s what it felt like to me. I’m emphasizing the subjective aspect repeatedly because what we’re talking about here is a gut experience of the temporal quality of a technological experience. We’re not talking about rational assessments of future probabilities; we’re talking about how real a sense of time feels.

And finally, riding in a Tesla made the electric vehicle future seem utterly inevitable in a way that kinda killed the present for me. Suddenly I could no longer look at gasoline cars the same way. Driving in my own car felt different, like I was stuck in the past, waiting for the price of the future to come down to the point where I could afford to live in it. So a Tesla creates the future in the sense of both the Alan Kay and William Gibson quotes. It makes the future real in a deep way that is like making time itself real. And you know this because the feel of the present feels different, like you’re heading down a dead-end, a lame-duck future. You’ll have to either abandon it as soon as you can, or end up dying with it.

Stepping back, I think it is important to understand innovation as the process of literally inventing time itself. The mark of success is that the present starts to feel dead, like the past, and the beachhead of the future in the present, let’s call it a Gibsonian temporal colony, feels like a portal for getting back into the present. So it’s almost like there’s been a time shift and you’ve been shifted back into the past and you have to step through a portal to get back to the present. There is a sense of inevitability to your experience of the new technology, and a sense of derealization — things seeming not quite real — in your continued experience of existing incumbent technologies.

You have to get very sensitive to this feeling in your gut if you want to do good work in the world of technology, even though of course it can be misleading. That deep-down sense that this is the future being invented, that this is time more real than the time I’m living in, could be mistaken. That’s why I again emphasize that this is a subjective feeling. But I think it is a very reliable indicator: when you get that feeling, there is a much stronger chance that you’re going to be right than wrong.

So you have to get very sensitive to this feeling if you want to do good technology, whether as an engineer, an entrepreneur, an investor, or an early adopter making new culture with it. And this is not the same thing as feeling excited or stimulated by the future. It is not the same thing as logically and rationally concluding that a certain scenario is the most likely future, and investing in it. It’s a sort of all-in psychological investment of identity into a sense of time that feels more real than the one you’re in. It’s a sense of switching timelines.

And this feeling can be evoked by very mundane and unexciting things. It doesn’t have to be a big flash-bang feeling.

An example of this: when I first moved to the US, I used a microwave oven for the first time, since they were not yet popular in India. An Indian friend of mine taught me the trick of microwaving papads, usually called papadums when you get them in restaurants in the US, which are little dried lentil crackers you typically either deep fry or roast on an open flame. The microwave cooked them perfectly, and that was the moment it became suddenly clear to me that this was the future of the Indian kitchen. So that’s a pretty mundane example. It’s not like experiencing space travel or something science-fictiony like that. It’s a very mundane example of switching timelines, and feeling that one kind of invented future involving a certain technology is more real than the time you’re experiencing right now.

Once you get sensitized to this feeling of going down one fork of time rather than another, and the idea of more or less real timelines, I think you’re psychologically equipped to be much smarter about how you relate to technology. You’re equipped to be bolder about how you engage with the future. So it’s a skill worth cultivating. In a way, it’s learning a kind of time travel within the present.

And learning time travel is probably, figuratively speaking, the most important skill you can develop as a technologist. I know it sounds weird, but this is the reason all of us in technology tend to love science fiction, and reach for ways to think about experiencing time in much more real ways. We are actually training our gut, training our sense of time being real or unreal, learning to make forks and fork-switching decisions at the right time, and getting a sense of: are we in the past, are we in the future, are we in the present, how do we get back into the present, how do we actually make part of the future more real and bring it into the present? These are all temporal mechanics skills that you learn once you start to cultivate this feeling.

So that’s my topic for the day, let me know what you think. We’re just at the 10 minute mark, so looks like I’m back to slightly shorter podcast lengths, and I’ll be back again next week or the week after with my next episode, thanks.

Econtalk and Zion 2.0 Podcasts

Two guest appearances

Still getting into a rhythm here in the podcasting world, so I didn’t have an episode for the last two weeks. I don’t have one this week either, but I do have two guest appearances on other people’s podcasts to share, which amount to a couple of hours worth of me talking.

First, I went on Russ Roberts’ Econtalk podcast to talk about Waldenponding, which I’ve written about on this newsletter several times now (the links are on the podcast page, along with several other links you might want to explore).


Second, I went on Collin Morris’ Zion 2.0 podcast, which was a much more free-form conversation around themes I write about both here and on ribbonfarm.

ZION 2.0 with Collin Morris

It’s an interesting exercise for me to contrast my “home” monologue podcast style with the style I end up adopting in conversation with specific people.

We’ll be back to regular programming with my short-format monologues next week.

Spacewalks and the Species


Two things happened this week. Last Friday, Alexey Leonov, the first human to walk in space (in 1965), passed away. And this morning, Christina Koch and Jessica Meir went on the first all-women spacewalk on the ISS. So two historic events. And they got me thinking about the meaning of we in its most universalist, species-level sense.

Let’s take them in order.

Alexey Leonov was the first human to walk in space. He was also on the crew of one of the earliest experiments in international space cooperation, the Apollo-Soyuz Test Project, ASTP. So this was at the height of the Cold War.

Before we had the ISS, the ASTP mission was as close as humans have ever come to a Star Trek-like Federation. In some ways, it was a more impressive technical, social and political achievement, since it happened at the height of the Cold War, and the mission had to be designed around the existing US and Soviet programs, which used different designs, unlike the ISS, which was designed collaboratively by multiple nations.

As a kid, I owned a beautifully written and illustrated book by Leonov about ASTP (he was also an accomplished artist; if you google him, you’ll find a bunch of his paintings, including space paintings), and though this may sound cheesy, that book was probably one of the things that got me interested in space technology and led me to go to graduate school for aerospace engineering, where I worked on space mission problems for my PhD.

I remember a drawing of the Apollo-Soyuz docking mechanism in particular in Leonov’s book, and wondering at the time about the general problem of linking two incompatible technologies, which I think is really symbolic of the whole problem of species-level human coordination. I want to digress a bit to talk about that.

If you’re an engineer you know this: in any design, any time two parts come together to form a coupling, they tend to be designed asymmetrically, because that tends to be the easiest way. So one part gets designated male and the other part female, and the logic is the logic you would expect. It’s one of the rare funny bits of sexual logic in the generally sexless world of engineering jargon.

Later, as an adult, long after I read Leonov’s book, I heard this story (I don’t know how true it is) that one of the bones of contention — which Leonov didn’t talk about in his book — was making sure the docking system design was symmetrical, in the form of what is known as an androgynous coupling, because neither side wanted to be the “female” side. Apparently the nickname of the system was “androgynous brothers”, which I find hilarious. The official justification was of course, more technical: that with an androgynous coupling, either side could play the active or passive role, and that would make for greater mission flexibility and system-level redundancy. But I kinda buy the theory that the system ended up ungendered for less technical reasons. It sorta makes sense for that era of technology.

You could say ASTP was consciously designed to be a showcase of global cooperation, briefly forgetting the divide between the two sides of the Cold War. It also ended up being unintentionally gender-neutral, for what were perhaps the wrong reasons, long before we had culture wars about gender-neutral bathrooms here on earth.

And speaking of gender and space, that brings us to the second historic event of the week.

This morning, I just happened to catch a retweet of the NASA live feed of the space walk. I had no idea it was going on, but I am always willing to interrupt whatever I’m doing to watch space stuff. So I started watching, and I found myself drawn to the very basic shared human things that space forces us to grapple with. For example, I found myself noting and counting the orientation words the astronauts were using, like up, down, aft, fore etc and wondering about how humans think and talk about orientation in microgravity, where there is no natural direction of up, which is of course one of the most basic shared human things, a shared sense of which way is up.

At the back of my mind I was also wondering if women coordinate and communicate any differently on complex tasks than men. The ground control person was also a woman, so the entire audio-track for the live broadcast was female, which was interesting. But the gender aspect was less interesting to me than the basic human aspect. Here we are, as bodies in space, being governed by the laws of classical physics. Inertia, movement, velocities, accelerations. That was the more interesting part.

In fact, I didn’t realize till later, when I read up on the event, that it was a historic all-women spacewalk that had to be canceled once before because they didn’t have two spacesuits of the right size.

Anyhow, the two events together got me thinking about our sense of collective nouns like “us” and “we” and how in everyday life, they tend to factor across obvious tribal, gender, or other sorts of identity faultlines. Sometimes, it can seem like there is no such thing as a shared “we” that applies to humanity as a whole.

In my more cynical moments, I tend to think that every use of the word “we” is a disingenuous attempt to humanize some people at the expense of others. My line about this is a version of the principle: you cut the cake, I’ll pick the bigger half. The identitarian version is: you decide what rights are basic human rights, I’ll decide who counts as human. Which is the version that has historically been the most common one practiced. When people say “we the people,” they typically mean a particular subset of people counting as human.

Space missions are a reminder that there is substance to both the differences and commonalities that make us human.

On the one hand, space missions reinforce the sense of idealism that yes, there is in fact such a thing as a non-vacuous universal “we” that includes all humans, and perhaps all living things. When any human does something interesting in space, we all participate in the moment. When I logged on this morning, there were 14,000 viewers of the live feed. That’s fascinating. Honestly, I’d be very interested in seeing the demographic breakdown of that audience.

When any human does something in space, what they do is human at a very basic level: they move, they breathe air, they grip things, they communicate. All the trivial unconscious shared humanity, including a sense of up, that we forget here on earth, becomes a very live concern in space. So yeah, the idealism has substance.

Hell, even a dog or monkey in space evokes identification.

Recently, a Chinese lunar lander even grew a sapling on the Moon, and frankly, I identify with that sapling too. Life in space is a very powerful reminder of how much all of life has in common. Yes, there is a Hobbesian, nature-red-in-tooth-and-claw aspect to nature as well, but it is amazing how much life has in common.

But on the other hand, space is also a reminder that we can’t pretend identity issues are entirely made-up political bullshit.

We’ve had a complex bit of space technology, the ASTP docking system, possibly designed a certain way because of gender sensitivities. We apparently had the first all-female spacewalk delayed because they didn’t have two suits in the right size. And these are not cosmetic matters. It’s not all virtue signaling or identity signaling. Matters of life and death hinge on things like spacesuits being the right size. The live video showed this starkly: periodically the ground controller would ask the astronauts for suit checks. So, it’s real life-and-death stuff.

So yeah, space missions show us that both our differences and commonalities have deep substance to them.

But overall, the moral of the story of space exploration as revealed by the events of this week, reflecting on the life of Alexey Leonov, ASTP and the historic event of the first all-woman spacewalk, is a pretty uplifting one.

It’s hard, but we don’t have to choose between immutably essentialized identities on the one hand, and universalist tendencies to identify with all life on the other. Our differences and similarities are both real, and they both matter, and we, and I do mean we as a species now, have to learn to accommodate both in our collectivist tendencies.

And to bring this back to earth from space: think about this in terms of all the things that absorb us here on earth every day as part of the culture wars and the news headlines. Whenever somebody says we must do this, we must combat climate change, we must combat sexism, we must not let identity and political correctness destroy things, whichever side you’re on, there’s a lot of we and us being used in conversation. Most of the time, they indicate we’s and us’es that are less than universal, and we all recognize that, and sometimes we call each other out on it.

Like one of the most common sophomoric debate tactics is, when an opponent says something like “we must do X,” to challenge them on which “we” they are talking about. Even though this is a tactic you learn in college, it is important to call out, and force people to define and defend, the level of collectivism at which they think good things are good and evil things are evil.

You kinda have to make people take ownership of their we’s and us’es.

So that’s the reflection of the week on the lessons of space walks and historic space events here on earth. If you didn’t know any of this history, I recommend taking 15 minutes to google and learn about it. It’s fascinating stuff, especially the ASTP mission.

The Ascent of Conflict

From pitched battles to magical weirding

This week’s post is very visual, with 2 pictures, so no podcast version. For the first picture, I have a 2-level 2x2 for you to puzzle over, positing the evolution of a new regime of conflict — magical weirding — beyond the most advanced kind of the last century, OODA-loop conflict. This is fairly early stage thinking, so I haven’t yet distilled it down to a very clear account, so bear with me while I work it all out :)

The outer 2x2 is rules of engagement (how you fight) versus values (why you fight), with both axes running from shared to not-shared.

The inner 2x2 is whether the unshared outer attribute is symmetrically or asymmetrically legible (ie whether or not both sides understand each other on the not-shared attribute(s) equally well/poorly).

The inner 2x2 is only full-rank in the upper right of the outer 2x2. In the other outer quadrants it is degenerate as one or both of the legibility symmetry axes become moot due to the corresponding trait being shared and therefore not subject to conflict per se (though this is something of an idealization): 2x1, 1x2, or 1x1. So you get 1+2+2+4=9 cases.
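The 1+2+2+4 counting can be sketched mechanically. This is a minimal illustration in my own notation, not anything from the original diagram: each not-shared outer axis contributes an inner symmetric/asymmetric legibility choice, and shared axes are degenerate.

```python
from itertools import product

def regimes():
    """Enumerate the conflict regimes implied by the nested 2x2."""
    cases = []
    # Outer 2x2: are rules of engagement (RoE) and values shared?
    for roe_shared, values_shared in product([True, False], repeat=2):
        # The inner 2x2 is degenerate along any shared axis: no legibility
        # question arises for a trait both sides share.
        unshared = [axis for axis, shared in
                    [("RoE", roe_shared), ("values", values_shared)]
                    if not shared]
        # Each unshared axis doubles the count: symmetric vs. asymmetric legibility.
        for legibility in product(["symmetric", "asymmetric"], repeat=len(unshared)):
            cases.append({
                "RoE shared": roe_shared,
                "values shared": values_shared,
                "legibility": dict(zip(unshared, legibility)),
            })
    return cases

print(len(regimes()))  # 1 + 2 + 2 + 4 = 9
```

The bottom-left outer quadrant (both shared) contributes exactly one case, the off-diagonal quadrants contribute two each, and the top-right (neither shared) contributes four.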

To read this recursive 2x2:

  • Start in the bottom left of the outer 2x2, which is the degenerate case of the simplest kind of conflict, a pitched conflict with shared values and shared rules of engagement, and no asymmetries.

  • Then check out the 2x1 and 1x2 cases in the off-diagonal quadrants (rivalry and open competition, each with 2 subtypes, corresponding to the non-degenerate legibility asymmetry).

  • Finally visit the top right (total war, 4 subtypes).

The most complex kind of conflict is where both values and rules of engagement are not shared, and within this dissonant condition, both are asymmetrically legible.

This is the regime I think of as magical weirding, where all sides feel like something magical is going on, whether they are winning or losing. Magical weirding is the conflict regime induced by everybody simultaneously trying to figure out the capabilities of a new generation of technology, equally unfamiliar to all at the start. The Great Weirding we’re in now is obviously due to the emergence of a software-eaten world.

The point of the nested 2x2 is to show the emergence of a new dimensionality to conflict via symmetry breaking and unflattening. We bootstrap from 2 dimensions to 4 by adding legibility asymmetry along 0, 1 or 2 axes.

Dimensional emergence is a kind of arrow of time (the future is higher dimensional, with hidden degenerate dimensions becoming visible and non-degenerate via symmetry breaks; somebody award me an honorary crackpot physics degree already).

If the nested 2x2s are confusing, here is the same set of ideas illustrated in a flattened, serialized, evolutionary view. Some information is lost in this view, but the “ascent of conflict” aspect via increasing conflict dimensionality is clearer. On the plus side, this view reveals the evolving temporal structure of the conflict patterns more clearly, which the nested 2x2 view does not.

Brief descriptions of all 9 regimes, in roughly increasing order of both conflict complexity and temporal complexity (hence “ascent of conflict”). This is also the rough historical order of evolution of conflict, but all 9 types can be found in at least nascent form throughout history.

  1. Pitched conflict/Sports: Shared Rules of Engagement (RoE), shared values, all around unbroken symmetry. Example: sports with low levels of cheating and no evolution in rules. This is an atemporal kind of conflict, ie with no difference between future and past. This is a kind of conflict AIs are eventually guaranteed to win.

  2. Rivalry/Honor conflict: Shared RoE, unshared values, symmetric values legibility. Example: tribal warfare of the BigEndian vs. LittleEndian variety. Note that symmetric legibility does not mean high legibility; both sides might equally misunderstand the other side’s values, leading to more genuine anger in the conflict. This too is an atemporal kind of conflict. For any random battle in an honor conflict, the future and past both look like the same kind of pointless, infinitely iterated bloodshed, with forgotten origins and no end in sight (if you want to be fussy, you could call this very weakly temporal, since the past is distinguished by a mythological conflict-origin story, which however has no practical import).

  3. Rivalry/Beef: Shared RoE, unshared values, asymmetric values legibility. One side, the initiator of the beef, feels like they understand the other side’s values, but are themselves beyond the understanding of the rival, due to having achieved a moral evolution the other cannot comprehend. The other side is therefore seen as being left behind in a state of irredeemable sin and darkness. It took me a while to see this, but a beef in the modern sense (as it emerged in music in particular) is actually an asymmetric honor conflict that must be deliberately provoked by one side on the basis of a claimed moral (or, what is almost the same thing, aesthetic) evolution. It can’t just be an atemporal tribal blood-feud. Unlike an honor conflict, in a beef, one side expects to win absolutely because they are on the more evolved “right” side of a predestined history. A beef is the simplest kind of asymmetric conflict, and the first one with a temporality, ie an arrow of time, to it (one side believes it knows the direction towards the “right” side of history).

  4. Open Competition/Systematic Conflict: Shared values, unshared RoE, symmetric RoE legibility. This is typical business competition where both sides value the same things for the same reasons (like market share), but bring different rules of engagement that the other side understands but rejects as inferior. This is often the friendliest kind of conflict, conducted in a spirit of “may the best side win” that is almost scientific, like experimental A/B testing. This is a temporal conflict, with 2 arrows of time (ie a possible-worlds fork in history), but the superior temporality is discovered as an outcome of the conflict rather than assumed at the outset as in a beef. This is a judgment-of-history type temporal conflict (honor conflicts and beefs, by contrast, are judgment-of-heaven). Before the conflict, we don’t know if VHS or Betamax is the right side of history. After the conflict, we do. Note that this is not the same as a values judgment; the losers of a systematic conflict may still think they are superior on values, and view the outcome as evidence of historical decline. Systematic conflict therefore decouples presumptions about moral good from historical evolution, creating broader consensus notions of inevitable progress and decline among winners and losers.

  5. Open Competition/Disruption Conflict: Shared values, unshared RoE, asymmetric RoE legibility. This is typical business disruption, where one side understands the other better, and disrupts it. The other side does not understand what is going on until it is too late. This is the beginning of true time-based competition, with a live race between competing temporalities rather than a 1-point fork in history. One common line about disruption suggests how/why: can the disruptor figure out distribution before the incumbent figures out innovation? Crucially, both sides agree that the disruptor is “ahead” temporally (and temporarily) in at least a narrow sense. The question is whether the incumbent can accelerate and overtake from behind. There is an element of a race between historical times, S-curve vs. S-curve. This is a new phenomenon in our evolutionary ascent model: the idea that “overtaking” is possible because historical time order is not fixed and merely “discovered” as a sense of progress/decline, but something to be won. The winner gets to invent not just the future, but the definition of “progress” itself, in the short term.

  6. Total War/Crusades: Unshared values and RoE, symmetric legibility on both values and RoE. The early Crusades are a great example. The combatants were two different religions (and associated historicist eschatologies), with different military heritages, but understood the differences. Each side viewed the other side’s playbook as somewhat dishonorable, its values as evil/against nature, and its historicism as false. Crusades are the simplest kind of total war, and all kinds of total war are true time-based conflicts: races among competing temporalities, with history itself being the prize. In crusades, either or both sides may believe they are “ahead” or “behind” in historical time, depending on whether they see the overall arc of history as progress or decline (obviously, you want to be behind if history is in a decline condition, and ahead if it is in a progress condition, with the goals being to decelerate and reverse, or accelerate, historical time as it heads towards an imagined dystopia or utopia). Crusades are often about literal disagreements about whether a new event in time is one already prophesied, or noise; for example, “is this person the second coming of that person?” (In Asimov’s psychohistory, this is taken very literally, with the Mule representing an actual derailing of history.)

  7. Total War/Order vs. Chaos: Unshared values and RoE, symmetric legibility on RoE, asymmetric legibility on values. In this type of conflict, which dominated the second half of the 20th century, the basic dynamic is order versus chaos, often reducible to conservatism vs. progressivism, where one side’s values are illegible to the other side, and are therefore viewed as cosmically nihilist profanity and degeneration rather than just different or morally inferior. To the side that views itself as representing Order, the conflict seems like an existential decline-and-fall-of-civilization conflict, presenting a save-the-world imperative. The temporality race here is between temporalities of different orders. It’s not a question of whether Christian or Islamic eschatology is more correct. It is a question of abstract order racing against the forces of chaos. The side viewed as chaos believes it has discovered a nascent new order waiting to be born; a belief in having uncovered epistemic rather than moral progress. So what the conservative side views as chaos, the progressive side views as discovered new knowledge waiting to be decrypted to reveal its meaning. This is a level of conflict where the idea of novelty and progress are fundamentally comprehended and accommodated (“compression progress” in the sense of Schmidhuber or what I called the Freytag staircase in Tempo). In Asimovian psychohistory terms, this would be a case of the Mule being viewed as a turn of history to be understood and accommodated through steering in a new direction, rather than a disturbance to be rejected to preserve a “Seldon Plan”.

  8. Total War/Inside the OODA loop: Unshared values and RoE, symmetric legibility on values, asymmetric legibility on RoE. This was the state of the art 20 years ago, before the internet. Each side understood what the other side was fighting for, but one side’s style of play was inscrutable to the other, and this translated into one side’s asymmetric ability to get inside the OODA loop of the other, create FUD, and collapse it from within, directly destroying the other side’s temporality and reducing it to an atemporality (there is evidence that this is an actually measurable psychological effect, not just a metaphysical idea). One side experiences serendipity, the other zemblanity. This describes a lot of both military conflict and globalized business competition (between US and Japanese businesses for example) after WW2. Unlike Crusades or Order vs. Chaos conflicts, an OODA conflict does not require assumption of an absolute historicist temporality of progress or decline, only a thermodynamic, entropic one. OODA conflict is not just time-based competition, but relativistic time-based competition, with no need to assume that one or the other side is more “advanced” or “degenerate” historically, in either a values-based or rules-of-engagement sense. The winner is not so much on the right side of history, as it is the side that “wins” the right to history in a non-deterministic way. The understanding of historical order here is emergent and constructivist, not a natural or divine predestination to be uncovered. The temporal structure of conflict is also fine-grained, direct and tactical, rather than coarsely historicist. It is a battle over the now of conflict itself (the initiative/freedom to set the tempo) rather than past and future, before/after the conflict. The other side’s experience of time itself is collapsed through conflict. Temporal relativism does not mean moral relativism though. 
OODA conflict can still be based on absolute values, so long as they are consistent with a thermodynamic arrow of time. So the idea that liberal democracy represents the end of history for instance, is an OODA-style values doctrine. It is just an emergent, constructivist historicism rather than a received one. Another example is in the later books of Asimov’s Foundation saga, where the mysterious Gaians get “inside the OODA loop” of the Second Foundation, keeping the Seldon Plan unreasonably on track, as ongoing invention rather than prophesy unfolding on schedule. You could say that following Alan Kay’s dictum, the Gaians were inventing the future because it was easier than predicting it.

  9. Total War/Magical Weirding: The most extreme and evolved form of conflict, which in some sense is as close to a Darwinian evolutionary competition for a niche as you can get, while still being consciously engaged in at the level of human intentionality rather than the genetic level. Here, the structure and evolution of the conflict is at some level highly surprising to all, whether they are winning or losing, because the conflict itself is generating intelligence and discovery new to all, and creating potential for win-win resolutions through the conflict. The outcome feels beyond serendipitous even to the winner, and creates an imperative to try and understand what happened, to consolidate gains and prevent backsliding. I call this magical weirding for two reasons. It is magical in the sense of Arthur C. Clarke: any sufficiently advanced technology is indistinguishable from magic, even to the inventors themselves. You could say the winning side figured out how to use a magical new weapon available to all, by trial and error, but still does not understand the laws of the new magic. The weirding part refers to the subjective experience. It is neither the FUD of losing nor the sense of confident, serendipitous mastery of winning. It is a sense of more going on than you can understand or control, even if you are winning. In terms of temporality, magical weirding moves beyond even relativistic temporality to full-blown multitemporality. The conflict is creating more time than it is destroying, so winning does not guarantee mastery of time.

So that’s my work-in-progress theory of the ascent of conflict, understood as an increasing dimensionality, complexifying-temporality evolutionary path (this is actually a rough-cut outtake from work I’m doing on my multitemporality project).
