Inventing Time

  
Today I want to talk about time, which is a subject I’m researching quite a lot these days. In particular, I want to talk about two of the most-quoted lines in technology conversations about time.

The first one is Alan Kay’s famous line: it is easier to invent the future than to predict it. Alan Kay is a famous computer scientist who was at PARC.

And the second line is from William Gibson, the pioneering cyberpunk science fiction writer, who is famous for the line: the future is already here, it is just unevenly distributed.

What I want to do in this episode is change your understanding of such lines from figurative to literal: the future being invented not in the sense of specific things or events “contained” by the future so to speak, or coming from the future and “contained” in the present, but in the sense of time itself being something that is invented.

Let’s start with a few examples.

In the last few years I’ve experienced a few technologies from the unevenly distributed future, as I’m sure many of you working in technology have. And I want to talk about four in particular: riding in a Tesla, trying on an Oculus VR headset, making a cryptocurrency transaction, and trying on a Magic Leap AR headset.

So the interesting thing is, my reaction to these four experiences was different in each case.

On one end of the spectrum we have Magic Leap and crypto. Both of those, when I tried them, were interesting, exciting, and stimulating; it was fun to try them. But neither felt like an inevitable part of the future, at least to me, subjectively speaking.

In terms of Alan Kay’s line, they were auditioning for the role of being part of the invented future, but they were not decisively part of it yet, at least as far as I’m concerned. And in terms of William Gibson’s line, they may or may not be part of the actual unevenly distributed future. They felt like they might equally well be part of a forked future we may not go down, the way I imagine it felt to play a Betamax tape when it was still a competitor to VHS back in the day. That’s an important idea to recognize: there are technological options we discover, uncover, and develop, but don’t necessarily exercise by going down the future they create.

The Oculus headset, now that felt a little more substantial, like it was definitely part of the future being invented, but not necessarily an actual piece of the unevenly distributed future that I was experiencing in the present. Something like it seems inevitable; it feels like it rhymes with something from the future, but perhaps what we will actually see in the future is not that exact kind of thing. You can think of it as the future in beta-test form, or at least that’s what it felt like to me. I keep emphasizing the subjective aspect because what we’re talking about is a gut experience of the temporal quality of a technological experience. We’re not talking about rational assessments of future probabilities; we’re talking about how real a sense of time feels.

And finally, riding in a Tesla made the electric vehicle future seem utterly inevitable in a way that kinda killed the present for me. Suddenly I could no longer look at gasoline cars the same way. Driving my own car felt different, like I was stuck in the past, waiting for the price of the future to come down to the point where I could afford to live in it. So a Tesla creates the future in the sense of both the Alan Kay and William Gibson quotes. It makes the future real in a deep way that is like making time itself real. And you know this because the present starts to feel different, like you’re heading down a dead end, into a lame-duck future. You’ll have to either abandon it as soon as you can, or end up dying with it.

Stepping back, I think it is important to understand innovation as the process of literally inventing time itself. The mark of success is that the present starts to feel dead, like the past, and the beachhead of the future in the present, let’s call it a Gibsonian temporal colony, feels like a portal for getting back into the present. So it’s almost like there’s been a time shift and you’ve been shifted back into the past and you have to step through a portal to get back to the present. There is a sense of inevitability to your experience of the new technology, and a sense of derealization — things seeming not quite real — in your continued experience of existing incumbent technologies.

You have to get very sensitive to this feeling in your gut if you want to do good work in the world of technology, even though of course it can be very misleading. That feeling in your gut, that deep-down sense that this is the future being invented, that this is time more real than the time you’re living in, could be mistaken. That’s why I again emphasize that this is a subjective feeling. But I think it is a very reliable indicator. When you get that feeling, there is a much stronger chance that you’re going to be right than wrong.

So you have to get very sensitive to this feeling if you want to do good technology, whether as an engineer, an entrepreneur, an investor, or an early adopter making new culture with it. And this is not the same thing as feeling excited or stimulated by the future. It is not the same thing as logically and rationally concluding that a certain scenario is the most likely future, and investing in it. It’s a sort of all-in psychological investment of identity into a sense of time that feels more real than the one you’re in. It’s a sense of switching timelines.

And this feeling can be evoked by very mundane and unexciting things. It doesn’t have to be a big flash-bang feeling.

An example of this: when I first moved to the US, I used a microwave oven for the first time, since they were not yet popular in India. And an Indian friend of mine taught me the trick of microwaving papads, usually called papadums when you get them in restaurants in the US, which are these little dried lentil crackers you typically either deep fry or roast on an open flame. But the microwave cooked them perfectly, and that was the moment when it was suddenly clear to me that this was the future of the Indian kitchen. So that’s a pretty mundane example. It’s not like experiencing space travel or something science-fictiony like that. It’s a very mundane example of switching timelines and feeling that one kind of invented future involving a certain technology is more real than the time you’re experiencing right now.

Once you get sensitized to this feeling of going down one fork of time rather than another, and the idea of more or less real timelines, I think you’re psychologically equipped to be much smarter about how you relate to technology. You’re equipped to be bolder about how you engage with the future. So it’s a skill worth cultivating. In a way, it’s learning a kind of time travel within the present.

And learning time travel is probably figuratively the most important skill you can develop as a technologist. And I know it sounds weird, but this is the reason all of us in technology tend to love science fiction and sort of reach for ways to think about experiencing time in much more real ways. We are actually training our gut, we’re training our sense of time being real or unreal, learning to make forks and sort of fork-switching decisions at the right time, and getting a sense of are we in the past, are we in the future, are we in the present, how do we get back into the present, how do we actually make part of the future more real and bring it into the present. So these are all sort of temporal mechanics skills that you learn once you start to cultivate this feeling.

So that’s my topic for the day, let me know what you think. We’re just at the 10 minute mark, so looks like I’m back to slightly shorter podcast lengths, and I’ll be back again next week or the week after with my next episode, thanks.

EconTalk and Zion 2.0 Podcasts

Two guest appearances

Still getting into a rhythm here in the podcasting world, so I didn’t have an episode for the last two weeks. I don’t have one this week either, but I do have two guest appearances on other people’s podcasts to share, which amount to a couple of hours worth of me talking.

First, I went on Russ Roberts’ EconTalk podcast to talk about Waldenponding, which I’ve written about on this newsletter several times now (the links are on the podcast page, along with several other links you might want to explore).


Second, I went on Collin Morris’ Zion 2.0 podcast, which was a much more free-form conversation around themes I write about both here and on ribbonfarm.


It’s an interesting exercise for me to contrast my “home” monologue podcast style with the style I end up adopting in conversation with specific people.

We’ll be back to regular programming with my short-format monologues next week.

Spacewalks and the Species

  

Two things happened this week. Last Friday, Alexey Leonov, the first human to walk in space (in 1965), passed away. And this morning, Christina Koch and Jessica Meir went on the first all-women spacewalk on the ISS. So two historic events. And they got me thinking about the meaning of we in its most universalist, species-level sense.

Let’s take them in order.

Alexey Leonov was the first human to walk in space. He was also on the crew of one of the earliest experiments in international space cooperation, the Apollo-Soyuz Test Project, ASTP. So this was at the height of the Cold War.

Before we had the ISS, the ASTP mission was as close as humans have ever come to a Star Trek-like Federation. In some ways, it was a more impressive technical, social, and political achievement, since it took place at the height of the Cold War, and the mission had to be designed around the existing US and Soviet programs, which used different designs, unlike the ISS, which was designed collaboratively by multiple nations.

As a kid, I owned a beautifully written and illustrated book by Leonov about ASTP (he was also an accomplished artist; if you google him, you’ll find a bunch of his paintings, including space paintings), and though this may sound cheesy, that book was probably one of the things that got me interested in space technology and eventually led me to graduate school for aerospace engineering, where I worked on space mission problems for my PhD.

I remember a drawing of the Apollo-Soyuz docking mechanism in particular in Leonov’s book, and wondering at the time about the general problem of linking two incompatible technologies, which I think is kinda really symbolic of the whole problem of species-level human coordination. I want to digress a bit to talk about that.

If you’re an engineer you know this: in any design, any time two parts come together to form a coupling, they tend to be designed asymmetrically, because that tends to be the easiest way. So one part gets designated male and the other part female, and the logic is the logic you would expect. It’s one of the rare funny bits of sexual logic in the generally sexless world of engineering jargon.

Later, as an adult, long after I read Leonov’s book, I heard this story (I don’t know how true it is) that one of the bones of contention — which Leonov didn’t talk about in his book — was making sure the docking system design was symmetrical, in the form of what is known as an androgynous coupling, because neither side wanted to be the “female” side. Apparently the nickname of the system was “androgynous brothers”, which I find hilarious. The official justification was of course, more technical: that with an androgynous coupling, either side could play the active or passive role, and that would make for greater mission flexibility and system-level redundancy. But I kinda buy the theory that the system ended up ungendered for less technical reasons. It sorta makes sense for that era of technology.

You could say ASTP was consciously designed to be not just a showcase of global cooperation, briefly bridging the divide between the two sides of the Cold War. It also ended up being unintentionally gender-neutral, for what were perhaps the wrong reasons, long before we had culture wars about gender-neutral bathrooms here on earth.

And speaking of gender and space, that brings us to the second historic event of the week.

This morning, I just happened to catch a retweet of the NASA live feed of the space walk. I had no idea it was going on, but I am always willing to interrupt whatever I’m doing to watch space stuff. So I started watching, and I found myself drawn to the very basic shared human things that space forces us to grapple with. For example, I found myself noting and counting the orientation words the astronauts were using, like up, down, aft, fore etc and wondering about how humans think and talk about orientation in microgravity, where there is no natural direction of up, which is of course one of the most basic shared human things, a shared sense of which way is up.

At the back of my mind I was also wondering if women coordinate and communicate any differently on complex tasks than men. The ground control person was also a woman, so the entire audio track for the live broadcast was female, which was interesting. But the gender aspect was less interesting to me than the basic human aspect. Here we are, as bodies in space, being governed by the laws of classical physics. Inertia, movement, velocities, accelerations. That was the more interesting part.

In fact, I didn’t realize till later, when I read up on the event, that it was a historic all-women spacewalk that had to be canceled once before because they didn’t have two spacesuits of the right size.

Anyhow, the two events together got me thinking about our sense of collective nouns like “us” and “we” and how in everyday life, they tend to factor across obvious tribal, gender, or other sorts of identity faultlines. Sometimes, it can seem like there is no such thing as a shared “we” that applies to humanity as a whole.

In my more cynical moments, I tend to think that every use of the word “we” is a disingenuous attempt to humanize some people at the expense of others. My line about this is a version of the principle: you cut the cake, I’ll pick the bigger half. The identitarian version is: you decide what rights are basic human rights, I’ll decide who counts as human. Which is the version that has historically been the most common one practiced. When people say “we the people,” they typically mean a particular subset of people counting as human.

Space missions are a reminder that there is substance to both the differences and commonalities that make us human.

On the one hand, space missions reinforce the sense of idealism that yes, there is in fact such a thing as a non-vacuous universal “we” that includes all humans, and perhaps all living things. When any human does something interesting in space, we all participate in the moment. When I logged on this morning, there were 14,000 viewers of the live feed. That’s fascinating. Honestly, I’d be very interested in seeing the demographic breakdown of that audience.

When any human does something in space, what they do is human at a very basic level: they move, they breathe air, they grip things, they communicate. All the trivial unconscious shared humanity, including a sense of up, that we forget here on earth, becomes a very live concern in space. So yeah, the idealism has substance.

Hell, even a dog or monkey in space evokes identification.

A Chinese lunar lander recently even grew a sapling on the Moon, and frankly, I identify with that sapling too. Life in space is a very powerful reminder of how much all of life has in common. Yes, there is the Hobbesian, nature-red-in-tooth-and-claw aspect of nature as well, but it is amazing how much life has in common.

But on the other hand, space is also a reminder that we can’t pretend identity issues are entirely made-up political bullshit.

We’ve had a complex bit of space technology, the ASTP docking system, possibly designed a certain way because of gender sensitivities. We apparently had the first all-female spacewalk delayed because they didn’t have two suits in the right size. And these are not cosmetic matters. It’s not all virtue signaling or identity signaling. Matters of life and death hinge on things like spacesuits being the right size. The live video showed this starkly: periodically the ground controller would ask the astronauts for suit checks. So, it’s real life-and-death stuff.

So yeah, space missions show us that both our differences and commonalities have deep substance to them.

But overall, the moral of the story of space exploration as revealed by the events of this week, reflecting on the life of Alexey Leonov, ASTP and the historic event of the first all-woman spacewalk, is a pretty uplifting one.

It’s hard, but we don’t have to choose between immutably essentialized identities on the one hand, and universalist tendencies to identify with all life on the other. Our differences and similarities are both real, and they both matter, and we — and I do mean we as a species now — we have to learn to accommodate both in our collectivist tendencies. They both matter, differences and commonalities.

And to bring this back to earth from space: think about this in terms of all the things that absorb us here on earth every day as part of the culture wars and the news headlines. Whenever somebody says we must do this, we must combat climate change, we must combat sexism, we must not let identity and political correctness destroy things, whichever side you’re on, there are a lot of we and us words being used in conversation. Most of the time, they indicate we’s and us’es that are less than universal, and we all recognize that, and sometimes we call each other out on it.

One of the most common sophomoric debate tactics is, when an opponent says something like we must do X, to challenge them on what “we” we are talking about. Even though this is a tactic you learn in college, it is important to call out the we’s, and force people to define and defend the level of collectivism at which they think good things are good and evil things are evil.

You kinda have to make people take ownership of their we’s and us’es.

So that’s the reflection of the week on the lessons of space walks and historic space events here on earth. If you didn’t know any of this history, I recommend taking 15 minutes to google and learn about it. It’s fascinating stuff, especially the ASTP mission.

The Ascent of Conflict

From pitched battles to magical weirding

This week’s post is very visual, with 2 pictures, so no podcast version. For the first picture, I have a 2-level 2x2 for you to puzzle over, positing the evolution of a new regime of conflict — magical weirding — beyond the most advanced kind of the last century, OODA-loop conflict. This is fairly early stage thinking, so I haven’t yet distilled it down to a very clear account, so bear with me while I work it all out :)

The outer 2x2 is rules of engagement (how you fight) versus values (why you fight), with both axes running from shared to not-shared.

The inner 2x2 is whether the unshared outer attribute is symmetrically or asymmetrically legible (ie whether or not both sides understand each other on the not-shared attribute(s) equally well/poorly).

The inner 2x2 is only full-rank in the upper right of the outer 2x2. In the other outer quadrants it degenerates to 2x1, 1x2, or 1x1, as one or both of the legibility symmetry axes become moot: the corresponding trait is shared, and therefore not subject to conflict per se (though this is something of an idealization). So you get 1+2+2+4=9 cases.
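
As a sanity check on the counting, here is a minimal Python sketch, my own illustrative encoding rather than anything from the original post, that enumerates the cases, with None marking a degenerate legibility axis:

    from itertools import product

    # Outer 2x2: is each attribute (values, rules of engagement) shared?
    # Inner axes: each unshared attribute gets a legibility axis
    # (symmetric/asymmetric); a shared attribute's axis is degenerate (None).
    cases = []
    for values_shared, roe_shared in product([True, False], repeat=2):
        values_leg = [None] if values_shared else ["symmetric", "asymmetric"]
        roe_leg = [None] if roe_shared else ["symmetric", "asymmetric"]
        for vl, rl in product(values_leg, roe_leg):
            cases.append((values_shared, roe_shared, vl, rl))

    print(len(cases))  # 1 + 2 + 2 + 4 = 9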

To read this recursive 2x2:

  • Start in the bottom left of the outer 2x2, which is the degenerate case of the simplest kind of conflict, a pitched conflict with shared values and shared rules of engagement, and no asymmetries.

  • Then check out the 2x1 and 1x2 cases in the off-diagonal quadrants (rivalry and open competition, each with 2 subtypes, corresponding to the non-degenerate legibility asymmetry).

  • Finally visit the top right (total war, 4 subtypes).

The most complex kind of conflict is where both values and rules of engagement are not shared, and within this dissonant condition, both are asymmetrically legible.

This is the regime I think of as magical weirding, where all sides feel like something magical is going on, whether they are winning or losing. Magical weirding is the conflict regime induced by everybody simultaneously trying to figure out the capabilities of a new generation of technology, equally unfamiliar to all at the start. The Great Weirding we’re in now is obviously due to the emergence of a software-eaten world.

The point of the nested 2x2 is to show the emergence of a new dimensionality to conflict via symmetry breaking and unflattening. We bootstrap from 2 dimensions to 4 by adding legibility asymmetry along 0, 1 or 2 axes.

Dimensional emergence is a kind of arrow of time (the future is higher dimensional, with hidden degenerate dimensions becoming visible and non-degenerate via symmetry breaks; somebody award me an honorary crackpot physics degree already).

If the nested 2x2s are confusing, here is the same set of ideas illustrated in a flattened, serialized, evolutionary view. Some information is lost in this view, but the “ascent of conflict” aspect via increasing conflict dimensionality is clearer. On the plus side, this view reveals the evolving temporal structure of the conflict patterns more clearly, which the nested 2x2 view does not.

Brief descriptions of all 9 regimes follow, in roughly increasing order of both conflict complexity and temporal complexity (hence “ascent of conflict”); a compact encoding of the full taxonomy follows the list. This is also the rough historical order of evolution of conflict, but all 9 types can be found in at least nascent form throughout history.

  1. Pitched conflict/Sports: Shared Rules of Engagement (RoE), shared values, all-around unbroken symmetry. Example: sports with low levels of cheating and no evolution in rules. This is an atemporal kind of conflict, ie with no difference between future and past. This is a kind of conflict AIs are eventually guaranteed to win.

  2. Rivalry/Honor conflict: Shared RoE, unshared values, symmetric values legibility. Example: tribal warfare of the BigEndian vs. LittleEndian variety. Note that symmetric legibility does not mean high legibility. So both sides might equally misunderstand the other side’s values, leading to more genuine anger in the conflict. This too is an atemporal kind of conflict. For any random battle in an honor conflict, the future and past both look like the same kind of pointless, infinitely iterated bloodshed, with forgotten origins and no end in sight (if you want to be fussy, you could call this very weakly temporal, since the past is distinguished by a mythological conflict-origin story, which however has no practical import).

  3. Rivalry/Beef: Shared RoE, unshared values, asymmetric values legibility. One side, the initiator of the beef, feels like they understand the other side’s values, but are themselves beyond the understanding of the rival, due to having achieved a moral evolution the other cannot comprehend. The other side is therefore seen as being left behind in a state of irredeemable sin and darkness. It took me a while to see this, but a beef in the modern sense (as it emerged in music in particular) is actually an asymmetric honor conflict that must be deliberately provoked by one side on the basis of a claimed moral (or what is almost the same thing, aesthetic) evolution. It can’t just be an atemporal tribal blood-feud. Unlike an honor conflict, in a beef, one side expects to win absolutely because they are on the more evolved “right” side of a predestined history. A beef is the simplest kind of asymmetric conflict, and the first one with a temporality to it, ie with an arrow of time to it (one side believes it knows the direction towards the “right” side of history).

  4. Open Competition/Systematic Conflict: Shared values, unshared RoE, symmetric RoE legibility. This is a typical business competition where both sides value the same things for the same reasons (like market share), but bring different rules of engagement that the other side understands but rejects as inferior. This is often the friendliest kind of conflict, conducted in a spirit of “may the best side win” that is almost scientific, like experimental A/B testing. This is a temporal conflict, with 2 arrows of time (ie a possible-worlds fork in history), but the superior temporality is discovered as an outcome of the conflict rather than assumed at the outset as in a beef. This is a judgment-of-history type of temporal conflict (honor conflicts and beefs, by contrast, are judgment-of-heaven). Before the conflict, we don’t know if VHS or Betamax is the right side of history. After the conflict, we do. Note that this is not the same as a values judgment; the losers of a systematic conflict may still think they are superior on values, and view the outcome as evidence of historical decline. Systematic conflict therefore decouples presumptions about moral good from historical evolution, creating broader consensus notions of inevitable progress and decline among winners and losers.

  5. Open Competition/Disruption Conflict: Shared values, unshared RoE, asymmetric RoE legibility. This is typical business disruption, where one side understands the other better, and disrupts it. The other side does not understand what is going on until too late. This is the beginning of true time-based competition, with a live race between competing temporalities rather than a 1-point fork in history. One common line about disruption suggests how/why: can the disruptor figure out distribution before the incumbent figures out innovation? Crucially, both sides agree that the disruptor is “ahead” temporally (and temporarily) in at least a narrow sense. The question is whether the incumbent can accelerate and overtake from behind. There is an element of a race between historical times, S-curve vs. S-curve. This is a new phenomenon in our evolutionary ascent model: the idea that “overtaking” is possible because the historical time order is not fixed, waiting to be “discovered” as a sense of progress/decline, but is something to be won. The winner gets to invent not just the future, but the definition of “progress” itself, in the short term.

  6. Total War/Crusades: Unshared values and RoE, symmetric legibility on both values and RoE. The early Crusades are a great example. The combatants were two different religions (and associated historicist eschatologies), with different military heritages, but each understood the differences. Each side viewed the other side’s playbook as somewhat dishonorable, its values as evil/against nature, and its historicism as false. Crusades are the simplest kind of total war, and all kinds of total war are true time-based conflicts: races among competing temporalities, with history itself being the prize. In crusades, either or both sides may believe they are “ahead” or “behind” in historical time, depending on whether they see the overall arc of history as progress or decline (obviously, you want to be behind if history is in a decline condition, and ahead if it is in a progress condition, with the goals being to decelerate and reverse, or accelerate, historical time as it heads towards an imagined dystopia or utopia). Crusades are often about literal disagreements about whether or not a new event in time is one already prophesied, or noise; for example, “is this person the second coming of that person?” (in Asimov’s psychohistory, this is taken very literally, with the Mule representing an actual derailing of history).

  7. Total War/Order vs. Chaos: Unshared values and RoE, symmetric legibility on RoE, asymmetric legibility on values. In this type of conflict, which dominated the second half of the 20th century, the basic dynamic is order versus chaos, often reducible to conservatism vs. progressivism, where one side’s values are illegible to the other side, and are therefore viewed as cosmically nihilist profanity and degeneration rather than just different or morally inferior. To the side that views itself as representing Order, the conflict seems like an existential decline-and-fall-of-civilization conflict, presenting a save-the-world imperative. The temporality race here is between temporalities of different orders. It’s not a question of whether Christian or Islamic eschatology is more correct. It is a question of abstract order racing against the forces of chaos. The side viewed as chaos believes it has discovered a nascent new order waiting to be born; a belief in having uncovered epistemic rather than moral progress. So what the conservative side views as chaos, the progressive side views as discovered new knowledge waiting to be decrypted to reveal its meaning. This is a level of conflict where the ideas of novelty and progress are fundamentally comprehended and accommodated (“compression progress” in the sense of Schmidhuber, or what I called the Freytag staircase in Tempo). In Asimovian psychohistory terms, this would be a case of the Mule being viewed as a turn of history to be understood and accommodated through steering in a new direction, rather than a disturbance to be rejected to preserve a “Seldon Plan”.

  8. Total War/Inside the OODA loop: Unshared values and RoE, symmetric legibility on values, asymmetric legibility on RoE. This was the state of the art 20 years ago, before the internet. Each side understood what the other side was fighting for, but one side’s style of play was inscrutable to the other, and this translated into one side’s asymmetric ability to get inside the OODA loop of the other, create FUD, and collapse it from within, directly destroying the other side’s temporality and reducing it to an atemporality (there is evidence that this is an actually measurable psychological effect, not just a metaphysical idea). One side experiences serendipity, the other zemblanity. This describes a lot of both military conflict and globalized business competition (between US and Japanese businesses, for example) after WW2. Unlike Crusades or Order vs. Chaos conflicts, an OODA conflict does not require the assumption of an absolute historicist temporality of progress or decline, only a thermodynamic, entropic one. OODA conflict is not just time-based competition, but relativistic time-based competition, with no need to assume that one or the other side is more “advanced” or “degenerate” historically, in either a values-based or rules-of-engagement sense. The winner is not so much on the right side of history, as it is the side that “wins” the right to history in a non-deterministic way. The understanding of historical order here is emergent and constructivist, not a natural or divine predestination to be uncovered. The temporal structure of conflict is also fine-grained, direct and tactical, rather than coarsely historicist. It is a battle over the now of conflict itself (the initiative/freedom to set the tempo) rather than past and future, before/after the conflict. The other side’s experience of time itself is collapsed through conflict. Temporal relativism does not mean moral relativism though. OODA conflict can still be based on absolute values, so long as they are consistent with a thermodynamic arrow of time. So the idea that liberal democracy represents the end of history, for instance, is an OODA-style values doctrine. It is just an emergent, constructivist historicism rather than a received one. Another example is in the later books of Asimov’s Foundation saga, where the mysterious Gaians get “inside the OODA loop” of the Second Foundation, keeping the Seldon Plan unreasonably on track, as ongoing invention rather than prophecy unfolding on schedule. You could say that following Alan Kay’s dictum, the Gaians were inventing the future because it was easier than predicting it.

  9. Total War/Magical Weirding: The most extreme and evolved form of conflict, which in some sense is as close to a Darwinian evolutionary competition for a niche as you can get, while still being consciously engaged in conflict at the human intentionality level rather than the genetic level. Here, the structure and evolution of the conflict is at some level highly surprising to all, whether they are winning or losing, because the conflict itself is generating intelligence and discovery new to all, and creating the potential for win-win resolutions through the conflict. The outcome feels beyond serendipitous even to the winner, and creates an imperative to try and understand what happened, to consolidate gains and prevent backsliding. I call this magical weirding for two reasons. It is magical in the sense of Arthur C. Clarke: any sufficiently advanced technology is indistinguishable from magic, even to the inventors themselves. You could say the winning side figured out how to use a magical new weapon available to all, by trial and error, but still does not understand the laws of the new magic. The weirding part refers to the subjective experience. It is neither the FUD of losing nor the sense of confident, serendipitous mastery of winning. It is a sense of more going on than you can understand or control, even if you are winning. In terms of temporality, magical weirding moves beyond even relativistic temporality to full-blown multitemporality. The conflict is creating more time than it is destroying, so winning does not guarantee mastery of time.
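
To recap the taxonomy, here is the same encoding as in the earlier sketch, mapping each of the 9 cases to its regime. The tuple order (values shared?, RoE shared?, values legibility, RoE legibility) is my own hypothetical convention, not anything from the original post:

    # Keys: (values shared?, RoE shared?, values legibility, RoE legibility);
    # None marks a degenerate axis (the attribute is shared, so legibility
    # asymmetry is moot).
    REGIMES = {
        (True,  True,  None,         None):         "Pitched conflict/Sports",
        (False, True,  "symmetric",  None):         "Rivalry/Honor conflict",
        (False, True,  "asymmetric", None):         "Rivalry/Beef",
        (True,  False, None,         "symmetric"):  "Open Competition/Systematic Conflict",
        (True,  False, None,         "asymmetric"): "Open Competition/Disruption Conflict",
        (False, False, "symmetric",  "symmetric"):  "Total War/Crusades",
        (False, False, "asymmetric", "symmetric"):  "Total War/Order vs. Chaos",
        (False, False, "symmetric",  "asymmetric"): "Total War/Inside the OODA loop",
        (False, False, "asymmetric", "asymmetric"): "Total War/Magical Weirding",
    }
    assert len(REGIMES) == 9  # matches the 1+2+2+4 count above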

So that’s my work-in-progress theory of the ascent of conflict, understood as an increasing dimensionality, complexifying-temporality evolutionary path (this is actually a rough-cut outtake from work I’m doing on my multitemporality project).

Charisma Neutrality

  

Today I want to talk about a possible emerging successor to net neutrality, which I call charisma neutrality, which I think is a plausible consequence of a very likely technological future: pervasive end-to-end encryption. (17 minutes)

Now net neutrality of course, was part of a very important chapter in the history of technology. Though the principle is now pretty much down for the count, for a few decades it played a hugely important role in ensuring that the internet was born more open than closed, and more generative than sterile.

Even though the principle was never quite as perfectly implemented as some people imagine, even when there was a strong consensus around it, it did produce enough of a systemic disposition towards openness that you could treat it as more true than not.

That era has mostly ended, despite ideological resistance, because even though it is a solid idea with respect to human speech, it is not actually such a great idea relative to the technical needs of different kinds of information flow. So as information attributes — stuff like text versus video, and real-time versus non-real-time — began to get more varied, the cost of maintaining net neutrality in the classic sense became a limiting factor.

And at least some technologists began seeing the writing on the wall: the cost of net neutrality was only going to get worse with AI, crypto, the internet of things, VR and AR.

What was good for openness and growth in the 1980s and 90s was turning into a significant drag factor by the aughts and 10s.

What was good for growing from 2 networked computers to several billion was going to be a real drag going from billions to trillions.

I think there’s no going back here, though internet reactionaries will try.

To understand why this happened, you have to peek under the hood of net neutrality a bit, and understand something called the end-to-end principle, which is an architecture principle that basically says all the smarts in a network should be in the endpoint nodes which produce and consume information, and the pipes between the nodes should be dumb. Specifically, they should be too dumb to understand what’s flowing through them, even if they can see it, and therefore incapable of behaving differently based on such understanding. Like a bus driver with face blindness who can’t tell different people apart, and can only check their tickets.

Now, for certain regimes of network operation and growth, the end-to-end principle is very conducive to openness and growth. But ultimately it’s an engineering idea, not divine gospel, and it has limits, beyond which it turns into a liability that does not actually address the original concerns.

To see why, we need to dig one level deeper.

The end-to-end principle is an example of what in engineering is usually called a separation principle. It is a simplifying principle that limits the space of design possibilities to ones where two things are separate. Another example is the idea that content and presentation must be separated in web documents. Or that the editorial and advertising sides of newspapers should be separate. Both of these again got stressed and broken in the last decade.

Separation principles usually end up this way, because there are more ways for things to be tangled and coupled together than there are for them to be separate. So it’s sort of inevitable that they’ll break down, by the law of entropy. Walls tend to leak or break down. It’s sort of a law of nature.

Whether you’re talking about walls between countries or between parts of an architecture, separation principles represent a kind of reductive engineering idealism to keep complexity in check. There’s no point in mourning the death of one separation principle or the other. The trick is to accept when the principle has done its job for a period of technological evolution, and then set it aside. But that doesn’t mean you can’t look for new separation principles to power the next stage of evolution.

One such principle has been emerging in the last decade: the end-to-end encryption principle.

The similarity in names should suggest that we’re talking about a cousin of the original end-to-end principle, and you would be right to think that. Here, what you’re saying is that only the endpoints in a network should be able to encode and decode messages, and the pipes should not.

If you think about it, this is a loosening and generalization of the original end-to-end principle. The pipes now don’t have to be dumb, but only the endpoints can control what the pipes can know, and therefore what they can do on the basis of that knowledge. The pipes are not dumb, but the endpoints are in charge. Instead of a bus driver with face blindness, all riders are now wearing masks, but their tickets can now contain any information they choose to share, and the bus driver can act on that information.
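
To make that concrete, here is a minimal sketch of the idea, my own illustration rather than anything from the episode, using the PyNaCl library; the pipe function and the message are hypothetical stand-ins. Only the two endpoints hold private keys, so the pipe can relay the message but cannot read it:

    from nacl.public import PrivateKey, Box

    # Each endpoint generates its own keypair; private keys never leave
    # the endpoint.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    def pipe(ciphertext: bytes) -> bytes:
        # The pipe can carry, delay, or drop the message, and can observe
        # its size and timing, but cannot read or silently alter it.
        return ciphertext

    # Alice encrypts for Bob using her private key and his public key.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"hello, Bob")

    # Bob decrypts with his private key and Alice's public key.
    plaintext = Box(bob_key, alice_key.public_key).decrypt(pipe(ciphertext))
    assert plaintext == b"hello, Bob"

Note that even in this idealized sketch, the pipe still sees who is talking to whom, and how much, which is exactly the heat-signature loophole discussed below.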

As with the original end-to-end principle, the idealized notion is messy in practice. I was talking to some friends who are more tech savvy about this than I am, and they pointed out that an endpoint device itself is effectively a tiny unencrypted network, with more than one computer, and that the pipes in the intra-device network lie outside the scope of this principle.

So for example, you can have an extra invisible chip installed by the carrier, or something in the OS that traps what you’re typing before it gets to the encryption layer. And of course private keys can get exfiltrated without your knowledge. Maybe in the future end-to-end encryption will apply to the internal environment of every endpoint device, recursively all the way down to every logic circuit. But we’re not there yet.

And even without going there, it’s obvious the principle is not watertight anyway. Today, routers can peek inside packets, but in the future, even if they can’t, they’ll be able to tell a lot simply from the geometry of the connection and transmission patterns, even with technologies like VPNs and zero-knowledge proofs in the mix.

The thing is, different types of communication have different external heat signatures, and with AI, the ability to make inferences from such signatures will improve. It will be an arms race. The question will be whether pipes can get good at reading heat signatures faster than endpoints can get good at full-stack encryption that is secure in practice, not just theory.

There is no such thing as perfect containment of information. That’s another law of physics. Actually it is another form of the same law that tells you walls always break down.

So yeah, the technology is messy, but I think it already works well enough that it will create a broad tendency towards this new end-to-end principle being more true than false. You will never be able to hide from the NSA or the FBI or the Chinese government perfectly, but you can make it very much more expensive for them to monitor what you’re up to.

Now, this new end-to-end principle is also based on a separation principle. I’m not entirely sure how to characterize it, but I think end-to-end encryption attempts to make an approximately clean separation between custody of data and control of data, and tries to ensure that no matter who has custody, the owner has control over usage. We’ll see how well it works in practice as it becomes more widespread.

Now for the real question. Assuming the principle holds more often than not, and is more a de facto default than an opt-in exception that only libertarian crackpots use, what does an internet based on end-to-end encryption look like?

I think what end-to-end encryption sustains, that is worth enshrining as a new value for this next chapter of evolution, is charisma neutrality.

What do I mean by that?

Well, I talked about technological charisma a few weeks ago, but here I’m talking about the regular human kind. The ability of charismatic leaders to tell mesmerizing stories that spread fast and energize large dumb crowds to act as mass movements.

Or at least, that’s what human charisma looks like. In practice, the reaction of thoughtful people to supposedly charismatic messaging is cynicism and resignation. They only listen to some self-important blowhard with an imaginary halo droning on and on, because somebody is forcing them to. Only a subset of idiot fanbois at the core of the crowd is actually enthralled by the supposedly charismatic performance. And to the extent charismatic messaging works as advertised at all, it does so by reading the core of the crowd and responding to it, creating a positive feedback loop, telling it what it wants to hear, whipping it up. So this ability to read the crowd is critical to exercising charisma.

Everybody not in this feedback core is exchanging cynical jokes or shitposting about it on side channels that are much harder to monitor. So what defines human charisma is not the claim to captivating content, but three structural factors.

  • One, the ability to keep captive audiences in place

  • Two, creating a positive feedback loop with the small core

  • And three, keeping the large cynical periphery too afraid to criticize openly

And historically, this kind of human charisma has always been a non-neutral thing. The people with the guns, able to control public spaces and distribution channels by force, had privileged access to charismatic structural modes. There’s a reason dictators mounting coups go after TV and radio stations and printing presses first. It is charisma insurance.

But end-to-end encryption as the default for communication makes it harder and harder to reserve charismatic messaging capability for yourself with guns. That’s the good takeaway from the culture wars. All charismatic messaging is created equal, so the messages are forced to fight each other in a Hobbesian war of stupid idea versus stupid idea.

The old charismatic media like large public plazas, radio, television, glitzy conferences, larger-than-life billboards, and showy parades, they don’t go away, but fewer people pay any attention to them. And it’s harder and harder to keep the attention captive. All the attention starts to sink into the end-to-end encrypted warren-like space at the edge of the network, and only the opt-in idiot core stays captive.

The cynical, anti-charismatic whispering on the margins becomes the main act, and the charismatic posturing in the center becomes a sideshow. And the whispering gets louder and bolder, and starts to drown out any charismatic messaging that does get in. Center and periphery trade places.

And with end-to-end encryption, because you can’t peek at or shape information flows without permission, even if you have large-scale centralized custody of the flows, the only way to spread or shape a message is, to a first approximation, by being a trusted part of the set of endpoints that are part of it.

Of course, more resources help you do this better — the idea of a Sybil attack is essentially based on gaining dominant access to a peer-to-peer network via a bunch of pseudo-identities, so basically sock-puppets. But it is much more expensive than simply having your goons take over the public square, secure the perimeter so nobody can leave, grab a megaphone, and bore the crowd to death while claiming charismatic reach.

In fact, the only way to exercise charisma at all will be through literal or figurative Sybil attacks. You either create a network of bot identities to dominate the end points of an information flow, or you find actual humans who are sufficiently dumb to act as your bots. And since it is becoming technically easier to detect and prevent the automated kinds of Sybil attacks, the action is shifting to human bots, essentially armies of mechanical turks.

But here there is a self-limiting effect: the value of a network drops in proportion to the percentage of bot-like idiots in it, or actual bots, so in the limiting case, your charisma can only reach mindless zombie idiots. Worse, these are the same zombie idiots you need in your core positive feedback loop, and now you have to tell them to turn around, sneak into the periphery, and act as your mindless secret agents to convert the cynics. And worst of all, you have no edge over your rivals trying to do the same thing.

That’s charisma neutrality.

And of course, in this condition, it becomes increasingly costly to control the thoughtful people, who are ultimately the ones worth controlling. The idiots are just a means to that end.

This means at some point it actually becomes easier and cheaper to simply talk to the thoughtful people rather than browbeating them with charisma. Charisma neutrality makes charisma less valuable, more equal opportunity, and more expensive. And beyond a point it starts to amplify non-charismatic thoughtful messaging over charismatic droning.

So modern networks are charisma neutral and charisma inhibiting to the extent they are end-to-end encrypted. This has huge consequences of course. Law enforcement types worry about one particular consequence, which is that the opposite of charismatic activity, namely dark, secretive underground activity, will get amplified. Particularly stuff like child abuse and terrorism.

The optimistic counter-argument is that the more thoughtful people get empowered by charisma neutrality, the harder it will be to keep such dark matters secret and secure from infiltration or whistleblowing. And remember, unlike shaping public opinion with charisma, unmasking dark activity doesn’t take dominant numbers or Sybil attacks. A single undercover law enforcement agent might be able to do enough to take down an entire network. So the dark activity networks will have to put in increasing effort to gatekeep and vet access, and maintain more internal anonymity and expensive trust mechanisms, which will limit their growth, and make them harder to get off the ground in the first place.

In other words, I’m bullish on charisma neutrality and end-to-end encryption. It’s early days yet so we are stumbling a lot on making this work well, but the benefits seem huge, and the problems seem containable.

And of course, it is important to recognize that this principle too, just like net neutrality, is not gospel. It too is just an engineering principle that will reach the end of its utility some day. Maybe it will be because of quantum computing. Maybe it will be some unforeseen consequence of the internet of things or crypto. But for now, this is the principle we need.
