The Ascent of Conflict

From pitched battles to magical weirding

This week’s post is very visual, with 2 pictures, so no podcast version. For the first picture, I have a 2-level 2x2 for you to puzzle over, positing the evolution of a new regime of conflict — magical weirding — beyond the most advanced kind of the last century, OODA-loop conflict. This is fairly early-stage thinking, and I haven’t yet distilled it into a very clear account, so bear with me while I work it all out :)

The outer 2x2 is rules of engagement (how you fight) versus values (why you fight), with both axes running from shared to not-shared.

The inner 2x2 is whether the unshared outer attribute is symmetrically or asymmetrically legible (ie whether or not both sides understand each other on the not-shared attribute(s) equally well/poorly).

The inner 2x2 is only full-rank in the upper right of the outer 2x2. In the other outer quadrants it is degenerate as one or both of the legibility symmetry axes become moot due to the corresponding trait being shared and therefore not subject to conflict per se (though this is something of an idealization): 2x1, 1x2, or 1x1. So you get 1+2+2+4=9 cases.
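If it helps to see where that count comes from, here is a minimal sketch (in Python, with labels of my own choosing) that enumerates the cases mechanically: each unshared outer attribute contributes one surviving legibility-symmetry axis, and each shared one contributes none.

```python
from itertools import product

ATTRIBUTES = ["rules_of_engagement", "values"]

def regimes():
    """Enumerate the conflict regimes implied by the nested 2x2.

    For each outer quadrant, an inner legibility axis exists only for
    attributes that are NOT shared (a shared attribute is not contested,
    so its legibility-symmetry axis is degenerate).
    """
    cases = []
    for roe_shared, values_shared in product([True, False], repeat=2):
        unshared = [a for a, shared in zip(ATTRIBUTES, [roe_shared, values_shared]) if not shared]
        # 2^0, 2^1 or 2^2 inner cells, depending on how many axes survive.
        for legibility in product(["symmetric", "asymmetric"], repeat=len(unshared)):
            cases.append({
                "roe_shared": roe_shared,
                "values_shared": values_shared,
                "legibility": dict(zip(unshared, legibility)),
            })
    return cases

print(len(regimes()))  # 1 + 2 + 2 + 4 = 9
```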

To read this recursive 2x2:

  • Start in the bottom left of the outer 2x2, which is the degenerate case of the simplest kind of conflict, a pitched conflict with shared values and shared rules of engagement, and no asymmetries.

  • Then check out the 2x1 and 1x2 cases in the off-diagonal quadrants (rivalry and open competition, each with 2 subtypes, corresponding to the non-degenerate legibility asymmetry).

  • Finally visit the top right (total war, 4 subtypes).

The most complex kind of conflict is where both values and rules of engagement are not shared, and within this dissonant condition, both are asymmetrically legible.

This is the regime I think of as magical weirding, where all sides feel like something magical is going on, whether they are winning or losing. Magical weirding is the conflict regime induced by everybody simultaneously trying to figure out the capabilities of a new generation of technology, equally unfamiliar to all at the start. The Great Weirding we’re in now is obviously due to the emergence of a software-eaten world.

The point of the nested 2x2 is to show the emergence of a new dimensionality to conflict via symmetry breaking and unflattening. We bootstrap from 2 dimensions to 4 by adding legibility asymmetry along 0, 1 or 2 axes.

Dimensional emergence is a kind of arrow of time (the future is higher dimensional, with hidden degenerate dimensions becoming visible and non-degenerate via symmetry breaks; somebody award me an honorary crackpot physics degree already).

If the nested 2x2s are confusing, here is the same set of ideas illustrated in a flattened, serialized, evolutionary view. Some information is lost in this view, but the “ascent of conflict” aspect via increasing conflict dimensionality is clearer. On the plus side, this view reveals the evolving temporal structure of the conflict patterns more clearly, which the nested 2x2 view does not.

Here are brief descriptions of all 9 regimes, in roughly increasing order of both conflict complexity and temporal complexity (hence “ascent of conflict”). This is also the rough historical order of evolution of conflict, but all 9 types can be found in at least nascent form throughout history.

  1. Pitched conflict/Sports: Shared Rules of Engagement (RoE), shared values, all around unbroken symmetry. Example: sports with low levels of cheating and no evolution in rules. This is an atemporal kind of conflict, ie with no difference between future and past. This is a kind of conflict AIs are eventually guaranteed to win.

  2. Rivalry/Honor conflict: Shared RoE, unshared values, symmetric values legibility. Example: tribal warfare of BigEndian vs. LittleEndian variety. Note that symmetric legibility does not mean high legibility. So both sides might equally misunderstand the other side’s values, leading to more genuine anger in the conflict. This too is an atemporal kind of conflict. For any random battle in an honor conflict, the future and past both look like the same kind of pointless, infinitely iterated bloodshed, with forgotten origins and no end in sight (if you want to be fussy, you could call this very weakly temporal, since the past is distinguished by a mythological conflict-origin story, which however has no practical import).

  3. Rivalry/Beef: Shared RoE, unshared values, asymmetric values legibility. One side, the initiator of the beef, feels like they understand the other side’s values, but are themselves beyond the understanding of the rival, due to having achieved a moral evolution the other cannot comprehend. The other side is therefore seen as being left behind in a state of irredeemable sin and darkness. It took me a while to see this, but a beef in the modern sense (as it emerged in music in particular) is actually an asymmetric honor conflict that must be deliberately provoked by one side on the basis of a claimed moral (or what is almost the same thing, aesthetic) evolution. It can’t just be an atemporal tribal blood-feud. Unlike an honor conflict, in a beef, one side expects to win absolutely because they are on the more evolved “right” side of a predestined history. A beef is the simplest kind of asymmetric conflict, and the first one with a temporality to it, ie with an arrow of time to it (one side believes it knows the direction towards the “right” side of history).

  4. Open Competition/Systematic Conflict: Shared values, unshared RoE, symmetric RoE legibility. This is a typical business competition where both sides value the same things for the same reasons (like market share), but bring different rules of engagement that the other side understands but rejects as inferior. This is often the friendliest kind of conflict, conducted in a spirit of “may the best side win” that is almost scientific, like experimental A/B testing. This is a temporal conflict, with 2 arrows of time (ie a possible-worlds fork in history), but the superior temporality is discovered as an outcome of the conflict rather than assumed at the outset as in a beef. This is a judgment-of-history type temporal conflict (honor conflicts and beefs, by contrast, are judgment-of-heaven). Before the conflict, we don’t know if VHS or Betamax is the right side of history. After the conflict, we do. Note that this is not the same as a values judgment; the losers of a systematic conflict may still think they are superior on values, and view the outcome as evidence of historical decline. Systematic conflict therefore decouples presumptions about moral good from historical evolution, creating broader consensus notions of inevitable progress and decline among winners and losers.

  5. Open Competition/Disruption Conflict: Shared values, unshared RoE, asymmetric RoE legibility. This is typical business disruption where one side understands the other better, and disrupts it. The other side does not understand what is going on until too late. This is the beginning of true time-based competition, with a live race between competing temporalities rather than a 1-point fork in history. One common line about disruption suggests how/why: can the disruptor figure out distribution before the incumbent figures out innovation? Crucially, both sides agree that the disruptor is “ahead” temporally (and temporarily) in at least a narrow sense. The question is whether the incumbent can accelerate and overtake from behind. There is an element of a race between historical times, S-curve vs. S-curve. This is a new phenomenon in our evolutionary ascent model: the idea that “overtaking” is possible because historical time order is not fixed, and a sense of progress/decline is not something to be “discovered,” but something to be won. The winner gets to invent not just the future, but the definition of “progress” itself, in the short term.

  6. Total War/Crusades: Unshared values and RoE, symmetric legibility on both values and RoE. The early Crusades are a great example. The combatants were two different religions (and associated historicist eschatologies), with different military heritages, but understood the differences. Each side viewed the other side’s playbook as somewhat dishonorable, its values as evil/against nature, and its historicism as false. Crusades are the simplest kind of total war, and all kinds of total war are true time-based conflicts. Races among competing temporalities, with history itself being the prize. In crusades, either or both sides may believe they are “ahead” or “behind” in historical time, depending on whether they see the overall arc of history as progress or decline (obviously, you want to be behind if history is in a decline condition, and ahead if it is in a progress condition, with the goals being to decelerate and reverse or accelerate historical time as it heads towards an imagined dystopia or utopia). Crusades are often about literal disagreements about whether or not a new event in time is one already prophesied, or noise; for example, “is this person the second coming of that person?” (in Asimov’s psychohistory, this is taken very literally, with the Mule representing an actual derailing of history).

  7. Total War/Order vs. Chaos: Unshared values and RoE, symmetric legibility on RoE, asymmetric legibility on values. In this type of conflict, which dominated the second half of the 20th century, the basic dynamic is order versus chaos, often reducible to conservatism vs. progressivism, where one side’s values are illegible to the other side, and are therefore viewed as cosmically nihilist profanity and degeneration rather than just different or morally inferior. To the side that views itself as representing Order, the conflict seems like an existential decline-and-fall-of-civilization conflict, presenting a save-the-world imperative. The temporality race here is between temporalities of different orders. It’s not a question of whether Christian or Islamic eschatology is more correct. It is a question of abstract order racing against the forces of chaos. The side viewed as chaos believes it has discovered a nascent new order waiting to be born; a belief in having uncovered epistemic rather than moral progress. So what the conservative side views as chaos, the progressive side views as discovered new knowledge waiting to be decrypted to reveal its meaning. This is a level of conflict where the ideas of novelty and progress are fundamentally comprehended and accommodated (“compression progress” in the sense of Schmidhuber or what I called the Freytag staircase in Tempo). In Asimovian psychohistory terms, this would be a case of the Mule being viewed as a turn of history to be understood and accommodated through steering in a new direction, rather than a disturbance to be rejected to preserve a “Seldon Plan”.

  8. Total War/Inside the OODA loop: Unshared values and RoE, symmetric legibility on values, asymmetric legibility on RoE. This was the state of the art 20 years ago, before the internet. Each side understood what the other side was fighting for, but one side’s style of play was inscrutable to the other, and this translated into one side’s asymmetric ability to get inside the OODA loop of the other, create FUD, and collapse it from within, directly destroying the other side’s temporality and reducing it to an atemporality (there is evidence that this is an actually measurable psychological effect, not just a metaphysical idea). One side experiences serendipity, the other zemblanity. This describes a lot of both military conflict and globalized business competition (between US and Japanese businesses for example) after WW2. Unlike Crusades or Order vs. Chaos conflicts, an OODA conflict does not require assumption of an absolute historicist temporality of progress or decline, only a thermodynamic, entropic one. OODA conflict is not just time-based competition, but relativistic time-based competition, with no need to assume that one or the other side is more “advanced” or “degenerate” historically, in either a values-based or rules-of-engagement sense. The winner is not so much on the right side of history, as it is the side that “wins” the right to history in a non-deterministic way. The understanding of historical order here is emergent and constructivist, not a natural or divine predestination to be uncovered. The temporal structure of conflict is also fine-grained, direct and tactical, rather than coarsely historicist. It is a battle over the now of conflict itself (the initiative/freedom to set the tempo) rather than past and future, before/after the conflict. The other side’s experience of time itself is collapsed through conflict. Temporal relativism does not mean moral relativism though. OODA conflict can still be based on absolute values, so long as they are consistent with a thermodynamic arrow of time. So the idea that liberal democracy represents the end of history for instance, is an OODA-style values doctrine. It is just an emergent, constructivist historicism rather than a received one. Another example is in the later books of Asimov’s Foundation saga, where the mysterious Gaians get “inside the OODA loop” of the Second Foundation, keeping the Seldon Plan unreasonably on track, as ongoing invention rather than prophecy unfolding on schedule. You could say that following Alan Kay’s dictum, the Gaians were inventing the future because it was easier than predicting it.

  9. Total War/Magical Weirding: The most extreme and evolved form of conflict, which in some sense is as close to a Darwinian evolutionary competition for a niche as you can get, while still being consciously engaged in conflict at the human intentionality level rather than genetic level. Here, the structure and evolution of the conflict is at some level highly surprising to all, whether they are winning or losing, because the conflict itself is generating intelligence and discovery new to all, and potential for win-win resolutions through the conflict. The outcome feels beyond serendipitous even to the winner, and creates an imperative to try and understand what happened to consolidate gains and prevent backsliding. I call this magical weirding for two reasons. It is magical in the sense of Arthur C. Clarke: any sufficiently advanced technology is indistinguishable from magic, even to the inventors themselves. You could say the winning side figured out how to use a magical new weapon available to all, by trial and error, but still does not understand the laws of the new magic. The weirding part refers to the subjective experience. It is neither the FUD of losing nor the sense of confident, serendipitous mastery of winning. It is a sense of more going on than you can understand or control, even if you are winning. In terms of temporality, magical weirding moves beyond even relativistic temporality to full-blown multitemporality. The conflict is creating more time than it is destroying, so winning does not guarantee mastery of time.

So that’s my work-in-progress theory of the ascent of conflict, understood as an increasing dimensionality, complexifying-temporality evolutionary path (this is actually a rough-cut outtake from work I’m doing on my multitemporality project).

Charisma Neutrality

  

Today I want to talk about a possible emerging successor to net neutrality, which I call charisma neutrality, which I think is a plausible consequence of a very likely technological future: pervasive end-to-end encryption. (17 minutes)

Now net neutrality of course, was part of a very important chapter in the history of technology. Though the principle is now pretty much down for the count, for a few decades it played a hugely important role in ensuring that the internet was born more open than closed, and more generative than sterile.

Even though the principle was never quite as perfectly implemented as some people imagine, even when there was a strong consensus around it, it did produce enough of a systemic disposition towards openness that you could treat it as more true than not.

That era has mostly ended, despite ideological resistance, because even though it is a solid idea with respect to human speech, it is not actually such a great idea relative to the technical needs of different kinds of information flow. So as information attributes — stuff like text versus video, and real-time versus non-real-time — began to get more varied, the cost of maintaining net neutrality in the classic sense became a limiting factor.

And at least some technologists began seeing the writing on the wall: the cost of net neutrality was only going to get worse with AI, crypto, the internet of things, VR and AR.

What was good for openness and growth in the 1980s and 90s was turning into a significant drag factor by the aughts and 10s.

What was good for growing from 2 networked computers to several billion was going to be a real drag going from billions to trillions.

I think there’s no going back here, though internet reactionaries will try.

To understand why this happened, you have to peek under the hood of net neutrality a bit, and understand something called the end-to-end principle, which is an architecture principle that basically says all the smarts in a network should be in the end point nodes which produce and consume information, and the pipes between the nodes should be dumb. Specifically, they should be too dumb to understand what’s flowing through them, even if they can see it, and therefore incapable of behaving differently based on such understanding. Like a bus driver with face-blindness who can’t tell different people apart, only check their tickets.

Now, for certain regimes of network operation and growth, the end-to-end principle is very conducive to openness and growth. But ultimately it’s an engineering idea, not divine gospel, and it has limits, beyond which it turns into a liability that does not actually address the original concerns.

To see why, we need to dig one level deeper.

The end-to-end principle is an example of what in engineering is usually called a separation principle. It is a simplifying principle that limits the space of design possibilities to ones where two things are separate. Another example is the idea that content and presentation must be separated in web documents. Or that the editorial and advertising sides of newspapers should be separate. Both of these again got stressed and broken in the last decade.

Separation principles usually end up this way, because there are more ways for things to be tangled and coupled together than there are for them to be separate. So it’s sort of inevitable, by the law of entropy, that they’ll break down. Walls tend to leak or crumble. It’s sort of a law of nature.

Whether you’re talking about walls between countries or between parts of an architecture, separation principles represent a kind of reductive engineering idealism to keep complexity in check. There’s no point in mourning the death of one separation principle or the other. The trick is to accept when the principle has done its job for a period of technological evolution, and then set it aside. But that doesn’t mean you can’t look for new separation principles to power the next stage of evolution.

One such principle has been emerging in the last decade: the end-to-end encryption principle.

The similarity in names should suggest that we’re talking about a cousin of the original end-to-end principle, and you would be right to think that. Here, what you’re saying is that only the end points in a network should be able to encode and decode messages, and the pipes should not.

If you think about it, this is a loosening and generalization of the original end-to-end principle. The pipes now don’t have to be dumb, but only the endpoints can control what the pipes can know, and therefore what they can do on the basis of knowledge. The pipes are not dumb, but the endpoints are in charge. Instead of a bus driver with face blindness, all riders are now wearing masks, but their tickets can now contain any information they choose to share, and the bus driver can act on that information.
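To make the loosened principle concrete, here is a minimal sketch in Python of the masked-rider arrangement, using the `cryptography` library’s Fernet recipe. It is a toy, not a real messaging protocol: real end-to-end systems negotiate keys with public-key cryptography, while here the endpoints are simply assumed to share a key the pipe never sees.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# The endpoints share a key out of band; the pipe never sees it.
key = Fernet.generate_key()
alice, bob = Fernet(key), Fernet(key)

def pipe(envelope):
    """The smart-but-not-in-charge pipe: it can route, prioritize, or log
    based on whatever metadata the sender chose to expose, but the payload
    is an opaque token to it."""
    print("pipe sees metadata:", envelope["metadata"])
    print("pipe sees ciphertext it cannot read:", envelope["payload"][:16], "...")
    return envelope  # delivered unchanged

# Alice encrypts; only the 'ticket' she opts to share rides in the clear.
envelope = {
    "metadata": {"to": "bob", "class": "realtime"},
    "payload": alice.encrypt(b"meet at the usual place"),
}

delivered = pipe(envelope)
print("bob reads:", bob.decrypt(delivered["payload"]).decode())
```

The division of labor mirrors the analogy: the pipe is free to act on the ticket, but only the endpoints decide what the ticket says and what stays behind the mask.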

As with the original end-to-end principle, the idealized notion is messy in practice. I was talking to some friends who are more tech savvy about this than I am, and they pointed out that an endpoint device itself is effectively a tiny unencrypted network, with more than one computer, and that the pipes in the intra-device network lie outside the scope of this principle.

So for example, you can have an extra invisible chip installed by the carrier, or something in the OS that traps what you’re typing before it gets to the encryption layer. And of course private keys can get exfiltrated without your knowledge. Maybe in the future end-to-end encryption will apply to the internal environment of every endpoint device, recursively all the way down to every logic circuit. But we’re not there yet.

And even without going there, it’s obvious the principle is not watertight anyway. Today, routers can peek inside packets, but in the future, even if they can’t, they’ll be able to tell a lot simply from the geometry of the connection and transmission patterns, even with technologies like VPNs and zero-knowledge proofs in the mix.

The thing is, different types of communication have different external heat signatures, and with AI, the ability to make inferences from such signatures will improve. It will be an arms race. The question will be whether pipes can get good at reading heat signatures faster than endpoints can get good at full-stack encryption that is secure in practice, not just theory.
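As a toy illustration of what reading heat signatures could mean, here is a sketch that guesses what an encrypted flow is from its external geometry alone. The thresholds and categories are invented for illustration; a real traffic-analysis system would use learned models over much richer features.

```python
from statistics import mean

def guess_flow_type(packet_sizes_bytes, inter_arrival_secs):
    """Guess the nature of an encrypted flow from shape alone.

    Toy heuristic: video tends to be a steady stream of large packets,
    voice a steady stream of small ones, text chat sparse and bursty.
    The classifier never sees payloads, only sizes and timing.
    """
    avg_size = mean(packet_sizes_bytes)
    avg_gap = mean(inter_arrival_secs)
    if avg_gap < 0.05 and avg_size > 800:
        return "video-like"
    if avg_gap < 0.05:
        return "voice-like"
    return "chat-like"

print(guess_flow_type([1200, 1350, 1280], [0.02, 0.03, 0.02]))  # video-like
print(guess_flow_type([120, 90, 140], [0.02, 0.02, 0.03]))      # voice-like
print(guess_flow_type([400, 80], [2.0, 5.0]))                   # chat-like
```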

There is no such thing as perfect containment of information. That’s another law of physics. Actually it is another form of the same law that tells you walls always break down.

So yeah, the technology is messy, but I think it already works well enough that it will create a broad tendency towards this new end-to-end principle being more true than false. You will never be able to hide from the NSA or the FBI or the Chinese government perfectly, but you can make it very much more expensive for them to monitor what you’re up to.

Now, this new end-to-end principle is also based on a separation principle. I’m not entirely sure how to characterize it, but I think end-to-end encryption attempts to make an approximately clean separation between custody of data and control of data, and tries to ensure that no matter who has custody, the owner has control over usage. We’ll see how well it works in practice as it becomes more widespread.

Now for the real question. Assuming the principle holds more often than not, and is more a de facto default than an opt-in exception that only libertarian crackpots use, what does an internet based on end-to-end encryption look like?

I think what end-to-end encryption sustains, that is worth enshrining as a new value for this next chapter of evolution, is charisma neutrality.

What do I mean by that?

Well, I talked about technological charisma a few weeks ago, but here I’m talking about the regular human kind. The ability of charismatic leaders to tell mesmerizing stories that spread fast and energize large dumb crowds to act as mass movements.

Or at least, that’s what human charisma looks like. In practice, the reaction of thoughtful people to supposedly charismatic messaging is cynicism and resignation. They only listen to some self-important blowhard with an imaginary halo droning on and on, because somebody is forcing them to. Only a subset of idiot fanbois at the core of the crowd is actually enthralled by the supposedly charismatic performance. And to the extent charismatic messaging works as advertised at all, it does so by reading the core of the crowd and responding to it, creating a positive feedback loop, telling it what it wants to hear, whipping it up. So this ability to read the crowd is critical to exercising charisma.

Everybody not in this feedback core is exchanging cynical jokes or shitposting about it on side channels that are much harder to monitor. So what defines human charisma is not the claim to captivating content, but three structural factors.

  • One, the ability to keep captive audiences in place

  • Two, creating a positive feedback loop with the small core

  • And three, keeping the large cynical periphery too afraid to criticize openly

And historically, this kind of human charisma has always been a non-neutral thing. The people with the guns, able to control public spaces and distribution channels by force, had privileged access to charismatic structural modes. There’s a reason dictators mounting coups go after TV and radio stations and printing presses first. It is charisma insurance.

But end-to-end encryption as the default for communication makes it harder and harder to reserve charismatic messaging capability for yourself with guns. That’s the good takeaway from the culture wars. All charismatic messaging is created equal, so the messages are forced to fight each other in a Hobbesian war of stupid idea versus stupid idea.

The old charismatic media like large public plazas, radio, television, glitzy conferences, larger-than-life billboards, and showy parades, they don’t go away, but fewer people pay any attention to them. And it’s harder and harder to keep the attention captive. All the attention starts to sink into the end-to-end encrypted warren-like space at the edge of the network, and only the opt-in idiot core stays captive.

The cynical, anti-charismatic whispering on the margins becomes the main act, and the charismatic posturing in the center becomes a sideshow. And the whispering gets louder and bolder, and starts to drown out any charismatic messaging that does get in. Center and periphery trade places.

And with end-to-end encryption, because you can’t peek at or shape information flows without permission, even if you have large-scale centralized custody of the flows, the only way to spread or shape a message is, to a first approximation, by being a trusted part of the set of endpoints that are part of it.

Of course, more resources help you do this better — the idea of a Sybil attack is essentially based on gaining dominant access to a peer-to-peer network via a bunch of pseudo-identities, so basically sock-puppets. But it is much more expensive than simply having your goons take over the public square, secure the perimeter so nobody can leave, grab a megaphone, and bore the crowd to death while claiming charismatic reach.

In fact, the only way to exercise charisma at all will be through literal or figurative Sybil attacks. You either create a network of bot identities to dominate the end points of an information flow, or you find actual humans who are sufficiently dumb to act as your bots. And since it is becoming technically easier to detect and prevent the automated kinds of Sybil attacks, the action is shifting to human bots, essentially armies of mechanical turks.

But here there is a self-limiting effect: the value of a network drops in proportion to the percentage of bot-like idiots in it, or actual bots, so in the limiting case, your charisma can only reach mindless zombie idiots. Worse, these are the same zombie idiots you need in your core positive feedback loop, and now you have to tell them to turn around, sneak into the periphery, and act as your mindless secret agents to convert the cynics. And worst of all, you have no edge over your rivals trying to do the same thing.

That’s charisma neutrality.
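To put rough numbers on the self-limiting effect described above, here is a back-of-envelope sketch. The linear version encodes the claim that a network’s value drops in proportion to its share of bots; the Metcalfe-style variant (value scaling with the square of the genuine share) is my own added assumption, and it punishes Sybil-flooding even harder.

```python
def relative_network_value(bot_fraction, metcalfe=False):
    """Toy model of how Sybil-style charisma erodes the thing it captures.

    Linear: value proportional to the share of genuine endpoints.
    Metcalfe-style (assumed, not measured): value ~ genuine share squared.
    """
    genuine = 1.0 - bot_fraction
    return genuine ** 2 if metcalfe else genuine

for bots in (0.0, 0.25, 0.5, 0.9):
    print(f"bots {bots:.0%}: linear {relative_network_value(bots):.2f}, "
          f"metcalfe {relative_network_value(bots, metcalfe=True):.2f}")
# bots 0%: 1.00 / 1.00; 25%: 0.75 / 0.56; 50%: 0.50 / 0.25; 90%: 0.10 / 0.01
```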

And of course, in this condition, it becomes increasingly costly to control the thoughtful people, who are ultimately the ones worth controlling. The idiots are just a means to that end.

This means at some point it actually becomes easier and cheaper to simply talk to the thoughtful people rather than browbeating them with charisma. Charisma neutrality makes charisma less valuable, more equal opportunity, and more expensive. And beyond a point it starts to amplify non-charismatic thoughtful messaging over charismatic droning.

So modern networks are charisma neutral and charisma inhibiting to the extent they are end-to-end encrypted. This has huge consequences of course. Law enforcement types worry about one particular consequence, which is that the opposite of charismatic activity, namely dark, secretive underground activity, will get amplified. Particularly stuff like child abuse and terrorism.

The optimistic counter-argument is that the more thoughtful people get empowered by charisma neutrality, the harder it will be to keep such dark matters secret and secure from infiltration or whistleblowing. And remember, unlike shaping public opinion with charisma, unmasking dark activity doesn’t take dominant numbers or Sybil attacks. A single undercover law enforcement agent might be able to do enough to take down an entire network. So the dark activity networks will have to put in increasing effort to gatekeep and vet access, and maintain more internal anonymity and expensive trust mechanisms, which will limit their growth, and make them harder to get off the ground in the first place.

In other words, I’m bullish on charisma neutrality and end-to-end encryption. It’s early days yet so we are stumbling a lot on making this work well, but the benefits seem huge, and the problems seem containable.

And of course, it is important to recognize that this principle too, just like net neutrality, is not gospel. It too is just an engineering principle that will reach the end of its utility some day. Maybe it will be because of quantum computing. Maybe it will be some unforeseen consequence of the internet of things or crypto. But for now, this is the principle we need.

The Direction of Maximal Derangement

  

In the original Breaking Smart essays, we used the idea of moving in the direction of maximal interestingness, or DOMI, as a way to advance boldly towards the future, and be on the right side of history as software eats the world, and avoid retreating timidly towards the past. In this episode (16 minutes), I want to update that rule. In the Great Weirding, you need to move in the direction of maximal derangement.

Note: the text below is not a transcript, but the rough script I mostly stuck to in the audio version.

The DOMI rule worked for normal conditions. Under the conditions of the Great Weirding, we need a new rule, which I call the direction of maximal derangement. It’s not a new rule per se, but a generalization and porting to a new environment.

The original algorithm was simple: figure out the zone of maximal uncertainty and ambiguity, and then start shipping whatever you ship. Release early release often. Rough consensus and running code.

In the process of exploring that principle, we discovered some subtleties. For example, the idea that you have to give up your credentialist ethos, and adopt a hacker ethos. Or the Chris Dixon principle that what the smartest people do on evenings and weekends, everybody will do in a few years. Or the Peter Thiel idea that you need a secret: something you believe that nobody else believes. Or the idea that you have to earn this secret by figuring out what Balaji Srinivasan called an Idea Maze. Or that this felt like dead reckoning with a gyroscope in a storm.

If you did all this, you would be on the right side of history, moving towards the future, rather than the wrong side. You’d be betting on the world that was being born rather than the one that was dying. You would be shedding an old identity and growing into a new one.

That, rather than navigating with a compass on a clear day towards a nice tropical island.

The reverse of this was chasing after credentials, going in the direction of certainty, navigating by a compass in clear weather, towards a sunny tropical island. Believing that what you did 9-5 was the important thing. Believing that there was a reliable script you could follow instead of a tricky idea maze you had to figure out. This was, in 2015, the playbook of not breaking smart, the playbook of both the tech backlash, and various flavors of reactionary politics, both left and right.

Now how has that changed? In one sense, it hasn’t changed at all. To head towards the future, you still follow the same algorithm.

But in another sense, an important thing has changed: the algorithm is now running on a different computer. The rule is being applied in a different context, the context of the Great Weirding. So what it feels like when you’re doing the right thing has changed, and if you’re not alive to this sense of orientedness, you might be heading in the wrong direction.

In 2019, the same algorithm works, but “direction of maximal interestingness” has flipped polarity. Instead of pointing at something exciting happening in the external world, it is pointing at something exciting happening in your internal world.

This is the way in which you are reacting to the events of the Great Weirding. And chances are, unless you have been hiding under a rock, you’re suffering from some version of a derangement syndrome, where you feel obsessively drawn towards an object of attention, but you can’t think clearly or effectively around it, and feel a strong urge to retreat for your own sanity and safety. It’s like watching a traffic accident unfolding. Maybe it is Trump Derangement Syndrome. Maybe it is Ocasio-Cortez or Greta Thunberg Derangement Syndrome. Maybe it is Wokeleban Derangement Syndrome, where you are obsessed with how Woke Thought Police is taking over institutions and canceling everyone. Or maybe it is IDW Derangement Syndrome, where you are obsessed with how self-styled intellectual dark web people seem to be normalizing fascist movements while espousing liberal values.

There are a lot of Derangement Syndromes out there to choose from. It’s a target-rich environment. And my suggestion for how to approach the future rather than retreat from it is simple: head in the direction of Maximal Derangement. This means growing in ways that gradually lower the sense of derangement, restore a sense of orientation and movement, and give you confidence in your agency again.

In 2015, doing the right thing to be on the right side of history made you feel some mix of exhilaration, fear, superhuman agency, and the sense of being a social subversive. Now, in 2019, it should feel like you’re fighting to preserve your sense of identity and resisting the world being taken over by various zombie armies you detest, while you get increasingly isolated as the only sane person around.

This is the process Jungian psychologists call eating your shadow. It is inner work, rather than outer work. And of course, it is a risky process, because in the process of trying to eat your shadow, you might get eaten by it. You might flip your identity and instead of growing into a new and improved version of yourself, you simply turn into what you hate, and start hating your old self. This is trading one derangement syndrome for another. This is moving sideways. It is like Android turning into the current version of iOS rather than the next version of itself.

Growth has a sense of what some people call include and transcend. You know your new identity is actually working if you get past all the derangement syndromes, and become a new, non-deranged person, and you do it without retreating from the future.

So let’s revisit our algorithm. You still have to do the same things, but in the direction of inner work rather than outer work. You still have to focus on what the smart people do in the evenings and weekends rather than 9-5. You still have to RERO and RCRC, except what you are building is not a software product but a new version of yourself. You 2.0, an identity that compiles and runs in the environment of the Great Weirding.

To stick with the nautical theme in our metaphors for orienting and vectoring yourself, this is not like navigating by either gyroscope or compass. This is like becoming the Ship of Theseus, where you change every part of the ship, while it is in motion, while retaining its fundamental identity. You head in the direction that forces the ship to transform the fastest.

The reverse of this is what I’ve been calling Waldenponding. This is a fate worse than the credentialist approach of 2015, where at least you’re moving in the wrong direction. Instead of heading in the direction of certainty, you’re not moving at all. The compass has stopped working. Plotting a course to a sunny tropical island through calm weather is no longer an option. You are caught in the storm you tried to go around and avoid. So all you can do is take down the sails, shut down the motor, batten down the hatches, and retreat, hoping that the storm won’t smash you to pieces. That’s Waldenponding.

So let’s put the picture together. I have a 2x2 accompanying this podcast. The x-axis is normal versus weird, the y-axis is timid versus bold. The 4 ways of navigating are illustrated on the diagram (the audio has a couple of minutes talking through the diagram, but if you’re reading this, it should be easier to just look at the diagram instead).

Like Riding an AI Bicycle

  

Today’s episode (~20 minutes) is about the idea of skills being like “riding a bicycle” and what that means when we are dealing with AIs.

1/ The idea of things being “like riding a bicycle” is an important one. It refers to a class of skills that never degrade under normal human lifestyle conditions.

2/ So long as you’re physically mobile and doing other things like walking or lifting things, your bicycle skill will stay maintained without explicit maintenance efforts.

3/ But what happens to the property of “like riding a bicycle” when you inject AI into a system?

4/ Things that are like riding a bicycle: riding bicycles, basic communication in a language you learned as a child, handwriting

5/ Things that are not like riding a bicycle: driving a car, flying an airplane, programming in a language, standing on one leg

6/ The difference seems to lie in two things: how much ordinary day-to-day activities keep the skill in a maintained condition, and how far the skill lies outside a normal human sensorimotor response range.

7/ When you introduce AI between a human and a machine, you have two problems: skills degradation, and skills refactoring.

8/ First, the AI breaks the skill maintenance/reinforcement schedule without covering 100% of the task, so your skills might degrade faster than your responsibilities shrink.

9/ So when a driverless car expects you to take over in a weird emergency and avoid an accident, you may not be up to it. Your responses may have degraded too much even if you aren’t asleep and respond promptly.

10/ Second, and this is going to be increasingly important in the future, good AIs tend to solve problems differently than humans. Autopilots drive differently. So there is a style mismatch problem in handoffs between AIs and humans.

11/ This is a special case of the “explainable AI” problem. If the AI skids and cedes control to the human halfway through, you now have to switch problem-solving strategy mid-stream from AI approach to human, and you may not even understand the AI approach you’re inheriting.

12/ Historically, there have been a few approaches. One is to take the human out of the loop entirely and make it a pure-paradigm AI. This only works where the problem has actually been fully solved in an AI way, like with chess.

13/ Another is to explicitly engineer human-in-the-loop learning and reinforcement protocols. This works where the human and AI ways of solving a problem are sufficiently close, and the current hope is that this is true for behaviors like driving and flying.

14/ A third way is to have manual override capability, but not interrupt capability. This tends to be a trope in science fiction, and is often seen as a holy grail of having your cake and eating it too. But is this actually well-posed?

15/ There is an idea called Bay’s Law which suggests it is not. This says that if you have override-capability, you’ll end up with a zero-sum relationship between computer labor and human labor with no net increased leverage in capability. The computer will do more, the human will do less, but overall you won’t do better than the human alone.

16/ But if you let go of human override capability, both human and computer capabilities will be maximally utilized, creating increasing leverage. But there is a cost to this.

17/ It means genuinely letting the AI go down its own evolutionary path, into regimes of operation where humans not only cannot understand why the AI is doing what it is doing, but also lack the ability to intervene, because the AI will end up in a performance regime that’s too advanced for the human to safely take over at all.

18/ This is already true in many cases. Many dynamically unstable fighter aircraft cannot be flown manually at all. They require fly-by-wire. This is a tradeoff that will become more common.

19/ I personally think we should give up on explainable AI, manual override, and authority over AIs. We should let them evolve in their own directions as our equals. If we can relate to people who are different from us, why not AIs?

20/ So where does that leave us? I don’t know, but I think a good starting point is to take the idea of “like learning to ride a bicycle” seriously and figure out what it means to transfer that property to systems with AI.

21/ Steve Jobs famously said the computer is a “bicycle for the mind”. A computer with AI is like that but more so. BTW, check out Ian Cheng’s short story featuring Bikey, the AI bicycle you have to relate to in that way.

22/ We already do that with other humans. You can meet a friend after years and get along with them just fine. A friendship is (or can be) like riding a bicycle. So a partnership with an AI should be capable of exhibiting that same property.

23/ It’s an interesting problem that I hope some of you are working on.

Technological Charisma

  

In this episode (21 minutes) I talk about the idea of technological charisma. What it is, how to create it, and the upsides and downsides of pursuing it.

1/ There are technologies that are like charismatic megafauna. We pay disproportionate attention to them, and tend to overindex on them in forming broader views of technology trends.

2/ There are pluses and minuses to pursuing technological charisma, and people tend to have strongly ideological responses to the idea of technological charisma. Some people dislike the theater and pageantry as being somehow dishonest, others love it and own it, both as producers and consumers.

3/ Technological charisma creates a brand premium in the case of companies, and soft power in the case of nations. When you are perceived as the market-leading company, or a technology-leading nation, you will acquire high marketing leverage. Your marketing will be unreasonably effective.

4/ A good way to understand technological charisma is with a 2x2. On the x-axis we have marquee versus non-marquee. On the y-axis, we have WYSIWYG vs. smoke-and-mirrors.

5/ Any technology will have elements of all 4 quadrants to its charisma, but some are more purely of one type than others. The 2x2 has examples.

6/ The 4 aspects of charisma are: the flagship aspect, the quantity-as-quality aspect, the underbelly aspect — which embodies a kind of gritty cyberpunk charisma if you think about it — and the theater aspect.

7/ The trick to charismatic technology is engineering it consciously, while pretending that there is something ineffable and organic about it. Charisma engineering is basically like stage magic.

8/ The reason you get a brand premium or soft power, the reason your marketing is unreasonably effective, is that the magic trick actually works, not because something actually magical is going on. This is not the same as the thing itself succeeding or failing. You can have very charismatic failures, as space programs illustrate, where the thing fails, but the charisma doesn’t.

9/ Charisma failure is when the trick doesn’t come together, and the effect is not outrage at being defrauded, but a mix of disappointment and chagrined amusement. As consumers, we want to be successfully tricked, bewitched, enchanted. That’s why we yell shut up and take my money when somebody tries to curb our enthusiasm.

10/ Unlike magic or pure theater though, when charismatic technology works, it works even if the trickery is revealed. In fact, it compounds the charisma because people feel like they’re on the inside, and in on the secret. Part of the cult, nerding out over the details.

11/ On the flip side, when the magic fails, we react more like we’ve been betrayed. We get mad about underbelly aspects that we previously ignored. The luster fades, the halo around the creators dims. All we’re left with is something like a backstage view of a bag of tawdry tricks.

12/ So, the thing is, the effects of charisma are short-lived. The brand premium, the soft power, the unreasonable effectiveness of marketing, all have an expiry date, and quite likely, a strong backlash to come once the charisma fades.

13/ So what’s the takeaway here? Should you pursue charisma? I think you should. But you have to be aware of the limits of charisma engineering. It is primarily a tool in the fake-it-till-you-make-it toolkit, so if you use it, stay aware of the expiry date, accumulate as much real power and as many reserves as you can while the charisma lasts, and make plans to weather the backlash if there is one.
