‘The War on Sense-making’

Transcription of a talk by Daniel Schmachtenberger

I think it’s clear to most of your listeners that the things called news are mostly propaganda—narrative warfare for some agency—and that they aren’t good sources of sense-making. We would hope, though, that there are some sources of high signal, low noise, true information, like maybe scientific journals, like academia or science itself. I hoped this a long time ago, and I had this continuous kind of disappointment, you know. I started being like, okay, I can’t trust news to be true because news is narrative warfare; I can’t trust science without actually really looking at what methodology was employed, how it was funded, what axioms the team was using, what the logical transforms were, whether I’m seeing all of their data or cherry-picked data. As I started to kind of unfold this, to say “where are the high signal, low noise sources that I can offload some of the cognitive complexity of making sense of the world to?”, the answer was really sad. I don’t know any sources that are very high signal and low noise across lots of areas. So then I started being like, “well, why is that? What would it take to fix that, what would it take to make a world that had an intact information ecology?”

Well, that requires an understanding of why the current information ecology is as broken as it is, and we’re starting to touch on a couple of things here. But this goes deep. And how do we make good choices if we don’t have good sense-making? Well, obviously we can’t. But due to increasing technological capacity—increasing population multiplied by increasing impact per person—we’re making more and more consequential choices with worse and worse sense-making to inform those choices, which is kind of like running through the woods increasingly fast and increasingly blind. So I think many of the people that you’ve had on Rebel Wisdom have been in a deep inquiry around “how do we actually fix our own sense-making?”, and it’s some of what has brought us to have conversations with each other, because a part of how we work with our own sense-making is recognizing that the cognitive complexity of the issues the world faces is more than a single person can process. So it requires collective intelligence and collective sense-making. But I can’t just offload the cognitive complexity to some authority, because I can’t trust that they’re actually doing good sense-making. Maybe they’re doing good sense-making within a very limited context, but then the application of that outside of the context is different, and maybe there are even distortions within their context. So I have to try and find other people that are also really endeavouring to sense-make well, which means they have to understand what causes failures in sense-making. And then we have to see if we can create relationships with each other that remove the distortion basis that is normally there.

So from what I have seen of Rebel Wisdom, this is probably the strange attractor of what is bringing everybody to watch it: people who are trying to make sense of the world better themselves, and who are trying to find content from other people that have been trying to make sense of it well. Which is what I’m excited about. So those are just some opening thoughts, and I look forward to getting into why we have as broken an information ecology as we have, what it would take to correct that at scale, and how, in terms of practical processes, we can make sense of the world even in the broken information ecology we have now.

I’ve never actually shared these types of frameworks publicly before, so this feels fun and exciting and I hope that it’s useful. As I’ve been trying to make sense of the world, making sense of why sense-making is so hard is pretty central. There’s a famous quote, I think attributed to Einstein: “make things as simple as possible but not simpler”. He says as simple as possible, but really the goal is as clear as possible. Simpler would mean it’s wrong, like it’s not accurate anymore. You’re going to have to face this doing media work.

There will be pressures on you from people saying “hey, people can’t pay attention to more than sound bites, you’ve got to make it five-minute chunks, the word size is too big, make it for an eighth-grade level”. Which is saying: people are dumb, so spoon-feed them stuff that dumb people can handle—which, to the degree you do that and it’s successful, will keep people dumb. But those are the pressures on anyone doing broadcast, even with hopefully good intentions. And if we want people to actually be able to make sense of the world well, you can’t do it in very short periods of time, with lots of distraction, and oversimplified.

If you look at anyone who actually increased the sense-making capacity of the world—you look at any scientist or philosopher—they didn’t do it in tweets, and they didn’t do it radically distracted, and they didn’t do it in a dumbed-down process. I have so many young people who have written to me saying “we want to create a new kind of education that makes everybody like Bucky Fuller or Leonardo, that conditions polymaths”. They say this because I’ve written some stuff on that topic, and they have some sense that they could lead that, and I’m like, “have you read Bucky’s books?” And they say, “well, no, we mostly don’t read books”. To which I reply, “Have you seen the references in Bucky’s books? Just see the amount that he read and referenced to make sense of things well.” And so there’s a decoupling of the sense of the agency possible from what it takes to do it. There’s a saying to this effect in almost all domains: “everybody wants to be buff but nobody wants to lift heavy-ass weights”, or “everybody wants to win and nobody wants to work harder”. Something like that happens here. If I want to be able to make sense of the world well, I have to work at that. And if I want to be able to make sense of the world better than I currently do—attention requires being trained, just like muscles require being trained. Thinking clearly requires being trained, and this is a hormetic process—hormesis is the principle by which you stress an adaptive system to increase its adaptive capacity. I have to stress a muscle to get the muscle to grow. If I’m lifting an amount of weight that’s super easy, there’s no input that says the muscle needs to be bigger, and there’s a cost to getting bigger, right, so it’s only going to go through that cost if it’s being stressed. And the same is true if I expose myself to more heat and more cold than is comfortable: I actually gain greater metabolic flexibility to deal with heat and cold, which means that if I stay in an environment where I always have heating and air conditioning, I’ll actually lose metabolic flexibility. You have to stress the system to be able to grow the system (in a particular kind of way—not all stressors are going to grow the system), but this is definitely true cognitively. Which means if I keep paying attention to hypernormal stimuli that are moving quickly, so I get the stimulus of lots of novelty, I’m going to be decreasing my attention.

But if I want to have any kind of nuanced view, I have to be able to hold multiple partial views in working memory. It’s not that some people intrinsically have good memory or good attention and other people don’t, any more than some people are intrinsically buff and some aren’t. It’s developable, but it has to actually be developed. So the impulse to say, “hey, make it really simple so everybody can get it”, and the impulse to say “help people actually make sense of the world well”, are different things. Now, some people will intentionally make stuff seem technical to obscure it, as a power game, so as to encourage others to defer their sense-making to them: I understand this complex thing that you’re not going to be able to understand, so defer your authority to me. If we actually want to empower people, I don’t want them to defer their sense-making to me. But I also don’t want them to do lazy, shitty sense-making, or defer it to anyone else. Which means I want them to grow the quality of their own sense-making, which means to grow the depth of their care, their anti-nihilism, to grow the depth of their earnestness, their own self-reflexiveness, to pay attention to their biases and where there’s sloppiness in their thinking, their own skills and capacities. I want them to grow their attention span and both the clarity of their logic and the clarity of their intuition, and the noticing of when something’s coming from intuition or logic and how to relate all of those things. That’s actually what increasing sovereignty means.

So, information ecology: there’s a whole ecosystem of information. We have information coming in from marketing, from government sources, from campaigning, from just what our neighbours tell us and our friends tell us, from social media. And we use information to make sense of the world, to make choices that are aligned with whatever our goals are and our values and what’s meaningful to us. What we hope is that the information around us is mostly true and representative of reality, so that we can use it to make choices that will be effective. When I say broken information ecology, it means that we can’t trust that most of the information coming in is true and representative of reality and will inform good choice-making. So then this is where we have to get into: okay, so where does information come from? Signals are being shared by people and by groups of people that have shared agency, like corporations and governments and political parties and religions. And so we want to start getting into: why do people share information, other than just sharing what is true and representative of reality? This is actually a really key thing to start to understand, so maybe I’ll actually define something that I was just referencing, which is the difference between true and truthful. That’s a first important distinction. When we say someone’s being truthful—if you’re being truthful with me, it means that what you are sharing maps to what you believe, that there’s a correspondence between the signal that you’re communicating to me and what you believe is true. So we can look at breakdowns of truthfulness, which is where people are distorting information with some intentionality, and that can either be through overt lying, or lying through omission, or lying through emphasis bias. So that’s truthful. When we say something is true, what we typically mean—and this gets very nuanced, and we end up having to get into fundamental epistemology and ontology, concepts like “what does it mean for something to be true, what are our fundamental axioms about the nature of reality”.

I’ll put that on hold for now and just say, in general, if we say that someone’s saying something that’s true, we don’t just mean that there’s a correspondence between what they’re saying and what they think, but that there’s a correspondence between what they’re saying and some independently verifiable reality. So of course someone can be truthful, meaning they say what they believe, but what they believe is misinformed because they did sense-making poorly. So they’re propagating information honestly, but it is not true. So we need to look at distortions in both of these. There’s a third thing, which is representative. It’s possible for someone to be truthful, to share exactly what they think is going on, and for what they’re sharing to actually be true—they’ve actually done good epistemology and empirically validated that what they’re saying maps to reality in some clear way—and yet the interpretation I get from it will still mislead me, because the true information is not representative of the entire context. Articles published in famous peer-reviewed scientific journals, like the Journal of the American Medical Association—five years later we’ll see that a major percentage of them, something like fifty percent, are found to be mostly inaccurate. Fifty percent is a coin toss. Or we see the replication crisis; or we see that the things that get studied, even if the information is true, can be misleading, because for the most part, where does the money to fund the research come from? Within capitalism it’s mostly going to come from where there’s some return on investment on the research, and some areas have more ROI than others, so even a bunch of true information that is weighted towards certain parts of the information ecology over others creates misrepresentation through preponderance of information. So even a bunch of true information can create distortion. So then you start to say, “oh, okay, so the essence of science, within the philosophy of science, is earnestness of inquiry”. It’s empiricism. Eddington defined science as “the earnest endeavour to put into order the facts of experience”. So you can say the essence of science is no bias, right—at least the idea, the spirit of it. We can get into the fact that even the philosophy of science has a built-in axiomatic bias later. But capitalism is about optimising for bias: I actually have an agency that I’m trying to get ahead, I have an intention to increase my balance sheet, and so if there is capital funding of science, it’s going to fund the things that create ROI on that research so we can keep doing more research. That creates both a reason to distort the info and a reason to withhold information so that it is a source of competitive advantage, and a reason to create disinformation for competitors. Of course, in biotech, if I can get a patent on a synthetic molecule and I can’t get a patent on a natural molecule, a lot more money is going to go into synthetic molecules than natural ones, and then we look at it and say, “well, there aren’t that many clinical trials on herbs and there are on pharma meds, so the pharma meds must be better”. No. So even with true information, preponderance of data is going to create problems—and that’s true information being shared truthfully that would still be misrepresentative of reality.
So this is where I have to say: do I have some sense of what the actual territory is, and do I have a sense that the map that’s being created actually maps onto the territory reasonably well? Because sense-making means map generation, right—to be able to make choices about how we navigate, how we do choice-making, in relationship with some actual territory. So that’s true, truthful, and then representative, and we can look at distortions in all three of those.
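To keep those three distinctions straight, here is a minimal sketch in code—my own illustrative framing, not a formalism from the talk—treating truthfulness, truth, and representativeness as three independent checks that a single signal can pass or fail:

```python
# A minimal sketch (my own framing, not the speaker's formalism) of the three
# distinct failure modes described above: a signal can fail to be truthful,
# fail to be true, or fail to be representative, independently of one another.
from dataclasses import dataclass

@dataclass
class Signal:
    claim: str         # what is actually communicated
    believed: bool     # does the sender believe the claim? (truthfulness)
    verified: bool     # does the claim correspond to checkable reality? (truth)
    in_context: bool   # does it represent the whole relevant territory? (representativeness)

def distortion_report(s: Signal) -> list[str]:
    """Name every way this signal could mislead, even if some checks pass."""
    problems = []
    if not s.believed:
        problems.append("not truthful: sender is saying something they don't believe")
    if not s.verified:
        problems.append("not true: claim doesn't survive independent verification")
    if not s.in_context:
        problems.append("not representative: true in isolation but misleading about the whole")
    return problems or ["no distortion detected on these three axes"]

# An honestly shared, empirically solid finding can still mislead if the
# research that gets funded only ever looks at one slice of the territory.
print(distortion_report(Signal("drug X improves biomarker Y", True, True, False)))
```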

So when we look at why an individual distorts information, the most fundamental way of thinking about it is this idea of signalling. If I’m just in nature watching what is happening with rabbits and trees and birds, I’m getting information about them that they aren’t even intending to transmit, and so the information is just reflective—light is literally reflecting off of them—of the nature of reality. As soon as there’s an agent that can share information strategically for an intention, then I don’t know if what they’re sharing is reflective of reality or reflective of what they think will advance their intention. And that’s kind of the key distinction: the moment we get abstract signalling—which language allows us, along with the ability to forecast and to model each other—and your well-being and the basis of your agency don’t seem perfectly coupled with my well-being and the basis of my agency, there’s a basis for strategic rather than purely reflective signal. In the case of the partner wanting to cheat and get away with it, there’s a decoupling of well-being and agency. In the case where I’m a marketer of a product and I want you to purchase it—whether my product is actually the best product or not, whether a competitor’s product is better, whether you need the product or not—I want you to think that you need it and to think that mine is the best, right, so there’s a breakdown between what seems to be in my well-being and what seems to be in your well-being. So wherever there is any misalignment in agency and there’s the ability to share signal for strategic purposes, then you have a basis for signal being shared that isn’t just truthful. So then we look at where that is happening, and it’s everywhere—to really gross or subtle degrees—pretty much everywhere, and sometimes for dreadful purposes. You’ve got the prosaic purposes, which are basically market-type dynamics—most of the dynamics in the world are market dynamics, or at least influenced by market dynamics. Market dynamics are fundamentally, at least partially, if not mostly, rivalrous. Meaning my balance sheet can get ahead independent of your balance sheet getting ahead, and definitely independent of the commons. So in a market-type dynamic I’m going to be sharing information, and this is why “buyer beware”—but buyer beware is not just check to make sure that the car isn’t about to break down, it’s also check to make sure that the information being shared is true, because if I’m actually sharing information as a service and you’re purchasing that information—whether you’re paying for it with your attention that’s being monetised through an ad, or you’re paying for it directly or whatever—there’s nothing that says I’m sharing true and truthful information. So in market-type dynamics, the goal of marketing—as a company, from the supply side of supply-and-demand dynamics—is to compel the purchaser’s action in a particular way. Which means as a company I want to do sense-making for you, because I want to control your choice-making. I at least want to influence your choice-making. I’m not actually interested in your sovereignty, and I’m not even that interested in your quality of life; I’m interested in you thinking that I’m interested in your quality of life.
I’m interested in you believing that my stuff will affect your quality of life, but whether that actually corresponds or not, I don’t care. In fact, if I can sell you food that is very addictive, or cigarettes, or social media, or media, or porn, or whatever it is—something that actually decreases your baseline happiness but then makes you need another hit faster—that addictiveness is really good for the lifetime revenue of a customer. And to the degree that my fiduciary responsibility is to maximise profitability for me and my shareholders, and so I need to maximise the lifetime revenue of my customers multiplied by maximising the customer base, addiction is the most profitable thing I can get—and that’s never in the best interest of the customer. Now, as a corporation, where I get to employ a whole bunch of people to do market research, to split-test ads, to see what works best, to use psychological insights and to be thinking about your choice-making more than you’re thinking about your choice-making, not only is the information I’m sharing with you not simply truthful, it’s a form of narrative warfare, because we’re agents that are actually competing over what you do. I want you to do something in particular; you want to do what’s best for you. Those aren’t the same thing. But it’s actually asymmetric warfare, because I have a lot more ongoing team and focus—and now, especially as you start to look at a big corporation empowered by AI and big data, it’s radically asymmetric info warfare that you don’t even know is happening. You don’t even know you’re engaged in it. And we can see that all the way down to the little guy in a market who’s just peddling wares: his incentive is to compel people to buy the thing, not to really adequately inform them that he just marked the price up a lot from where he got it down the way, and that if they go down the way, off the beaten path, the other stuff is better. And so this is ubiquitous.

Then, as we’re exploring reasons that people share things that are not fully truthful and representative—there are of course things worse than this—I would say most of where the distortion comes from is agency misalignment. Well, it’s always agency misalignment. Mostly we’ll call that market dynamics, but any source where you have some agent—whether it’s a company or a country or a person—that can think about their own well-being independent of the well-being of other agents and/or the commons, then there is a basis for them to optimise their well-being with some externality. In the same way we externalise cost to the environment, we can externalise cost to the information environment. Disinformation is pollution of the information ecology—that’s a good way of thinking about an externalised cost to the information environment. And as ubiquitous as pollution is—where we see that the snow on the top of Mount Everest is full of pollution of many different kinds—I would say information-ecology pollution is more ubiquitous, because it’s not just big industrial players doing it, it’s everybody doing it, and you can’t even see it as clearly. But people will even create distortions in information for seemingly positive reasons. First there are kind of innocuous reasons, like “okay, I’m going to write a testimonial or an endorsement for my friend’s book, because they’re my friend, and even though I think there’s stuff wrong in their book, it wouldn’t be that gracious of me to say that”. And maybe there’s some game-theoretic stuff in there: they wrote a nice testimonial for my book and I want them to keep doing that, so I give the endorsement, lending whatever credibility people grant me when they proxy their sense-making to me—and I’m now proxying that credibility over here when it’s not necessarily warranted. Even if it’s not that they endorsed my book, and I’m just supportive of them taking a positive step, that doesn’t necessarily mean that anyone else who sees that I offered that testimonial, and is using that as a method of their own sense-making, knows why I did it. So here’s the other thing: there’s a decoupling of the signal that I’m sharing from the intention that I’m sharing it for. And so I might be sharing something with you while I have four or five complex intentions—I might not share any of them with you, or maybe I share one.

When you’re getting information from a news channel, you’re like, “oh, this news channel wants to maximise my time on site, and it can do that through appealing to my cognitive biases and my emotional biases and my identity biases”. It can do that through things that are inflammatory; it can do that through all kinds of things that are hypernormal stimuli and that hijack my attention. This is where it’s competing for my attention against where I would want to put my attention, because it’s monetising my attention. So I have to factor in the agency, the intention, of the news station, and try to remove that artefact from the information to infer what the true information might be—basically to infer what the source of distortion might be. The same with the political candidate; the same with science that’s coming forward, where I’m looking at: okay, who is seeking more grant funding, what is easiest to fund, where are the standard-model biases where people are only going to share the results that are going to get them more funding and get them tenure, where they have to defend the thing that got them the Nobel Prize (even though it may not be true anymore) for ego and identity biases—you’ve got to factor in all of those kinds of sources of possible bias. And so this is the first, I would say, kind of valuable thing when you’re trying to do sense-making: to recognize that the signal that you’re getting everywhere is mostly strategic—which is just another way to say intentional—on the part of the agent sharing it, for their purposes, not yours. And where there is a misalignment between your well-being and theirs, or at least an apparent one, then the basis of their intention might actually suck for you. And even if there seems to be alignment, you still don’t want to be lied to for your own good; you still want true information. So I would say one of the first things we want to do when we start to do sense-making is to look at why anyone is sharing what they’re sharing, and not assume that they are being truthful.

So basically, truthful is about game theory. Truthful is about the fact that people are lying all the time. And we’re actually going to say a little bit more about that. When you’re playing poker you learn how to bluff, because it’s not who has the best hand that wins, it’s who makes everyone else think that they have the best hand—and there’s a lot that goes on in that. And because it’s a zero-sum game—my win does not equal your win; my win is going to equal some other player’s losses—I have an incentive to disinform you where information about reality is a source of competitive advantage. This is actually the real key way of thinking about it. Because disinformation even happens in nature, with other animals. You’ll see a caterpillar that evolved to have something on its tail that looks like a head, to disinform birds so that they go to pick up the false head and it might still be able to live. That’s actually an evolved disinformation strategy; it’s just that the disinformation in nature happens very slowly, and the selective pressures on the side of the caterpillar and the bird are co-evolving. The bird is getting better at noticing those things as the caterpillar is getting better at dealing with that. Camouflage is a kind of disinformation, right—it’s an attempt to not signal something fully, because there’s a rivalrous dynamic between the caterpillar and the bird in that scenario. But with people, with our abstract replicators, we can create the distortion much, much faster. We can have asymmetries in the capacity to create the distortion, even exponential asymmetries. And so it’s actually really quite different. So you think about the poker bluff, and you think about how even in soccer or football, when someone fakes left and then goes right—that’s a disinformation strategy. Where we’re competing, and information about the nature of reality—where the water is, where the gold is, what the market is going to do next, whether this company is going to make it, whatever—equals a source of advantage, and we’re in an assumed rivalrous dynamic (we’re competing for the same money, competing for the same attention, whatever), then first I have the incentive to withhold information. So I don’t want to tell you where the gold is, or I don’t want you to know the intellectual property that I’m going to monetise. Simply the withholding of information fucks up the information ecology so much: I’m doing cancer research and I’ve had some big breakthroughs, but I’m not trying to share that with everyone else doing cancer research, because this is being funded by a for-profit process that needs to be able to monetize it through intellectual property. So we can see how many problems happen as a result of withholding information, but we can also see how intractable this problem seems within a game-theoretic environment like capitalism. I keep saying capitalism—I’m not going to say that any of the other bad economic systems we’ve ever tried are the answer, because they aren’t. We have bases for disinformation in communism and socialism and fascism. We’re going to suggest that new structures that have never existed are needed; I’m just wanting to say that here so people don’t attach to me criticising capitalism and assume I’m about to suggest something that doesn’t work. So the first thing is withholding information, and we see in business how much focus is on IP and NDAs and, you know, those types of things.
But then it’s not simply withholding information, it’s also disinforming—just like the poker bluff, or the fake left to go right. And we can see in warfare, we try to have black projects where we withhold the information because we don’t want the other side to know what our military capacities are, but we also try to disinform about what our military capacities are, or where we’re going to attack, or whatever else, as a source of advantage. That’s been happening forever; Sun Tzu writes about that. We’ve had a basis for disinformation for a long time. We’ve had rivalrous dynamics for a long time. Rivalrous dynamics are a basis by which we can get ahead by war—killing somebody else, or lying to them, or ruining the commons. It’s just that exponential tech, with those same incentives, leads to exponential disinformation, exponential extraction, exponential pollution, exponentially scaled warfare—and on a finite playing field that self-destructs. So the underlying cause is the same stuff that’s been happening, but at a speed and scope and scale and level of complexity that forces us to actually deal with the underlying structures now, because they can’t continue. So that’s the game-theoretic side of it. Now, what would it take to have an intact information ecology where any information that anyone had, just on the truthfulness side, was being shared—where there was no incentive for disinformation? First let’s just imagine that: no disinformation. Let’s give some other examples of disinformation. There’s not just where I’m intentionally trying to mislead you; there’s also where I’m sharing signal for some purpose of my own that might mislead you, and I’m not intending to, I just don’t care if I do. So let’s say I want some increased attention. This might be because I’m going to monetise that attention, might be because I’m going to get political power, might be simply because I just want attention. So let’s say I comment on what some famous person is doing. Let’s say I disagree with them: I’m instantly going to get some attention if I critique them effectively—attention that I didn’t necessarily earn, and I don’t even have to believe the critique—because via association of that type, I’m going to get some attention. So now people have a basis to focus on something they weren’t focused on before, to criticise it because that will get attention, or to compliment it, or to play off of it in a way that is not actually what they care about or believe—and again, you look at how ubiquitous that kind of phenomenon is. So the answer to getting over the truthfulness issue is actually a post-game-theoretic world, which is the same answer as “how do we get past warfare?” Well, it’s not just kinetic warfare, where we throw bombs or rocks at each other; it’s also info warfare and narrative warfare and economic warfare, which is basically any in-group that is coordinating to compete against an out-group in some kind of zero-sum dynamic. And that’s companies to companies, it’s companies to people, it’s people to people, it’s countries to countries, it’s global economic trading blocs with each other—it’s all of those things.
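As a minimal sketch of that incentive structure—my own toy illustration, not an example from the talk—consider a one-shot signalling game in which a sender knows which of two boxes holds a prize, a naive receiver simply trusts whatever the sender says, and the only thing that changes between the two runs is whether the sender’s payoff is coupled to or opposed to the receiver’s:

```python
# A toy signalling game (illustrative assumption, not from the talk): a sender
# knows which of two boxes holds a prize and sends a message; a naive receiver
# trusts the message. We compare the sender's payoff from telling the truth
# versus lying, under aligned versus zero-sum interests.
import random

def sender_payoff(sender_strategy, aligned, trials=100_000):
    total = 0.0
    for _ in range(trials):
        state = random.randint(0, 1)          # where the prize actually is
        message = sender_strategy(state)      # what the sender claims
        receiver_pick = message               # trusting receiver follows the claim
        receiver_wins = (receiver_pick == state)
        if aligned:
            total += 1.0 if receiver_wins else 0.0   # shared well-being: sender gains when receiver finds it
        else:
            total += 0.0 if receiver_wins else 1.0   # zero-sum: sender gains only when receiver misses
    return total / trials

truthful = lambda state: state       # report where the prize really is
lying    = lambda state: 1 - state   # report the opposite

for aligned in (True, False):
    label = "aligned interests " if aligned else "zero-sum interests"
    print(label,
          "| truthful:", round(sender_payoff(truthful, aligned), 2),
          "| lying:",    round(sender_payoff(lying, aligned), 2))
```

Against a trusting receiver, honesty and deception swap places the moment the payoffs become zero-sum. This is only the incentive picture, not an equilibrium analysis—a receiver who learns that the sender always lies will start inverting the message—but it captures why misaligned agency, by itself, is enough to make signal untrustworthy.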
What can people do right now, within a game-theoretic world, to start to create spaces of truthfulness? Start to create relationships where one of the highest values is truthfulness, with other people who are capable of that and who want it and are committed to it—where people are not only not lying to each other, but are endeavouring not to withhold information, which is tremendous intimacy and tremendous vulnerability. And see if you can create enough psychological safety with some people to be able to start exploring what it means to actually share information honestly, so that we can have that and all make sense together. That’s one thing. And there’s also something where, just as you don’t throw trash out the window of your car because you don’t pollute the environment, be careful about polluting the information ecology by rationalising why your own mis- or disinformation is okay. Just start to think of it that way: think of any time you’re sharing little lies as polluting the information ecology, and be like, “oh wow, I don’t want to do that, I don’t want to be part of the fucked-up information ecology.”

Okay, so now on the true side. Which is not just a mapping or a correspondence between what I’m saying and what I think, but between what I’m saying and what shared reality is, which means there has to be a correspondence between what I think and reality—which means I had to do sense-making well before I share something. So this is the topic of epistemology. So one is movement past game theory; the next is epistemology: how do we know stuff? Even if nobody was lying or withholding information, the complexity of the world makes epistemology hard. And most people aren’t even endeavouring at it. So if no one was lying and I could take all the information as at least truthful, there would be certain epistemic processes that I could apply that I can’t apply if I can’t even take the sources of signal as being signal without a lot of noise. So there’s an epistemology that I have to have within the context of an environment that has a lot of disinformation: how do I make sense of what is true and what isn’t true about signal coming in, and then how do I parse from lots of signals what might be true about reality? And to just get a sense of why and how big a deal this is, take any of the biggest issues in the world—the issues that could determine whether or not we keep existing as a species. So take big environmental issues, like climate change. There’s disagreement as to whether climate change is really even a thing, and to the extent that it is a thing, what the causes are and what the time scales are. Now, most people who believe fervently that “climate change is real, 99% of climate scientists agree it’s anthropogenic greenhouse gases, etc.”—most of the people who believe that fervently enough to go into narrative warfare for it—have never actually looked at the primary data deeply themselves. And yet there’s an almost religious fervour around it that is based on having proxied their sense-making to people who they believe. So the UN said it, or the Gates Foundation said it, or whatever it is—I’ve heard it repeated enough times that, just through repetition, I have been programmed to believe this thing is true, which is not that different from believing a fundamentalist religious idea.

Let’s say we take people’s fervent ideas on vaccines, or their fervent ideas on the viability of market ideology, or almost anything like that. Almost no one who has fervent ideas has a good epistemic basis for the level of certainty they hold. There’s a decoupling between how much certainty they have and how much certainty they should have through right process. And then you look at who they’re proxying their sense-making to, and most of the time they’re not even proxying their sense-making to the people who did the original research—many of whom disagree with each other, were funded by somebody to say something that is not fully true in the first place, and maybe were employing epistemic biases themselves. Typically it’s somebody else who looked at all of that, and then someone else who looked at all of that: so you might have a bunch of climate scientists, into someone who speaks about that at a more synthetic level, like a James Hansen or whatever, to then a Gore or someone who is actually speaking to the public, who we are proxying our sense-making to—and we have to ask how many steps removed it is and how good the original data was. And so if we think about, okay, how much radiation actually was released into the environment from Fukushima? It seems like a very straightforward thing: take a Geiger counter and go out and do the studies. But how many people are equipped to take a Geiger counter out and go do that? Or to be able to actually pay attention to how the flow dynamics in the air and the water are going to work, or so many other things? So we have to take other people’s data to begin with, and those other people—let’s say the data was from the Japanese government, or TEPCO, or whoever it was—or it could be a conspiracy-theory group that is saying no, it’s actually releasing huge amounts into the ocean and all the fish are totally toxified. But they might have a basis to disinform, because they’re getting viewership and they’re monetising through that. So what you start to get is: is AI going to solve a bunch of problems and be relatively safe, or is AI the biggest risk and going to kill us? You see fervent disagreement, but you still see pedal-to-the-metal, going as fast forward as we can with AI. And with CRISPR biotech, and with every type of exponential technology that could be catastrophic. And so there’s increasing speed of choice-making with decreasing sense-making. And just think about: okay, what’s really going on with the Chinese government and its cyber-warfare relationship with the US government? What are its actual capacities, and what is its intent and agency, and those types of issues? Well, we know that’s going to be obscured; we know both sides, and all kinds of sides, are going to be obscuring information. And it gets even worse, because it’s not just that you’ve got this group of people called China and this group of people called the US in a game-theoretic relationship with each other where everybody on Team USA cooperates perfectly—of course it’s not that. So even within the intelligence agencies, I might have two different intelligence agencies that are supposed to be cooperating but are competing for a bigger percentage of the black budget, and so they might be withholding information from each other or even disinforming each other.
Then I might have two agents within an agency, competing for the same promotion, who might run disinformation on each other. So I have fractal disinformation at almost every place, because of a game-theoretic incentive system—the balance sheet of countries, the balance sheet of organisations, all the way down to the balance sheets of individual people. This separation of agency.

So I’m back to the game-theory, truthfulness side. But I have to factor that in when I’m trying to make sense of things, because I have to be able to parse signal from noise to then be able to synthesise the signal. But even if that wasn’t the case, and I’m just trying to do epistemology on good signal, I have to say: okay, in a complex system where we can’t even forecast the weather ten days out very well, how do I forecast the effect of putting certain kinds of pesticides or genetically modified organisms or whatever into the environment? It’s a complex system that we can’t forecast very well at all; we don’t know the tiniest bit of the actual information about how that complex system is going to regulate. But we’re going to do stuff that affects those systems at scale. What is the right epistemology to be able to make sense of “is this a good choice?” What are all the metrics we have to factor in? Let’s say we’re talking about biotech. I can give you a drug that is good for some biomarker that happens to be associated with the disease that I’m trying to get doctors to be able to use this drug for—allowed by the FDA to treat a particular disease—and since the disease is identified by this biomarker, I have to affect this biomarker. So let’s say we’re talking about high cholesterol, and so we develop a statin for it. How many other metrics is the statin affecting? Well, every day we’re learning about new biometrics we didn’t even know existed. How many of those are being affected that are part of the unknown-unknown set we don’t even know about, to be able to do risk calculation on? Now, we could say: well, let’s run the experiment long enough before we release the drug to see if it affects total longevity and all-cause mortality. Well, nobody does that; nobody’s going to run hundred-year experiments on something before they release the thing. They’re going to run the shortest ones they can. So where you have a system that has delayed causation, how do I know if it’s creating problems way down the road? Well, it does, all the time. So we get rid of DDT or parathion or malathion because we see that it’s super poisonous—after we’ve been spraying it on everybody—and then we bring in a new pesticide that we also didn’t do long-term studies on, and then we outlaw it after a little while, and then we bring in new ones. The new ones aren’t safer; they just haven’t had as much time to show how dangerous they are. So then the question is: what would the right epistemology be? How many metrics do I have to factor in? How do I know how to factor those metrics? What is the total information complexity of the scenario relative to the information complexity of the assessment we’ve actually done? So we can get into the topic of appropriate epistemology for various contexts at some depth, but I guess the first thing I can say is that if people aren’t even thinking about that, their chance of making sense well is pretty close to zero.

If we think about the concept of a meme the way that Dawkins originally put it forward, it’s an abstract pattern replicator, where a gene is an instantiated pattern replicator. Which means that it can mutate and change and affect behaviour and propagate much faster. And we can kind of say that in Homo sapiens, our genetics selected for memetics, for higher-order memetics. Because with most other species, the selective pressures had them be adaptive to an environment. There’s mutation, and then the mutations that survive and mate the best are the ones that make it through. But that’s within the context of surviving in that environment and being able to mate successfully in that environment. So they become more fit to their environment. The cheetah does really well in the savannah, it would not do well in the Arctic; the polar bear wouldn’t do well in the savannah; the orca wouldn’t do well outside of the ocean. So they are well adapted to their environment. Because of our abstraction capacity—which is both our capacity for language and memes as well as tools—we were able to go and become adaptive, become actually apex predators, in the savannah and in the Arctic and in the ocean and everywhere. We were able to go to every environment, which means that where our population would normally—if we were any other animal—just level off in relationship with the carrying capacity of an environment, we were able to decimate that environment and move to the next one, and into all of them. Since we were going to be adaptive to totally new environments, and since we were going to create tools where what it was to be adaptive was changing, and since we modify our environment in ways the other animals don’t, we can’t come in genetically fit to a specific environment; we have to come in and be able to imprint the environment that we’re in, so we know how to be fit to totally different environments. Because it’s not that adaptive for us to throw spears or even climb trees all that well, but it is to be able to, like, text and drive—stuff that wouldn’t have been adaptive a thousand years ago at all. And so this is why human babies are embryonic for so long, right, compared to any other animal. The thing I like to do here is to think about a horse standing up in like twenty minutes and a human being able to walk in a year, and just think about how many twenty-minute segments fit into a year, to get a sense of how much longer we are helpless than anything else is. And even amongst the other primates close to us, there’s really nothing like us in terms of the extended helplessness, and that’s because we don’t have inherited knowledge of how to be us. Since the environment is going to be different, we have to imprint the environment that we’re in to be able to be adaptive to environments that we’re changing. So this is saying that our genetics selected for neuroplasticity, selected for memetics; our hardware selected for faster software updates that allowed faster changes in our adaptive basis, so that we could move environments and all those types of things. So if we think about a meme, kind of like a gene, as a pattern replicator—but an abstract pattern replicator that can mutate much quicker—there’s also a big difference: the other animals are in an environment where mutation across genes is very evenly distributed. Mutations are happening to the gazelles and to the cheetahs at an equal rate, right.
And there are co-selective pressures on both of them, so they’re both getting faster—the slowest ones of each are dying off. So there’s this kind of symmetry of power, where the competitive pressures between them have them all up-level. But when we start being mostly memetic and the other species are still mostly genetic (meaning we’re largely getting our adaptive capacity from abstract pattern replicators and they are still on instantiated pattern replicators), we can increase our predatory capacity much faster than the environment can increase its resilience to our predatory capacity. Which means that we can debase the whole substrate that we depend upon, which is self-terminating: you can’t keep debasing that which you depend upon. So in evolution there is a selection process for the genes that make it through, but there is this kind of symmetry in the genes that make it through, because of the evenness of mutation and because of the co-selective pressures. Now, with memetics, the memes that make it through are the memes that win in a rivalrous context, not the ones that necessarily represent the true or the good or the beautiful. So the propagated memes propagate more than the true memes propagate. This is a super important concept to understand. I was always dumbfounded thinking about the evolution of religions. Take Christianity, for instance, and you say: okay, so Jesus, when they brought Mary Magdalene, said, “let he who has no sins amongst you cast the first stone”. And then when they’re nailing him up he says, “Father, forgive them, for they know not what they do”, and he’s, like, bringing forgiveness to Judaism. And in his name we did the Crusades and the Inquisition, and said, “we will not just kill but torture anyone who doesn’t accept the Lord of Peace”. How did we do that? How did we do the mental gymnastics to take the guy whose key teachings were forgiveness and torture people in the name of that? Well, we figured out how to do it, right—but the key is which mutations of the idea were super adaptive. So you’ve got an idea, say you’ve got Jesus’ teachings. And then there’s going to be a bunch of mutations on that idea, different interpretations of it. Some of the mutations say “be quiet and don’t push your ideas on anyone”—like, be contemplative, etc. And those ones don’t catch on, because they aren’t being intentionally propagated. And the other ones say, “go out and propagate these ideas on missions and on Crusades, and focus not on the forgiveness parts but on the wrath and Leviticus and who God’s enemies are—focus on those parts—and you actually get better spots in heaven for converting more people”. So what you end up getting is that the ideas that catch on are the ones that win in narrative warfare. And as they’re catching on, then, say, Islam is also competing for some of the same people, right, because ultimately the religions become the bases of in-groups that are competing against out-groups on fundamentally political and economic and survival-type bases. So I can hold people together with a political-left or political-right ideology, or a capitalist or communist ideology, or a racial-identity ideology, or religious ideologies. All of those become the basis of an in-group that can bind together to be more successful in competition with assumed out-groups. But what that means is that the rivalrous, game-theoretic environment is going to be selecting for what is effective at rivalry—not what is true, and definitely not what is good for the whole.
And the moment that anybody figures something out that is more effective at rivalry, everybody else reverse-engineers it and creates similar memetic mutations on other sides. And so there are not that many Jains, right. They’re lovely people and totally nonviolent and aren’t going to hurt anybody, but they’re also not pushing their ideas on anybody, so the ideas aren’t spreading that fast. So the ideas that have an artificial focus on the spreading of the ideas, and that figure out how to emotionally manipulate people into believing the idea—with heaven and hell and whatever—are going to spread more. So this is a key thing to get. I’m giving the example of the teachings of Jesus turning into the Crusades or the Inquisition, but also take thinking like Darwin’s. Take the context in which Darwin came about: Darwin came about just following Malthus. So we’ve got the British Empire—the first real world-ruling empire, the first time that a global inventory of resources was ever conducted—and Malthus came up with the fundamental principle of scarcity, or inadequacy, and said, “hey, people are reproducing geometrically, resources reproduce arithmetically, there’s not going to be enough for everybody, not everybody’s going to make it. Well then, who’s going to make it?” That idea says compassion is not viable; we can’t all make it; fundamentally, mathematically, there aren’t the resources for it. Now, this is actually gibberish today, because we know that populations don’t reproduce exponentially forever—they steady-state, and we’ve already seen the populations in Japan and in Denmark and other of the most economically successful countries start decreasing. And we also know that we can recycle resources—we don’t have to have a linear materials economy where we use them up and turn them into trash—and that completely changes the underlying scarcity basis. We can also share resources in different ways.

So the underlying thinking behind Malthus isn’t true, but he didn’t know that at the time. It seemed quite compelling. So then we say: okay, so a solution for everyone, a world that works for all, is not even viable. Those who want it are simply not facing up to reality. And so if not everyone’s going to make it, who’s going to? And so then Darwin comes out in that context, and the idea of survival of the fittest is the idea we focus on the most, even though that’s hardly what he emphasised in the writing at all. So there was, again, a taking of Darwin’s idea in one context, and then the most propagated version of it led to social Darwinism, which basically reified institutional sociopathy. Which is: okay, well, if not everybody’s going to make it, some people are going to be like predators and other people are going to be like prey. Predators don’t feel bad when prey die; you have to actually cull the herd sometimes. And if you start to think about how hierarchical power structures work—to get to the top of a hierarchical power structure, like a big corporation or a government or a religion that’s structured that way (let’s leave that one off for a moment, just say corporation or government), I have to win at a lot of win-lose games. I have to get the promotion, or win the campaign over other people, so the top is going to be people who are best at winning at those win-lose games. So if I have a lot of empathy and I actually care about other people’s loss, I’m going to do less well. If I’m a sociopath and don’t give a shit about it, I’m going to do better at it. If I’m willing to disinform to get ahead, I’ll do better at it. So this is why we see a higher percentage of sociopaths and psychopaths as CEOs than in the normal distribution of the population, which also means that the people who have the most influence in the world are asymmetrically empowered and asymmetrically sociopathic. But the way that we interpreted Darwin reified that as a reasonable thing, and even a good thing.

So we have to recognise that any idea, even if it started as true or good or beautiful, gets put into the game-theoretic mill. And what propagates is the thing that’s propagated. So why do bad ideas catch on? Largely for this reason. Oftentimes the best ideas are not well marketed and aren’t even easily marketable, and the best-marketed ideas that are going to catch on the most are not that true and have pretty shitty results. So this is again something that people have to really pay attention to. I think Jordan, in one of his interviews with you, talked about the difference between real thinking and simulated thinking. So if I’m just taking memes that I’ve heard—I’m in a conversation and I’m listening for which of the things that I have heard somewhere else I’m going to say—I’m basically a meme propagator; it’s not real thinking. I’m not actually endeavouring to make sense of the world in a new way that I’ve never done before. Most people hear something, hear a meme that comes in, and because it’s from Fox they believe it or don’t believe it, or because it’s from CNN or whatever it is, they believe it or don’t believe it. So they basically have some cluster of memes that creates a kind of memetic immune system that says which ideas to accept and which ones to reject, and then once accepted, propagates them. That’s not thinking, and there’s no sovereignty in that. And groups that have asymmetric meme broadcast capacity are good at making memes that are sticky, customised to specific audiences, and they’re able to split-test uptake. And then you get into high-tech things like Facebook, and it’s like: okay, I can actually pay attention to what you click on and do profoundly deep analytics. I can pay attention to mouse hover, I can pay attention to all these types of analytics, and customise the disinformation to everybody using the kind of AI that beats the best chess players in the world at chess. This is stuff that Tristan Harris talks about. So the AI that beats Kasparov at chess easily—nobody is as good at being strategic with themselves as Kasparov is at chess, and we don’t even know that we’re engaged in that. And yet it’s competing with us for our attention, for its purpose of maximised time on site. Now, it just happens to be that I’m going to scroll and bounce unless something catches my attention. Short things will catch my attention more because I’m in a hurry. So just the orientation towards small bite size makes everyone more fundamentalist, with shittier attention spans. And then if the headline is more sensational, it’s going to attract more attention. So we get these big platforms that don’t want to make people fundamentalists or to drive politics that way, but they do, simply as a by-product of the fact that limbic hijacks are sticky and they’re optimising for time on site.
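As a minimal sketch of “what propagates is the thing that’s propagated”—my own toy model, not something presented in the talk—here are two competing memes whose shares of attention update purely in proportion to how hard each one is pushed, with accuracy playing no role in the update rule:

```python
# Toy replicator dynamic (illustrative assumption, not from the talk): two memes
# compete for a fixed pool of attention. Each generation, a meme's growth is its
# current share times its transmission rate; truth never enters the update.
quiet_true_meme = {"share": 0.5, "transmission_rate": 1.0}   # accurate, but nobody is pushing it
loud_false_meme = {"share": 0.5, "transmission_rate": 1.5}   # less accurate, pushed on every channel

for generation in range(10):
    growth_true = quiet_true_meme["share"] * quiet_true_meme["transmission_rate"]
    growth_false = loud_false_meme["share"] * loud_false_meme["transmission_rate"]
    total = growth_true + growth_false
    quiet_true_meme["share"] = growth_true / total
    loud_false_meme["share"] = growth_false / total

print("after 10 generations, the pushed meme holds",
      round(loud_false_meme["share"] * 100, 1), "% of attention")
```

Under these assumptions the pushed meme ends up with roughly 98% of the attention after ten generations. The selection criterion in the loop is transmission, not truth—which is the dynamic being described above.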

So the first thing is that individual memes don’t get selected for in isolation—memes interrelate with other memes to create worldviews. Because I usually can’t make a choice based on a meme; I have to make a choice based on a representation of the world, which is going to be a bunch of ideas, a bunch of memes, a bunch of data. And so we get these kinds of meme complexes, and so we can think about the evolution of meme complexes. So let’s go ahead and look at the evolution of religious ideas—we could look at the evolution of political ideas or scientific ideas, or anything. So, religious ideas. You’ll have some kind of central meme. But then you’ll also have protector memes that emerge with it, where believing them protects against the kind of cognitive processes that could have someone stop believing the primary meme. Because if the primary meme doesn’t have protective memes it won’t last, so the question isn’t just what makes a meme propagate, but also what makes it endure and resist change. So there’s the evolution of memes, but also the meme complex’s resilience in the presence of other competing meme complexes. So you’ll get these protector memes, and then you’ll also get propagator memes that are trying to take the whole complex and propagate it—so there’s a whole football team of defence and offence and carrying the ball. So let’s say we get some kind of fundamentalist religion—let’s say we look at Christianity; this would be the same for any religion, it’s just probably the one that most people here will relate to. So you’ll have some kind of central doctrinal teachings about Christ and God and what good is and those types of things. But then the protector memes address: what could make me not believe these things, and how do we protect against those processes? So if we want to have, say, a literal interpretation, then we don’t want people to doubt the literal interpretation, so we start to have memes like: faith is something God likes, and doubt (which is another term for critical thinking) is something that God doesn’t like—it actually comes from the devil and you’ll burn in hell for it. And so the more that you just believe in the teaching, the better your chances of heaven are and the more good you are; and the more that you doubt it, the more that Satan actually got to you. So that’s a very strong protector meme against the kind of critical thinking that could make you question the basis of the religion. You’ll also have other protector memes like: if I believe this, if I continue to believe this, I get to keep having a family and a community that will take care of me if I’m poor and if I’m sick and will help me out in times of warfare and all those things. And if I stop, everybody will disown me, or to various degrees there will be direct implications for my life from the change in these beliefs. And then again you’ll have propagator memes: we need to go on mission and share the good word with other people and convert them. So you start to see a memetic ecosystem.

So now, if I've got propagator memes coming up against people who have protector memes from a previous complex, then there's an evolution, a competition over which memes are more successful. So then we can look at the evolution of types of meme complexes over time. Now, other people have a lot more expertise in this than I do, and I don't have a detailed enough history to say this is the right historical narrative; it's an example of the type of thing that could happen, so take it as that. Say you look at the evolution of the concept of hell, hell as a meme, in Western traditions. First, if you take Eastern traditions like Hinduism or Buddhism, they might have many different hell realms, and nobody stays in any of them forever. Most of the interpretations say that everybody gets to moksha or liberation eventually; these are places people go to learn things as part of the liberation path. And if you look at the descriptions of hell at various points in Christianity, hell got nastier as time went on. And God also got more authoritarian, and heaven got better. And if we go back before that and look at early cultures, we see most early cultures were animistic: the spirit of the buffalo and of the tree and of the river, the spirit of everything. So spirit was radically decentralised. Then there was an evolution, because of evolutionary pressures on memes, that corresponded with the move from little tribes living in nature to early civilisations. There was an evolution to polytheism, which is now not the spirit of everything but some smaller number of more powerful spirits or gods. So there starts to be a consolidation of power in the religion, corresponding to a consolidation of actual power in the social structures, in the way that we make sense of the world, in the way that the religious idea actually supports the thing that we're doing in politics. And then we end up going from polytheism to monotheism. Think about control systems: some few being able to control the many in ways that are better for the few than they are for the many, and how you get systems of institutional control or oppression. So if we're having consolidation, increasing power inequality in our social systems, and we want to justify that increasing power inequality and increasingly authoritarian power, then having gods that reflect that is valuable. Those memes are going to be adaptive to the social systems doing that. And if you think about an authority that controls a population with reward and punishment, and then we actually make a God in our image and say, "okay, what would the infinite authority be, and what would infinite reward and infinite punishment be?", you just take all the concepts to their logical zenith. That's what Christianity did. So you have one all-powerful God, beyond reproach, with an infinite, eternal punishment and an infinite reward. If you're trying to look for maximum behaviour modification, that's what you end up coming to. It's almost like the strange attractor of a landscape that just takes it to its zenith, where it's so terrifying, once you've been indoctrinated, to leave that belief system, because hell is so bad that any chance of it is so unacceptable, that you kind of do Pascal's wager and just stay in it.
Now of course, by believing that, I might be going to the Muslim hell if that one is true, but that one never got its hooks in me when I was young, so I'm not as afraid of it; whereas if I had grown up there it would have its hooks in me and I'd be much more afraid of it than of the Christian hell. So we can see an evolution of the gods getting more powerful, more authoritarian and more propagative, with more ideology around propagation and reward and punishment, the things you would expect. And specifically, the groups that instantiated those memes would have actually done better in a natural selection process determined by warfare. Also, if I have a "man has dominion over Earth" ideology, it says it's okay if we destroy the environment, because God's going to come back and remake everything after purgatory anyway, and treating animals badly really doesn't matter because, again, dominion. Those ideas are actually going to be adaptive in terms of increasing the game of power, even though, if you take that game to its zenith, it self-terminates: it destroys the whole world, the substrate that we depend upon.
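As a purely illustrative sketch of the protector/propagator dynamic described above (the rates and numbers are invented, not a claim about any real religion or history), here is a toy Python model in which one meme complex has high truth value but no armour, and the other has low truth value plus protector memes (low drop-out) and propagator memes (high conversion). Truth never enters the dynamics, and the armoured complex wins.

```python
# Toy model: two meme complexes compete for hosts. "Protector" memes lower the
# chance a host drops the complex; "propagator" memes raise the chance a host
# converts someone else. Truth value is recorded but never affects the dynamics.

complexes = {
    "bare_idea":     {"truth": 0.9, "drop_rate": 0.20, "spread_rate": 0.05, "hosts": 500},
    "armoured_idea": {"truth": 0.3, "drop_rate": 0.02, "spread_rate": 0.15, "hosts": 500},
}

population = 100_000  # cap: total people available to convert

for generation in range(50):
    for c in complexes.values():
        converts = int(c["hosts"] * c["spread_rate"])   # driven by propagator memes
        dropouts = int(c["hosts"] * c["drop_rate"])      # suppressed by protector memes
        c["hosts"] = min(population, max(0, c["hosts"] + converts - dropouts))

for name, c in complexes.items():
    print(f"{name}: truth={c['truth']}, hosts after 50 generations = {c['hosts']}")
# The low-truth complex with protector and propagator memes ends up with vastly more hosts.
```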

So we've been looking at a kind of Christian worldview, but you can see that a whole worldview of ideas comes up around supporting Trump, or climate change, or critical thinking and a Dawkins kind of scientific materialism. Each of those worldviews has processes of in-group and out-group pressure: something bad will happen if you don't believe these things. Going to hell is the worst example, but simply being rejected by the in-group, being called a stupid conspiracy theorist or a pseudo-scientist or an anti-vaxxer or an infidel or whatever goes against the in-group idea. There's a strong selective pressure to defect on our own thinking towards the centre of the in-group, and a lot of advantage accrues if I defect on my own sense-making towards the centre of the in-group. And I don't trust my own sense-making, so I'd rather be part of an in-group, or at least I'll be safe by being part of an in-group, because I can't make sense of the world and I can't really take care of myself on my own. And it's not safe to disagree with everybody: if I share views that have me disagreeing with everybody and I don't have any in-group, then I'm fucked. So what I do is look at which in-group seems closest to what I think and then defect on my thinking to normalise with that in-group. So we want to pay attention, looking at these different information ecosystems, memetic ecosystems, to see how they evolve, how they apply in-group and out-group types of selective pressure, and how they appeal to things other than good epistemology: how they apply rhetorical skill and emotional manipulation to compel people to believe something where the basis for believing it should actually be some better epistemic process. Then the key thing is whether we can start noticing this in ourselves, noticing where we feel a kind of bias towards or away from something before we've thought about it, based on how it fits with the rest of our memetic complex, which we also haven't analysed well.

Interviewer: I guess a lot of people might be tempted just to give up on trying to make sense of the world.

I think almost everyone has. Again, it's like: is Fukushima still at risk of further breakdown or not? Has it already polluted the ocean; should we be eating fish from the Pacific or not? Is 5G actually possibly a problem or not? Is climate change and coral reef die-off an imminent issue in the next five years or not, and if so, what is the right way to approach it? Are we approaching our own doom with CRISPR, with AI? These are seriously important things to have some clarity on. These are things we actually want to come to enough certainty on to make choices, because we are making choices; and by not making choices, we're making choices by default in terms of what market pressures are doing. And almost everyone, if they really think about it, will admit: I actually don't know the answer to any of those things. But that doesn't slow down the race to be first to market with powerful AI, or anything else. And to the degree that people don't admit they have no idea and think they do have an idea, if they're honest, most of them will say, "I'm pretty sure I have an idea because I proxied my sense-making to other people that I trusted, but I didn't actually do the foundational research". And to the degree that people did do the foundational research, if they're honest enough they'll say, "the total amount of data that I looked at, relative to the complexity of the scenario, was orders of magnitude too small". So most people have given up on sense-making about base reality; they just haven't admitted it. Where I can get ahead in a human ecosystem by affecting what other people believe, independent of what is true, independent of base reality, then we live in a simulated reality: if I can get you to believe something that will lead you to vote in my interest or purchase things in my interest or whatever, game theory will have me try to optimise distortion, optimise my ability to get ahead at creating and winning at simulations, and it won't have me try to connect to base reality at all. And so we get a world that is so constructed and decoupled from base reality that most of the time what is true isn't even relevant. And it goes even further than that: there's a whole class of people for whom the idea that there are true concepts doesn't even appeal very much. Now, some people might think this sounds excessively cynical. I'm going to be careful with this because I don't want to say names that will create conflict unnecessarily, but I was talking to someone the other day who was saying that what I believe is, give or take, the same as what this other person believes, and what we believe on this topic couldn't be more diametrically opposed. But the person saying, "yeah, you know, we're all kind of on the same page, we all believe the same thing" wanted to put on an event, that she would sell lots of tickets for, where we would all be there. Her orientation is: what is actually going to optimise viewership, so how do we get people who are going to be engaging and mix them with people who are famous enough to optimise viewership? Whereas I was focused on what is actually true. Are we being earnest in what we're endeavouring to do here?
And her sense that we believe kind of the same thing, because we both talked about technology and we both talked about the future and we both kind of want stuff to be good and we both use the word exponential sometimes, wasn't that she thinks that what I think is true or what this other person thinks is true. It's that the idea of true beliefs is not what she's optimising for. What she's optimising for is what's going to get the most views, and then there's a backfill, a rational backfill, that tries to rationalise that we believe kind of the same thing or whatever. The orientation isn't even trying to ask, "do I believe what this person is saying?". It's asking, "do I believe I can make money on what this person is saying?". So you realise that we live, so much, in a simulated reality. Like, if I'm a venture capitalist, I can make money on a product that is shittier than products that currently exist, even though the market is supposed to be a sense-making mechanism: within the context of a demand there will be mutation, lots of different versions of a product for the same service, and the one that's actually the best at the best price is the one that the market will select for and upregulate. So the first part is mutation, then survival selection, and then the good parts of a few different ones might combine, which is kind of mate selection; that's how we think of markets as a sense-making system. But we all know that the best-marketed product will oftentimes beat the actually best product, where the price of product development drops and the price of marketing and customer acquisition goes up, which means a shittier product, better marketed. That means the sense-making system is broken; it's not a good sense-making system. But as a venture capitalist I can invest in a company that I know is going to create a massive level of distortion and market really successfully. Maybe the company will go bankrupt at some later point, but I will have exited by then, so I don't really care long term. I care that it's going to market successfully and be successful financially, not whether it's actually a better product or service. And so if the CEO is a really compelling sociopath who is highly motivated and spins distortion bubbles, I'm going to be more motivated by that than by thinking the product is actually fundamentally better, in many cases. That's not the only story, but it is a story. And so my goal is then to invest in someone else's ability to generate a distortion bubble that will pull enough people along, adoption by customers and other investors, so that I, as an early investor, exit before the distortion bubble pops. And so we can see there are whole domains that are fairly decoupled from base reality, where spinning simulated realities is the whole goal.
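A hedged toy sketch of that broken-market point, assuming made-up products with independent "quality" and "marketing" scores and buyers who respond mostly to marketing; under that assumption, the product the market selects and the actually best product usually differ.

```python
import random

# Illustrative toy model of a market as a selection mechanism. Each product has
# a quality and a marketing budget (assumed independent here). Buyers mostly see
# marketing and only weakly see quality, so the "selected" product is often not the best one.

random.seed(2)

products = [
    {"name": f"product_{i}",
     "quality": random.random(),
     "marketing": random.random()}
    for i in range(20)
]

def purchase_weight(p, marketing_bias=0.8):
    # How attractive this product looks to a buyer: mostly driven by marketing.
    return marketing_bias * p["marketing"] + (1 - marketing_bias) * p["quality"]

market_winner = max(products, key=purchase_weight)
best_product = max(products, key=lambda p: p["quality"])

print("market selects:", market_winner["name"],
      f"(quality={market_winner['quality']:.2f}, marketing={market_winner['marketing']:.2f})")
print("actual best   :", best_product["name"],
      f"(quality={best_product['quality']:.2f}, marketing={best_product['marketing']:.2f})")
# With marketing_bias near 1, the market's pick and the actual best product usually
# differ: the sense-making function of the market has broken down.
```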

Another thing we have to pay attention to is that the cognitive complexity of a lot of the issues we face is just much vaster than most people have time for, than most people have training for, and even vaster than we evolved to process. So there's this idea of hyperobjects: there's not just this piece of plutonium, but all of the plutonium, or all of climate change, or all of species extinction. I can't actually observe that directly, but I can infer it and know that it's a real thing. But how do I hold the cognitive complexity of any of those, let alone all of them? That's a very tricky thing. I still have to feed my kids and pay the bills tomorrow, and so the adaptive pressures on me are to focus on what I need to focus on, in the very small and in the short term, even though what I'm doing, because of globalised supply chains, is affecting the globe in the long term. The idea is "think globally, act locally"; most of us are doing the opposite. We're thinking very locally while taking actions that affect the world globally, which means we're thinking on very short timescales, fairly narcissistic and self-indulgent timescales, but with things that will have enduring and massive impacts. And that is a decoupling of scale, a decoupling of sense-making and choice-making, a bunch of things. So for most people, the luxury of trying to make sense of the world is something they don't even feel they have, because they're nose-down just trying to do the next couple of things they have to do. And then other people, who are actually trying to get ahead, are optimising for simulations and distortion bubbles rather than trying to make sense of the world. So there are a lot of people who aren't actually even trying to make sense of the world. Now, when I ask what is meaningful, what is meaningful is going to be bound to what I think is real. And if I give up on knowing what is real, there's a way in which I'm giving up on the depth of my connectedness to what's meaningful. So yes, giving up on sense-making is kind of an expression of a type of nihilism, and it feeds into further nihilism.

The first thing is that I stop trying to squish reality into a perspective. This is super important. Any time I have a perspective and I default into thinking of it as the truth, I become dubious of that in myself. And I become curious about the partial truth in other people's perspectives, including the ones I think are stupid and crazy. Because none of them have no signal, even if I think there's a lot of noise. So I want to ask: why do they think it is true? Well, this bias, that bias, okay, and some perception mixed in with the bias. What are they perceiving, then, given that the bias is also there? Maybe it's not that much sense-making, but it's something, and it's meaningful. This is St Francis's "seek more to understand than to be understood". That means actually seeking to take different perspectives. So the Hegelian dialectic: you've got a thesis, and then the antithesis. I actually want to try to take this perspective and construct its case, and then take that perspective and construct its case, and once I've constructed both, I'm stuck either flip-flopping between them, or just being confused, or just claiming paradox; or I take the next step: thesis, antithesis, synthesis. There's a higher-order truth that is actually not paradoxical, that reconciles them; it just requires a higher order of complexity. It's only paradoxical within too low a level of complexity. This is Einstein's point that you can't solve a problem at the same level of complexity that created it. Since a house is three-dimensional, no two-dimensional picture will give me a solid sense of it. If I take a cylinder, which is a three-dimensional object, and try to collapse it by a dimension and take a 2D slice of it, a slice one way is a circle and a slice the other way is a rectangle. A circle and a rectangle are mutually exclusive descriptions of a shape: one has straight lines and corners, the other has no straight lines and no corners. If I try to say, "well, it's both", that just makes no sense at all, right? If I say, "well, it must be part of both, so it's a rounded rectangle", that has no truth at all. I have to actually be able to construct a higher-dimensional space in which rectangle-ness and circle-ness fit together in a way that makes perfect sense, called a cylinder. And this is what a level of consciousness that isn't the level that caused the problem looks like: the problem is partiality of perspective, which can then create a basis for conflict, and there is a higher order that is able to reconcile those. And so this is where we're seeking clarity more than simplicity, because simplicity can happen through reduction. Typically the synthesis comes from a novel insight that neither the thesis nor the antithesis held, not just a little bit of both. So, a couple of examples. Let's take political left and political right ideology; there are a gazillion examples, and what we call the political left and right today might look quite different from what they were in the past. So let's take what had, until recently, been some essential ideas in Republican and Democrat platforms. We have this Republican, right-oriented idea of wanting to empower the self-responsibility and sovereignty of the individual: more individualistic, so smaller government, fewer social services, more empowering of those who are entrepreneuring and pulling themselves up by their bootstraps, that kind of thing.
So the right perspective says the collective is actually created by individuals, so we want to empower the individuals who are creative and take agency, because better individuals make a better whole. That's kind of the gist. And then the more Democratic, left perspective is: well, but the individuals are being conditioned by the environment they're born into, by the whole. And so even though I can find that one story of the one guy who pulled himself up by his bootstraps in the ghetto, a whole lot more people who were born into the Hamptons succeeded than people born into South Central. So let's create better environments that actually condition better people, because there are top-down effects. So for many people there's some seemingly compelling truth in both of these, and also problems. There can be a right-oriented perspective that says: hey, look, if we set up social services and welfare and whatever, we actually condition shittier people who are less strong and resilient in the face of the environment, and we disincentivise those who are most entrepreneuring and creative, and we make people who are doing badly still do well, and that kind of down-regulates the evolutionary pressure in the whole system, so why would we want to do that? And of course over here we can have a left perspective that says: yeah, but some people are getting ahead using shared services that the government pays for out of everybody's tax money, which they aren't really accounting for, and they're affecting the commons negatively in a way that externalises cost to everyone else; so when they claim to be more entrepreneurial, it just means they're better at extracting from the commons and externalising costs to the commons. And do we really want to let people die on the steps of hospitals because they don't have money or insurance, as the fully individualistic libertarian idea would have it? And a fully libertarian ideology can't solve multipolar traps. But there's clearly truth in both of these, and clearly neither is complex enough to actually handle reality, in which there are bottom-up effects where the individuals affect the whole, and there are also top-down effects where wholes in turn affect the individuals, and there are feedback and feed-forward loops, and neither of them is factoring those in enough. But the debate process doesn't bring about dialectic.

The debate process actually makes the ideas more polarised: it gets people on each side to emphasise rhetoric over real sense-making, and to emphasise winning over collectively trying to make sense together. So the debate process is not a good sense-making process; it's a narrative warfare process. Dialectic is different. It says: okay, I think we have some truth and not all of it, and I think you have some; let's endeavour together, earnestly, to figure out the things we're all interested in figuring out. And then we start to ask: well, do we want better individuals who are more sovereign independent of their environment? Duh, nobody doesn't want that. Do we want wholes that support all the individuals within them to do better? Well, duh, we want that too. But the way we've done social services oftentimes makes people not more sovereign but more dependent; it's actually not making better people, it's making more comfortable but shittier people. And the way we incent individuals over here incentivises people who are entrepreneuring by externalising costs to the commons and creating radical wealth inequality, so it's not incentivising the most truly creative and intelligent and good people, it's incentivising effective sociopathy and things like that. So neither of these is doing all that well at the thing it's even claiming to do. So: how do we create social services, collective processes, that condition healthier people who are more sovereign? How do we create environments that condition people who do better in any environment? Environments that actually condition strength and resilience and sovereignty in the individuals, who then in turn affect the environments in ways that support the increased sovereignty of everybody else? That's a totally higher-order way of starting to think about the relationship between them, one that recognises the inexorable failings of both of those perspectives and that what an adequate approach would take is more complex.

That's an example of starting to move towards synthesis from a thesis and an antithesis. And if you look at even personal things in your life that you're wrestling with, like "do I want to accept and love reality and people and myself as they are, or do I want to strive to help make things better?", it's the same kind of thing. You'll find a thesis and antithesis everywhere, where the actual insight of the synthesis is a novel insight not contained in either of them. When I say "accept you as you are right now" or "work to change you to make you better", and I see those as different, it's because I'm seeing you as a noun, as a fixed thing, and accepting you as you are in this current state. Whereas if I see you as a process, if I see you as a verb, I see you as a becoming, different than you were yesterday and different than you were when you were two. Then accepting you as you are includes dynamism; it includes the impulse to change and grow and evolve. So I can accept and love you fully as you, which includes accepting and loving the impulse to grow and expand and transcend. I can support you in becoming more, not on the basis of judging you as insufficient, but from loving you completely, including loving the evolutionary trajectory inside of you. That insight, of you as a verb rather than as a noun, was not captured in either of the original positions. So the dialectic process is: how do I see the partial truth here to construct this side fully, and then how do I see the partial truth there to construct that side fully, and then what new insights bring these together in a higher-order perspective that is more complex and more nuanced than either of them? And it might not just be one thesis and one antithesis; it might be lots of perspectives. This is a process that I would love to see people start practising for sense-making. Whenever they're talking with someone, start by seeking to understand before seeking to be understood. Seek the truth value in what people are saying, not just the wrongness. But then don't wholesale accept or throw out what they're saying as totally true or totally wrong; be able to separate the signal from the noise. And then be able to say: if I see signal from a number of sources, how does that fit together into a higher-order perspective? This is another sense-making process.
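For readers who like to see it numerically, here is a small, purely illustrative check of the cylinder analogy from earlier: sampling points on a cylinder's surface and projecting them onto two different planes yields the circle description and the rectangle description, both true, both partial, and both reconciled only in the higher-dimensional object.

```python
import math
import random

# Sample points on the lateral surface of a cylinder (radius R, height H).
# Projected onto the xy-plane they all lie on a circle; projected onto the
# xz-plane they all fall within a 2R x H rectangle. Neither 2D description is
# wrong, and neither is complete; the 3D object reconciles them.

random.seed(3)
R, H = 1.0, 2.0

points = []
for _ in range(10_000):
    theta = random.uniform(0, 2 * math.pi)
    z = random.uniform(-H / 2, H / 2)
    points.append((R * math.cos(theta), R * math.sin(theta), z))

# xy-projection: every point sits on the circle x^2 + y^2 = R^2
on_circle = all(abs(x * x + y * y - R * R) < 1e-9 for x, y, _ in points)

# xz-projection: every point falls inside the rectangle [-R, R] x [-H/2, H/2]
in_rectangle = all(-R <= x <= R and -H / 2 <= z <= H / 2 for x, _, z in points)

print("xy-projection lies on a circle   :", on_circle)      # True
print("xz-projection fits in a rectangle:", in_rectangle)   # True
```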

Interviewer: You mentioned emotion and vulnerability as key components of sense making. That may seem a little bit counterintuitive to people. Can you explain what you mean by that? 

If I want to make sense of the world well, and I'm going to engage in communication processes with other people to make sense of the world, some collective sense-making, then I want them to share true information with me and not disinform or withhold. So, how do I create the trust and psychological safety for them to do that? Well, I'm probably going to have to do that too. So how do we create mutual trust and psychological safety that we're not going to use the information we're sharing in game-theoretic ways against each other? That's a huge part of it, and if people don't have some spaces and some relationships where they feel they can actually share fully, openly, honestly, and feel trust in that, their sense-making is going to be radically curtailed to what they can do on their own, without anybody's ability to help error-correct them through full sharing. Also, one of the sources of bias is identifying with what I believe, because I'm identifying with being part of an in-group that believes that thing, or because I am special or smart or right for believing it. The impulse to be right means that I won't seek to understand other perspectives. So if I'm going to actually seek to understand the truth value in other perspectives, earnestly try to get what they are seeing and where they're coming from, I have to stop seeing things the way I'm seeing them for a little while, and I also have to completely suspend debate and narrative warfare and the impulse to be right and all of that. And so there is a deeper human connection involved, and there's also a psychological process that almost seems like a spiritual process: for me to really try to get where you're coming from on a topic, I have to really take your perspective. What is it in me that is taking your perspective? Because it's not my perspective. It actually requires dropping the way that I see things to really try to see them the way that you see them, and make sense of that. So that means there's a capacity in me that can witness my perspective, and can also witness your perspective, but is deeper than the current perspective I have. And we all know we can change our beliefs and there's still something that is us. So there is an "us-ness" that is deeper than the belief system. To be able to really try to make sense of someone else, I actually have to move into that level of self that is deeper than belief systems.

Interviewer: What do you hope that people will get from it? 

I would like it if they are looking at their own biases and asking: okay, where do I have emotional needs that are affecting what I believe? Where do I have in-group/out-group stuff happening? Where am I actually doing disinformation? What are my cognitive biases? Let me go look up the list of cognitive biases and start to inventory them. We keep talking about epistemology and axioms; if someone says "I don't even know what that means", then how do I actually go and explore what an axiom is and what the right steps of a logical process are? How can I empower my own learning? So I'd be happy if people felt inspired to learn how to learn better, and then inspired to create relationships with other people where they actually do care about understanding what is true and real, so that they can also have a better relationship with what is meaningful and make choices aligned with that, and where they're endeavouring to understand what is true and real together, and also endeavouring to create an intact information ecology where no one is disinforming anybody. Which is as much, or even more, an emotional process as it is a cognitive one, because really creating an intact information ecology involves vulnerability and intimacy. That's what happens in getting out of a game-theoretic context, where I say, oh, our well-beings are shared. Well, what if you defect on me? I have to actually create a situation of trust to be able to share real information, and then I get to see where my own wounds make me incapable of trusting or of revealing. So those would be things I would be happy if people looked at: the emotional, the cognitive, the overwhelmed-by-time, the lazy, the biased, whatever the sources are of where they aren't sense-making well, and endeavoured to work on those. And I'd be happy if maybe you guys at Rebel Wisdom took a lot of the people you interview, who maybe have one thing in common, that they're all endeavouring to make sense well, and started to ask them more about the sense-making processes they employ, so people can actually learn. Brett could talk about things we know from evolutionary theory and how you can apply them as epistemic tools. Or Eric could talk about principles in the philosophy of science and in physics that are valuable tools for understanding reality, those types of things. And then if even a not-that-large number of people started to really think about breakdowns in the information ecology and write on that more, and those ideas started to become better understood, I'd be happy about that.

Well, if we look at the framework of sharing things that are truthful, true and representative, and we start with the truthful side, we would need to remove the incentive for disinformation. And the first major source there is the game-theoretic dynamics that emerge from market-type dynamics. So again, if I have two different branches of government competing for budget, or different representatives, who are supposedly both seeking to benefit the country, competing for a percentage of the budget, then they have an incentive to disinform whoever's allocating the budget, disinform the public, disinform each other. Which means the total level of coordination just sucks, right? And this happens in corporate politics, inside of corporations; this happens everywhere. So as long as we have separate balance sheets, as long as the different intelligence agencies are competing like that, as long as people have separate balance sheets, then we have a fundamental basis on which my well-being is separable from, and oftentimes directly rivalrous with, yours, and others', and the commons. So then we will compete with each other for a lot of things, and we will engage, in the worst case, in physical, kinetic warfare, but mostly in economic warfare, competing against each other at the cost of the commons, and in information and narrative warfare, also at the cost of the information commons and each other. So to really get over that, we would need to couple our agency, to create alignment between what you have the intention to do and what is also in my well-being, and vice versa. Which means we would need more coupling between our well-beings, which means we would need a different process of resource provisioning, and I will say that a type of system that could do that adequately has never been proposed in any kind of major way, because obviously none of the systems proposed or tried so far do that. But if we have a rivalrous relationship with each other, information will be part of that rivalry, and we will damage the information ecology in the same way we damage the physical environment, or each other, kinetically. So that's a big ask. Separate balance sheets are kind of at the foundation of what creates rivalrous dynamics, and this is at the level of corporations, at the level of individuals, and at the level of nations. So can we still have private nation-states that benefit themselves at the expense of others, and also have a commons that shares information perfectly? If we really want to ask how we get a perfectly intact global information ecology, it couldn't happen within the context of nation-states and private balance sheets and political lefts and rights and in-group/out-group structures. It would take us a while to talk about what something post-game-theoretic might look like, but if you want to just start to intuit it: look at all of the neurons in your brain, or all of the cells in your body; they're all their own agents. The cells self-organise; I can take one out of you and put it in a petri dish and it will keep self-organising for a period, controlled by its own internal genetic code.
Even in your body there are something like 70 trillion individual cells, but those cells are organising in a way that's best for them as individuals, and best for the ones around them, and best for the whole, simultaneously, rather than either sacrificing themselves for the whole or sacrificing others for a game-theoretic benefit. And this is true at the level of cell-to-cell interaction, but it's also true at the level of organ-to-organ interaction, or organ system to organ system, or cell to organ. Fractally, at all these levels, vertical and horizontal, there is a kind of symbiotic process happening. If you tried to model the organs in a capitalist relationship with each other, where the heart and the lungs were competing against each other for scarce resources, to hoard as much resource for the future as possible, and you modelled that out at the cellular level, the body would die very quickly. Cancer cells are actually doing that: a cancer cell is doing what is good for it in the near term but bad for the whole, and it will end up killing the whole and killing itself in the process. But the cancer cell only happened because the body as a whole had some mis-health in which carcinogenesis exceeded the immune response capacity to deal with it, so the whole was already sick enough to make the individual sick to some degree. There's a feedback and feed-forward process between the parts and the whole. Think about vision, and the way that parallax and parallax error correction occur in vision. One eye doesn't give me full peripheral vision, and it doesn't give me depth perception. A single eye will also have errors that aren't corrected for. But with two eyes together, the overlap of what they both see, and also the difference between what one sees and the other doesn't, allows error in either eye to be corrected for, and gives peripheral vision and depth perception. So this is a place where not only are the eyes not in competition over which one is to be taken as true, but the process of how they're related in the visual cortex gives me error correction on the imperfections of each of them, and it gives me new, synergistic information that neither of them had on its own. That's fucking amazing to think about, right? And it's true more broadly: your brain as a whole processes information of kinds that no individual neuron or sub-network is processing. Individual neurons can get something wrong, but there are error-correcting processes that don't propagate that, while they do propagate the true information. So, how the fuck does that work? The information processing that each of the parts is doing goes through a communication protocol that error-corrects the false parts and also gives parallax on the true parts, so we get not only the truth of all the parts but a way of binding that together into synergistically higher-order information. The cells are sense-making and they're communicating, they're signalling with each other: a hormone is a communication, a neurotransmitter is a communication; those are all signalling processes. But they don't have a game-theoretic relationship with each other; they have a mutually symbiotic relationship, so they're supporting each other's sense-making in that way. The lungs obviously do better if the heart's doing better, as opposed to doing worse if the heart is doing better.
So if we just start to imagine what types of communication processes and protocols would have to exist between humans to allow for error correction on any individual's perceptions, to allow the true parts of everyone's perceptions to be separated from the erroneous parts, and to allow all the true parts to be synthesised at a higher order of complexity than individuals could reach on their own, that is how we think about the civilisation of the future and the collective intelligence of the future.
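As a loose, illustrative analogy to the two-eyes point (a toy sketch, not a model of the visual system or of any proposed protocol), here is a small Python example in which two noisy "viewpoints" of the same value are cross-checked against each other: gross errors in either stream get rejected, small noise averages out, and a new quantity, the disparity between them, emerges that neither stream carries alone.

```python
import random
import statistics

# Two "eyes" each make noisy, occasionally wildly wrong measurements of the same
# scene. Comparing them lets us reject gross errors in either one, average out
# small noise, and extract a stereo-style "disparity" neither sensor holds alone.

random.seed(4)
TRUE_VALUE = 10.0
OFFSET = 0.5  # baseline between the two viewpoints; the disparity encodes "depth" here

def noisy_reading(true_value, gross_error_rate=0.1):
    if random.random() < gross_error_rate:
        return true_value + random.uniform(5, 10)  # an uncorrected blunder
    return true_value + random.gauss(0, 0.1)        # ordinary noise

left = [noisy_reading(TRUE_VALUE) for _ in range(1000)]
right = [noisy_reading(TRUE_VALUE + OFFSET) for _ in range(1000)]

# Error correction: keep only pairs whose readings roughly agree (within the known
# offset plus a tolerance); a blunder in either eye breaks the agreement and is dropped.
agreed = [(l, r) for l, r in zip(left, right) if abs((r - l) - OFFSET) < 0.5]

fused_estimate = statistics.mean((l + (r - OFFSET)) / 2 for l, r in agreed)
disparity = statistics.mean(r - l for l, r in agreed)  # information neither stream has alone

print(f"kept {len(agreed)} of 1000 pairs after cross-checking")
print(f"fused estimate of the scene: {fused_estimate:.3f}  (true value {TRUE_VALUE})")
print(f"recovered disparity        : {disparity:.3f}  (true offset {OFFSET})")
```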

Rebel Wisdom is a new sense-making platform bringing together the most rebellious and inspiring thinkers from around the world. If you're enjoying our content then you can help us make more by becoming a subscriber, which will give you access to a load of exclusive films. You can then also join our group Zoom calls to discuss the ideas in the films, and you can send us ideas for questions for upcoming interviews. We're also looking for talented people to help us out with editing, graphics, music, that kind of thing. And if you're a regular viewer you'll know we talk a lot about the value of embodying, actually living out, the ideas that we talk about, so that's why we run regular events in London. Check out the links on the website for more, and we hope to see you soon.

Any errors in this transcription are the responsibility of Perspectiva, and will be corrected, with thanks.
