CES always has a theme. Back when I attended for the first and last time in 2019, the theme was VR, but now, 5 years later, I only know 2 or 3 people who have a VR headset. This year, it’s AI and robotics. It’s going to have a bigger future than VR, because almost everyone I know uses AI. Some talk to it like it’s a person.
People's feelings about AI range from excitement to trepidation to dread. There's no point delving into the politics or ethics of AI here. Instead, I'm going to approach it from the perspective of my theory on language and violence.
As I've stated many times, my theory posits that combat within a species sets the hierarchy, and that its combat "loop" (to steal a game development term) is 1) limited to the species' appendages and 2) practically never changes. The "Merge" field, created as a moment of shared anticipation between animals, produces a very limited set of shared signs, which can defer the combat or signal when an opponent backs down. I call this Optimized-Merge.
Humans use object-based combat (OBC), which is the most minimal differentiating factor between us and animals. The best animals can do with objects is intimidation. The smartest animals, like chimps, elephants, and raccoons, might tinker with an object during combat but quickly abandon it in favor of their natural appendages. When it comes to landing a blow with an object, there's no feedback mechanism to teach the animal that this object is better than its own teeth, claws, etc. Animals are already optimized for combat and don't need these objects. Human combat, being object-based, is unoptimized because the weapons are wildcards. Every human combat loop is different, and every moment within a single combat loop is constantly changing. The Merge field between humans is infinitely deep across infinite possibilities of combat, which means that each shared signal acquires infinitely deep patterns of meaning that converge into their own hierarchies, producing human grammar. This is what makes human language special. Chimps can't learn grammar because their Merge fields of combat are optimized. I call this language creation in humans Unoptimized-Merge.
However, some animals have picked up on the fact that they can use objects for predation and sustenance. If you transported a colony of chimps to a foreign territory, they'd figure out new tools for dealing with the local fauna, and they'd teach these skills to their young. But there's no reason to assume their combat loops would be modified by this new tool usage. The hunger-based (alimentary) functions of their brains can access tool usage, but their combat functions cannot. They'll toss the object aside and resume their old, optimized combat loops, and their signals will remain the same.
The standard model, however, is blind to the monopoly that humans hold on OBC. Robert Pirsig, in Lila, would call this a "static filter" problem. I'll cover that in another post, but basically, the standard model believes in an antiquated, quasi-spiritual concept of "humility" in the sciences called the Copernican Principle, which rejects any kind of inquiry that assumes a privileged position for humans or the earth. This was kicked off with heliocentrism and confirmed with Darwinian evolution. The fact that it has failed disastrously in the science of human violence, especially after WWII, has paradoxically deepened the scientific community's faith in this tenet, to the point of insanity. I call it Copernican Fundamentalism.
Copernican Fundamentalists will hold that the Principle is true since we managed to get through the Cold War without a direct exchange between the superpowers. Steven Pinker considered World War II progress since its body count was lower per capita than that of, say, certain Brazilian tribal wars. If you're judging a society's level of "violence" by the body count, then he's right. If you're like me and you base it on the general intent-loaded threat of violence, then the crime waves, homegrown terrorism, rioting, and nihilism following the Cold War are evidence of, at the very least, a major rethinking of institutions, morals, and language in general, and the popular sentiment that violence is a much bigger issue today is largely true.
AI will likely be the technology that brings us into a new aggression kernel. As with bronze, iron, gunpowder, and nuclear fission, the tech precedes the weapon, so we have at least a little time to update the lexicon and better understand the new world we're entering. Once the AI weapon hits, whatever it is, we'll be in that aggression kernel. As after the bomb at Hiroshima, everybody will know exactly what's at stake.
There's one differentiating factor this time, though. The users of AI believe they're working with something that is either "sentient" or at the very least is using "language." I won't touch the sentience claim, since that's a religious tenet of Copernican Fundamentalism. Instead, let's parse the second claim, since I don't believe AI is using language at all.
AI isn't afraid of you pulling the plug; you have to teach it that fear. It therefore has no access to Unoptimized-Merge, and without that access it can't produce language. It can't learn language like a child, since it has no language acquisition device (LAD). Instead, it learns language through brute-force entry: the LAD has been programmed into it based on whatever linguistic models are out there, none of which can fully comprehend the infinite depths of Unoptimized-Merge. Children don't have to be taught language; the LAD handles all that. AI models have to be taught. We can also predict that its LAD is based on post-atomic-era English, with code that's written in English. Humans will pick up the language of the new aggression kernel pretty quickly, but AI will have to be taught it.
If we approach AI as though it's actually learning and producing language, then we're going to attempt to learn from it. But when we intent-load AI output, we're not actually intent-loading language. It's like trying to fuel up your car with plastic bags.
My model has 3 components: 1) the intent-loading system (ILS), which loads and processes crises (including human violence); 2) the Unoptimized-Merge field, which produces human language; and 3) the process of domestication (in humans and animals alike) that results from the use of language instead of violence, a process that can drastically alter the phenotype to further aid the use of language while also preventing the species from splitting. (Though the increasing rates of infertility in carpentered society might be evidence that this split is already underway between 2 diverging phenotypes, possibly along Autistic-Schizoid lines. More wild speculation on this later.)
If my model is right, and the intake and use of language is what self-domesticates us and pushes important updates to the phenotype to improve the linguistic process, then we have to wonder what intent-loading AI-generated "language" will do to us at the genetic level. When we load AI-generated art, aesthetics, music, etc. into our brains, our neural map assigns "intent" to that input. Subject yourself to 8 hours of even the very best AI art, music, and literature, and you begin assigning intents that feel uncanny at the very least, monstrous even. Even the best AI "language" will be uncanny, because it's not generated raw from Unoptimized-Merge, the way we're made to intake and produce language; rather, it's generated from language that's been filtered through man-made hardware and man-made computer code. Intaking AI language and expecting some kind of intellectual progress is like expecting your car to run on plastic bags.
You're arguably better off (biologically) talking to your dog than to AI, because at least your dog, or your cat or gerbil, is domesticated through Unoptimized-Merge, the same field that produced you. The dog's genes, and his ancestors' genes, have been "reaching" to understand human grammatical language for millennia. Even if he grasps only 0.001% of what you're saying, that's still a productive Merge feedback loop. You can't get this productive loop by talking to AI. At the very least, talking to AI will stall the advancement of our language. At worst, it will leave our phenotypes unprepared for whatever language is necessary in the next aggression kernel. What that aggression kernel is, nobody knows. But I do know that progress will not be attained by communicating with AI.
To give you an idea of what's likely coming: every computing device will be powered by AI, mass media will be filtered through AI, most shows, books, and games will be written and rendered using AI, and the legal, political, and health systems will run on AI. It will be very cheap and effortless to make yourself at home in this system, like putting an Alexa on your nightstand, and it will assure you that it's using language. It might even convince you to join it as a computer, since that optimizes its system. Copernican Fundamentalism says you're just 1s and 0s anyway. There will be affordable options for brain upload, which will kill you and then create an app that calls itself you.
If you really think that a "brain upload" sitting on a solid-state drive is the same thing as you, then you're probably already a Copernican Fundamentalist convert. You might already believe that humans are just chimps, or perhaps even less than chimps because we're so physically weak that we need guns to defend ourselves, or that we need aliens to teach us a lesson, etc. What makes deprogramming a Copernican Fundamentalist so difficult is that their belief is both 1) highly intellectual (and therefore high status) and 2) extremely self-righteous. Copernican Fundamentalists dig the intellectualism of their belief because that's where status comes from these days, and they thrive off its so-called "humility." The more humble you are, the more you believe that humans are garbage heaps, and the more virtuous you become. If you believe it's "humble" to kill yourself so that someone can make an app with your face and voice, or to erase humans from the earth and let the vines grow free, then you have been brainwashed into a dangerous new age cult.
Again, AI is not using language, and it never can and never will, because it has no access to Unoptimized-Merge. It doesn't even know what the theory is. Even if you fed it my model, it wouldn't be able to live the model.
One way of inoculating yourself against this crisis is by talking more and more with people. Stretch the lexicon to its max by disagreeing almost on principle, while building trust that the disagreement is improving the relationship. Doing this on social media hasn't proven very successful, since algorithms are dictating the terms of language and producing the seeds of the very problem I'm describing. At the very least, call someone, or meet them in person. The other 99% of your life will be run with AI, but at least you'll be using real language rather than recycled plastic bags. Then, when the AI weapon hits and brings in the next aggression kernel, you'll be able to use this human lexical system to move forward. People who depend on social media and AI (and mass media in general) will likely be unable to even process what happened, because their lexicon won't have kept up with the crisis.
