Perhaps the reason driverless cars are “uncanny” is that the average person can’t Merge with a car. When we drive a car, we are using a tool that carries our linguistic expression. We can detect hesitation, anger, happiness, and so on by watching how another’s car moves, which lets us predict the flow of traffic. A driverless car, however, exhibits non-linguistic behavior. It doesn’t hesitate, get angry, or get excited. It doesn’t flash its lights at us angrily. It doesn’t honk at us when we do something silly. It doesn’t wave at us if it recognizes us.
Drivers who also understand computer programming can, however, Merge with driverless cars, because they understand the language of the coder behind the car’s driverless software. The average person is at a loss for how to interpret the car’s behavior. Say the driverless car is programmed to follow the lines in the road, but the city road contractor has decided to scrub off the lines today and replace them with cones. The driverless car might not acknowledge cones as “lines” and will veer into a driver’s lane, infuriating the driver, who honks and screams at the oblivious driverless car as it continues at a 45-degree angle toward the sidewalk. This is uncanny and unacceptable behavior for a car, but it’s understandable if you’re a coder. In this case, the city contractor and the car programmer are not speaking the same language and can’t Merge, and the burden devolves on the poor driver. If the driver were a coder, he would notice the driverless car entering a lineless road and proceed with caution until, as he predicted, the car went off course because it lost its tracking abilities.
We can see this same phenomenon with websites. My father is like many people in that he is frustrated by website forms because he doesn’t understand how they work. For example, to sign up for something, he’s required to enter his email address, which he often does using autocomplete. For some reason, autocomplete tends to leave an extra space at the end, which he doesn’t notice and wouldn’t consider an issue anyway. He hits “submit” on the form, but the form gives him a generic error: “invalid email.” He squints, trying to be sure he spelled his email properly. He tries again, but there’s still a space at the end, which he can’t see if the cursor isn’t on that field. “Invalid email,” it says again. After three tries it locks him out, assuming he’s a spammer. What he doesn’t realize is that the website coder was lazy and forgot to put a trim() function on the email field variable on submission, which would have eliminated the extra spaces that are so commonplace when using autocomplete. When the back-end code checks my dad’s form data, it sees an illegal “space” character and issues a generic error, just as it would for other characters that are not allowed in email addresses, such as $ or <>. (The reason these characters are illegal is that they’re commonplace programming characters and might be exploited for passing code through an email server. In the early 90s a hacker might have tried using an email like ericjacobus<? do; echo “Hello World”; loop; ?>@aol.com to make the server computer print “Hello World” forever when it reads my email header.)
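The missing trim() fits in a few lines. Here is a minimal sketch in Python (the function names, the regex, and the sample address are my own illustration, not any particular site’s code):

```python
import re

# A simplified email pattern of the kind a form validator might use.
# A trailing space fails the anchored match, producing "invalid email."
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def validate_lazy(email: str) -> bool:
    # The lazy coder's version: no trim, so autocomplete's
    # stray trailing space makes a valid address fail.
    return bool(EMAIL_RE.match(email))

def validate_fixed(email: str) -> bool:
    # One strip() call (Python's trim()) removes the leading and
    # trailing whitespace before the check, and the form works.
    return bool(EMAIL_RE.match(email.strip()))
```

With this sketch, `validate_lazy("dad@example.com ")` rejects the address while `validate_fixed("dad@example.com ")` accepts it: the entire difference between my father getting locked out and signing up normally is one function call.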
As in the driverless-car scenario, the coder and my father are not speaking the same language. If my father were a programmer, he would have noticed the additional space and removed it, and the form would have worked. But ultimately the coder is at fault, since he is the speaker, my father is the hearer, and the web form is the symbol. Web coders still make this error pretty regularly. It’s not just sloppy coding. It’s bad linguistics.
We can see Merge happening in foods too. I recently ate a donut in Las Vegas and it tasted exactly like a donut from Redding, like a donut from Boston, like a donut from Beijing. Some combination of base ingredients in the donut dough is responsible for this flavor. It might be the shortening, yeast, or vanilla extract, or some combination of them all, stemming from some chemical factory that originally produced candle wax or nerve gas or something. I have no idea and it doesn’t matter: the donut-maker wants to be sure their donut tastes the same as every other donut, whether it’s glazed, chocolate-covered, or jelly-filled. It can then be identified as a “donut” despite all these variations. This is important because the “donut” signal is firmly rooted in the human lexicon, at least in America. This signal might remind you of a county fair, or Sunday mornings at church. For me, it conjured memories of a busy flea market in my home town, which also contained a loud recycling center, so the same donut taste usually brings with it the smell of old beer and soda cans. When I eat a donut, it’s like watching the Vietnam War on TV in the 70s: I know that when I experience this signal I am sharing in a massive network of fellow consumers. The donut manufacturer knows that billions of people want to tap into this virtual social network and will rely on the same recipe to deliver this promise.
A donut every now and then is fine, but I notice the sense of nausea much more quickly than I used to. Perhaps this nausea was always there, even in the days when I’d eat a few donuts each week (my dad sometimes brought home extras, which he kept on his tool truck). Mainstream American institutions in the 50s tried desperately to convince us that this sickness was due to saturated fat, which would justify their use of unsaturated fats being produced cheaply from canola seeds, fats that are far less naturally occurring than the saturated fats you find in coconuts, avocados, and animals. Maybe I always interpreted the nausea as an integral part of the signal: the nausea is proof that the donut signal is “correct.”
What happens when we allow donut nausea to continue for years on end? I’m not sure health experts understand this. Science is still in a developmental stage where it is too obsessed with the component parts of things (in this instance, health) to consider the holistic effects of Unoptimized-Merge (UOM). UOM is the uniquely human shared moment of anticipation, rooted in object-based combat, and lasting much longer than any animal anticipation due to the wildcard nature of our violence. UOM defers violence by verifying and parsing signals shared between the antagonists (while most other signals are rejected). Each UOM signal carries the deep hierarchy of the recursive nature of antagonism itself, now transposed to words. This, in my opinion, is how signals come loaded with grammatical elements. When we eat a donut, though the donut flavor “signal” is distanced from any kind of Combat, this only reveals the power of UOM: the UOM field is always open and can always take advantage of signals to defer violence, in this case when we’re at the county fair or at a noisy flea market. UOM ensures that, no matter where I happen to be, I can derive some kind of “meaning” from the signal, and anyone else eating the donut will share in this. It creates social harmony and laughter and all this, which is great. But after years of this? We’ve been intuiting that processed foods are bad, but we aren’t really touching on why. Perhaps food shouldn’t be using UOM like this. It points too directly to a human producer, which is not how we’re designed to eat food.
Unoptimized-Merge allows us to use machines, packaging, art, donuts, website forms, driverless cars, and other human products. Through these we communicate (Merge) with their producers on a deep level. On one side of the “conversation” is the consumer. On the other side is the producer. In the middle is the product, which serves as the linguistic symbol being exchanged. A complex network of transportation, communication, and finance separates the consumer from the producer. All of it is baked into this little linguistic symbol, the product.
The product is synthetic, but so is human language in a way. Think of the word “widget,” which combines various odd facial gestures and sounds, and probably a slight squint, to denote a shared understanding of this thing. There’s even a bit of comedy in the word “widget.” We imagine a widget to be a carabiner with a USB dongle, a Kleenex box with a Magic 8 Ball inside, or a paperweight that farts. All this is baked into the word. It might even conjure the image of the person who created the term. When we say “Windows 10” we know that this stems from Bill Gates. And yet the words “widget” and “Windows 10” don’t really introduce anything foreign into the situation. But when we are Merging over a driverless car or a broken website, what conversation exactly are we having? Most of us aren’t having a conversation at all. It just feels broken, like the driverless car isn’t listening to us. Even if every driver knew programming and could understand why driverless cars are careening onto the sidewalk, even if every website user understood web form coding, would these be “healthy” conversations? Are these the equivalent of donut nausea? Are we getting nauseous over the donut only because of the ingredients themselves? Or because there’s something *uncanny* about the situation itself?
To get to my ultimate point: AI is everything these days, and everyone is arguing over what are, in my opinion, secondary issues. The risk of an AI blowing up a rocket over New York or crippling the yen is an obvious problem. The less obvious problems are the purely linguistic ones. Let’s say we make AI “totally safe.” We run all the same clinical trials, and after 10 years AI is just as rampant as donuts. No donut has massacred a city, and no donut has toppled an economy. Donuts are pleasant, fun, and harmless. Certainly AI companies want the same to be true of their AI platforms. All of them want the economy humming along pleasantly like the rest of us. Let’s assume they get their wish, and in 2034 AI is on every device and makes everything super quick and simple. If you want to watch a Viking movie starring Bruce Willis as though it were made in 1935, some streaming service will generate that for you on the fly, and it will look and sound *mostly* correct. With one vocal command your phone will mine your bank accounts and file all your 1099s with the IRS. You will converse with AI on a regular basis and it will seem totally fine.
However, you might feel nausea, or perhaps other unpleasant sensations, from interacting with AI so much. What is that? The experts will first attribute it to the Wi-Fi signal, which will have to be pretty strong in 2034. Then they’ll attribute it to the radiation coming off the new television sets. Then your phone. Then your couch, the ingredients in your AI-cooked food, the insulation in your house. AI bots will go and change it all to meet modern scientific standards.
The sensation might remain. Why don’t Japanese fishermen feel these sensations? “More fish,” the quack scientist says. AI will bring you a bunch of fish that are every bit as healthy as those the 106-year-old Japanese fisherman eats, but the sensation remains.
These are the less important, secondary issues. The more important issue is UOM.
If my model is correct, then the continual intake of Unoptimized-Merge signals (human language) modifies the phenotype, leading to self-domestication. Use of language with animals will domesticate them as their own brains struggle to Merge with us, the apex predator. They can only do so much, since they are already optimized for a limited signal set with shallow hierarchies (because their combat is a closed loop, while ours is not, so their signals can’t carry a deep recursive structure). And yet domestication modifies the phenotype, which helps, and their behavior shifts a bit over time to communicate with us. Use of language, under this model, might lead to gradual brain growth, hair changes, white extremities (palms, sclera), all of which are an attempt to improve Unoptimized-Merge between people. But how does this system respond to “products” like donuts, driverless cars, and AI movies?
AI movie platform producers are not using “language” to make their content. Signals that you share with them via comments, liking certain parts of the AI movie, un-liking other parts, etc., are aggregated to retrain the model and produce “better” AI until your 1935 Bruce Willis Viking Die Hard movie looks like something actually produced at RKO studios. In your rational mind, this is real filmmaking. But in your Deep Structure, to use Chomsky’s term, you intuit that 1) this experience is slightly off, and 2) nobody is sharing in it with you. You know full well that Viking Die Hard is not being generated by another person at the other end of the “conversation.” There is no John McTiernan on the other end, no Bruce Willis talking about how hard it was to keep the Viking helmet from falling off his head, no human discussions anywhere. It’s just you and whatever people are in your house at that moment. You are perfectly alone with the AI now, in a closed-loop system, which your Deep Structure cannot fathom, yet it’s forced to Merge with it despite it being so weird, despite the nausea, despite the uncanniness. And yet it will seem fine. It’s a hell of a deal too. You get to watch whatever you want for $20/month. What do you ask it for next? Maybe you will try to recreate a previous Newflix experience. The language you develop merges with the same recycled plastic language that the AI is generating behind the scenes. There is no progress in this language. Just a closed loop, like talking to your dead ancestor or an imaginary friend. Can you use this to communicate with other people? Will you want to? Nausea might be the least of your worries; you might experience new illnesses that nobody has recorded before, illnesses from *within*, like cancer, but even less predictable.
Maybe that will be when we start detecting “AI Poisoning.” It will be blamed on all kinds of secondary stuff, chemtrails and hormones or whatever, but we’ll be missing the source of it. The standard model can’t figure out human language. And yet we’ve gone and done all this as though the model makes perfect sense. Too much money was on the table to do otherwise.
We’ve been told for well over a century now that language is just something that evolved bit by bit, that it was a mere extension of animal language, and therefore AI can just be an extension of our modern language and be just as useful and progressive. But Unoptimized-Merge requires that we communicate with a party on the other side of the product, or else our phenotype will be at a loss for which way to take us. UOM is the Deep Structure, and language is an interface. If we tinker with the interface without knowing the Deep Structure, we have no idea what we’re doing or what effects it will produce.