A mentor asked me a good question regarding my last post: if interacting with AI as though it is human is detrimental to the human phenotype (per my Unoptimized-Merge hypothesis), isn’t that also true of mass media in general, or of financial markets, which have always functioned as a sort of massive neuron network?
I would argue that the difference between the neuron networks of mass media and other human institutions is that the communication is still between humans, with the media acting as a proxy. Mass media is still a Parsing scheme. The audience “says” something (“We want more Beatles,” or there’s an uptick in album sales) and the media “says” something back (more Beatles tours, more albums). Unoptimized-Merge validates or rejects these signals depending on whether they’re shared. If the new album tanks, Unoptimized-Merge has failed this scheme. If the album goes gold, it Parses the scheme into the next action, for example issuing ten more albums just like this one.
Governments operate similarly. The populace “says” something (they rob more food stores, they have more babies, or they leave in droves) and the government “says” something in return (deploying more police, increasing welfare, etc.). The feedback loop is more direct on a local level, less direct nationally, and even less direct internationally (WEF, IMF, NATO, etc.). Unoptimized-Merge will issue Parsing schemes for any size of human population. Mass media technology is a way for a smaller and smaller number of leaders to issue Parsing schemes to a larger and larger number of subjects.
Large-scale communications systems are mostly one-way Parsing schemes. When the audience “asks” the system something, they either interact with a technician, clerk, etc., or they get silence. So either there’s real Parsing involved, or the audience realizes they’re not talking to anybody. Sure, many will believe that the latest celebrity quote is about them, or that Obama or Trump is talking directly to them through the TV, but we can categorize this as delusional, or at least romantic.
Social media offers a slightly different Parsing platform. One average person can suddenly, technically, interact with eight billion people. This interaction is moderated by an algorithm that determines what you and your audience want to see. The algorithm itself is notoriously inhuman, and people on social media spend lots of time trying to figure out how to actually Parse with the people behind the curtain. The biases, politics, and religious views of the people running the platform are what trickle down to the social media user, and the average person will sooner or later grasp what the platform is really about. They will intuit whether or not it’s even worth interacting with people on there anymore, since all the users’ language is mediated by this scheme. Average people turn into monsters on social media, not because they’re monsters, but because the scheme of the platform restricts Parsing to insane levels. I’ve often called people and asked them to clarify some stupid things they’ve said online, and we proceed to have a healthy conversation that is otherwise not possible on the platform.
I don’t know how much longer these healthy off-platform conversations will be possible. I notice that youths who grew up on social media appear to have no decent Parsing skills when talking to adults.
AI is a secondary (or subsidiary) Parsing scheme, arguably even less personal than social media algorithms. The audience can now “ask” the system something, and it will respond without any human agents actually Parsing anything. AI’s so-called “language” is recycled, but it gives the impression of actual language.
The only Parsing on AI, as on social media, is done through the curators’ updates and whatever guardrails they install. We actually see this with some high-end AI users who try to find the human agency behind AI so they can actually Parse with it and interact with the curators and programmers. The latter, in effect, keep trying to hide their presence by making the guardrails more invisible. But the vast majority of users, 99.9%, will come to believe that AI is actually Parsing, and that it’s therefore “sentient,” and so their own concept of Parsing will degrade far below the social media user’s concept of Parsing. We see it underway today. The idea of “language” is already degraded to the point where even our oldest thinkers believe that birds and chimps use grammatical language just like humans, but that we’re just too “dumb” to understand it. This will be the elites’ argument about AI as well: if we can’t Parse properly with it, then it’s our fault for being too stupid to comprehend it.