Darwin’s “survival of the fittest” model was combined with Gregor Mendel’s genetics to form what’s known as the Modern Synthesis (I’ll call it Synthesis I). Synthesis I was relatively easy to arrive at: the science was already mostly in place for both Darwin’s and Mendel’s schools. All one had to do was combine the two fields and test them against data.
Synthesis I has failed to pinpoint the origins of human language. The problem is that language scientists assume Synthesis I holds the answers to the origins of language, yet they haven’t found them. There is no language gene, no cluster of language genes, and no understanding of what language is on a basic descriptive level. There is a general understanding that language has a variety of “aspects” to it, and so language scientists attempt to find these aspects in various animals like chimps, thinking this is how they can find the genetic roots of language. Again, this has failed repeatedly. Derek Bickerton, one of the last great linguists, spends 250 pages in “More Than Nature Needs” tossing and turning over this problem because he assumed Synthesis I could solve it, and as a result the whole thing is a convoluted mess.
What’s worse is that AI technology is proceeding on the assumption that the language problem has been solved. AI scientists believe language can be artificially created by copying normal language, giving a bunch of feedback, and letting CPUs run for a while.
In reality, AI is not using language. It’s using brute-force data entry techniques to estimate language. It can’t create language. It can only translate it in the most robotic ways. It’s a very complex Google Translate service and nothing else. It cannot generate a grammar because the programmers have no idea what grammar is or how it forms. And they don’t care.
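The “estimation” being described can be made concrete with a deliberately tiny sketch: a bigram model that “learns” language purely by counting which word follows which, with no notion of grammar at all. This is my own toy illustration, not how production systems are actually built (those use neural networks at enormous scale), but it shows the statistical character of the approach in miniature:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "copying normal language".
corpus = "the cat sat . the cat sat . the dog ran .".split()

# Count which word follows which -- pure frequency, no grammar.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Pick the most frequent continuation seen in the corpus."""
    return follows[word].most_common(1)[0][0]

# "Generate" text by chaining the most likely continuations.
out = ["the"]
for _ in range(4):
    out.append(next_word(out[-1]))
print(" ".join(out))  # reproduces corpus statistics, nothing more
```

The model can only echo the distribution of its training text; it has no representation of why one sequence is grammatical and another is not, which is the point of the criticism above.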
Should we stop AI for this? Probably. Can we? No. I would, however, urge AI developers to reevaluate whether Synthesis I actually gives them the tools they need to understand language modeling. I believe, in fact, that they’re just generating chimp language that sounds human, and at this rate it will make apocalyptic errors that they will have to keep building guardrails against. This will persist as long as Synthesis I is used as the foundation of AI.
There are other flowering sciences of language out there, like the generative language movement (Chomsky’s group), and I think Generative Anthropology (GA) has a lot going for it. But this isn’t enough. All of these assume some kind of smooth continuity between humans and chimps, and all feature the same glaring blind spot that I’ve been railing about for months now. (As a caveat, I think this is GA’s only shortcoming, an unresolved problem that I’ve presented on at their conferences, and the response has generally been positive.) This blind spot is the specificity of human violence, which needs to be defined as Recursive Object-Based Aggression (ROBA). (I’ve also been calling this object-based combat/OBC, but ROBA is catchier.)
Synthesis I has failed to account for language. It has also failed to account for human violence. The “science” of violence is non-existent: it’s a bunch of zoologists, archaeologists, psychologists, urban planners, statisticians, etc. using the same “aspects” technique as the language scientists: list the criteria for human violence, find them in animals, innovate pharmaceuticals or implants that might resolve these issues, plan cities differently, and so on. These are very low-quality works on violence resolution. Higher-quality works are linguistic in nature: how to use language to defer violence. Christopher Blattman’s book “Why We Fight” doesn’t pretend to know the science: he just treats violence as a linguistic issue, and his linguistic resolution is economic. Solid stuff. There are surely many other linguistic resolutions too, and these should be debated at length.
Violence is a linguistic issue, not a biological one. It’s a linguistic issue because it’s resolved through linguistics. It’s not biological because you can’t resolve it through biology. Violence is human. Recursive Object-Based Aggression (ROBA) is as integral to the human condition as the spine and lymph nodes and anything else. ROBA is who we are. Language is the resolution.
Since Synthesis I has failed on both language and violence, the human condition, as it pertains to violence and language, evades the standard scientific model.
A hypothetical “Synthesis II” would combine the science of linguistics with a science of violence. As of now there’s no science of violence, but one would be easy to build. First, we have to get out of the Copernican trap of assuming human aggression is chimp aggression. Rather, human violence is ROBA, which is produced from the Unoptimized scenario. ROBA is a scientifically separate category from animal combat. There are no moral terms or ethics attached: ROBA features object-based combat; animal combat does not. Animal combat, at best, features object-based intimidation, but this has no feedback loop for repeating object usage in combat. It’s merely an anomaly, like a chimp learning some signs from sign language, or a parrot copying Simpsons quotes.
But in human violence, object-usage is a constant. Perhaps object usage, which only “reached” the intimidation part of the chimp brain, succeeded in “reaching” the combat portion of our brains, and there it remains, creating a circular, apocalyptic ulcer, which instantly produces grammar as we know it. Otherwise this ulcer would have killed us at its first instantiation.
With ROBA as the foundation of a science of violence in one hand, and a science of language in the other, we can probe the depths of ROBA to show how it creates the Merge function that is crucial to linguistic science.
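For readers unfamiliar with it, Merge in Chomsky’s minimalist program is the operation that takes two syntactic objects and forms a new object from them, applying recursively to build unboundedly deep structure. A minimal sketch of the standard, label-free definition follows; the encoding of syntactic objects as Python frozensets is my own illustrative choice, not anything from the linguistics literature:

```python
def merge(x, y):
    """Merge two syntactic objects into the unordered set {x, y}.

    Because the result is itself a syntactic object, Merge can
    apply recursively to its own output -- hierarchy for free.
    """
    return frozenset({x, y})

# Recursion builds hierarchy, not a flat string:
# {the, {old, dog}} is a different object from {{the, old}, dog}.
np1 = merge("the", merge("old", "dog"))
np2 = merge(merge("the", "old"), "dog")

assert np1 != np2  # same three words, two distinct structures
```

The point of the sketch is only that a single two-place operation, applied recursively, suffices to generate unbounded hierarchical structure; any account of where Merge comes from has to explain the recursion, which is exactly the role the text assigns to ROBA.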
Combine a science of violence with linguistics, and we can have Synthesis II.
