Metaphorically Speaking
Humans most need help when the task they are dealing with is new, and its complexity exceeds their (very low) input limit.
Some examples:
Money Laundering – Poorly written legislation sets a limit ($10,000) below which transactions are not checked, so feed a hundred envelopes through a deposit ATM and a million dollars is laundered in 10 minutes. Automated, labour-saving banking processes, with no intelligence behind them. This is legislation, changing at a glacial rate, attempting to thwart a nimble, intelligent adversary, and failing badly.
Robodebt – turning legislation into a program – hundreds of thousands of people traumatised, 2 suicides, $1.7 billion in reparations. Lies, skulduggery, stacking of the board reviewing decisions. Corruption, or looking the other way, all the way down – only a few people risking their careers by speaking out about the “stealing”.
Horizon (UK) – turning a specification into a program – thousands traumatised, 13 suicides, 70 wrongly incarcerated, £1.7 billion in reparations. Rank incompetence, covered up with lies and skulduggery. Killing people to try to save a reputation.
Boeing 737 MAX MCAS – deliberate lying to the regulator, resulting in 346 deaths and a $20 billion hit to its reputation – shoddy engineering, shoddy civics, an overflowing order book, no consequences. The worst crime was trying to put the blame on "third world pilots", when a ticking time bomb had been added, without their knowledge, to a plane they were familiar with.
Lockheed Martin F-35 – an average design (slow, with a slow climb rate) saved by a better missile. Many cascading mistakes – stealth paint that washes off in the rain, a Navy undercarriage too weak to land on carriers. Component commonality was initially estimated at 75%; the tight aircraft design delivered 24% – a huge cost overrun (hundreds of billions), a decade of delays, and no hope of a fifty-year life (a new design commissioned from a different manufacturer after five years).
What could AI have done to avoid these disasters?
We need to emphasise that the examples stem mostly from skulduggery (either deliberate lies from the get-go, or trying to cover up incompetence) rather than overwhelming complexity. Lying adds a complexity which is usually ignored, but projects in the tens or hundreds of billions are rife with it (including lying to oneself).
We will be comparing LLMs with Semantic AI for their suitability for complex problems. The simplest solution for the first three examples is to turn the text into working models, but LLMs don’t allow that.
LLMs
This is not fertile ground for LLMs to prosper. They were invented for Search Engines, and can quickly find patterns of words. If you wanted to find something about “dog parks”, earlier Search Engines would look for “dog” and then for “park”, resulting in many irrelevant hits. Now the Search Engine looks for “dog park”, or even “segregated dog park”. But the Search Engine has no idea what the words mean, so it can’t make use of synonyms. “A park for dogs and their owners, split into different areas for different size dogs” is synonymous with “a segregated dog park”, but only to something that knows what the words mean. Synonyms can also be structural, so
To him, animals are more important than people
can be rearranged into
Animals are more important than people to him
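A minimal sketch (plain Python, my own illustration rather than anything from a particular engine) makes the gap concrete: literal phrase matching and word overlap find almost nothing in the synonymous description, even though a human reads the two as equivalent.

```python
# Why surface pattern matching misses meaning: the synonymous description
# shares almost no literal tokens with the query phrase.

def phrase_match(query: str, document: str) -> bool:
    """Literal phrase lookup - the kind of match a pattern-based engine makes."""
    return query.lower() in document.lower()

def bag_of_words_overlap(query: str, document: str) -> float:
    """Fraction of query words that appear verbatim in the document."""
    q = set(query.lower().split())
    d = set(document.lower().split())
    return len(q & d) / len(q)

query = "segregated dog park"
synonymous = ("A park for dogs and their owners, split into different "
              "areas for different size dogs")

print(phrase_match(query, synonymous))                     # False - no literal phrase hit
print(round(bag_of_words_overlap(query, synonymous), 2))   # 0.33 - only "park" matches; "dogs" != "dog"
```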
LLMs are based on a simple idea – patterns yes, meaning avoided like the plague – but the simplicity of their structure prevents their use wherever meanings are important, which is almost everywhere. They can be pushed a little further than the infuriating crudity of chatbots, but not much further, because the meaning of the text is ignored.
Training of LLMs
There is talk of training LLMs, but that does not mean training in anything to do with meaning. One form of training is to find a likely piece of text and send it off to another tool suited to the intended task. This is broadly unreliable – there is nothing that can put the right word in the right pigeonhole.
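As a rough illustration of the kind of hand-off described above (the routing table and tool names here are hypothetical, not any particular product's), keyword-based routing shows how easily the wrong word lands in the wrong pigeonhole:

```python
# Hypothetical keyword-to-tool routing: find a likely word, hand the text
# to another tool. Nothing here understands the text, so a figurative or
# incidental use of a word sends it to the wrong pigeonhole.

ROUTES = {
    "calculate": "calculator_tool",
    "date": "calendar_tool",
    "translate": "translation_tool",
}

def route(text: str) -> str:
    for keyword, tool in ROUTES.items():
        if keyword in text.lower():
            return tool
    return "default_handler"

print(route("Calculate the interest on the debt"))     # calculator_tool - plausible
print(route("He was out of date on the legislation"))  # calendar_tool - wrong pigeonhole
```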
Figurative Use
A reasonably generalist vocabulary runs to about 50,000 words, and along with those come about 10,000 phrases and figures of speech. Some examples:
A bridge too far
A fly in the ointment
A whale of a time
Beat around the bush
fly under the radar
for the time being
get on someone’s nerves
go along to get along
hand-to-mouth
have other fish to fry
Hoist by one’s own petard
Look the other way
Without a reasonable knowledge of idiom and metaphor, much technical text and general reporting would be unintelligible. This is a fundamental problem for LLMs – they have no idea what the fly or the ointment or the fish are about, or why someone would be beating around a bush, and they have no idea how to go about finding out what is meant. They are suitable only for the shallowest analysis of text – finding and copying a school homework essay, say.
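To make the point concrete, here is a minimal sketch (a toy phrase dictionary of my own, not a claim about how any particular system works) of why figurative phrases have to be recognised as units before anything sensible can be done with them:

```python
# Read word by word, "a fly in the ointment" is about insects and medicine;
# recognised as a unit, it means a small flaw that spoils something.
# The dictionary below is a toy - a real one would run to thousands of entries.

IDIOMS = {
    "a fly in the ointment": "a small flaw that spoils an otherwise good situation",
    "beat around the bush": "avoid addressing the real issue directly",
    "fly under the radar": "go unnoticed",
    "have other fish to fry": "have more important things to attend to",
}

def gloss(text: str) -> str:
    """Return the figurative sense if the text matches a known phrase,
    otherwise fall back to a literal word-by-word reading."""
    key = text.lower().strip()
    if key in IDIOMS:
        return IDIOMS[key]
    return f"literal reading of: {text}"   # no idea what is actually meant

print(gloss("A fly in the ointment"))   # figurative sense found
print(gloss("A whale of a time"))       # not in the dictionary -> literal nonsense
```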
I am willing to believe that LLMs aren’t going to do the hard stuff. But what about something else – say Neurosymbolics?
What is the argument for converting something described in English into a much inferior language that most people do not use to describe a complex problem? Neurosymbolics is a mishmash of technologies – a mixture of directed resistor networks, a clumsy and largely unworkable logical structure which puts the logic remote from the operational structure, and bits of English (which folds predicative and existential logic into its own structure – “he wanted her to know the truth about John” – how do you separate the logic from the rest of the text?). Neurosymbolics is the latest, and hopefully last, effort to avoid the work of making a machine “understand” English.
Neurosymbolics is not going to conquer the world in the way that English has, and it’s not going to help with another pressing need humanity has – collaboration among specialists.
Collaboration
If a project is large and important, it will incorporate the work of many specialties. Let’s take Domestic Violence: there would be lawyers, police, politicians, psychologists, psychiatrists, trauma specialists, culture specialists, Complex Systems engineers, and the general public. These people all talk different dialects of technical English. The machine’s job is to be familiar with those dialects and meld them into a complete picture, comprehensible to all the parties.
Semantic AI
Semantic AI takes each word or phrase in the text and turns it into either an object or an operator in an undirected, self-extensible network, where predicative, existential and temporal logic is diffused through the network. Yes, it needs a large vocabulary and lots of metaphorical phrases, but if it is to interact with humans, it has to speak their lingo, not force them to crush the problem description into some cramped Mickey Mouse language, cooked up by someone who is less than literate (and wouldn't understand the problem anyway).
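As a rough illustration only (my own toy sketch, not the Orion representation), the general idea of words and phrases becoming objects or operators in an undirected, self-extensible network might look something like this:

```python
# Toy illustration: every word or phrase becomes a node - an object or an
# operator - in an undirected network that extends itself as text is read.
# This is a sketch of the general idea, not any real system's data model.

from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    kind: str                                   # "object" or "operator"
    links: set = field(default_factory=set)     # undirected connections (node labels)

class SemanticNetwork:
    def __init__(self):
        self.nodes: dict[str, Node] = {}

    def add(self, label: str, kind: str) -> Node:
        """Create the node if it is new - the network extends itself."""
        return self.nodes.setdefault(label, Node(label, kind))

    def connect(self, a: str, b: str) -> None:
        """Undirected link: each node knows about the other."""
        self.nodes[a].links.add(b)
        self.nodes[b].links.add(a)

# "Animals are more important than people to him"
net = SemanticNetwork()
net.add("animals", "object")
net.add("people", "object")
net.add("him", "object")
net.add("more important than", "operator")   # comparison operator linking two objects
net.add("to", "operator")                    # scopes the comparison to "him"
net.connect("more important than", "animals")
net.connect("more important than", "people")
net.connect("to", "more important than")
net.connect("to", "him")

print(net.nodes["more important than"].links)   # {'animals', 'people', 'to'}
```

Because the links are undirected, the rearranged sentence "To him, animals are more important than people" would build exactly the same structure, which is the structural-synonym point made earlier.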
Orion Semantic AI