AI and Productivity
Productivity in Australia grew by 2.2% a year in the 1990s. That slowed to 1.4% in the 2000s and 1.1% in the 2010s. Recent productivity performance is even worse, hardly higher than a decade ago (Productivity Commission).
Almost all the low-hanging fruit in productivity gains has been picked, so we have to move further along the path to human understanding. Can AI help?
There is heavy promotion of LLMs (Large Language Models) to do so. LLMs were invented to avoid the “million hits” message from a Search Engine. In the early days (circa 2000), a Search Engine treated the words in a request as separate keywords and searched for them individually, so a dog park request would be searched as “dog” and “park” and return a flood of hits. Now the Search Engine searches for the phrase “dog park”, gets far fewer hits, and takes the most popular one.

This is fine for indexing, but the method has obvious holes. Dog parks can be “segregated” according to size of dog, or the different sizes of dog can be kept “separate”. An LLM has no understanding of what words mean, so it cannot use synonyms unless they are explicitly provided in the prompt. It cannot reason about what a retrieved article is saying, and it works poorly in a dynamic environment, where a new article represents current reality but is too new to be popular. Tariffs are a good example: they change by the day in both directions, so the date of publication of an article becomes very important. To make matters worse, LLMs have “hallucinations” about 50% of the time: they make things up, and then argue with you if you tell them they are wrong. The upshot: it would be unwise to hope for any great improvement in productivity using LLMs.
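The keyword-versus-phrase distinction above can be sketched with a toy example (illustrative only; real search engines use inverted indexes and ranking, and the documents below are invented):

```python
# Toy illustration of keyword search vs phrase search.
# Shows why matching "dog" AND "park" anywhere returns more hits
# than matching the exact phrase "dog park".

docs = [
    "the dog park is segregated by size of dog",
    "park your car near the dog groomer",
    "small dog owners kept separate in the park",
    "a national park with no dogs allowed",
    "the new dog park opened yesterday",
]

def keyword_hits(query, docs):
    """Match documents containing every query word, anywhere in the text."""
    words = query.lower().split()
    return [d for d in docs if all(w in d.split() for w in words)]

def phrase_hits(query, docs):
    """Match documents containing the exact phrase."""
    return [d for d in docs if query.lower() in d]

print(len(keyword_hits("dog park", docs)))  # 4 hits: any doc with both words
print(len(phrase_hits("dog park", docs)))   # 2 hits: only the exact phrase
```

The phrase match is narrower, but as the examples in the text show (“segregated”, “separate”), neither method understands what the words mean.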
An alternative is Semantic AI, where the text describing a complex situation is turned into an active structure, so that non-specialists can see how something is meant to work. But first, let’s talk about something that keeps holding us back: the Four Pieces Limit. Humans have a very severe limit on their input capacity: no more than four concepts can be kept in play at once, and any more than that are handled as constants. This means we are brilliant at relatively simple problems, or problems supported by a large, well-known infrastructure, and hopeless when many factors must be taken into account simultaneously. The recent push for Complex Systems Engineers to do economic modelling is a recognition that economists are not good at handling complex problems using simple tools and their own understanding: the Four Pieces Limit forces them to cut a problem into small pieces, destroying its integrity in the process. (We are CSEs, but we want to provide a tool for non-experts to use, not one that we have to drive.)
Areas badly needing assistance:
Skulduggery
Robodebt – Australian Social Services legislation was twisted to suit political ideology when implemented as a program. Two suicides, $1.7 billion in reparations, hundreds of thousands of people traumatised. Semantic AI would have given people working in the area a tool to show that the program was (maliciously) wrong.
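A hedged sketch of how an executable model of the rules would have exposed the error. Robodebt’s central flaw was averaging annual income evenly across fortnights instead of assessing the income actually earned in each fortnight; the payment rates and taper below are invented for illustration and are not the real Social Services figures:

```python
# Toy model of the Robodebt flaw: the legislated rule assesses the
# benefit against income actually earned in each fortnight, but the
# program smeared annual income evenly across all 26 fortnights.
# All dollar figures are hypothetical.

FORTNIGHTS = 26
INCOME_FREE_AREA = 300   # hypothetical fortnightly income-free area
TAPER = 0.5              # hypothetical reduction per dollar over the area
BASE_RATE = 550          # hypothetical full fortnightly payment

def entitlement(fortnight_income):
    """Payment for one fortnight under the (toy) legislated rule."""
    excess = max(0, fortnight_income - INCOME_FREE_AREA)
    return max(0, BASE_RATE - TAPER * excess)

# A person who worked half the year: high income in 13 fortnights, none in the rest.
actual = [2000] * 13 + [0] * 13

# Lawful assessment: fortnight by fortnight.
lawful = sum(entitlement(i) for i in actual)

# Robodebt-style assessment: annual income averaged evenly.
averaged_income = sum(actual) / FORTNIGHTS
averaged = sum(entitlement(averaged_income) for _ in range(FORTNIGHTS))

print(lawful, averaged)  # 7150.0 vs 5200.0: averaging understates the entitlement
```

Running both rules side by side makes the discrepancy visible immediately: the averaged model claims the person was entitled to less than the legislation actually provides, manufacturing a “debt” out of a calculation method. That is the kind of mismatch an active structure built from the legislation would have surfaced.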
Horizon – British sub-post office program: four suicides, seventy people illegally incarcerated, £1.7 billion in reparations. Turning the specification into a working model would have shown the mistake was in the program, not in the actions of its users.
Boeing 737 MAX MCAS – anti-stall software and sensor: 346 deaths from two crashes, and Boeing lost US$20 billion. Boeing staff lied to the FAA, installed software and a sensor capable of crashing the plane without any redundancy, and didn’t change the Flight Manual. This would be a hard one to catch: it would require monitoring outgoing orders and inputs to the factory. It is made harder by the fact that the FAA is perennially understaffed and appoints staff of the manufacturer as temporary FAA inspectors, so the machine could be reporting an infraction to the person causing it. Productivity has many inhibitors.
Unfamiliarity
Defence has relatively few projects, so a project is likely to have a team largely unfamiliar with large and complex projects, and (with a bit of hubris mixed in) it makes predictable mistakes. Turning the large amount of text associated with these projects into working models would help to cut down on the mistakes and save billions (the MRH-90 as an example: a failure, with $4 billion wasted).
Just Complex
Some areas, such as Central Banking and Trade Flows, are just complex – too many things happening at once. Semantic AI moves the management of complexity to the machine, while allowing humans to communicate with the machine in their own language. This should improve productivity considerably.
Conclusion
We need to increase our rate of productivity improvement, but we are unlikely to do so without coming to terms with our limitations. One way is to get help with large slabs of text that currently bamboozle us (including the experts in the area).
In Australia, Orion Semantic AI
In USA, ActiveStructure Semantic AI