Are We Any Nearer to AGI?
What should we expect from AGI?
We should expect something that starts out with:
· An understanding of complex text (legislation, specifications, instructions), with an ability to actively link multiple documents.
· An ability to read and make sense of photographs and diagrams.
· An understanding of reality and of moving pictures (videos).
· An understanding of simple mathematics, physics and chemistry, with an understanding of how to increase the complexity to match the problem.
In other words, the equivalent of a first-year university student, plus an ability to link many things at once that far exceeds a human’s severely limited capacity (about four concepts held simultaneously). This creates a problem: the person or persons posing the problem will have insufficient “bandwidth” to describe it fully in the first place, and will need to be led through it.
So are we getting any closer to AGI?
Unfortunately, no.
Generative AI
Generative AI has largely taken over AI, and this is antithetical to the first goal, understanding text. Generative AI can search a very large dataset (all the text on the internet) and may find an article written by a human that links the words used in the prompt (if it finds many such articles, it takes the most popular one). When it works well, it seems to work effortlessly, but it is not really AI; rather, it is a lookup service over a huge dataset. The problem comes when nothing matching the prompt can be found (the problem is new or emerging), or when you wish to combine pieces of information into a larger synthesis. Generative AI does not understand the meaning of a single word, so synthesising a more complex entity is impossible for it.
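To make the “lookup service” characterisation concrete, here is a minimal sketch in Python, using an invented toy corpus (not any real system): it scores stored articles by word overlap with the prompt, returns the most popular match, and returns nothing when the prompt describes something new.

```python
# Toy sketch of "generative AI as lookup" (assumption: a hypothetical mini-corpus).
# Score each stored article by word overlap with the prompt, break ties by popularity,
# and fail when nothing overlaps. The point is only that retrieval cannot synthesise
# an answer that is not already present in the corpus.

CORPUS = [
    {"title": "Container ship stowage basics",
     "text": "containers are stacked in tiers on deck", "popularity": 120},
    {"title": "Intro to stack data structures",
     "text": "a stack supports push and pop operations", "popularity": 300},
]

def lookup(prompt: str):
    prompt_words = set(prompt.lower().split())
    scored = []
    for doc in CORPUS:
        overlap = len(prompt_words & set(doc["text"].lower().split()))
        if overlap > 0:
            scored.append((overlap, doc["popularity"], doc))
    if not scored:
        return None  # a genuinely new problem: nothing to look up
    # best overlap first, then the most popular article among equals
    scored.sort(key=lambda s: (s[0], s[1]), reverse=True)
    return scored[0][2]

print(lookup("how are containers stacked on a ship"))      # finds the stowage article
print(lookup("novel emerging problem with no precedent"))  # None: nothing to return
```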
Neurosymbolics
Neurosymbolics is often touted as a stepping stone to AGI.
This seems unlikely, as it throws away information about the text. English has
a logic of its own, where a group of words has a special meaning – “a house of
cards”, “an imaginary flat surface of infinite extent” (a “plane”, where
“imaginary” covers “infinite extent”), “electronic funds transfer instruction”.
Neurosymbolics throws away the connections between words, replacing them with logical operators that know nothing about how English words relate to one another. Needless to say, throwing information away is not the best way to proceed toward understanding something. It also assumes that the words and symbols completely express the problem, when many problems are only partially known, and much hypothetical reasoning will be required to discover the full outline of the problem (or to arrive at a solution expressed as several hypothetical alternatives).
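To illustrate the kind of information loss at issue, here is a minimal Python sketch with an invented symbol table (not any real neurosymbolic toolkit): converting words one at a time into logical atoms discards the group meaning of “a house of cards”, which is not recoverable from its parts.

```python
# Toy sketch of the claimed information loss (assumption: hypothetical symbol table,
# not a real neurosymbolic system). Word-by-word conversion into logical atoms
# drops the idiomatic meaning carried by the word group as a whole.

PHRASE_MEANINGS = {
    ("house", "of", "cards"): "FragileStructure(x)",  # meaning lives in the word group
}

WORD_SYMBOLS = {
    "house": "Building(x)",
    "of": None,              # dropped as a stop word
    "cards": "PlayingCard(x)",
}

def symbolise_word_by_word(words):
    """Naive conversion: each word becomes an isolated predicate; connections are lost."""
    return [WORD_SYMBOLS[w] for w in words if WORD_SYMBOLS.get(w)]

def symbolise_with_phrases(words):
    """Keeps word groups intact, so the idiom survives as a single predicate."""
    return [PHRASE_MEANINGS.get(tuple(words), symbolise_word_by_word(words))]

phrase = ["house", "of", "cards"]
print(symbolise_word_by_word(phrase))   # ['Building(x)', 'PlayingCard(x)'] - idiom gone
print(symbolise_with_phrases(phrase))   # ['FragileStructure(x)'] - group meaning kept
```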
Reading of Photographs and Diagrams
There will be many AGI problems where the problem cannot be described in text alone, or where a failure to read a photo or diagram leads to an undetected inconsistency. An example is a photo of a loaded container ship. The accompanying text talks about a stacking level (containers placed on top of one another) of seven or nine, while the photo shows a stacking level of eleven (inconsistencies will be rife in descriptions that use different media forms).
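As a minimal sketch of the cross-check an AGI would need to perform here (Python, with the numbers taken from the example above; the text parsing and image analysis are stubbed out as assumptions), reconciling a stacking level claimed in text against one counted from a photo:

```python
# Toy sketch of a text-vs-photo consistency check (assumptions: the stacking levels
# are hard-coded stand-ins; a real system would extract them from the document text
# and from image analysis of the photograph).

def stacking_levels_from_text() -> set:
    # Stand-in for parsing the accompanying text, which mentions seven or nine tiers.
    return {7, 9}

def stacking_level_from_photo() -> int:
    # Stand-in for counting container tiers in the photo of the loaded ship.
    return 11

def check_consistency() -> list:
    claimed = stacking_levels_from_text()
    observed = stacking_level_from_photo()
    if observed not in claimed:
        return [f"Inconsistency: text claims stacking of {sorted(claimed)}, photo shows {observed}"]
    return []

for issue in check_consistency():
    print(issue)
```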
The problem of inconsistencies is very likely to occur in large and complex projects, typically those where tens or hundreds of billions of dollars can be wasted through unfamiliarity (the US F-35 project being an example).