LLM versus AGI
LLMs, we are told, do everything we need: put in a few words and you get back hundreds of words, specially crafted for you. Not quite. An excellent FT article provides a good basis for comparing LLM and AGI approaches to understanding text.

You put in a few words and the LLM starts writing from the prompt you have given it. If it cannot find a link for one of your search terms (the words in the prompt), it may drop that term without comment. Nothing in the system understands the meaning of the words (how could it, when its training text comes from so many different fields?). Nothing understands how strings of prepositional phrases should be unravelled (because that requires meaning), or how words should be clumped into objects before being operated on by modifiers (e.g. "an imaginary flat surface" or "he went to jail for murder"). The basic idea, that a few words can be accurately turned into a few hundred or a few thousand words by keying off the words alone, is deeply flawed.
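To make the "keying off the words" point concrete, here is a minimal Python sketch, with a small hand-built frequency table standing in for the billions of statistics a real model learns. It is not how any particular LLM is implemented, only an illustration of the principle: each word is chosen from statistics over preceding words, and no step ever consults a representation of what the words mean.

import random

# Toy stand-in for a language model: counts of which word tends to
# follow which. The next word is keyed off the preceding word, not
# off any representation of meaning.
bigram_counts = {
    "an":        {"imaginary": 3, "excellent": 2},
    "imaginary": {"flat": 4},
    "flat":      {"surface": 5},
    "surface":   {".": 6},
    "he":        {"went": 4},
    "went":      {"to": 5},
    "to":        {"jail": 2, "the": 3},
    "jail":      {"for": 3},
    "for":       {"murder": 2},
}

def next_word(word):
    """Pick a continuation weighted by how often it followed `word`."""
    candidates = bigram_counts.get(word)
    if candidates is None:
        return None            # no statistics: the thread simply stops
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights, k=1)[0]

def generate(prompt, max_words=10):
    """Continue the prompt word by word, keying only off the last word."""
    output = prompt.split()
    for _ in range(max_words):
        word = next_word(output[-1])
        if word is None or word == ".":
            break
        output.append(word)
    return " ".join(output)

print(generate("he went"))     # e.g. "he went to jail for murder"
print(generate("an"))          # e.g. "an imaginary flat surface"

Real models condition on a long window of preceding tokens rather than a single word, and weight them with learned attention, but the mechanism is still statistical continuation: nothing in the loop knows what "jail" or "surface" refers to.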