INSA versus English


I used Google to find something on INSA.

Found: https://petervoss.substack.com/p/insa-integrated-neuro-symbolic-architecture

Some comments:

It sounds great (except for the part about Neural Nets and a few other points). How does it compare with English? You have described INSA's abilities using English; would you care to describe the abilities of English using INSA? Would you recommend that people learn INSA instead of English? If not, why not? (Sorry, by English I mean any natural language used in an advanced society to describe existing and newly emerging technology.) What will you do with figurative language - "raised the bar", "a bridge too far" - and all the nuance that English allows?

English is used by over a billion people - how many people will end up using INSA? Will we be back to the bad old days of people effectively programming things they don't understand? You seem to be introducing a translation stage that is unnecessary and dangerous. It would be good if normal folk (non-INSA speakers) could understand exactly what the machine has been instructed to do, without requiring someone else to tell them.

English uses the same word for different parts of speech (up to five) and with up to 60 different meanings. Yes, it is a problem, but billions of people have learnt to handle it - it can't be that hard.

When you talk about Neural Networks, are you talking about directed resistor networks, or real neural networks that can turn themselves inside out? Whoever thought to call them ANNs should have been tarred and feathered and run out of town on a rail. ANNs don't seem to mesh with "the inability for them (LLMs) to learn incrementally in real-time" (and the implied ability of INSA to do so).

"the powerful non-brittle pattern matching ability of neural networks" - are these ANNs? If so, you assemble this beast, it drives around the corner (in a figurative sense) and breaks down (encounters something unexpected). "Non-brittle" means it can be repaired back at base, but it does not mean "self-extending", which would seem an obvious goal of AGI.

Are you committed to handling mental states? The hardest part of AGI will be explaining to a person why something is a good idea when they can't understand why, because of their limitations, and they become enraged at appearing to be stupid. A deep knowledge of mental states (including irrationality and psychosis) will be necessary.

If INSA is intended for a small subset of the world's problems, that’s fine, but you need to say so.

Our goal with Semantic AI is to handle thousand-page pieces of legislation (with hundreds of links to other legislation), or hundred-thousand-page specifications of complex equipment (cut up into reasonable chunks – 3,000 pages for the undercarriage, with lots of links). Translating them into a computer language sounds like the death of a million cuts, particularly as lots of things go unsaid in English (too trivial to mention). Translation without understanding is a minefield.
