The Next Fear on AI - Hollywood's Killer Robots

An article in the New York Times

https://www.nytimes.com/2023/05/05/us/politics/ai-military-war-nuclear-weapons-russia-china.html?smid=url-share

Our comment

SnapChatGPT isn't the enemy - the enemy is extreme gullibility. SnapChatGPT joins bits of text together based on statistics, without understanding the meaning of a single word.
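To make the "joins bits of text together based on statistics" point concrete, here is a toy sketch (ours, purely illustrative - the corpus and names are invented, and a real LLM is enormously more sophisticated): a bigram babbler that picks each next word only because it is statistically plausible, never because anything is understood.

    import random
    from collections import defaultdict

    # A toy bigram "babbler": it strings words together purely from
    # co-occurrence statistics, with no notion of what any word means.
    # (Illustrative corpus and names only; real LLMs are far more complex.)
    corpus = ("in the midst of chaos there is also opportunity "
              "in the midst of battle there is also confusion").split()

    # Record which words have been seen following which.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def babble(start, length=8):
        """Generate text by repeatedly sampling a statistically plausible next word."""
        word, out = start, [start]
        for _ in range(length):
            if word not in follows:
                break
            word = random.choice(follows[word])
            out.append(word)
        return " ".join(out)

    print(babble("in"))  # e.g. "in the midst of battle there is also opportunity"

The output can sound fluent, and that fluency is exactly what invites the gullibility we are talking about.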


Artificial General Intelligence (AGI) will require somewhere between a hundred and a thousand times the effort that went into building SnapChatGPT and the other LLMs, and it will require people with completely different training.

 

All the talk about demanding safety, trustworthiness and loyalty from one of these things is just that - talk. There is nothing inside the machine to connect those concepts to.

Until there is an English-language interface between us and the machine, such concepts are pie in the sky.

 

But why would we want AGI? People have a Four Pieces Limit on their conscious minds, meaning we are not very good at thinking about complex or rapidly evolving situations, such as a battle.

AGI can help in the midst of chaos, while LLMs just regurgitate.

 

Sun Tzu – “In the midst of chaos, there is also opportunity”. 

Machines that know themselves would be nice.
