
Using human learning methods to create human-like intelligence

Digital art of a humanoid robot reading a book. Image by Duncan Rawlinson - Duncan.co (CC BY 2.0)

 

Passing the Turing test is no longer groundbreaking news. The test involves a computer and a human conversing with a judge, who must then determine which is the computer. However, even if the computer successfully deceives the judge, that does not prove it thinks the way a human does. While artificial intelligence (AI) models excel at specific tasks when trained on vast amounts of data, humans acquire knowledge and skills from far less experience and subconsciously apply them to new tasks. For instance, while AI models can outperform humans at tasks like text refinement[1], they still cannot match a human’s driving skills[2]. To truly emulate human capabilities, AI must possess instinctive responses, logical thinking, planning, understanding, and even the ability to experience emotions. The path to achieving this may lie in developing Autonomous Machine Intelligence.

Recently, Yann LeCun, a French computer scientist and winner of the Turing Award, proposed a novel way of training AI around a world model. The world model represents the AI agent’s ‘common sense’. Just as a human who sees an apple drop from a tree can predict it will land on the ground, the world model can predict future states of the world from the current input state[2]. By integrating this world model with other modules, the agent can reason and plan without explicit instruction: it can simulate the possible actions, obtain their predicted consequences from the world model, and then select the best one to execute. This mimics human reasoning and planning.
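To make this concrete, below is a minimal, hypothetical sketch in Python of that ‘imagine, evaluate, act’ loop: a toy world model predicts the consequence of each candidate action, and the agent picks the one whose predicted outcome is closest to its goal. The class name, dynamics, and cost function are illustrative assumptions, not LeCun’s actual architecture.

```python
# A toy sketch of planning with a world model: the agent "imagines" the
# consequence of each candidate action and executes the best one.
# The dynamics and cost function here are made up for illustration.

import random


class ToyWorldModel:
    """Stand-in for a learned model that predicts the next state of the world."""

    def predict(self, state: float, action: float) -> float:
        # Hypothetical dynamics: the action nudges the state, which also decays slightly.
        return state + action - 0.1 * state


def cost(predicted_state: float, goal: float) -> float:
    """Distance between the predicted outcome and the goal (lower is better)."""
    return abs(predicted_state - goal)


def plan(model: ToyWorldModel, state: float, goal: float,
         candidate_actions: list[float]) -> float:
    """Simulate every candidate action with the world model and return the best one."""
    best_action, best_cost = None, float("inf")
    for action in candidate_actions:
        imagined = model.predict(state, action)  # consequence is predicted, not executed
        if cost(imagined, goal) < best_cost:
            best_action, best_cost = action, cost(imagined, goal)
    return best_action


if __name__ == "__main__":
    model = ToyWorldModel()
    candidates = [random.uniform(-1.0, 1.0) for _ in range(10)]  # possible actions to consider
    chosen = plan(model, state=0.0, goal=0.5, candidate_actions=candidates)
    print(f"Chosen action: {chosen:.3f}")
```

In a real system the world model would be a learned neural network and the candidate actions would come from a policy or optimiser, but the planning loop follows the same pattern of imagining outcomes before acting.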

Another study examined whether ChatGPT, a popular chatbot, has a Theory of Mind[3], that is, the ability to infer the thoughts and beliefs of another person. The research demonstrated that ChatGPT performs impressively on such mind-reading tests. Interestingly, this ability might have emerged as a byproduct of the model’s improving language ability rather than being deliberately engineered[3].

Regardless of the specific approach, be it a world model or models built on sophisticated language abilities, training AI in a manner that aligns with the human learning process can give the models human-like characteristics. While AI research has made tremendous strides, achieving truly human-like AI entails more than just task performance: it requires replicating the essence of human cognition and behaviour.

 

[1] https://www.washingtonpost.com/technology/2022/12/10/

[2] https://openreview.net/pdf?id=BZ5a1r-kVsf

[3] https://arxiv.org/ftp/arxiv/papers/2302/2302.02083.pdf

 

NB: Please note that ChatGPT was partly used to generate this article.

 

Edited by Hazel Imrie

Copy-edited by Rachel Shannon

