
Mindful Machines: The underlying mechanisms of Advanced Artificial Intelligence

Neural networks. Image by Pascal Lemaître (CC0)

 

Imagine a world where machines can process and discriminate information the way the human brain does. This curiosity sparked the development of artificial neural networks (ANNs): computing systems inspired by the structure and functioning of the human brain. Their design enables algorithms to learn patterns and make predictions from data. ANNs offer expansive computational possibilities, and they now serve as the interface between neuroscience and computer science.

ANNs underpin a concept known as deep learning, a type of machine learning that leverages ANNs to solve complex tasks, mirroring the cognitive capabilities of the human brain. The term “deep” refers to the use of many stacked layers of nodes, which parallels the layered architecture of the network of neurons in the human brain. This structure enables machines to automatically learn hierarchical representations of data, allowing them to comprehend intricate patterns and features[1].

ANNs are tremendously complicated, as they perform millions of calculations at both the network and single-node level. Data moves through each layer of nodes in one direction. A single node can be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data[2]. Every incoming and outgoing connection a node makes is assigned a number, referred to as a weight[2]. Weights indicate the strength and direction of the connection. When a node is activated, it receives a piece of information from each of its connections, and each piece is multiplied by the associated weight. All of these multiplied values are then added up to give a single number. If the number is below a threshold value, the node passes no data to the next layer of nodes. If the number exceeds the threshold, the node ‘fires’, meaning its number is sent along all outgoing connections[4]. Essentially, activated networks process information by taking an input, adjusting it based on importance, and combining it to produce the most appropriate output.
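The node described above can be sketched in a few lines of code. This is a minimal illustration, not any particular library's implementation: it assumes a hard threshold activation, and the input values, weights, and threshold are invented for the example.

```python
def node_output(inputs, weights, threshold):
    """A single artificial node: each incoming value is multiplied by its
    connection's weight, the products are summed, and the node 'fires'
    (passes the sum onward) only if the sum exceeds the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total if total > threshold else 0.0

# A node with three incoming connections (all numbers are made up):
fired = node_output([0.5, 0.9, 0.2], [1.0, -0.5, 2.0], threshold=0.3)
quiet = node_output([0.1, 0.1], [1.0, 1.0], threshold=0.5)
```

Here `fired` holds the weighted sum because it cleared the threshold, while `quiet` is 0.0 because its weighted sum did not, so nothing would be passed to the next layer.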

The ultimate objective of this programming is to train ANN models that discern intricate patterns. To this end, machines undergo a training phase, in which the weights of the network are adjusted through repetition. This process ensures that the ANN predicts the correct output for a given set of inputs. ANNs have catalysed huge progress on many different problems. In particular, they are propelling industries that rely on automated analysis of visual or auditory data, such as facial recognition, medical imaging, personalised treatment plans, and, in schools, adaptive learning systems. The majority of impressive advances in artificial intelligence have been achieved via large ANNs trained on substantial amounts of data[5].
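The idea of adjusting weights through repetition can be illustrated with a toy, perceptron-style training loop. This is a hedged sketch rather than how production networks are trained (those use gradient-based methods over many layers): the task (learning logical AND), the learning rate, and the number of repetitions are all invented for the example.

```python
def step(total):
    # Hard threshold activation: the node fires (outputs 1) only if
    # its weighted sum is positive.
    return 1 if total > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Repeatedly show the node each input/target pair and nudge the
    weights in the direction that reduces the error (perceptron rule)."""
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            pred = step(sum(x * w for x, w in zip(inputs, weights)) + bias)
            error = target - pred
            # Each weight is adjusted in proportion to its input and the error
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn logical AND from four examples (made-up training data):
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(data)
```

After training, the weighted-sum-plus-threshold node reproduces the correct output for every input pair, which is exactly the "predicts the correct output for a given set of inputs" goal described above, just at the smallest possible scale.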

 

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9673209/

[2] https://www.researchgate.net/publication/5847739

[4] https://deepai.org/machine-learning-glossary-and-terms

[5] https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414

 

Edited by Hazel Imrie

Copy-edited by Rachel Shannon

 

