A team of scientists at the nation's premier institute, the Indian Institute of Technology, Delhi (IIT-Delhi), has developed a new neuron model that will help in building accurate, fast, and energy-efficient neuromorphic Artificial Intelligence (AI) systems for applications like speech recognition. The team published its findings in Nature Communications last week.
In the human brain, neurons and synapses are believed to be the key building blocks giving rise to intelligence. The new model, called the Double Exponential Adaptive Threshold (DEXAT) neuron, mimics this behaviour and is intended to make neuromorphic AI systems more accurate and efficient.
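The article does not spell out the model's equations, but the name points to an adaptive-threshold spiking neuron whose firing threshold relaxes back with two exponential time constants, one fast and one slow. The Python sketch below is a minimal, assumption-laden illustration of that general idea only; the function name, parameter names, values, and update rules are placeholders chosen for readability, not the study's actual formulation.

```python
import numpy as np

# Illustrative sketch of an adaptive-threshold spiking neuron whose threshold
# is shaped by two exponentially decaying adaptation variables. All names and
# values here are assumptions for illustration, not from the IIT-Delhi study.

def simulate_adaptive_threshold_neuron(input_current, dt=1e-3,
                                        tau_mem=20e-3,               # membrane time constant (s)
                                        tau_a1=30e-3, tau_a2=300e-3, # fast / slow adaptation constants
                                        beta1=0.5, beta2=0.5,        # adaptation strengths
                                        v_th0=1.0):                  # baseline threshold
    v = 0.0          # membrane potential
    a1 = a2 = 0.0    # fast and slow adaptation variables
    spikes = []
    for i_t in input_current:
        # Leaky integration of the input current
        v += dt * (-v / tau_mem + i_t)
        # Effective threshold rises with both adaptation variables
        v_th = v_th0 + beta1 * a1 + beta2 * a2
        if v >= v_th:
            spikes.append(1)
            v = 0.0      # reset membrane potential after a spike
            a1 += 1.0    # both adaptation variables jump on a spike...
            a2 += 1.0
        else:
            spikes.append(0)
        # ...and decay back with two different time constants,
        # giving the threshold its double-exponential relaxation
        a1 -= dt * a1 / tau_a1
        a2 -= dt * a2 / tau_a2
    return np.array(spikes)

# Example: under a constant drive, the firing rate slows down as the
# threshold adapts on both the fast and the slow timescale.
spike_train = simulate_adaptive_threshold_neuron(np.full(2000, 60.0))
print("spike count:", spike_train.sum())
```

In a scheme of this kind, the fast component lets the neuron recover quickly after a spike while the slow component carries longer-term adaptation; combining two timescales in the threshold is what would plausibly let fewer neurons capture temporal structure in signals such as speech.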
"Over the last few years, we have successfully demonstrated the utilisation of memory technology beyond simple storage. We have efficiently utilised semiconductor memory for applications such as in-memory-computing, neuromorphic-computing, edge-AI, sensing, and hardware security. This work specifically exploits analogue properties of nanoscale oxide-based memory devices for building adaptive spiking neurons", Professor Manan Suri, lead author of the study, said.
According to the team, the invention helps achieve high performance with fewer neurons, and its benefits were demonstrated on multiple datasets. The researchers reported a classification accuracy of 91 percent on the Google Speech Commands (GSC) dataset. The nano-device neuromorphic network was also found to reach 94 percent accuracy even with very high device variability.