An international team of 23 researchers has published a review article that examines the state of neuromorphic technology and presents a strategy for building large-scale neuromorphic systems. The study, “Neuromorphic Computing at Scale,” appears in Nature.
The research is part of a broader effort to advance neuromorphic computing, a field that applies principles of neuroscience to computing systems to mimic the brain’s structure and function. Neuromorphic chips have the potential to outpace traditional computers in energy and space efficiency as well as performance, offering substantial advantages across domains including artificial intelligence, health care and robotics, according to the scientists. With the electricity consumption of AI projected to double by 2026, neuromorphic computing stands out as a promising alternative.
The authors say that neuromorphic systems are reaching a “critical juncture,” with scale being a key metric to track the progress of the field. Neuromorphic systems are rapidly growing, with Intel’s Hala Point already containing 1.15 billion neurons. The team argues that these systems will still need to grow considerably larger to tackle extraordinarily complex, real-world challenges.
Brain-inspired approach
“Neuromorphic computing is a brain-inspired approach to hardware and algorithm design that efficiently realizes artificial neural networks. Neuromorphic designers apply the principles of biointelligence discovered by neuroscientists to design efficient computational systems, often for applications with size, weight, and power constraints,” write the authors.
“With this research field at a critical juncture, it is crucial to chart the course for the development of future large-scale neuromorphic systems. We describe approaches for creating scalable neuromorphic architectures and identify key features. We discuss potential applications that can benefit from scaling and the main challenges that need to be addressed.
“Furthermore, we examine a comprehensive ecosystem necessary to sustain growth and the new opportunities that lie ahead when scaling neuromorphic systems. Our work distils ideas from several computing sub-fields, providing guidance to researchers and practitioners of neuromorphic computing who aim to push the frontier forward.”
“Neuromorphic computing is at a pivotal moment, reminiscent of the AlexNet moment for deep learning,” said Dhireesha Kudithipudi, PhD, the Robert F. McDermott Endowed Chair in Engineering at the University of Texas at San Antonio (UTSA) and founding director of MATRIX: The UTSA AI Consortium for Human Well-Being, who served as the lead author.
“We are now at a point where there is a tremendous opportunity to build new architectures and open frameworks that can be deployed in commercial applications. I strongly believe that fostering tight collaboration between industry and academia is the key to shaping the future of this field. This collaboration is reflected in our team of co-authors.”
Last year, Kudithipudi secured a $4 million grant from the National Science Foundation to launch THOR: The Neuromorphic Commons, a first-of-its-kind research network providing access to open neuromorphic computing hardware and tools in support of interdisciplinary and collaborative research.
Need for a wider array of user-friendly programming languages
In addition to expanded access, the team also calls for the development of a wider array of user-friendly programming languages to lower the barrier of entry into the field. They believe this would foster increased collaboration, particularly across disciplines and industries.
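To make that concrete, below is a minimal sketch, in plain Python, of the kind of abstraction such a language would need to expose to newcomers: a single leaky integrate-and-fire (LIF) neuron, the basic spiking unit of most neuromorphic systems. The class name and parameter values are illustrative assumptions, not an API from the paper or from any particular framework.

```python
# Minimal sketch (illustrative, not from the paper): a leaky
# integrate-and-fire (LIF) neuron, the kind of primitive a
# user-friendly neuromorphic language would let users declare
# instead of hand-coding. All parameter values are assumptions.

class LIFNeuron:
    def __init__(self, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        self.tau = tau            # membrane time constant (ms)
        self.v_rest = v_rest      # resting potential
        self.v_thresh = v_thresh  # firing threshold
        self.v_reset = v_reset    # potential after a spike
        self.v = v_rest           # current membrane potential

    def step(self, input_current, dt=1.0):
        """Advance one timestep; return True if the neuron spikes."""
        # Leaky integration: the potential decays toward rest while
        # being driven upward by the input current.
        self.v += (dt / self.tau) * (self.v_rest - self.v + input_current)
        if self.v >= self.v_thresh:
            self.v = self.v_reset  # fire and reset
            return True
        return False

# Usage: drive the neuron with a constant current for 100 steps.
neuron = LIFNeuron()
spike_times = [t for t in range(100) if neuron.step(input_current=1.2)]
print(spike_times)  # sparse spike events rather than dense activations
```

The point of the sketch is the programming model rather than the physics: computation is event-driven, and the output is a sparse list of spike times instead of dense numerical activations, which is exactly the style of thinking a higher-level language would need to make accessible.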
“Twenty years after the launch of the SpiNNaker project, it seems that the time for neuromorphic technology has finally come, and not just for brain modeling, but also for wider AI applications, notably to address the unsustainable energy demands of large, dense AI models,” noted Steve Furber, PhD, emeritus professor of computer engineering at the University of Manchester. Furber specializes in neural systems engineering and asynchronous systems. He led the development of the million-core SpiNNaker1 neuromorphic computing platform at Manchester and co-developed SpiNNaker2 with TU Dresden. “This paper captures the state of neuromorphic technology at this key point in its development, as it is poised to emerge into full-scale commercial use.”
To achieve scale in neuromorphic computing, the team proposes several key features that must be optimized, including sparsity, a feature observed in biological brains. The brain develops by forming numerous neural connections (densification) before selectively pruning most of them, a strategy that optimizes spatial efficiency while retaining information at high fidelity. If successfully emulated, sparsity could enable neuromorphic systems that are significantly more energy-efficient and compact.
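As a rough illustration of that densify-then-prune strategy, the sketch below forms many random connections and then keeps only the strongest by magnitude. It is a generic toy example under our own assumptions (helper names and the 10% keep fraction are invented for illustration), not an algorithm from the Nature paper.

```python
import random

# Toy sketch of densify-then-prune (assumptions throughout): start
# with a dense set of random synaptic weights, then prune all but
# the largest-magnitude fraction, as biological development does.

def densify(n_connections):
    """Form many candidate connections with random strengths."""
    return [random.gauss(0.0, 1.0) for _ in range(n_connections)]

def prune(weights, keep_fraction=0.1):
    """Keep only the strongest connections by |weight|; zero the rest."""
    k = max(1, int(len(weights) * keep_fraction))
    threshold = sorted(abs(w) for w in weights)[-k]  # k-th largest magnitude
    return [w if abs(w) >= threshold else 0.0 for w in weights]

dense = densify(1000)
sparse = prune(dense, keep_fraction=0.1)
kept = sum(1 for w in sparse if w != 0.0)
print(f"retained {kept}/{len(sparse)} connections "
      f"({100 * kept / len(sparse):.0f}% density)")
```

In hardware, the payoff of this kind of sparsity is that pruned connections need not be stored or computed at all, which is where the energy and area savings the authors describe would come from.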
“This paper is one of the most collaborative efforts to date toward outlining the field of neuromorphic computing with emphasis on scale, ecosystem and outreach between researchers, students, consumers and industry,” said Tej Pandit, a UTSA doctoral candidate in computer engineering and one of the co-authors. “Representatives of many key research groups came together to share crucial information about the current state and future of the field with the goal of making large-scale neuromorphic systems more mainstream.”
Pandit is pursuing his doctoral degree at UTSA under Kudithipudi’s supervision. His research focuses on training AI systems to learn continually without overwriting previously learned information, a topic on which he recently published.