Computers are everywhere: in our pockets, in our offices, and in our cars. Some are even advanced enough to be called artificially intelligent (‘AI’). Yet, we still don’t have walking-and-talking robots. At CogniGron, Steve Abreu is designing programming methods for neuromorphic computers. Here, he describes why brains are still more intelligent than computers.
By Steve Abreu / Photos: Henk Veenstra
Over the past decades, some computers have become increasingly “intelligent”. More than 20 years ago, IBM’s Deep Blue computer beat Garry Kasparov at chess. Six years ago, Google’s AlphaGo system beat Lee Sedol at the game of Go. Two years ago, OpenAI’s GPT-3 system wrote an article in The Guardian about AI being harmless to humans. The article was edited by a human editor, but the content was generated by GPT-3. Evidently, progress in artificial intelligence has been incredibly fast and impressive.
And yet, we still don’t have self-driving cars. We still don’t have walking-and-talking robots to help us in our everyday lives. Why is that? In my PhD research at the Bernoulli Institute, I work on advancing AI with novel computers. This line of research leads to deep questions, not just about technology but also about the nature of intelligence itself. What is limiting progress in AI? Why are brains more intelligent than our computers?
First of all, it is important to realize that artificial intelligence is really quite different from human intelligence. The difference is not surprising. AI runs on digital hardware, which processes digital information step-by-step according to programmed rules, or algorithms. Human intelligence runs on biological “wetware”, which processes information simultaneously in billions of neurons, according to physical dynamics and chemical reactions. If we want to build computers that can drive, walk, and talk, we may want to look to the brain for inspiration.
Much of today’s progress in AI already comes from deep learning, where models of neural networks are simulated on digital computers. Such models can be trained to perform tasks without being explicitly programmed. This means we can train a deep learning system to recognize different faces without needing to specify how this should be done. However, we are only simulating these neural network models on the same digital hardware that was designed for managing Excel spreadsheets or playing video games. This is slow, energy-intensive, and limits us to neural networks that are much smaller and simpler than the human brain. Conventional computing technologies are facing fundamental limits, which prohibit us from designing and training larger neural networks. Therefore, we must look to new kinds of computing devices to enable us to scale to larger and better AI systems.
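The idea of learning from examples rather than from explicit rules can be illustrated with a toy sketch. The snippet below trains a single artificial neuron (a perceptron) on a handful of labeled examples; the task, data, and learning rate are all hypothetical stand-ins, far simpler than a real deep learning system, but the key point is the same: nowhere do we write down the rule the neuron ends up learning.

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Train a single artificial neuron from labeled (features, label) examples."""
    n = len(data[0][0])
    w = [0.0] * n  # weights start at zero: no rules are programmed in
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # learning rule: nudge the weights to reduce the error
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# toy task: the neuron learns logical AND purely from examples
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(examples)
print([predict(w, b, x) for x, _ in examples])  # [0, 0, 0, 1]
```

A real face-recognition network works on the same principle, just with millions of neurons instead of one, which is exactly why simulating it on conventional digital hardware becomes so costly.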
Neuromorphic computers take inspiration from how the brain processes information by building computers made of neural networks directly in the physics of the device. Building physical neural networks into our computers makes neural computation more energy efficient and allows for more accurate modeling of neural dynamics. Neuromorphic chips can be manufactured using the same materials we use for digital computer chips, but in the new CogniGron center at the UG novel “cognitive materials” are also being investigated. These materials promise more efficient memory and learning for next-generation computers.
There are two main goals of neuromorphic computing. First, by building systems that work like the brain, we can build more powerful and energy efficient computers, which may open the doors for the next generation of artificial intelligence. Second, by building a system that emulates the brain, we improve our understanding of how the brain works and how it gives rise to intelligent behavior. Neuromorphic computing requires a truly interdisciplinary effort, connecting materials scientists, neuroscientists, device engineers, computer scientists, and cognitive scientists under a unifying objective of building brain-like computers.
Why do we still use digital computers and not neuromorphic ones? Digital computers are easily programmable and a single computer can run many different programs. You can use the same computer to receive emails, edit spreadsheets and watch movies. In contrast, programming neuromorphic systems for different tasks is not as easy. As owners of neural networks, we know from experience that we cannot directly tell a neural network what to do (don’t think of a pink elephant). Similar difficulties arise when working with neuromorphic computers. We have found ways to program, or train, artificial neural networks on digital computers. But the same methods do not work in analog computers, or in novel cognitive materials. To make neuromorphic computers useful, I aim to develop novel methods for programming them. To achieve this, I work with different neuromorphic computers to design programming methods and training methods within the constraints of the given physical system.
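Why standard training breaks down on analog hardware can be sketched in a few lines. On a digital computer we can compute gradients through the network; an analog device is more like a black box that we can only run and score. One simple gradient-free alternative is to search over parameter settings and keep the best. The code below is a minimal, hypothetical illustration of that idea; the "device" here is a toy function, not a real neuromorphic chip, and random search is just one of many possible gradient-free methods.

```python
import random

def tune_blackbox(device, n_params, trials=200, seed=0):
    """Gradient-free tuning: 'device' is a black box we can only run and
    score, as with an analog chip whose internals give us no gradients.
    We try random parameter settings and keep the best one found."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(trials):
        params = [rng.uniform(-1, 1) for _ in range(n_params)]
        score = device(params)  # run the device, observe its behavior
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# hypothetical stand-in "device": scores highest when parameters are near 0.5
def toy_device(params):
    return -sum((p - 0.5) ** 2 for p in params)

params, score = tune_blackbox(toy_device, n_params=2)
```

The appeal of such methods is that they only need to run the physical system and measure the outcome, which is something any neuromorphic device can support; the challenge is making them efficient enough for systems with many parameters.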
I currently work with the DynapSE2 chip, which was designed by researchers from the Institute of Neuroinformatics in Zurich and the University of Groningen. This analog chip contains 1024 neurons, each of which can be connected with up to 64 other neurons. All neurons process information at the same time, and the chip only consumes energy when information is processed. Standard ways of training neural networks on digital computers cannot be applied on the DynapSE2, so we have to radically re-think how to program, or train, such a computer. The DynapSE2 serves as a test bench for ideas that can later be scaled up to larger neuromorphic chips.
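The neurons on a chip like the DynapSE2 communicate with brief electrical pulses, or spikes, rather than with the continuous numbers used in deep learning. A simple software model of one such spiking neuron is the leaky integrate-and-fire neuron, sketched below. This is an idealized, discrete-time illustration with made-up parameters, not a model of the actual analog circuits on the chip.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Simulate a leaky integrate-and-fire neuron over discrete time steps.

    The membrane potential leaks toward zero each step, integrates the
    input, and emits a spike (1) whenever it crosses the threshold,
    after which it resets to zero."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i          # leak a little, then integrate the input
        if v >= threshold:        # threshold crossing -> spike
            spikes.append(1)
            v = 0.0               # reset after spiking
        else:
            spikes.append(0)
    return spikes

# a constant drive of 0.3 per step makes the neuron fire periodically
print(lif_neuron([0.3] * 10))  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Note how the neuron stays silent while its potential builds up and produces output only at the moment it spikes; this event-driven behavior is why such chips consume energy only when information is actually processed.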
As a Marie Curie fellow in the European project “Post-Digital”, I get to collaborate with colleagues in different research institutions across Europe. Currently, I am on a three-month visit at the Institute of Neuroinformatics in Zurich to collaborate with other researchers on the DynapSE2 chip. Later this year, I will join a research group at Ghent University in Belgium to expand my research to optical neuromorphic computers. Optical computers leverage laser technology for optical signals traveling at the speed of light, which is a significant advantage over the much slower transmission of electrical signals in electronic computers.
I want my research to contribute to the democratization of AI. At present, large companies that can afford expensive supercomputers have a monopoly on AI models because our laptops and smartphones are not powerful enough. In a neuromorphic future, each one of us would be able to carry around a personalized intelligent AI in our pocket, without depending on large organizations to process our data.
This article was created in collaboration with MindMint.