Memristors have attracted interest for mimicking synapses in more energy-efficient, scalable approaches to "brain-like" neuromorphic computing. However, their intrinsic device-to-device variability has limited the performance of memristor-based neural networks. Now researchers in Beijing have shown that by introducing "fuzziness" into their neural network learning algorithm, they can build synaptic memristor circuits that perform better in neuromorphic computing tasks.