Technology
Mimicking the sparsity in the brain
Neural networks, biological and artificial
Neural networks have become so associated with machine learning that sometimes we computer scientists forget the origin of the name. The silicon neural networks we work with were inspired by the biological neural networks in our very own brains.
And while we may have made remarkable strides towards matching the ability of the computational engines in our skulls, we still have a long way to go: as Mitchell Waldrop describes in this Nature article, the brain “can carry out computations that challenge the world’s largest supercomputers […] in a package that is smaller than a shoebox, consumes less power than a household light bulb, and contains nothing remotely like a central processor.”

Sparse Coding
In recent years, some artificial intelligence researchers have gone back to the brain for further inspiration. They seek something that can be transferred to artificial neural networks, something that will let us unlock more of the raw efficiency and performance tantalizingly promised by biology. One idea that has stood out is sparse coding. Peter Kloppenburg and Martin Paul Nawrot’s 2014 paper summarizes sparse coding as when “a specific stimulus [in the brain’s sensory system] activates only a few spikes in a small number of neurons.” In other words, although the brain’s neural network is densely connected, most stimuli activate (“spike”) only a small number of neurons, leading to a sparse representation of each stimulus in the brain.
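To make that concrete, here is a minimal sketch (in Python/NumPy, with made-up sizes) contrasting a dense representation of a stimulus with a sparse one, where only a handful of a layer’s neurons are nonzero:

```python
import numpy as np

# Toy illustration with made-up sizes: a layer of 10,000 neurons responding to one stimulus.
rng = np.random.default_rng(0)
num_neurons = 10_000

# Dense coding: essentially every neuron carries some nonzero activation.
dense_response = rng.random(num_neurons)

# Sparse coding: the same stimulus "spikes" only a handful of neurons; the rest stay at zero.
sparse_response = np.zeros(num_neurons)
spiking = rng.choice(num_neurons, size=50, replace=False)
sparse_response[spiking] = rng.random(50)

print(np.count_nonzero(dense_response))   # ~10,000 neurons active
print(np.count_nonzero(sparse_response))  # 50 neurons active
```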
Sparse coding has spent the past several years as a cutting-edge, but not quite practical, approach to improving AI technology. ThirdAI’s new sparse coding implementation finally delivers on the promise of a brain-inspired neural network that isn’t just different from its more traditional peers, but is actually better than them.

How to spike? Use memory, and not just for storing parameters
But replicating that biological magic trick is no longer out of reach. With clever use of data structures and memory lookups rather than expensive brute-force computations, we can achieve the efficiency promised by sparse coding in the brain. Essentially, instead of exhaustively evaluating and then sorting neuron activations, we can reorganize the neurons themselves in computer memory so that neurons with similar activation patterns are stored close together.
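One common way to realize this idea, shown here purely as an illustrative sketch rather than ThirdAI’s actual implementation, is locality-sensitive hashing: hash each neuron’s weight vector with a scheme such as random hyperplanes, so that neurons whose weights point in similar directions land in the same bucket. Finding the neurons likely to spike for a given input then becomes a hash lookup instead of a scan over the whole layer. The NeuronHashTable class below is a hypothetical name used only for this sketch:

```python
import numpy as np

class NeuronHashTable:
    """Sketch of a locality-sensitive index over neurons: neurons whose weight
    vectors are similar (and so tend to fire for similar inputs) share buckets."""

    def __init__(self, weights: np.ndarray, num_bits: int = 8, seed: int = 0):
        # weights: (num_neurons, input_dim) matrix, one row per neuron.
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((num_bits, weights.shape[1]))
        self.buckets = {}
        for neuron_id, w in enumerate(weights):
            self.buckets.setdefault(self._signature(w), []).append(neuron_id)

    def _signature(self, v: np.ndarray) -> int:
        # Bucket key: which side of each random hyperplane the vector falls on.
        bits = (self.planes @ v) > 0
        return int(bits.astype(int) @ (1 << np.arange(len(bits))))

    def query(self, x: np.ndarray) -> list:
        # Candidate neurons to "spike" for input x: a memory lookup, not a full scan.
        return self.buckets.get(self._signature(x), [])
```

In a sketch like this, the neurons that share a bucket with the input are exactly the “stored close together” neighbors described above, and retrieving them costs a hash computation rather than a pass over every neuron.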


The real difference: performant associative memory versus expensive similarity search
Even with more advanced memory-based frameworks, most information retrieval algorithms for finding similar neurons simply carry too much overhead compared to optimized dense matrix multiplication. For every sample and for every layer, a sparse coding algorithm must query the neuron similarity index and return a list of neurons to spike, a performance cost that quickly adds up. At ThirdAI, we’ve built an efficient associative memory step that makes our neural networks faster than the state of the art.
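As a rough illustration of where the time goes, here is what one layer’s forward pass for one sample might look like when neuron retrieval is a cheap lookup. This reuses the hypothetical NeuronHashTable sketched earlier and is not ThirdAI’s code; the point is that only the retrieved neurons pay for a dot product, so per-sample work scales with the number of spiking neurons rather than the full layer width, provided the lookup itself stays cheap:

```python
import numpy as np

def sparse_layer_forward(x, weights, bias, table):
    """One sample through one layer: query the (hypothetical) neuron index,
    then compute activations only for the neurons it returns."""
    active = table.query(x)                 # associative-memory lookup
    output = {}
    for neuron_id in active:                # dot products only for the spiking neurons
        pre = weights[neuron_id] @ x + bias[neuron_id]
        output[neuron_id] = max(0.0, pre)   # ReLU; every other neuron stays at zero
    return output                           # sparse activations: {neuron_id: value}
```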