Written by Steven Woo
In our previous blog post, we discussed the history of neural networks (NNs) and machine learning (ML). We also took a closer look at some of the memory standards that are currently powering a diverse range of NNs and ML applications. In this blog post, we’ll explore some of the more prominent neural networks that are behind the recent advances in AI and ML.
Multilayer Perceptrons
The foundation of modern neural networks is the perceptron, a digital model of the biological neurons found in our brains. Perceptrons are interconnected in layers in a manner similar to how neurons are connected. Multilayer perceptrons (MLPs) are one of the oldest forms of neural network and remain among the most popular, used today in many classification tasks where the goal is to identify known objects within a set of data. Convolutional neural networks (CNNs), recurrent neural networks (RNNs) and Generative Adversarial Networks (GANs) all build on this same foundation.
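To make the basic unit concrete, here is a minimal sketch of a single perceptron in Python. The weights and bias below are illustrative values chosen by hand, not taken from any trained model:

```python
import numpy as np

def perceptron(x, w, b):
    """A single perceptron: a weighted sum of inputs plus a bias,
    passed through a step activation (output 1 if it 'fires', else 0)."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Illustrative weights/bias that make this perceptron act as a 2-input AND gate.
w = np.array([1.0, 1.0])
b = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(np.array(x), w, b))  # fires only for (1, 1)
```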
As the name implies, MLPs implement multiple layers of neurons, typically 3 to 20, although networks with 100 or more hidden layers have been reported. CNNs add additional layers, known as ‘filters,’ to the front of the network; these filters extract features from the data, which are subsequently passed through as inputs to the MLP layers for classification. MLP layers are trained with backpropagation, which iteratively adjusts the connection weights and neuron parameters to reduce the network’s error. The filters in a CNN are adjusted the same way, with backpropagation updating them iteratively during the CNN’s training phase.
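As a rough illustration of how backpropagation adjusts a network, below is a minimal NumPy sketch of a small MLP trained on the XOR problem, a classic task a single perceptron cannot solve but a multilayer network can. The layer sizes, learning rate, and iteration count are illustrative choices, not recommendations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR inputs and their labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 neurons, sigmoid activations throughout.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5  # learning rate

for step in range(5000):
    # Forward pass through the layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagation: chain-rule gradients of the squared error,
    # computed layer by layer from the output back toward the input.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update of weights and biases.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

# Approaches [0, 1, 1, 0] as training converges
# (some initializations may need more steps).
print(out.round(3))
```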
Spiking Neural Networks (SNNs)
Spiking neural networks (SNNs) are a more recent class of neural network that models biological neurons and neural networks more closely than perceptrons and MLPs do. Like MLPs, SNNs emulate neurons and synapses, but they also incorporate the notion of time into the model. In an MLP, every neuron fires at the same point in each propagation cycle. SNN neurons, by contrast, do not fire simultaneously: each neuron fires only when its membrane potential (a value modeling the electrical charge across a neuron’s membrane) exceeds a specific threshold.
When an SNN neuron fires, it generates a signal (or ‘spike’) that travels to the other neurons it is connected to. Each connected neuron adjusts its own membrane potential based on the incoming signal and fires when its threshold is reached. There are various methods for encoding information in the output signals (‘spike trains’), including using the frequency of the spikes or the time between spikes to represent numerical values.
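One simple way to model this behavior is a leaky integrate-and-fire neuron. The sketch below is illustrative (the threshold, leak factor, and input currents are arbitrary values): the membrane potential integrates its input, decays over time, and emits a spike whenever it crosses the threshold. The printout also shows rate coding at work, with stronger inputs producing higher spike rates:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9, steps=100):
    """Leaky integrate-and-fire sketch: the membrane potential accumulates
    input, decays ('leaks') each time step, and emits a spike whenever it
    crosses the threshold, then resets."""
    v = 0.0
    spikes = []
    for t in range(steps):
        v = leak * v + input_current  # integrate input with leakage
        if v >= threshold:            # membrane potential crosses threshold
            spikes.append(t)          # the neuron fires a spike at time t
            v = 0.0                   # reset after firing
    return spikes

# A stronger input current produces a higher spike rate -- one way a
# spike train can encode a numerical value (rate coding).
for current in (0.15, 0.3, 0.6):
    train = lif_neuron(current)
    print(f"input={current:.2f}  spikes={len(train)}  rate={len(train)/100:.2f}")
```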
Generative Adversarial Networks (GANs)
Generative Adversarial Networks (GANs) are typically used to power unsupervised machine learning applications. This is a discipline in which machines learn to draw inferences from datasets that are unlabeled, rather than classified or categorized a priori.
Essentially, a GAN connects two neural networks in a way that allows them to ‘compete’ against each other. The first network, known as the generative network or ‘generator,’ creates new examples by generating data that fits a data distribution of interest. The second network, referred to as the discriminative network or ‘discriminator,’ distinguishes instances drawn from the ‘true’ data distribution from candidates produced by the generator.
The discriminator, which is often a CNN, is initially trained on a set of training data until it reaches a specific accuracy threshold. Subsequently, the generator, which is often a variant known as a deconvolutional neural network, creates new examples that attempt to ‘fool’ the discriminator. As new examples are generated and applied to the discriminator, backpropagation is used to adjust and optimize both the generator and the discriminator.
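The sketch below illustrates this adversarial training loop on a toy one-dimensional distribution, using small fully connected networks in place of the CNN/deconvolutional pair described above. It assumes PyTorch is available, and the architecture and hyperparameters are illustrative:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# The 'true' data distribution the generator tries to imitate:
# a 1-D Gaussian with mean 4.0 and standard deviation 1.25.
def real_samples(n):
    return torch.randn(n, 1) * 1.25 + 4.0

# Generator: maps random noise vectors to candidate samples.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that a sample is 'real'.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(3000):
    # --- Train the discriminator to tell real from generated data ---
    real = real_samples(64)
    fake = G(torch.randn(64, 8)).detach()  # don't backprop into G here
    d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake), torch.zeros(64, 1)))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # --- Train the generator to 'fool' the discriminator ---
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))  # G wants D to say 'real'
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

samples = G(torch.randn(1000, 8))
# These values should move toward the real mean (4.0) and std (1.25).
print(f"generated mean={samples.mean():.2f}, std={samples.std():.2f}")
```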
Interested in learning more about machine learning and AI? You can check out our article archive on the subject here.