The human brain does two things very well. The first is that it can recognise images. The second is that it can detect patterns over time and make predictions based on those patterns.
Over the last few months I played around with the first function: building image recognition neural networks (CNNs), the ones that help you determine whether a picture is of a dog or a human (source code), detect faces in a picture (article), or recognise handwritten numbers (article).
For example, we can write a CNN that determines that the above picture is of a horse.
But if you ask a CNN whether this horse is running, it would have a hard time. Ask a human, and the answer depends on a number of things, most of all time. If you saw this picture as one of a series in which the horse was loitering around, you could determine that the horse is likely just standing around. In that case, we used human memory to solve the problem of determining what the horse is doing.
Recently, I started to dig into the second function of the human brain and how to mimic it, to answer the question: how do you introduce an element of time into AI?
The canonical example is predicting the price of a stock. Another example is Google Assistant, which needs to understand context to service a request.
Recurrent Neural Networks (RNNs) and Long Short-Term Memory networks (LSTMs) help you solve problems that have dependencies on time. I have blogged (here and here) about LSTMs previously, so let's take up RNNs in this blog.
The following picture is called an unfolded model of an RNN.
Let’s break this picture down.
Each circle is a feedforward NN (FFNN): it takes an input (i), does some calculation, and puts out a value (o). Wi and Wo are the weights the FFNN applies to the input and output respectively. NNs typically take anywhere from one to thousands of inputs, so i can run into the thousands. NNs typically produce 1..n outputs as well, so o can have that range too.
So far so good.
Let’s now bring in the notion of time.
RNNs keep an internal state (s) around, and Ws are the weights generated for that state. Think of s as the memory component of the RNN.
The way you bring the memory element along is by feeding the state from time t as an input at time t+1, and so on. This is how RNNs differ from standard feedforward networks. The input grows from i to i+s.
s_t = some_function(Wi * i_t + Ws * s_t-1)
The picture shows a simple NN repeating itself over time, starting at t and drawn up to t+2, but really going on indefinitely.
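To make the unfolded picture concrete, here is a minimal numpy sketch of that cell. The function names are my own, and tanh stands in for "some_function" in the equation above; this is an illustration, not a library API.

```python
import numpy as np

# A minimal sketch of the cell in the unfolded picture. Names are
# illustrative; tanh stands in for "some_function".
def rnn_step(x, s_prev, Wi, Ws, Wo):
    s = np.tanh(Wi @ x + Ws @ s_prev)   # new internal state s_t (memory)
    o = Wo @ s                          # output o_t at this time step
    return s, o

def run_rnn(inputs, Wi, Ws, Wo):
    s = np.zeros(Ws.shape[0])           # the state starts out empty
    outputs = []
    for x in inputs:                    # unfold over t, t+1, t+2, ...
        s, o = rnn_step(x, s, Wi, Ws, Wo)
        outputs.append(o)
    return outputs
```

The only difference from a plain FFNN is that s is carried from one call to the next, which is exactly the memory element described above.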
The next level of complexity is to stack one RNN on top of another and create a 2-layer RNN. Two layers is just the beginning, because you can stack an arbitrary number of them.
This lattice of RNNs then provides the flexibility to process time-dependent data and make complex predictions.
In the last blog, I gave an overview of LSTMs (long short term memory) in AI that mimics human memory. I will use this blog to go 2 layers below to draw the building blocks of this technology.
As a reminder, at a 50,000-foot level, the building block looks like the image on the left. There is long and short term memory on the left –> some input comes in –> new long and short term memory is output on the right, plus an output that determines what the input is.
Let’s open the box called the LSTM NN (neural network). This block is composed of four blocks or gates:
- The Forget Gate
- The Learn Gate
- The Remember Gate
- The Use Gate
The intuitive understanding of the gates is as follows: When some new input comes in, the system determines what from the long term memory should be forgotten to make space for the new stuff coming in; this is done by the forget gate. Then, the learn gate is used to determine what should be learnt and dropped from the short term memory.
The processed output from these gates is fed to the remember gate which then updates the long term memory; in other words a new long term memory is formed based on the updated short term and the long term memory. Finally, the use gate kicks in and produces a new short term memory and an output.
Going a level deeper:
The learn gate is broken into two phases: Combine –> Ignore.
- Combine: In the combine step, the system takes in the short term memory and the input and combines them together. In the example, the output will be Squirrels, Trees and Dog/Wolf (we don’t know yet — see previous blog for context)
- Ignore: In the second phase, information that isn’t pertinent will be dropped. In the example, the information about trees is dropped because the show was about wild animals.
The forget gate decides what to keep from the long term memory. In the example, the show is about wild animals but there was input about wild flora, so the forget gate decides it is going to drop information about the flora.
The remember gate is very simple. It adds the outputs from the Learn gate and Forget gate to form the new long term memory. In the example, the output will be squirrel, dog/wolf and elephants.
The use gate combines the output of the forget gate with the short term memory and the current input to produce the new short term memory and the output.
The math behind the various gates, for the mathematically inclined
Combine phase (output = Nt)
Take the STM from time t-1, take the current event Et and pass them through a tanh function.
Nt = tanh (STM_t-1, Et)
Mathematically Nt = tanh (Wn [STM_t-1, Et] + bn) where Wn and bn are weight and bias vectors.
Then, the output from the combine phase is multiplied by another vector called i_t, the ignore factor from the Ignore phase.
Ignore phase (output = i_t)
We create a new neural network that takes the input and the STM and applies the sigmoid function to them.
i_t = sigmoid ( Wi [STM_t-1, Et] + bi)
Thus, the output from the Learn gate is:
tanh (Wn [STM_t-1, Et] + bn) * sigmoid ( Wi [STM_t-1, Et] + bi)
The forget output is calculated by multiplying the long term memory with a forget factor (ft).
The forget factor is calculated using the short term memory and the input.
ft = sigmoid( Wf [STM_t-1, Et] + bf )
Forget output = LTM_t-1 * ft
The remember gate takes output from the Forget and Learn gates and adds them together.
LTMt = LTM_t-1 * ft + Nt * i_t
The use gate applies a tanh function to the output of the forget gate and multiplies it by a sigmoid of the short term memory and the event.
STMt = tanh (Wu (LTM_t-1 * ft) + bu) * sigmoid ( Wv [STM_t-1, Et] + bv)
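Putting the four gates together, one LSTM step can be sketched in numpy as follows. The weight/bias containers (W, b) and their keys are my own illustrative layout, not from any library; x is the concatenation [STM_t-1, Et].

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A minimal numpy sketch of one LSTM step following the gate equations
# above. The dict layout for weights/biases is illustrative only.
def lstm_step(ltm_prev, stm_prev, event, W, b):
    x = np.concatenate([stm_prev, event])
    # Learn gate: combine the STM and the event, then apply the ignore factor
    N = np.tanh(W['n'] @ x + b['n'])            # combine phase
    i = sigmoid(W['i'] @ x + b['i'])            # ignore factor i_t
    # Forget gate: decide how much of the old long term memory to keep
    f = sigmoid(W['f'] @ x + b['f'])
    forget_out = ltm_prev * f
    # Remember gate: new long term memory = forget output + learn output
    ltm = forget_out + N * i
    # Use gate: tanh of the forget output times a sigmoid of [STM, event]
    stm = np.tanh(W['u'] @ forget_out + b['u']) * sigmoid(W['v'] @ x + b['v'])
    return ltm, stm
```

Calling lstm_step in a loop, feeding each step's ltm and stm into the next, gives the lattice of cells described below.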
The four gates mimic a point-in-time memory system. To truly envision this, think of a lattice of connected cells separated in time; the memory system continually evolves and learns over time.
(credit: the math and the example are coming in from the Udacity deep learning coursework)
Memory is a fascinating function of the human brain. Specifically, the interplay of short term memory and long term memory, where both work in conjunction to help humans decide how to respond to current stimuli, is what makes us function in the real world.
Let’s take an example —
I am watching a program on TV and suddenly a picture of a dog/wolf comes up. From the looks of it, I cannot distinguish between the two. What is it – a dog or a wolf?
If the previous image was a squirrel and since squirrels are likely to be in a domestic setting, I could make an assessment that the current image is a dog.
This would be reasonably true, if all I had was my short term memory.
At this point, my long term memory kicks in and tells me that I am watching a show about wild animals. Voila, the obvious answer is that the current image is of a wolf.
LSTMs mimic human memory
A specific branch of Deep Learning in AI called LSTMs (long short term memory networks) is used to solve problems that have temporal (or time-based) dependencies. In other words, LSTMs mimic human memory to predict outcomes. Unlike plain RNNs (recurrent NNs), which only keep short term memory around, LSTMs bring in long term memory to add fidelity to their predictions.
The Working of LSTMs
- What should it forget? Trees in this case because the show is about wild animals and not trees.
- What should it learn? There is a Dog/Wolf in addition to the squirrels and trees.
- What should it predict? The Wolf
- What should it remember long term? Elephant, Squirrel, Wolf
- What should it remember short term? Squirrel, Wolf
All of the above is done for the current time step, and the new long/short term memory is fed into the next input that arrives at time t+1.
Thus, you can think of the above picture as recurring for every time epoch t.
In the next blog, we will deep dive into the LSTM NN and see how each of the bulleted questions is answered.
Pretty interesting, isn’t it?
(disclaimer: the example used is from the Deep Learning course work on Udacity)
Neural Networks (NNs) that depend on past history are called Recurrent Neural Networks (RNNs). In technical terms, networks that handle temporal dependencies, i.e. dependencies that change with time, are RNNs.
This class of NNs is distinct from the ones that, for example, do image recognition, which have no dependency on time. RNNs have wider applications because most applications have dependencies that vary with time. A key challenge in RNNs is the vanishing gradient problem, in which the contribution of information or memory (described in the next paragraph) decays geometrically over time.
The basic idea is that you take a feedforward NN (FFNN) and introduce an internal memory state that is fed the output of the hidden layer. Thus, our FFNN now remembers what happened before. For completeness, the prediction can be 1..n.
The next step is to chain each of these cells together.
Thus, an RNN is a chain of such cells: multiple inputs are fed into different cells (think of predicting the next word based on the last few words), each cell receives its own memory from its previous iteration, and each cell makes its own prediction.
The beauty of RNNs is that they can be stacked like lego blocks. To imagine this, think of the above picture as one RNN lego block and feed its prediction layer into another RNN block stacked above it.
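The stacking idea can be sketched in a few lines of numpy: the state sequence of one layer becomes the input sequence of the next. The weights below are random placeholders, purely for illustration.

```python
import numpy as np

# One RNN "lego block": run a cell over an input sequence and return
# the state at each time step. Weights here are random placeholders.
def rnn_layer(inputs, Wi, Ws):
    s = np.zeros(Ws.shape[0])
    states = []
    for x in inputs:
        s = np.tanh(Wi @ x + Ws @ s)
        states.append(s)
    return states

rng = np.random.default_rng(1)
seq = [rng.normal(size=3) for _ in range(4)]          # 4 time steps
# Stack two blocks: layer 1's states feed layer 2 as its inputs.
layer1 = rnn_layer(seq, rng.normal(size=(5, 3)), rng.normal(size=(5, 5)))
layer2 = rnn_layer(layer1, rng.normal(size=(5, 5)), rng.normal(size=(5, 5)))
```

Adding a third or fourth block is just another call to rnn_layer on the previous layer's states.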
There are some interesting use cases for RNNs:
- Sentiment analysis
- Speech recognition
- Time series prediction
- Natural language processing
- Gesture recognition
Amazon Lex provides a framework to build conversational interfaces using voice and text.
I was fascinated when Facebook launched the feature where it put a box around a human head (and a bit creeped out when it started suggesting the name of the human next to the box). I always wondered how they did it and filed it under machine-learning magic. Now I know how they do it, so let me peel back the curtain.
There are two distinct problem domains in the feature:
- Find the human – this is the part we will lift the curtain on in this blog.
- Label the human – this is supervised machine learning and we will ignore this problem in the blog.
The “Find the human” problem is solved through something called “Haar Cascade Classifiers” – there is a detailed article for brilliant humans; the rest of us can follow along in this blog :-).
The underlying building block is a Classifier, but let's drop the terminology and use airport security as a metaphor to explain the process.
Think of the face detection solution as an airport security problem where a series of security guards each does a specialised task. The guard at the airport entrance ensures there is no suspicious car loitering around the airport. The guard at the security gate only lets in people with a valid ID and a boarding pass. The person behind the baggage scanner weeds out any harmful objects in the handbags. The person behind the body scanner ensures that no one gets in with a gun. The explosives specialist uses a specialised detector paper, fed into a machine, to find out if the person under consideration is carrying hidden explosives.
Each of these security guards is a Classifier, classifying a particular threat. Put together in a series, they form a Cascade of Classifiers, each building on the work of the previous one. Every one of them has to perform its specialised task for a successful outcome; the successful outcome in this case is a person being allowed into the airport lounge to board his/her plane. Each classifier goes through a great deal of specialised training to perform its task. Makes sense?
So let's apply this metaphor to the face detection machine-learning algorithm. In ML, each classifier focuses on a specific feature within a picture. The basic classifier detects something as simple as “this is a horizontal edge” or “this is a vertical edge”, where edge detection is a feature. This classifier feeds into another that perhaps says “this is a square”, and so on and so forth. Eventually, you get to a classifier that tells you “this is the bridge of a nose” or “these are eyes”. Each classifier has been fed hundreds of thousands of images, either positive (human in the picture) or negative (no human in the picture), so that it learns to classify pictures correctly.
So how many such features are there? Turns out, a whole lot. For a typical 24×24 pixel window, there are 160k+ features. The Haar in “Haar cascade classifier” is a mathematical function that optimises the algorithm and reduces the number of features to evaluate to about 6k.
Now it turns out that applying this knowledge in our programs is a lot simpler than the training process, because OpenCV (opencv.org) ships a Python package, opencv-python, with pre-trained cascades for detecting faces in pictures.
I ran a short function over about 100 pictures with humans and got a 100% detection rate – not bad at all. Running it over 100 dog pictures returned an 89% accuracy rate; thus 11% of dogs were categorised as humans, and if you know me, I think that is fair because some dogs are like humans :-).
Let's look at a problem: I am a dog lover and I subscribe to Instagram, where I get a stream of images of toasters, birds, cats and dogs. While these images are exciting (who doesn't like toasters :-)), what I want is to automatically save the dog images to an account in the cloud. The question is: what kind of program do I need to write to tag all dog images and feed my cloud database of awesome-dog-images?
A neural network to the rescue!
A neural network is a decision engine. Its job is to look at incoming data and make a decision about it. Typically, a neural network uses logistic regression to make these decisions. I covered regressions here – logistic regression classifies data into a yes/no class of answers.
Building a neural network is like training a dog.
- You train the dog to obey a command
- Analyse if the dog is behaving right in all field conditions and
- Provide the appropriate feedback to fix any problems
“Sit boy” – Training the neural network
To get something useful from a neural network, you need to train it. To do this, you take the existing data set and split it into the data you train with and the data you test against.
Training the data
In the data here, I have two regression lines that neatly separate the images. In neural network parlance, each quadrant is a decision node that outputs yes/no if it identifies the data correctly. Each node is called a perceptron or neuron. Each one looks at input data and decides how to categorise it. In the example above, the input either passes a threshold for dogs, cats, birds or toasters, and each neuron answers yes or no.
You write a function called an activation function to make the decision. A typical choice is the Heaviside step function: it returns false if the number is less than 0 and true if the number is greater than 0. Each input to the function carries a weight (w) that determines how important that input is. This is really important – the weights determine whether a network is going to be successful. For example, as I am looking for a dog, the dog features will carry a heavier weight than the cat features. If, on the other hand, I were classifying toasters, the weight for toasters would be greater than those for mammals and birds. The outputs of these functions go through an aggregator function that determines the final output. The activation function is key, as it determines what the answer is going to be.
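A single perceptron with a Heaviside activation can be sketched in a few lines. The feature layout and weights below are made up to illustrate the dog-heavy weighting:

```python
import numpy as np

# One perceptron: weighted sum of the inputs, then a Heaviside step.
def perceptron(inputs, weights, bias):
    total = np.dot(inputs, weights) + bias
    return 1 if total > 0 else 0        # yes/no decision

# Toy "is it a dog?" neuron: the dog feature carries the heaviest weight.
features = np.array([1.0, 0.0, 0.0])    # [dog-like, cat-like, toaster-like]
weights = np.array([0.8, 0.3, 0.1])
decision = perceptron(features, weights, bias=-0.5)   # fires 1 for a dog
```

The bias here plays the role of the threshold: the neuron only says yes when the weighted evidence clears it.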
The network above has only forward propagation: each neuron makes a decision and pushes the answer forward. Thus, once you have trained the network, if it makes errors it will continue making them forever. What makes this picture interesting and useful is that you can find the error between the right value and the predicted value and feed it back to the network. This is called back propagation.
Is the dog doing the right thing: Analysing the output
Did the dog do the right thing? A dog sleeping on the sofa while I gave it a sit command is an epic fail. Isn’t it?
In neural networks, you take your test data and feed it through the network to see the output. The good thing with the test data is that we already know the answer. You compare the result from the network to the test data and determine the delta. The delta tells you how off you are from the right answer. We will get into more details in another blog perhaps.
Provide feedback to the dog to fix issues: Provide feedback to the network
Once you have the delta from the network, you feed it back into the network. This process is called back-propagation. Think of back-propagation as a mirror of the neural network: instead of starting from the inputs, you start with the error. You take the derived answer and the actual answer, calculate the error delta, and update the weights on the way back. You do so for each record in the data set, typically using gradient descent to find the right weights.
To sum up: weights are what determine the right answer, but you don't get the right weights by moving in only one direction. So you find the error and fix the weights on the way back, and voila, you have a neural network that has learnt.
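The train, measure, feed-back loop can be sketched on a single sigmoid neuron. The data and learning rate below are toy values I picked for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent on one sigmoid neuron: predict, measure the delta
# from the known answers, and push corrections back into the weights.
def train(X, y, lr=0.5, epochs=2000):
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        pred = sigmoid(X @ w + b)
        delta = pred - y                   # how far off each answer is
        w -= lr * X.T @ delta / len(y)     # feed the error back into w
        b -= lr * delta.mean()
    return w, b

# Learn a simple rule: output 1 exactly when the first feature is on.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 0., 1., 1.])
w, b = train(X, y)
```

After training, thresholding the neuron's output at 0.5 reproduces the rule on all four examples, which is the "network that has learnt" in miniature.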
Congratulations! You have now successfully trained your dog – welcome to a great life ahead.
This blog captures an intuitive understanding of concepts, such as regression analysis, required for deep learning.
The following is a summary of my notes from week 1 of the Udacity Deep Learning course.
Linear regression is used to model the relationship between a dependent variable and an independent variable. The intent is to draw the line through your data that best fits it; this line is then used to predict the value for a new data point. Linear regression only works for data that is linear and is sensitive to outliers.
Example: I could model the life expectancy of an individual based on their BMI, if I had built out a model of BMI —> life expectancy.
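As a sketch of that BMI —> life expectancy idea (the numbers are made up purely for illustration), fitting the best-fit line with numpy looks like this:

```python
import numpy as np

# Toy BMI vs life expectancy data, invented for illustration only.
bmi = np.array([18.0, 21.0, 24.0, 27.0, 30.0])
life_expectancy = np.array([79.0, 77.5, 76.0, 74.5, 73.0])

# Least-squares fit of a straight line through the data.
slope, intercept = np.polyfit(bmi, life_expectancy, deg=1)

def predict(new_bmi):
    return slope * new_bmi + intercept
```

A new data point lands in, and predict reads its value off the fitted line.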
Multiple linear regression
While linear regression models a relationship between one independent variable and one dependent variable, multiple linear regression factors in additional independent variables.
Example: The previous example is highly simplistic in assuming that we can predict an individual's life expectancy from BMI alone. If we add heart rate data as another independent variable, we are likely to model the data much more accurately.
Logistic regression is a regression model where the dependent variable can take only two output values, such as “pass/fail” or “alive/dead”. I am going to use an example from the Udacity course.
A college admissions office looks at an individual's grades and test results to accept or reject the person. In the sample picture attached to the blog, everyone in green has been accepted to the university in the past, while everyone in red has been rejected.
This data splits very cleanly and doesn't really require a neural network to predict the acceptance/rejection of a new student. We will make this more complex in the next example.
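The admissions example can be sketched as logistic regression trained by gradient descent. The students, learning rate, and epoch count below are toy values for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression by gradient descent: grades and test scores in,
# an accept (1) / reject (0) decision out. Data invented for illustration.
def fit(X, y, lr=0.1, epochs=5000):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

# [grade, test score] for four students; the first two were accepted.
X = np.array([[3.9, 90.], [3.7, 85.], [2.1, 40.], [2.4, 45.]])
y = np.array([1., 1., 0., 0.])
Xn = (X - X.mean(0)) / X.std(0)     # normalise features before training
w, b = fit(Xn, y)
```

Because the data splits cleanly, the fitted model separates the green and red students perfectly, just as the picture suggests.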
There are some things that completely blow your mind, and applying artist-specific styles to images is one of them. Completely sci-fi – although the Prisma filter has made it accessible.
It takes ages for an artist to arrive at a signature style. It takes even longer for another to mimic that style, and I cannot fathom a mimic applying the style to different images. Deep learning does it with panache. Impressive! I am a convert.
I used the project called fast-style-transfer and ran the following commands on an image to produce an output.
# creating a sandbox environment for python
conda create -n style-transfer python=3.5
source activate style-transfer
conda install -c conda-forge tensorflow=0.11.0
conda install scipy pillow
# doing style transfer
python evaluate.py --checkpoint ./rain-princess.ckpt --in-path --out-path ./output_image.jpg
Pictures to be stylised
The original artwork:
The Great Wave off Kanagawa by Hokusai
The Scream by Edvard Munch
Rain Princess by Leonid Afremov
The modified images
This blog sets up the core software requirements for the Udacity deeplearning course to get you started quickly.
I have just signed up for the deep learning course and am fairly excited about it. The course depends heavily on Python – I used Python about 18 years ago and have been a Java guy since. The course requires you to set up Python 3.5 and a number of data packages as part of Anaconda (a data science platform for analytics).
Rather than struggle with the requirements on my Mac again and again, I have set up a docker image that is up to date with all the core requirements for this coursework. Here is what you need to do to get the docker image and run Jupyter (a notebook system for data analysis).
docker run -i -t -p 8888:8888 hsingh/anaconda-deeplearning
cd /home/jupyter
jupyter notebook --ip='*' --port=8888
To bring up the Jupyter notebook, go to your browser on the host machine and enter http://localhost:8888.