Let’s start with a couple of problems. Problem a) I read a lot of news articles and find that certain words appear close to each other: “New” next to “York”, “New” next to “Delhi”. However, if I am reading a book about the history of India, it is unlikely that “New” will come next to “Delhi”; in this case “Delhi” will stand alone. … Continue reading AI to make smart word associations
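The co-occurrence idea above can be sketched in a few lines (a minimal illustration of my own, not the post's code): count how often word pairs appear next to each other in a corpus, so that frequent neighbours like “New” and “York” stand out.

```python
from collections import Counter

def cooccurrence_counts(sentences, window=1):
    """Count pairs of words appearing within `window` positions of each other."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            for j in range(i + 1, min(i + 1 + window, len(words))):
                counts[(w, words[j])] += 1
    return counts

# A toy "news" corpus: "New" sits next to "York" more often than next to "Delhi"
corpus = [
    "New York is busy",
    "New York has news",
    "New Delhi is the capital of India",
]
counts = cooccurrence_counts(corpus)
print(counts[("new", "york")])   # appears twice in this corpus
print(counts[("new", "delhi")])  # appears once
```

Swap in a history-of-India corpus and the ("new", "delhi") count drops, which is exactly the context effect the post describes.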
The human brain does two things very well: the first is that it can recognise images; the second is that it can detect patterns over time and make predictions based on those patterns. Over the last few months I played around with the first function – building image recognition neural networks (CNNs), the ones that help you determine whether a picture … Continue reading Is that horse running? bringing memory to AI with RNNs.
In the last blog, I gave an overview of LSTMs (long short-term memory) in AI that mimic human memory. I will use this blog to go two layers deeper and draw out the building blocks of this technology. As a reminder, at a 50,000-foot level, the building block looks like the image on the left. There is long- and short-term memory on the left … Continue reading Inner working of an AI that mimics human memory
Learn how AI remembers things to make smarter decisions Continue reading Mimicking human memory with AI
Neural Networks (NNs) that depend on past history are called Recurrent Neural Networks (RNNs). In technical terms, networks with temporal dependencies (dependencies that change over time) are called RNNs. This class of NNs is distinct from the ones that, for example, do image recognition, which have no dependency on time. RNNs have wider applications because most applications have dependencies … Continue reading Understanding Recurrent Neural Networks (RNNs)
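The temporal dependency can be made concrete with one step of a vanilla RNN (a minimal sketch with shapes and weight names of my own choosing, not from the post): the hidden state `h` is updated from both the current input and the previous state, so it carries information forward through time.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One recurrent step: the new state mixes the current input with the old state."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 3
W_xh = rng.normal(size=(input_size, hidden_size)) * 0.1   # input-to-hidden weights
W_hh = rng.normal(size=(hidden_size, hidden_size)) * 0.1  # hidden-to-hidden weights
b_h = np.zeros(hidden_size)

h = np.zeros(hidden_size)
for x_t in rng.normal(size=(5, input_size)):  # a sequence of 5 time steps
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)     # h now depends on every earlier input
print(h.shape)
```

An image classifier sees each input in isolation; here, by contrast, changing an early `x_t` changes the final `h`, which is the time dependency that defines an RNN.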
06 Mar 2018 I was fascinated when Facebook launched the feature where it put a box around a human head (and was a bit creeped out when it started suggesting the name of the human next to the box). I always wondered how they did it and filed it under machine-learning magic-ery. Now I know how they do it, so let me peel back the … Continue reading How do they detect faces in pictures?
03 Mar 2017 Let’s look at a problem: I am a dog lover and I subscribe to Instagram, where I get a stream of images of toasters, birds, cats and dogs. While these images are exciting (who doesn’t like toasters :-)), what I want to do is automatically save the images of dogs to an account in the cloud. The question is, what kind of program … Continue reading Understanding neural networks – a dog lovers primer
12 Feb 2017 This blog captures an intuitive understanding of concepts such as regression analysis required for deep learning. The following is a summary of my notes from week 1 of the Udacity Deep Learning course. Linear regression is used to model the relationship between a dependent variable and an independent variable. The intent is to draw the line that best fits your data. … Continue reading Deep learning fundamentals – logistic regressions
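The best-fit line can be found in one call (a minimal sketch of my own; the data points are made up to lie roughly on y = 2x):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])  # noisy observations, roughly y = 2x

# Fitting a degree-1 polynomial gives the least-squares slope and intercept.
m, c = np.polyfit(x, y, 1)
print(round(m, 2), round(c, 2))
```

The returned slope and intercept minimise the sum of squared vertical distances between the line and the data, which is what "best fits" means in linear regression.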
02 Feb 2017 There are some things that completely blow your mind. Applying artist-specific styles to images is one of them. Complete science fiction – although the Prisma filter has made it accessible. In the last blog, I set up the core software requirements for the Udacity deep learning course. This blog uses that setup to transfer the styles of 3 famous paintings onto images. … Continue reading Teach a program to paint like Van Gogh
28 Jan 2017 This blog sets up the core software requirements for the Udacity deep learning course to get you started quickly. I have just signed up for the deep learning course and am fairly excited about it. The course depends heavily on Python – I used Python about 18 years back and have been a Java guy since. The course requires you to set up Python 3.5, a … Continue reading Setting up Anaconda for deep learning
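The setup the post describes can be condensed into a few conda commands (a sketch using standard conda usage; the environment name `dl` is my own choice, and the exact package list the course needs may differ):

```shell
# Create an isolated environment with the Python version the course asks for
conda create --name dl python=3.5

# Activate it (on older conda versions: source activate dl)
conda activate dl

# Install the basics most deep learning exercises rely on
conda install numpy jupyter notebook
```

Keeping the course in its own environment means the Python 3.5 requirement never interferes with whatever Python you already have installed.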