Manage focus with Pomodoro Technique

I have long believed, and now know it is backed by research, that multitasking doesn’t quite work. When you switch context to check that beep from your phone, it takes your brain about 15–20 minutes to get back into the flow state. A surprising study from the last month showed that even a phone kept upside down on your desk has a detrimental impact on your productivity.

That said, it is extremely hard to manage your focus when you are sitting in an open office, with the web right in front of you and emails shouting for attention; time and again I find that I am lost.

I use the Pomodoro technique to regain my focus when I find that I am off the beaten track. I was surprised to hear that a colleague hadn’t heard of the technique. So here goes…

The name Pomodoro comes from the kitchen timers common in Italy. The timers looked like tomatoes (pomodoro is Italian for tomato), and you twist one to start a 25-minute block.

The technique itself is very easy. Whenever you want to focus, you start the timer and focus on one task for 25 minutes. Once you reach the 25-minute mark, take a 5-minute break and start again. Run through four pomodoros, then take a 10-minute break.

To be candid, I have never been able to run through four pomodoros back-to-back, but getting to 2–3 improves productivity significantly.

Another iteration on the technique is to build your daily plan around a number of pomodoros that works for you. For example, an 8–10 pomodoro day would be an extremely productive day. In this case, you drop your checklists, find slots to make progress, and get through items in those focus blocks.

From a tooling perspective, the timer on your phone works just fine. There are also a number of Chrome extensions that keep track of your pomodoros on the web.
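If you like the terminal, the cycle is simple enough to script yourself. Here is a minimal Python sketch; the `pomodoro_schedule` helper and its defaults are my own, not part of any standard tool:

```python
def pomodoro_schedule(pomodoros=4, work=25, short_break=5, long_break=10):
    """Return (label, minutes) blocks for one full Pomodoro cycle:
    25-minute focus blocks, 5-minute breaks, and a longer break
    after the final pomodoro."""
    blocks = []
    for n in range(1, pomodoros + 1):
        blocks.append((f"focus #{n}", work))
        last = n == pomodoros
        blocks.append(("long break" if last else "short break",
                       long_break if last else short_break))
    return blocks

for label, minutes in pomodoro_schedule():
    # A real timer would time.sleep(minutes * 60) here and fire a
    # notification; printing keeps the sketch side-effect free.
    print(f"{label}: {minutes} min")
```

Swap the print for a sleep and a desktop notification and you have a working timer.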

The four surprising things about a Santorini sunset

So you made it to Santorini.

You flew halfway around the world, changed flights in Athens, and were surprised that such a huge plane could land on a decidedly small island with a minuscule airport.

You hired a car and drove into Oia — the village to be in for a beautiful sunset in Santorini.

You find that the famed Santorini sunset is best viewed from the top of the ruins. Aren’t you glad that you booked your hotel right opposite the ruins, so that when sunset comes around you can quickly walk up to them? Then you decide to explore the village, because the sunset is 4 hours away.

You start walking back about 90 minutes before the sunset and notice that tourists (including yourself) are making a beeline, like ants, towards the ruins, and suddenly you are stopped a good 5-10 minutes away from the ruins because the ruins are full.

Surprise 1: Byzantine Ruins can get full!

The ruins are full – what do you mean? That’s crazy talk, and we are 90 minutes away from the sunset.

You pull out the “I am staying on the hill across from the ruins” card and get walking towards the ruins area. That’s where the craziness hits you. This is why tourists are here – the sunset in Santorini!! Every square inch of land is taken over by tourists.

You enjoy the sunset from your hotel.

Surprise 2: Fights break out to enjoy the sunset

The next evening, you find a spot way below the ruins that isn’t visible to people who don’t stay in your hotel — good choice with the hotel again! Only to bump into a National Geographic photographer who isn’t too happy that you are there. You strike up a friendship, and then you hear him say that, the previous evening, he was at the ruins three hours before the sunset and was still booted out by some boorish Chinese tourists just before the sunset.

Surprise 3: Brides everywhere near the hotels in Oia

So you stand on the hill overlooking the hotel. The sun is decidedly not going to show up from behind the clouds, so you start looking for interesting subjects.

Are those Asians… women in bridal trousseau?

Yes they are! Holding up a veil, with a cameraman and a few other helpers clicking the perfect picture. The groom, meanwhile, is resignedly enjoying his beer.

Funny!

Surprise 4: Chinese brides are at every good sunset point

The next evening it gets better. As you head towards the “Three Bells of Oia” for a sunset picture, you see not one or two but three entourages of Chinese brides (I asked) getting their pictures clicked. Brides walking around in their trousseau, every moment scripted by the photographer.

They completely missed the sunset because they were busy clicking pictures for posterity and were absent from the present moment.

Perhaps the only unsurprising part was that the moment the sun set, every one of them turned around, got in the car, and got the hell out of the place.

A sad commentary on the nouveau riche tourist: traveling around the world to sample the best, but displaying behavior that demonstrates neither respect (fighting for a picture) nor the desire to truly enjoy the place they are visiting — it’s all about a Facebook post.

For the record, here is what they missed post sunset.

The sunset itself — gorgeous if it shows up!

(this post is re-printed from blueorangeart.com)

AI to make smart word associations

Let’s start with a couple of problems
Problem a) I read a lot of news articles and find that a number of words appear close to each other — “New” next to “York”, “New” next to “Delhi”. However, if I am reading a book about the history of India, it is unlikely that “New” will come next to “Delhi”; in this case, “Delhi” will stand alone. In other words, I am smart enough to understand the context and the relationship between these words. The question is: can my machine learning program learn this relationship from the data it is presented with?
Problem b) My grammar teacher did a good job of instilling rules in my head — the female of a donkey is a jenny (did you know that?). I can determine that a jenny is a donkey while reading a text about donkeys, but can my machine learning algorithm make that association?
The larger problem is how you understand some class of text — perhaps all novels by Isaac Asimov, or the entire Wikipedia database — and make predictions based on that dataset.
The machine learning algorithm that makes these smart associations is called “Word2Vec”. Word2Vec is a model used to create word embeddings. The model maps each word to a point in a vector space learned from the data; eventually, clusters of related words settle down close to each other.
The larger class of techniques is called “word embeddings”. These are shallow, two-layer neural networks that are used when there are a huge number of classes. The network is able to capture semantic relationships between words and produce richer representations.
Intuition
Word2Vec works with a large data set. Let’s take the example of feeding the entire set of scientific articles on donkeys to the algorithm. Assume this gives a vocabulary of 10k words. Each of these words will be represented as a one-hot encoded input to the Word2Vec algorithm. Training produces a set of weights that capture the relationships between words. The output is a probability distribution over each of the 10k words. Thus, fed “jack” (the male of a donkey), the output distribution will likely be heavily weighted towards “jenny”.
Chris McCormick has a great overview of the skip-gram model for Word2Vec.
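To make the intuition concrete, here is a toy skip-gram trainer in plain numpy. This is a sketch, not Word2Vec itself: the corpus, network sizes, and learning rate are made up for illustration, and real implementations use tricks like negative sampling to scale to large vocabularies:

```python
import numpy as np

# A toy corpus where, news-style, "new" appears next to "york" and "delhi".
corpus = ["new york is big", "new delhi is big", "new york city", "new delhi city"]
tokens = [s.split() for s in corpus]
vocab = sorted({w for s in tokens for w in s})
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8                         # vocabulary size, embedding dimension

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))    # input weights = the word embeddings
W_out = rng.normal(scale=0.1, size=(D, V))   # output weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Skip-gram training pairs: each word predicts its neighbours (window of 1).
pairs = [(idx[s[i]], idx[s[j]])
         for s in tokens for i in range(len(s))
         for j in (i - 1, i + 1) if 0 <= j < len(s)]

for _ in range(500):                         # plain SGD over the pairs
    for center, context in pairs:
        h = W_in[center]                     # one-hot input x W_in is a row lookup
        p = softmax(W_out.T @ h)             # probability over the whole vocabulary
        err = p.copy()
        err[context] -= 1                    # gradient of cross-entropy wrt logits
        grad_h = W_out @ err
        W_out -= 0.05 * np.outer(h, err)
        W_in[center] -= 0.05 * grad_h

# "new" should now put most of its probability mass on "york" and "delhi".
p = softmax(W_out.T @ W_in[idx["new"]])
top2 = [vocab[i] for i in np.argsort(p)[::-1][:2]]
print("most likely neighbours of 'new':", sorted(top2))
```

After training, the model has learned the “New York / New Delhi” association purely from which words co-occur — exactly the behaviour described in problem a).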

Is that horse running? bringing memory to AI with RNNs.

3DayInIceland-8.jpg

The human brain performs two functions very well – the first is that it can recognise images; the second is that it can detect patterns over time and make predictions based on those patterns.

Over the last few months, I played around with the first function — building image-recognition neural networks (CNNs), the ones that help you determine whether a picture is of a dog or a human (source code), detect faces in a picture (article), or recognise handwritten numbers (article).

For example, we can write a CNN that determines that the above picture is of a horse.

But if you ask a CNN whether this horse is running or not, it would have a hard time. If you ask a human, the answer depends on a number of things, and mostly on time. If you saw this picture as part of a series in which the horse was loitering around, you could determine that the horse is likely just standing around. In that case, we used human memory to figure out what the horse is doing.

Recently, I started to dig into the second function of the human brain and how to mimic it — to answer the question: how do you introduce an element of time into AI?

The canonical example is predicting the price of a stock. Another example is Google Assistant, which needs to understand context to service a request.

Recurrent Neural Networks (RNNs) and Long Short-Term Memory networks (LSTMs) help you solve problems that have dependencies on time. I have blogged (here and here) about LSTMs previously, so let’s take up RNNs in this blog.

The following picture is called an unfolded model of an RNN.

Recurrent Neural Network

Let’s break this picture down.

Each circle is a feedforward NN (FFNN), which means it takes input (i), does some calculation, and puts out a value (o). The weights learned by the FFNN are Wi on the input and Wo on the output. NNs typically take anywhere between one and thousands of inputs, so i can be in the thousands; NNs typically produce 1..n outputs as well, so o can have that range too.

So far so good.

Let’s now bring in the notion of time.

RNNs keep an internal state (s) around, and Ws are the weights generated for that state. Think of s as the memory component of the RNN.

The way you carry the memory element along is to feed the state from time t as an input at time t+1, and so on. This is how RNNs differ from standard feedforward networks. The input grows from i to i+s.

Mathematically

s_t = some_function(Wi * i_t + Ws * s_t-1)

The picture shows a simple NN repeating itself over time, starting at t and drawn up to t+2, but really going on indefinitely.
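The unfolded model can be sketched in a few lines of numpy. This is an illustration, not a trained network: the sizes and random weights are made up, and `some_function` is taken to be tanh, the usual choice:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_state, n_out = 3, 4, 2                       # sizes of i, s and o
Wi = rng.normal(scale=0.5, size=(n_state, n_in))     # input weights
Ws = rng.normal(scale=0.5, size=(n_state, n_state))  # state (memory) weights
Wo = rng.normal(scale=0.5, size=(n_out, n_state))    # output weights

def rnn_step(i_t, s_prev):
    """One unfolded cell: s_t = tanh(Wi @ i_t + Ws @ s_{t-1}), o_t = Wo @ s_t."""
    s_t = np.tanh(Wi @ i_t + Ws @ s_prev)
    return s_t, Wo @ s_t

s = np.zeros(n_state)                    # the memory starts empty at t = 0
for i_t in rng.normal(size=(5, n_in)):   # five time steps of input
    s, o = rnn_step(i_t, s)              # the state from time t feeds time t+1
```

The single `rnn_step` function called in a loop is exactly the “same NN repeating over time” in the picture: only the state s carries information from one step to the next.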

The next level of complexity is to stack one RNN on top of another and create a 2-layer RNN. Two layers is just the beginning, because you can stack an arbitrary number of them.


This lattice of RNNs then provides the flexibility to process time-dependent data and make complex predictions.

Prem Narula – lessons from a life well lived

About 3 months back, when I heard that Prem uncle had passed away in his sleep, I broke down and cried. I cried like I hadn’t in a long time. His passing has left a huge hole that can never be filled.

I was 18 when I had my last substantial interaction with him. I must have met him about 3-4 times since I left Mumbai in 1992, and 26 years later, he (and his wife) still remain among my favourite human beings. For that matter, his parents were fantastic human beings too.

I got thinking in the last month or so – in a fast-moving world filled with short interactions, why do I miss him? Why was he special? What lessons can I bring into my life, and hopefully yours too, from him? What made him an extraordinary human being?

He wasn’t my teacher or a relative, but a neighbour who became much more.

Prem uncle and Sangeeta aunty lived in our apartment complex and were in their 30s when we met them. Presumably, I saw him at his peak, financially and health-wise. He had a fantastic business going, and the confidence that comes with having made it, and made it very fast. As a 16-year-old, going down his career path was something that I seriously considered.

Then it came crashing down — he had to go through two heart-valve replacement surgeries, one anticipated (and, I think, triggered by chain smoking) and another because the first valve was defective, followed by a paralytic attack. Between these three events, he suffered a betrayal by his business partner, who pounced on the opportunity to take over the business and drove him out. All of this in a span of 2 years :-(. Having recently suffered a betrayal myself, I can truly empathise with the pain and the hurt that he had to go through.

His financial success, then, wasn’t what made him extraordinary. It isn’t his material success in the world that mattered to me. The people who hung around him, craving the reflection of his material success, evaporated very quickly.

Here are the lessons that I learnt from him:

As I got thinking, I realised the reason I loved him — and my sister, and now his son and nephews and nieces loved him — was the unconditional love and acceptance that he radiated towards us. I have truly never seen any other human as generous as he (and aunty) was in sharing his love. And when he accepted you into his life, he truly accepted you, for all the good and the bad.

When he interacted with you, you were truly the centre of his attention. There were no half measures, no glancing at others. This was important for a 15-year-old kid — a human, an elder, paying complete attention to me and acknowledging my viewpoints.

Bringing joy and fun into every interaction with friends and family. Be it the nightly carrom sessions, the hours of Nintendo games, or forcing me to play Antakshari (an Indian singing game) — the focus was truly to enjoy life and bring joy to everyone involved. To this date, when I miss him, I reach for my Nintendo and play it. Conversely, I never play Antakshari, because it reminds me too much of him and a wonderful time gone by that can never be recaptured.

Playfulness in relationships. I loved how he brought play into relationships. Minor irreverence, perhaps — not quite the right words; gentle ribbing is closer. A minor playful drama with his wife, father, and mother, pretending to hide his smoking habit while it was out there in the open. Jumping into the car to go get paan in the middle of the night, or for that matter, getting me to sit on the hood of the Jeep and driving on the beach. It was all great fun.

That’s really it! A life well lived doesn’t need the trappings of success, huge amounts of money, or too many accomplishments.

Looking back at his life and asking how well I embody those lessons, I find that I fall woefully short. Life today has become too serious, attention fragmented by devices; perhaps a small bit of playfulness is all that I embody from his life. His passing has been a wake-up call for me to refactor my life towards the qualities that I admired in him and admire in others.

It is worth repeating that a life well lived doesn’t need the trappings of success, huge amounts of money, too many accomplishments, or status; what it needs is unconditional love, joy, fun, and playfulness.

Prem uncle – you will be forever missed, but you will continue in our hearts. Thank you for all the fun times and the love, and congratulations on a fantastic life, a life well lived!

prem-uncle
Prem uncle, Sangeeta aunty and their son – late 1990s


Inner working of an AI that mimics human memory

In the last blog, I gave an overview of LSTMs (long short-term memory networks), the AI that mimics human memory. I will use this blog to go two layers deeper and draw out the building blocks of this technology.

As a reminder, at a 50k-foot level, the building block looks like the image on the left. Long- and short-term memory come in on the left –> some input comes in –> new long- and short-term memory is output on the right. Plus, an output that determines what the input is.

Let’s open the box called the LSTM NN (neural network). This block is composed of four blocks or gates:

  • The Forget Gate
  • The Learn Gate
  • The Remember Gate
  • The Use Gate

The intuitive understanding of the gates is as follows: when some new input comes in, the system determines what should be forgotten from long-term memory to make space for the new stuff coming in; this is done by the forget gate. Then the learn gate determines what should be learnt and what should be dropped from short-term memory.

The processed output from these gates is fed to the remember gate, which then updates the long-term memory; in other words, a new long-term memory is formed from the updated short-term and long-term memory. Finally, the use gate kicks in and produces a new short-term memory and an output.

Going a level deeper:

Learn Gate

The learn gate is broken into two phases: Combine –> Ignore.

  • Combine: In the combine step, the system takes the short-term memory and the input and combines them. In the example, the output will be Squirrels, Trees, and Dog/Wolf (we don’t know which yet — see the previous blog for context)
  • Ignore: In the second phase, information that isn’t pertinent is dropped. In the example, the information about trees is dropped because the show is about wild animals.

Forget Gate

The forget gate decides what to keep from the long-term memory. In the example, the show is about wild animals but there was input about wild flora, so the forget gate decides to drop the information about the flora.

Remember Gate

This gate is very simple. It adds the outputs of the Learn gate and the Forget gate to form the new long-term memory. In the example, the output will be squirrel, dog/wolf, and elephant.

Use Gate

This gate combines the output of the Forget gate with the short-term memory and the input to produce the new short-term memory and the cell’s output.

Math behind the various gates, for the mathematically inclined

Learn Gate

Combine phase (output = Nt)

Take the STM from time t-1 and the current event Et, and pass them through a tanh function.

Nt = tanh (STM_t-1, Et)

Mathematically Nt = tanh (Wn [STM_t-1, Et] + bn) where Wn and bn are weight and bias vectors.

Then, the output from the combine phase is multiplied by another vector called i_t, the ignore factor from the Ignore phase.

Ignore phase (output = i_t)

We create a new neural network that takes the input and the STM and applies the sigmoid function to them.

i_t = sigmoid ( Wi [STM_t-1, Et] + bi)

Thus, the output from the Learn gate is:

tanh (Wn [STM_t-1, Et] + bn) * sigmoid ( Wi [STM_t-1, Et] + bi)

Forget Gate

The forget output is calculated by multiplying the long term memory with a forget factor (ft).

The forget factor is calculated using the short term memory and the input.

ft = sigmoid ( Wf [STM_t-1, Et] + bf )

Forget output = LTM_t-1 * ft

Remember Gate

The remember gate takes output from the Forget and Learn gates and adds them together.

LTMt = LTM_t-1 * ft + Nt * i_t

Use Gate

The use gate applies a tanh function to the output of the forget gate and multiplies it by a sigmoid of the short-term memory and the event.

STMt = tanh (Wu (LTM_t-1 * ft) + bu) * sigmoid ( Wv [STM_t-1, Et] + bv)

Summary

The four gates mimic a point-in-time memory system. To truly envision this, think of a lattice of connected cells separated in time. The memory system thus continually evolves and learns over time.
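Putting the four gates together, here is a minimal numpy sketch of one LSTM cell following the gate equations above. The sizes and random weights are made up for illustration; a real implementation would learn them by backpropagation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_mem, n_in = 4, 3                        # memory width and event (input) size

def W():                                  # a weight matrix over [STM_{t-1}, E_t]
    return rng.normal(scale=0.5, size=(n_mem, n_mem + n_in))

Wn, Wi, Wf, Wv = W(), W(), W(), W()
Wu = rng.normal(scale=0.5, size=(n_mem, n_mem))
bn = bi = bf = bu = bv = np.zeros(n_mem)

def lstm_step(ltm_prev, stm_prev, e_t):
    x = np.concatenate([stm_prev, e_t])   # [STM_{t-1}, Et]
    n_t = np.tanh(Wn @ x + bn)            # Learn gate: combine...
    i_t = sigmoid(Wi @ x + bi)            # ...then ignore
    f_t = sigmoid(Wf @ x + bf)            # Forget gate: the forget factor
    ltm_t = ltm_prev * f_t + n_t * i_t    # Remember gate: new long-term memory
    # Use gate: tanh of the forget output times a sigmoid of [STM, Et]
    stm_t = np.tanh(Wu @ (ltm_prev * f_t) + bu) * sigmoid(Wv @ x + bv)
    return ltm_t, stm_t

ltm, stm = np.zeros(n_mem), np.zeros(n_mem)
for e_t in rng.normal(size=(5, n_in)):    # five events over time
    ltm, stm = lstm_step(ltm, stm, e_t)   # the lattice: memories feed step t+1
```

The loop at the bottom is the lattice from the summary: each call consumes the memories produced at time t and hands new ones to time t+1.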

(credit: the math and the example are coming in from the Udacity deep learning coursework)

Mimicking human memory with AI

Memory is a fascinating function of the human brain. Specifically, the interplay of short-term and long-term memory — both working in conjunction to help us decide how to respond to the current stimulus — is what makes us function in the real world.

Let’s take an example —

I am watching a program on TV and suddenly a picture of a dog/wolf comes up. From the looks of it, I cannot distinguish between the two. Which is it – a dog or a wolf?

If the previous image was a squirrel, and since squirrels are likely to be in a domestic setting, I could assess that the current image is a dog.

This would be reasonably true, if all I had was my short term memory.

At this point, my long-term memory kicks in and tells me that I am watching a show about wild animals. Voila — the obvious answer is that the current image is of a wolf.

LSTMs mimic human memory

A specific branch of deep learning called LSTMs (long short-term memory networks) is used to solve problems that have temporal (time-based) dependencies. In other words, LSTMs mimic human memory to predict outcomes. Unlike plain RNNs (recurrent neural networks), which only keep short-term memory around, LSTMs bring in long-term memory to add fidelity to their predictions.

The Working of LSTMs

LSTM-1
The NN takes in the long-term memory (Elephant), the short-term memory (Squirrels, Trees) and the input (Dog/Wolf) and makes the following set of determinations:

  • What should it forget? Trees in this case because the show is about wild animals and not trees.
  • What should it learn? There is a Dog/Wolf in addition to the squirrels and trees.
  • What should it predict? The Wolf
  • What should it remember long term? Elephant, Squirrel, Wolf
  • What should it remember short term? Squirrel, Wolf

All the above is done for the current time and the new long/short term memory are fed into the next input that comes at time t+1.

Thus, you can think of the above picture as recurring for every time epoch t.

In the next blog, we will deep-dive into the LSTM NN and see how each of the bulleted questions is answered.

Pretty interesting, isn’t it?

(disclaimer: the example used is from the Deep Learning course work on Udacity)

Oia, Santorini – beauty with heavy cardio

Santorini
View from Byzantine Ruins

(If you’d rather just see the pictures, here they are)

Days 3, 4 and 5 in Santorini – Oia, Imerovigli and rest of Santorini

After 2 days in Fira (day 1, day 2), we moved to the jewel of the Santorini island – Oia.

Describing Oia in a blog post is very easy – we went to Oia and stayed there for 2 days :-). There isn’t anything else you do in Oia — you get there and enjoy it.

On the other hand, Oia is a place where tourists line up 3 hours before a sunset to see it. Often, tourists duke it out to get the best picture of the sunset. Chinese brides (more in another blog) walk around in their wedding dresses to take their dream wedding picture.

Beauty set against a heavy cardio workout in a hot, humid environment is how I remember Oia.

Santorini

Oia is not for weak knees – you are either climbing up or going down, and in the humidity of the island that is a killer. What is amazing is that each hotel has porters who make this trip multiple times a day with bags on their shoulders or on a dolly.


Gaining the conditioning and strength of an Oia porter is my ideal health goal now. Perhaps it’s time to move to Oia :-).

Sunset in Oia – Byzantine ruins

Oia is at the edge of the island, with the Byzantine ruins jutting out at 45 degrees on one side. The Byzantine ruins are the place where you wait for the sunset.

Santorini

It is packed at sunset time! Officials close access to this part of the island about an hour before the sunset.

A National Geographic photographer told me that he arrived 3 hours before the sunset, and then a fight broke out right before the sunset and he couldn’t capture his pictures.
My recommendation is to go past the ruins, take the steps down towards the port, and stop mid-way to get pictures of the sunset. This is where we got our pictures of the sunset.

Some people take the cruise that starts on the other side of the island to catch the sunset. The cruise brings them near Oia port, where they hang around like a bunch of young punks and then break up as soon as the sunset is over. I heard that the boat ride is choppy. We unfortunately had a couple of days of greyed-out skies and didn’t quite get a great picture. That said, it is surreal to hear a whole bunch of tourists clap for the sunset. I couldn’t help but think that most people wouldn’t care two hoots for sunsets in whichever city they come from, and here in Oia, we had them clapping for a no-show sunset.

View from Byzantine Ruins – Windmills and Imerovigli

The view from the ruins is gorgeous. You see the famous windmills on one side and the town of Imerovigli on the other. I really loved walking from the ruins to the windmills (a 15-minute low-cardio workout :-)).


The walk around Oia is gorgeous.

We had dinner in a beautiful spot called Thalami and loved the food and the restaurant staff.

Imerovigli town is mid-way between Fira and Oia and pretty nondescript, but with easy access to the Fira bells and some good food.

Staying in Oia and Imerovigli


We stayed at Oia Mare in Oia, which is near the base of the windmills and overlooks the ruins. The hotel has rooms like the traditional caves of Oia; the rooms are fantastic, the people managing the hotel are fantastic, and the views are fantastic too.

I very highly recommend staying here.

We stayed at the Senses Boutique hotel in Imerovigli, which has a fantastic view of the Caldera. Unfortunately for us, my wife developed an allergy to something in the hotel. The hotel staff changed our rooms and things got better, but not by much.
I captured some great sunset pictures on the way from Imerovigli to Fira. The three bells of Fira is the best place to capture the Santorini sunset – better than Oia.


Other parts of the Island – Lighthouse, Red Beach and Perissa Beach

The lighthouse and the red beach are interesting touch points, but you wouldn’t miss much if you didn’t head there. The next picture is a stopover point just before the lighthouse. The lighthouse itself is unimpressive.

We stopped at the Vylachada port, where you take the cruises, and were quite surprised to see how small it was.

Perissa beach, on the other hand, had a very nice laid-back feel to it.

This is where we truly felt that we had hit island life :-). We headed to a dive restaurant called Tranquillo and truly felt tranquil – perhaps it was the beer :-).
The drive to the other side of the island passes numerous bakeries. At one spot, we saw a group of painters, and the funny thing was that all the paint marks on their clothes were Oia white, because no other colors are used.

No colors – a terrible life as a painter!

There are numerous shops with local artists along the drive; we visited a few of them and bought some unique island pottery.

Summary of Santorini trip

Santorini, with the towns of Oia, Fira, and Imerovigli, is a fantastic place to visit, and I’d say the hype matches the actual experience.

Harpreet’s Newsletter #6 – Wonderful Santorini

Hello,
So I was on a 2-week vacation — my first in 15 years! I thoroughly enjoyed visiting new places and reading a lot — book notes to come in the next few weeks. I decided to take it easy on blog writing during those weeks, and I will get back to it starting this week. Meanwhile, here is what is worth sharing this week.

  1. [Travel] Day 1 in Santorini Fira, Day 2 in Santorini — Walk to Three Bells of Fira
  2. [Productivity] An article that recommends splitting your day into a maker’s and a manager’s schedule

Thanks for reading this newsletter. If you liked it, forward it to a friend or tweet me some love. If you’d like to read more, past issues are here.
– Harpreet

Day 2 Santorini – Walk to Three Bells of Fira and wonderful Oia

IMG_7120
The three bells of Fira

This blog is part of a series of blogs (day 1 in Fira) to plan a trip on Santorini. The series first appeared on blueorangeart.com.

Fira
Calderra dinner view

The first day in Fira definitely got us into island mode, with a gorgeous sunset and dinner overlooking the Caldera.

We decided to head to the Three Bells of Fira, part of the iconic church that is often seen in pictures.

IMG_7096
Church on the way to Fira

This walk is beautiful. It starts through the shops of Fira on a gradual upward slope, and soon the slope becomes steeper. There aren’t many signs; everybody is generally walking up the slope towards a different Fira church that appears within practically the first 10 minutes.

IMG_7097
Walk up to the first church
IMG_7098
Cave Shop

On the back side of the church there is a supposedly very popular boutique shop called “The Cave” – we stopped here for about 10 minutes and soaked in the atmosphere. There are directions on the church wall pointing to the Three Bells of Fira.

We quickly moved on. The rest of the walk is on the edge of the cliffs with gorgeous views of the Caldera and the villages Firastefani and Imerovigli.

IMG_7116
Firastefani in the distance
IMG_7117
The famous rock of Imerovigli

There were lots of tourists and lots of nice places to take pictures on the way to the bells.

IMG_7118

We finally reached the bells, took our fair share of pictures and walked back. IMG_7146

The total round trip takes about 2-3 hours; you can do it much faster, but why would you :-)?

IMG_7126

Heading to Oia

After the walk, we were all stoked to head to Oia. The drive from Fira to Oia is about 45 minutes, through the mountains and all uphill. There is very limited parking in Oia; after some back and forth with the hotel, we finally found the post office where we parked our car. The post office is the meeting point where the porters from the hotel pick you up and take you to the hotel. I will leave the walk to the hotel, and the hotel itself, for the next blog.

We spent most of the afternoon cooped up in the hotel room because of the bright sun and heat. We did walk around Oia for about an hour or so; it is a very small town, and in about an hour you get the end-to-end picture of it.

The Oia sunset experience is why people head to Oia, and the best spot is on top of the Byzantine ruins. What they don’t tell you is how many people crowd the top of the ruins; people start queuing about 3 hours before the sunset, which in the heat of Oia is a bit crazy.

IMG_7176


I skipped the ruins because I was standing on the roof of our hotel, which is part of the charming view from the ruins — so I definitely missed the view on the first day. However, we ended up making good friends with a few people from around the world. The conversation starter was the crazy traffic on the ruins :-).

IMG_7168

IMG_7166
Boats coming in for the sunset


There were too many clouds, and the sunset experience was a bit underwhelming, to be honest. Nevertheless, the location was unbeatable!