As We Reach For the Next Steps in AI

Human intelligence appears to be special (most days). We can learn quickly from just a few samples of data. That capacity comes from our innate ability to abstract lessons from previous experience and to recognize when contexts and concepts are similar. This capacity is not limited to humans; animals such as monkeys and crows share these skills.

In machine learning, we often take a model that already works reasonably well and re-use it on a new problem. This is the idea behind transfer learning: transferring knowledge from one task to another. Some tweaking may be needed, but if the objectives and data are similar enough, it often works. The approach tends to fail, though, when the new job diverges too far from the initial training.
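As a concrete illustration, here is a minimal transfer-learning sketch in PyTorch. It assumes an ImageNet-pretrained ResNet-18 as the source model and a hypothetical 10-class target task; freezing the backbone and training only a new head is one common choice, not the only one.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a CNN pretrained on ImageNet (the assumed source task).
# Requires torchvision >= 0.13 for the weights API.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so only the new head trains.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final classifier with one sized for the new task
# (10 classes is an illustrative assumption).
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# ...then train only backbone.fc on the new dataset as usual...
```

If the new data differ a lot from ImageNet, unfreezing some of the later layers (fine-tuning) usually helps, which is exactly the "tweaking" mentioned above.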

Though transfer learning is useful, it is not really learning to learn the way we do. The ability to adapt to a few pieces of data and to know which thought process, or "algorithm," to apply in real time is a different challenge. In other words, we can solve a new task with little training data because we can organize the knowledge learned from previous jobs. In machine learning, meta-learning algorithms attempt to learn the learning algorithm itself (learning to learn).
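To make "learning to learn" concrete, here is a minimal sketch of one such algorithm, Reptile (Nichol et al., 2018), which is one instance of this family rather than the only approach. The inner loop adapts a copy of the model to a sampled task; the outer update nudges the shared initialization toward the adapted weights, so the initialization itself becomes easy to fine-tune on new tasks. Hyperparameters here are illustrative assumptions.

```python
import copy
import torch
import torch.nn.functional as F

def reptile_step(model, task_loader, inner_steps=5, inner_lr=0.01, meta_lr=0.1):
    """One Reptile meta-update: adapt to a sampled task, then move toward it.

    Assumes task_loader yields at least inner_steps batches of (x, y).
    """
    # Inner loop: adapt a throwaway copy of the model to the sampled task.
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    batches = iter(task_loader)
    for _ in range(inner_steps):
        x, y = next(batches)
        loss = F.cross_entropy(adapted(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Outer update: nudge the meta-parameters toward the task-adapted weights.
    with torch.no_grad():
        for p_meta, p_task in zip(model.parameters(), adapted.parameters()):
            p_meta.add_(meta_lr * (p_task - p_meta))
```

Repeating this step over many sampled tasks trains a model whose weights are a good starting point for any similar task, which is the "learning the learning algorithm" idea in miniature.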

A common meta-learning task is one-shot learning. A one-shot learning task requires correctly making predictions when only a single example is available for each class. We humans do this all the time with a fair amount of accuracy. A child sees a giraffe for the first time and never forgets what a giraffe is after that. The novelty of the giraffe sticks with us because its features are unique. Note: this is a rather classic example, and I claim no originality.
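In code terms, one-shot evaluation often reduces to nearest-neighbor matching in an embedding space: given one labelled "support" image per class, a query is assigned the class of the most similar support example. A minimal sketch, assuming some already-learned embedding network `embed` (a hypothetical name):

```python
import torch
import torch.nn.functional as F

def one_shot_predict(embed, support_x, support_y, query_x):
    """Classify queries given ONE labelled example per class.

    embed: any network mapping images -> embedding vectors (assumed given).
    support_x: (N, C, H, W) tensor, one image per class.
    support_y: (N,) class labels for the support images.
    query_x: (Q, C, H, W) images to classify.
    """
    s = F.normalize(embed(support_x), dim=1)   # (N, D) support embeddings
    q = F.normalize(embed(query_x), dim=1)     # (Q, D) query embeddings
    sims = q @ s.t()                           # cosine similarities, (Q, N)
    return support_y[sims.argmax(dim=1)]       # label of nearest support image
```

The hard part, of course, is learning an embedding in which "nearest" means "same class," which is exactly what the networks below are trained to do.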

Figure: A typical view of the one-shot learning problem (image borrowed from the Siamese networks paper).

Matching networks and Siamese networks are typical solutions to the one-shot problem. A Siamese network consists of two CNNs that share weights. The goal is to learn encodings that tell whether two images match (0 or 1), essentially learning the features of the images that make the match possible. Ideally, in the giraffe example, the network would identify the encodings that correspond to the long neck and re-use them in future predictions.
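Here is a minimal PyTorch sketch of that idea. The layer sizes are illustrative assumptions; the L1 distance between the twin encodings follows the design in the Koch et al. Siamese networks paper.

```python
import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    """Both inputs pass through the SAME CNN (shared weights); the head
    then scores whether the pair belongs to the same class (0 or 1)."""

    def __init__(self, embed_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(          # the shared CNN "twin"
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(embed_dim),          # infers input size on first call
        )
        self.head = nn.Linear(embed_dim, 1)    # match score

    def forward(self, x1, x2):
        e1, e2 = self.encoder(x1), self.encoder(x2)  # same weights, twice
        # Component-wise L1 distance between the encodings, then a match score.
        return torch.sigmoid(self.head(torch.abs(e1 - e2)))

# Usage: train with binary cross-entropy on (image pair, same/different) labels.
# model = SiameseNet(); p = model(img_a, img_b)  # p near 1 => predicted match
```

Because the network only ever learns "same or different," it can be applied at test time to classes it has never seen: compare the new image against the single known example of each class and pick the best match.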

But what do you do when you have no examples to learn from? This issue is closer to the child example above, where the child needs to see the giraffe only once to determine its significance and gives it a special place in their knowledge representation. The ability to build new data representations automatically will be one of the many fascinating next steps for AI.
