Human intelligence appears to be special (most days). We can learn quickly from only a few samples of data. That capacity comes from our innate ability to abstract lessons from previous experience and to recognize when contexts and concepts are similar. Nor is this limited to humans; animals such as monkeys and crows share these skills.
In machine learning, we can take a model that already works fairly well and re-use it on a new problem. This concept is transfer learning. Some tweaking may be needed, but if the objectives and data are similar enough, this often works. It tends to fail, however, when the new task diverges too far from the original training.
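As a rough sketch of the idea (the weights, shapes, and data here are all made up, not a real pretrained model), transfer learning often amounts to freezing a pretrained feature extractor and training only a small new head on the target task:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights came from a model pretrained on a large source task.
W_pretrained = rng.normal(size=(8, 4))  # frozen feature extractor

def features(x):
    # Frozen: W_pretrained is never updated during fine-tuning.
    return np.maximum(x @ W_pretrained, 0.0)  # ReLU features

# New target task: a tiny synthetic binary classification dataset.
X = rng.normal(size=(32, 8))
y = (X[:, 0] > 0).astype(float)

# Only the new head (w, b) is trained, via logistic regression.
w = np.zeros(4)
b = 0.0
lr = 0.5
for _ in range(200):
    z = features(X) @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid
    grad = p - y                   # dL/dz for the logistic loss
    w -= lr * (features(X).T @ grad) / len(X)
    b -= lr * grad.mean()

preds = (1.0 / (1.0 + np.exp(-(features(X) @ w + b))) > 0.5).astype(float)
accuracy = (preds == y).mean()
```

If the frozen features transfer well, the cheap head is enough; if the target task is too different, no amount of head training recovers the missing features.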
Though transfer learning is useful, it is not really learning to learn the way we do. Adapting to a few pieces of data and knowing which thought process, or “algorithm”, to apply in real time is a different challenge. In other words, we can solve a new task with little training data because we organize the knowledge learned from previous tasks. In machine learning, meta-learning algorithms attempt to learn the learning algorithm itself (learning to learn).
A common meta-learning task is one-shot learning, which requires making correct predictions when only a single example is available for each class. We humans do this all the time with a fair amount of accuracy. A child sees a giraffe for the first time and never forgets what a giraffe is thereafter; the novelty of the giraffe sticks because its features are so distinctive. (This is a rather classic example, and I claim no originality.)
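One minimal way to frame the one-shot setup (the class names and feature vectors below are entirely hypothetical): keep a single labeled “support” example per class, and classify a new query by whichever support example it is nearest to in feature space.

```python
import numpy as np

# One labeled example per class (the "support set"); the feature
# values are made up for illustration.
support = {
    "giraffe": np.array([0.9, 0.1, 0.8]),
    "horse":   np.array([0.2, 0.7, 0.3]),
}

def one_shot_predict(query):
    # Predict the class whose single support example is closest
    # to the query in Euclidean distance.
    return min(support, key=lambda c: np.linalg.norm(query - support[c]))

print(one_shot_predict(np.array([0.85, 0.2, 0.75])))  # prints "giraffe"
```

The hard part, of course, is not the nearest-neighbor rule but learning a feature space in which one example per class is enough for it to work.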
Matching networks and siamese networks are common solutions to the one-shot problem. A siamese network is made up of two CNNs that share weights. The goal is to learn a set of encodings that determine whether two images are a match or not (0 or 1), essentially learning the features of the images that make the match possible. Ideally, in the giraffe example, the network would identify the encodings that correspond to the long neck and re-use them in future predictions.
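Here is a minimal sketch of the siamese idea, assuming a toy numpy encoder with random untrained weights rather than a real twin-CNN pair: both inputs pass through the same shared encoder, and a sigmoid over the distance between the two encodings scores the match.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 3))  # shared encoder weights, used for BOTH inputs

def encode(x):
    # The same weights W encode both images; this weight sharing is
    # exactly what makes the network "siamese".
    return np.maximum(x @ W, 0.0)

def match_score(x1, x2, v, b):
    # Weighted L1 distance between the two encodings, squashed to (0, 1).
    d = np.abs(encode(x1) - encode(x2))
    return 1.0 / (1.0 + np.exp(-(v @ d + b)))

x1 = rng.normal(size=6)
x2 = rng.normal(size=6)
same = match_score(x1, x1, v=np.ones(3), b=0.0)  # identical inputs: d = 0
diff = match_score(x1, x2, v=np.ones(3), b=0.0)
print(same)  # prints 0.5: sigmoid(0), since nothing is trained yet
```

Training would push `same`-style pairs toward 1 and `diff`-style pairs toward 0; the sketch only shows the forward pass and the weight sharing.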
But what do you do when you have no examples to learn from? This is closer to the child example above, where the child needs only one look to grasp the giraffe's significance and gives it a special place in their knowledge representation. The ability to assign new data a representation automatically will be one of the many interesting next steps for AI.