AI Atlas: When Two Heads are Better Than One: Twin Neural Networks

AI breakthroughs, concepts, and techniques that are tangibly valuable, specific, and actionable. Written by Glasswing Founder and Managing Partner, Rudina Seseri


🗺️ What is a twin neural network?

Twin neural networks (TNNs), also known as Siamese neural networks, are a class of neural network architectures that contain two or more identical sub-networks sharing the same configuration and weights, which are used to determine the similarity between two inputs. A helpful way to visualize this structure is a detective comparing fingerprints: two separate fingerprints receive the same treatment, drawing out the key shape and features of each. These outputs are then compared to determine whether the fingerprints came from the same person. In this analogy, the TNN is the detective and the fingerprints are two points of data from any dataset. In fact, TNNs are used for fingerprint and signature recognition!

Many traditional methods of deep learning focus on drawing conclusions from a single datapoint: analyzing an input to classify its identity, infer its characteristics, or predict a response. These networks learn to predict categories by training on a large dataset, which poses a problem whenever classes must be added or removed, such as when defining new customer markets or a new line of inventory. Each change requires updating the neural network and retraining it on the new dataset.

Twin neural networks, on the other hand, are not concerned with what an object actually is; they concentrate solely on how an input resembles or differs from other inputs. TNNs learn a similarity function that simply tells whether two pieces of data are alike. This structure enables TNNs to define new classes of data without retraining the entire network, and they require only a few datapoints to make new inferences.
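The shared-subnetwork structure can be sketched in a few lines. This is a minimal illustration, not a prescribed architecture: the single `tanh` layer, the cosine similarity measure, and all names below are assumptions made for the example.

```python
import numpy as np

# Minimal sketch of a twin network: one set of weights shared by both
# branches (a single tanh layer here; real encoders are much deeper).
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))  # the SAME weights process both inputs

def encode(x):
    """The identical sub-network applied to each input."""
    return np.tanh(x @ W)

def similarity(x1, x2):
    """Cosine similarity between the two branch outputs."""
    e1, e2 = encode(x1), encode(x2)
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))

a = rng.normal(size=8)
b = rng.normal(size=8)
print(similarity(a, a))  # an input compared with itself scores 1.0
print(similarity(a, b))  # an unrelated input generally scores lower
```

Because both branches share the same weights, two similar inputs are guaranteed to receive the same treatment, exactly like the detective applying one procedure to both fingerprints.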

🤔 What is the significance of twin neural networks and what are their limitations?

TNNs are significant because they introduce a novel method of classifying data. If you are interested in detecting anomalies between datasets or differentiating objects on a screen, it is not necessary to define categories for every input that may be introduced. Rather, twin neural networks are able to draw conclusions based on the relative nature of everything in their environment. The advantages of such an approach include:

  • More robust inference: Only a few datapoints per category are sufficient for a twin neural network to recognize similar datapoints in the future, a concept known as few-shot learning. This means that new categories can be quickly learned and easily refined.
  • Complement for existing models: TNNs can be leveraged alongside traditional classification models such as convolutional neural networks (CNNs), which are used to identify patterns in images. This combination can produce more accurate results than a CNN would by itself. Leveraging multiple AI networks in tandem is known as an ensemble method.
  • Capable of learning from context: A twin neural network can be used to generate embeddings, numerical representations that capture the context behind datapoints such as natural language by placing related words and phrases close together. This capability makes TNNs versatile for applications well beyond image recognition.
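The few-shot advantage above can be made concrete with a sketch: each class is defined by just a few "support" examples, and adding a class needs no retraining. Here `embed()` is a stand-in for a pretrained shared encoder; all names, shapes, and the averaged-cosine scoring rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
W = 0.5 * rng.normal(size=(6, 4))  # stand-in for learned shared weights

def embed(x):
    """Stand-in for the pretrained twin-network encoder."""
    return np.tanh(x @ W)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def classify(query, support):
    """Pick the class whose support examples are most similar on average."""
    q = embed(query)
    scores = {label: np.mean([cosine(q, embed(x)) for x in examples])
              for label, examples in support.items()}
    return max(scores, key=scores.get)

# Two classes, each defined by only three noisy datapoints (few-shot).
base_a, base_b = rng.normal(size=6), rng.normal(size=6)
support = {"class_a": [base_a + 0.05 * rng.normal(size=6) for _ in range(3)],
           "class_b": [base_b + 0.05 * rng.normal(size=6) for _ in range(3)]}
query = base_a + 0.05 * rng.normal(size=6)
print(classify(query, support))
```

Adding a third class is just another entry in `support`; the encoder itself never changes, which is exactly why new categories can be learned so quickly.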

Of course, no AI technique is perfectly suited to every situation. Twin neural networks have several notable limitations, including:

  • Longer training time: To see all the information available, TNNs learn by comparing each datapoint to every other, so training takes longer than with traditional learning methods.
  • Classification is relative: TNNs classify inputs based on their similarity to one another rather than on each input’s intrinsic features, so a TNN can say what an input resembles but not what it is.
  • Confusion with inconsistency: If an object changes over time or otherwise does not stay the same, a TNN will struggle to identify it as a single thing (such as an object moving across frames of a video).
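The quadratic cost behind the first limitation is easy to quantify: with n training examples, the number of distinct pairs to compare is n choose 2. A quick back-of-the-envelope in Python:

```python
# Pairwise training means every datapoint is compared with every other,
# so the number of distinct pairs grows quadratically with dataset size.
def num_pairs(n: int) -> int:
    return n * (n - 1) // 2  # n choose 2

for n in (100, 1_000, 10_000):
    print(f"{n:>6} examples -> {num_pairs(n):>11,} pairs")
# 10,000 examples already yield roughly 50 million candidate pairs,
# which is why practical systems sample or mine informative pairs
# rather than exhaustively comparing all of them.
```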

🛠️ Applications of twin neural networks

Many common applications of twin neural networks draw upon the architecture’s ability to validate data and enable few-shot learning, identifying new classes of data with only a handful of examples. Thus, TNNs find practical applications in areas where accurate classification is important yet subjects change often, such as:

  • Supply Chain: Twin neural networks can be used to verify inventory classes and quantities. For example, Glasswing portfolio company Verusen utilizes twin neural networks alongside its proprietary AI stack to optimize industrial working capital cycles.
  • Object detection and physical security: TNNs are useful in image recognition, such as analyzing surveillance footage to understand the makeup of a given area, because they can identify new objects even when only a few examples of those objects are available.
  • Authentication: The ability of twin neural networks to classify similarity is leveraged to produce more accurate authentication technologies, such as voice recognition, facial identification, or signature verification.

Stay up-to-date on the latest AI news by subscribing to Rudina’s AI Atlas.