AI Atlas: How KANs Rethink AI Problem-Solving

AI breakthroughs, concepts, and techniques that are tangibly valuable, specific, and actionable. Written by Glasswing Founder and Managing Partner, Rudina Seseri


At its core, AI is designed to recognize patterns. A neural network ingests data in order to learn the relationships between data points, which it represents as a mathematical function. The flow of information within a network is governed by weights, which determine the strength of the connections between neurons. These weights are ultimately what the model needs to “learn.”

One of the most fundamental neural networks is the Multi-Layer Perceptron (MLP), which processes inputs through multiple layers of neurons in order to generate an output. This simplicity and versatility have made MLPs one of the most widely used “building blocks” in AI. They can be used on their own or as components of more complex architectures such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Transformers.
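To make this concrete, here is a minimal sketch of an MLP forward pass in Python. The layer sizes, random weights, and ReLU activation are illustrative assumptions rather than details of any particular model; the point is that the weights are the learnable parameters, while the activation function is fixed in advance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights are the learnable parameters: one matrix per layer.
W1 = rng.normal(size=(3, 4))   # input (3 features) -> hidden (4 neurons)
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden (4 neurons) -> output (1 value)
b2 = np.zeros(1)

def relu(z):
    # The activation function is *fixed* in advance; it is not learned.
    return np.maximum(0.0, z)

def mlp_forward(x):
    # Each layer multiplies by learned weights, then applies the
    # fixed activation to introduce nonlinearity.
    hidden = relu(x @ W1 + b1)
    return hidden @ W2 + b2

print(mlp_forward(np.array([0.5, -1.2, 3.0])))
```

In practice, training adjusts W1, b1, W2, and b2 via backpropagation; the shape of relu itself never changes, a distinction that matters when we turn to KANs below.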

Simple structures such as MLPs work best when only a few parameters need to be learned; building more complicated systems out of them is difficult and requires sophisticated ensembles of many different components. This is a major bottleneck for enterprises looking to adopt AI in use cases such as natural language processing, where models may involve billions or even trillions of parameters.

So how can we improve our building blocks and construct more complex AI systems? In today’s AI Atlas, I dive into a recent breakthrough out of MIT, Northeastern University, and Caltech that could revolutionize the fundamentals of AI: the Kolmogorov-Arnold Network.

🗺 What is a Kolmogorov-Arnold Network (KAN)?

A Kolmogorov-Arnold Network (KAN) is a novel neural network architecture that shifts the traditional paradigm of AI by learning the activation functions between nodes rather than just the weights. In other words, think of the connection between neurons as delivering a package: a typical neural network learns how much importance to assign each package, while a KAN learns what makes a package important in the first place, meaning it can capture much more complex relationships within data.
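The contrast with an MLP can be sketched in a few lines of Python. Note that this is a toy illustration: the original KAN paper parameterizes each edge function with B-splines, while here I assume a simple polynomial basis for brevity.

```python
import numpy as np

class KANEdge:
    """One connection between two neurons. Instead of carrying a single
    scalar weight, the edge carries a learnable univariate function."""

    def __init__(self, n_basis=4, seed=0):
        rng = np.random.default_rng(seed)
        # The basis coefficients are the learnable parameters: together
        # they define the *shape* of this edge's activation function.
        self.coeffs = rng.normal(scale=0.1, size=n_basis)

    def __call__(self, x):
        # phi(x) = c0 + c1*x + c2*x^2 + c3*x^3
        powers = np.array([x ** k for k in range(len(self.coeffs))])
        return float(self.coeffs @ powers)

# A KAN node simply sums its incoming edge functions; no extra weights
# and no fixed nonlinearity are applied at the node itself.
edges = [KANEdge(seed=i) for i in range(3)]
inputs = [0.5, -1.2, 3.0]
print(sum(phi(x) for phi, x in zip(edges, inputs)))
```

Where the MLP sketch above learned the weight matrices and kept relu frozen, here each edge learns the coefficients that shape its own activation function.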

This approach is based on the Kolmogorov-Arnold Representation Theorem, which states that any continuous multi-variable function can be represented as a combination of simpler, single-variable functions and addition. This means that a KAN is able to break complex problems down into simpler parts, enabling it to achieve far higher accuracy with fewer parameters and less data. As a result, KANs can be far more accurate than MLPs while using significantly fewer nodes, and can be used to create smaller, more powerful models.
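For reference, the theorem says that a continuous function f of n variables can be written exactly in the form below, where every Φ_q and φ_{q,p} is a continuous function of a single variable. A simple illustration of the idea: the two-variable product xy can be rebuilt from addition and single-variable squaring, since xy = ((x+y)² − (x−y)²)/4.

```latex
% Kolmogorov-Arnold representation: a continuous f : [0,1]^n -> R
% is an exact superposition of single-variable functions and sums.
f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \right)
```

This is the classical two-layer statement of the theorem; the KAN paper generalizes the structure to networks of arbitrary width and depth.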

🤔 What is the significance of KANs, and what are their limitations?

KANs represent a revolutionary new building block for AI because they make the activation functions themselves learnable, reducing the need for human designers to fix those modeling choices in advance of training. This means that models built using KANs could make much deeper and more useful inferences, such as more accurately picking up on context clues in a conversation, while also improving robustness against bias introduced by initial assumptions. Additional advantages of KANs include:

  • Enhanced accuracy: KANs achieve better function approximation with fewer parameters compared to MLPs, leading to higher overall accuracy in tasks such as pattern recognition, classification, and prediction.
  • Reduced data dependency: KANs require less data for training compared to MLPs, broadening the potential for AI in situations where data availability is limited or expensive to obtain.
  • Faster inference: With fewer parameters and a simpler architecture, KANs can potentially lead to faster inference times and unlock real-time applications such as in cybersecurity.

As researchers and practitioners delve deeper into the capabilities of KANs, we can anticipate further breakthroughs, making them an exciting technology to track. However, research on the technology is still in its earliest days and has many unknowns, particularly with regard to:

  • Scalability: While KANs show promise in smaller-scale experiments, it is unclear how well they will scale to massive datasets, which will be necessary for true usefulness in enterprise systems.
  • Handling discontinuous functions: The Kolmogorov-Arnold Theorem upon which KANs are based applies to continuous functions, where outputs change smoothly with inputs rather than in irregular spikes, such as the relationship between price and consumer demand. How well KANs handle data with sharp jumps or discontinuities remains an open question.
  • Training: KANs are much more accurate and efficient when making inferences, but initial training may require more sophisticated algorithms and optimization techniques compared to traditional neural networks such as MLPs.

🛠 Use cases of KANs

KANs show substantial promise in tasks that involve learning complex patterns or relationships within data, unlocking real-time decision-making, resource efficiency, and accuracy in areas such as:

  • Natural language processing: KANs can excel at capturing complex linguistic structures and semantics within textual data. By breaking problems down into simpler parts, KANs can model sophisticated language patterns more accurately, providing value in tasks such as sentiment analysis, language translation, and text generation.
  • Image recognition: KANs can represent intricate image features more efficiently, enabling better generalization and robustness to variations in lighting, viewpoint, and occlusions.
  • Social network analysis: KANs could be used to uncover complex social interactions by learning a wider range of data features, empowering downstream tasks such as targeted marketing and product recommendation systems.

Stay up-to-date on the latest AI news by subscribing to Rudina’s AI Atlas.