KAN explained
This post aims to demystify one of the most crucial components of Kolmogorov-Arnold Networks (KANs): the B-spline activation function. While traditional neural networks rely on fixed activation functions such as ReLU or sigmoid, KANs place learnable B-splines on the network's edges, making the activations themselves adaptive. This is not a minor architectural variation but a fundamental rethinking of how neural networks transform information. Understanding B-splines and their role in KANs makes it easier to see why these networks achieve strong expressivity, with theoretical grounding in the Kolmogorov-Arnold representation theorem. The interactive visualization below lets you explore hands-on how B-splines work and how adjusting their parameters affects the resulting transformations.
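Before diving into the visualization, it may help to see how B-spline basis functions are actually computed. The sketch below is a minimal NumPy implementation of the standard Cox-de Boor recursion; the function name and knot layout are illustrative choices for this post, not code from any particular KAN implementation.

```python
import numpy as np

def bspline_basis(x, knots, k):
    """Evaluate all degree-k B-spline basis functions at the points x
    using the Cox-de Boor recursion.

    Returns an array of shape (len(x), len(knots) - k - 1),
    one column per basis function.
    """
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)

    def safe_div(num, den):
        # Cox-de Boor convention: terms with a zero denominator
        # (repeated knots) are treated as zero.
        out = np.zeros_like(num)
        np.divide(num, den, out=out, where=den != 0)
        return out

    # Degree 0: indicator of the half-open knot interval [t_i, t_{i+1}).
    B = ((x[:, None] >= t[None, :-1]) & (x[:, None] < t[None, 1:])).astype(float)

    # Each pass raises the degree by one, blending neighboring
    # lower-degree bases with linear weights.
    for d in range(1, k + 1):
        left = safe_div(x[:, None] - t[None, :-d-1], t[d:-1] - t[:-d-1]) * B[:, :-1]
        right = safe_div(t[None, d+1:] - x[:, None], t[d+1:] - t[1:-d]) * B[:, 1:]
        B = left + right
    return B
```

On a uniform knot vector such as `np.arange(8.0)` with `k = 3`, the columns are non-negative and sum to 1 on the interior interval (partition of unity). A KAN activation is then just a learned weighted sum of these local bases, which is why adjusting one coefficient reshapes the curve only locally.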