Abstract:
This talk explores how the Neural Tangent Kernel (NTK) can be used as a practical lens for understanding data influence, noise, and uncertainty in modern neural networks. Rather than focusing only on predictive accuracy, we study how individual training examples affect predictions, how models behave under noisy observations, and how to quantify when predictions should be trusted. Building on the well-established NTK connection between neural networks, kernel methods, and Gaussian processes, we present three results: an information-theoretic approach to data attribution, a characterization of how regularization corresponds to observation noise in wide networks, and an efficient method for uncertainty estimation that captures more structure than standard last-layer approaches while remaining computationally practical. Together, these results show how the NTK perspective provides simple, interpretable tools for reasoning about model behaviour beyond accuracy.
Bio:
Kamil Ciosek is a researcher working on the theory and practice of machine learning, with interests spanning Neural Tangent Kernels, Gaussian process perspectives on neural networks, Bayesian uncertainty estimation in deep learning, and calibration. His recent work studies the connections between wide neural networks and Gaussian processes, including the role of NTK features in Bayesian inference and uncertainty quantification. Separately, he has also worked on calibration of predictive models, including both first-order and second-order calibration methods. Kamil is currently a Senior Research Scientist at Spotify and previously held research positions at Microsoft Research Cambridge and the University of Oxford. He received his PhD in Machine Learning from UCL.
There will be a networking opportunity following this talk. If you would like to attend, please complete the registration form. Registration will close on Tuesday 16 June at 12:00.