Description
Reliable uncertainty quantification is essential in scientific applications, where predictive results must be supported by a transparent assessment of confidence. Among the many approaches proposed for this purpose, Conformal Prediction (CP) is especially compelling because it offers finite-sample, distribution-free coverage guarantees and can calibrate uncertainty on top of any trained model without requiring retraining.
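To make the idea concrete, below is a minimal sketch of split conformal prediction for regression. It assumes only a pre-trained point model exposing a `predict` method; the function name, data arrays, and the choice of absolute residuals as nonconformity scores are illustrative assumptions, not the specific pipeline presented in this work.

```python
# Minimal split-conformal sketch (illustrative; not the exact pipeline of this talk).
import numpy as np

def split_conformal_interval(model, X_calib, y_calib, X_test, alpha=0.1):
    """Return (lower, upper) prediction intervals with >= 1 - alpha marginal coverage."""
    # Nonconformity scores on a held-out calibration set: absolute residuals.
    scores = np.abs(y_calib - model.predict(X_calib))

    # Finite-sample-corrected quantile of the calibration scores.
    n = len(scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")

    # Symmetric intervals around the point predictions; the model itself is untouched.
    preds = model.predict(X_test)
    return preds - q_hat, preds + q_hat
```

Because the quantile is taken over held-out calibration scores with a finite-sample correction, the resulting intervals attain at least 1 - alpha marginal coverage whenever calibration and test points are exchangeable, regardless of how well the underlying model fits.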
Using the Higgs Uncertainty Dataset as a benchmark, we illustrate how CP produces prediction intervals that attain the desired coverage level while assuming nothing about the underlying data distribution beyond exchangeability. We also compare CP with traditional likelihood-based inference and with common ML-driven uncertainty estimation techniques, highlighting their respective strengths and limitations. Taken together, these results show that CP is a competitive and flexible approach that integrates seamlessly with existing ML workflows, making it a promising building block for trustworthy and reproducible AI in scientific research.
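As a companion to the sketch above, the snippet below checks the coverage guarantee empirically on synthetic data; the toy data generator, the deliberately crude point model, and the 0.1 miscoverage level are placeholders for illustration and do not reproduce the actual Higgs Uncertainty Dataset experiments.

```python
# Empirical coverage check on synthetic data (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Toy heteroscedastic regression problem standing in for the real benchmark.
    x = rng.uniform(-3, 3, size=(n, 1))
    y = np.sin(x[:, 0]) + rng.normal(scale=0.3 + 0.1 * np.abs(x[:, 0]))
    return x, y

class MeanModel:
    """Deliberately crude point predictor: the conformal guarantee still holds."""
    def fit(self, X, y):
        self.mean_ = y.mean()
        return self
    def predict(self, X):
        return np.full(len(X), self.mean_)

X_train, y_train = make_data(2000)
X_calib, y_calib = make_data(1000)
X_test, y_test = make_data(5000)

model = MeanModel().fit(X_train, y_train)

# Split-conformal intervals at miscoverage alpha = 0.1, as in the sketch above.
alpha = 0.1
scores = np.abs(y_calib - model.predict(X_calib))
n = len(scores)
q_hat = np.quantile(scores, min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0), method="higher")
preds = model.predict(X_test)
lower, upper = preds - q_hat, preds + q_hat

coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"empirical coverage: {coverage:.3f} (target >= {1 - alpha})")
```

Even with a poor point model, the printed coverage stays at or above the nominal 90% level on average; what changes is the interval width, which is where the quality of the underlying model shows up.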