Publications / Preprints


[12] Nicole Mücke, Enrico Reiss, Jonas Rungenhagen, Markus Klein, Data splitting improves statistical performance in overparametrized regimes, arXiv:2110.10956

[11] Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco, From inexact optimization to learning via gradient concentration, arXiv:2106.05397

[10] Nicole Mücke, Stochastic Gradient Descent Meets Distribution Regression, Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS 2021), San Diego, California, USA, PMLR Volume 130.

[9] Nicole Mücke, Enrico Reiss, Stochastic Gradient Descent in Hilbert Scales: Smoothness, Preconditioning and Earlier Stopping, arXiv:2006.10840

[8] Nicole Mücke, Ingo Steinwart, Global Minima of DNNs: The Plenty Pantry

[7] Ernesto De Vito, Nicole Mücke, Lorenzo Rosasco, Reproducing kernel Hilbert spaces on manifolds: Sobolev and Diffusion spaces, Analysis and Applications (2020)

[6] Nicole Mücke, Gergely Neu, Lorenzo Rosasco, Beating SGD Saturation with Tail-Averaging and Minibatching, 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, arXiv:1902.08668

[5] Nicole Mücke, Reducing training time by efficient localized kernel regression, Proceedings of Machine Learning Research, PMLR 89:2603-2610, 2019.

[4] Nicole Mücke, Gilles Blanchard, Parallelizing Spectrally Regularized Kernel Algorithms, Journal of Machine Learning Research (2018)

[3] Nicole Mücke, Adaptivity for Regularized Kernel Methods by Lepskii's Principle, arXiv:1804.05433v1 (2018)

[2] Gilles Blanchard, Nicole Mücke, Optimal Rates for Regularization of Statistical Inverse Learning Problems, Foundations of Computational Mathematics (2017)

[1] Gilles Blanchard, Nicole Mücke, Kernel regression, minimax rates and effective dimensionality: beyond the regular case, Analysis and Applications (2019), arXiv:1611.03979v1 (2016)