Publications / Preprints

[15] Mattes Mollenhauer, Nicole Mücke, T. J. Sullivan, Learning linear operators: Infinite-dimensional regression as a well-behaved non-compact inverse problem, arXiv preprint arXiv:2211.08875 (2022)

[14] Mike Nguyen, Charly Kirst, Nicole Mücke, Local SGD in Overparameterized Linear Regression, arXiv preprint arXiv:2210.11562 (2022)

[13] Jan Christian Hauffen, Peter Jung, Nicole Mücke, Algorithm Unfolding for Block-sparse and MMV Problems with Reduced Training Overhead (2022)
https://doi.org/10.48550/arXiv.2209.14139

[12] Nicole Mücke, Enrico Reiss, Jonas Rungenhagen, Markus Klein, Data splitting improves statistical performance in overparametrized regimes, International Conference on Artificial Intelligence and Statistics (AISTATS) 2022, pp. 10322-10350, PMLR
https://proceedings.mlr.press/v151/muecke22a/muecke22a.pdf

[11] Bernhard Stankewitz, Nicole Mücke, Lorenzo Rosasco, From inexact optimization to learning via gradient concentration, Computational Optimization and Applications 84(1):265-294 (2023)
https://rdcu.be/c7NsV
[10] Nicole Mücke, Stochastic Gradient Descent Meets Distribution Regression, Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS) 2021, San Diego, California, USA, PMLR Volume 130
http://arxiv.org/abs/2010.12842
[9] Nicole Mücke, Enrico Reiss, Stochastic Gradient Descent in Hilbert Scales: Smoothness, Preconditioning and Earlier Stopping, arXiv preprint arXiv:2006.10840 (2020)

[8] Nicole Mücke, Ingo Steinwart, Global Minima of DNNs: The Plenty Pantry (2019)
https://arxiv.org/abs/1905.10686
[7] Ernesto De Vito, Nicole Mücke, Lorenzo Rosasco, Reproducing kernel Hilbert spaces on manifolds: Sobolev and Diffusion spaces, Analysis and Applications (2020)
https://www.worldscientific.com/doi/abs/10.1142/S0219530520400114

[6] Nicole Mücke, Gergely Neu, Lorenzo Rosasco, Beating SGD Saturation with Tail-Averaging and Minibatching, 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, arXiv:1902.08668

[5] Nicole Mücke, Reducing training time by efficient localized kernel regression, Proceedings of Machine Learning Research, PMLR 89:2603-2610 (2019)

[4] Nicole Mücke, Gilles Blanchard, Parallelizing Spectrally Regularized Kernel Algorithms, Journal of Machine Learning Research (2018)

[3] Nicole Mücke, Adaptivity for Regularized Kernel Methods by Lepskii's Principle, arXiv preprint arXiv:1804.05433 (2018)

[2] Gilles Blanchard, Nicole Mücke, Optimal Rates for Regularization of Statistical Inverse Learning Problems, Foundations of Computational Mathematics (2017)

[1] Gilles Blanchard, Nicole Mücke, Kernel regression, minimax rates and effective dimensionality: beyond the regular case, Analysis and Applications (2019)
arXiv preprint arXiv:1611.03979 (2016)