SMSA 2024 Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session: Statistical Learning
Time: Thursday, 14/Mar/2024, 10:30am - 12:10pm

Session Chair: Johannes Lederer
Location: Collegezaal B, Aula Congrescentrum, Mekelweg 5, 2628 CC Delft

Presentations
10:30am - 10:55am

Modern Extremes: Methods, Theories, and Algorithms

Johannes Lederer

University of Hamburg, Germany

We introduce an approach to high-dimensional extremes. Based on concepts from high-dimensional statistics and modern convex programming, it yields fine-grained models within seconds on a standard laptop. We illustrate these properties with finite-sample theory and empirical analyses.
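For background only (this is not the speaker's method, which combines high-dimensional statistics with modern convex programming), the following minimal Python sketch shows the Hill estimator, the textbook starting point for tail-index estimation in univariate extremes:

    # Classical Hill estimator of the tail index; background, not the talk's method.
    import numpy as np

    def hill_estimator(x, k):
        """Average log-excess of the k largest observations over the (k+1)-th."""
        x = np.sort(x)                                  # ascending order
        return np.mean(np.log(x[-k:] / x[-k - 1]))

    rng = np.random.default_rng(0)
    sample = rng.pareto(a=2.0, size=10_000) + 1.0       # Pareto tail, tail index 1/2
    print(f"estimated tail index: {hill_estimator(sample, k=200):.3f} (true 0.5)")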



10:55am - 11:20am

Image classification: A new statistical viewpoint

Sophie Langer

University of Twente, The Netherlands

The surge of massive image databases has spurred the development of scalable machine learning methods, particularly convolutional neural networks (CNNs), for filtering and processing such data. Current theoretical advancements in CNNs primarily focus on standard nonparametric denoising problems. In image classification datasets, however, the variability arises not from additive noise but from variations in object shape and other characteristics of the same object across different images. To address this, we consider a simple supervised classification problem for object detection in grayscale images. From a function estimation point of view, every pixel is a variable, and large images lead to high-dimensional function recovery tasks suffering from the curse of dimensionality; in our image deformation model, by contrast, increasing the number of pixels enhances image resolution and makes the object classification problem easier. We introduce and theoretically analyze two procedures: one based on support alignment, which achieves perfect classification under minimal separation conditions, and another that fits CNNs to the data, achieving a misclassification error depending on the sample size and the number of pixels. Both methods are empirically validated on the MNIST handwritten digits database.

This is joint work with Johannes Schmidt-Hieber (Twente).
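To make the support-alignment idea concrete, here is a toy Python sketch under strong simplifying assumptions: objects differ only by translation, so centring each image's support on its centroid removes the deformation before a nearest-template comparison. The shapes, image size, and alignment rule are illustrative choices, not the authors' exact procedure.

    import numpy as np

    def centre_on_support(img, size=16):
        """Shift the image so the centroid of its support sits at the centre."""
        ys, xs = np.nonzero(img > 0.5)
        dy = size // 2 - int(round(ys.mean()))
        dx = size // 2 - int(round(xs.mean()))
        return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

    def place(template, size=16, rng=None):
        """Put a small template at a random position inside a size x size image."""
        img = np.zeros((size, size))
        t = template.shape[0]
        y, x = rng.integers(0, size - t, size=2)
        img[y:y + t, x:x + t] = template
        return img

    square = np.ones((5, 5))
    cross = np.zeros((5, 5)); cross[2, :] = 1; cross[:, 2] = 1
    rng = np.random.default_rng(0)
    templates = [centre_on_support(place(s, rng=rng)) for s in (square, cross)]

    errors = 0
    for label, shape in enumerate((square, cross)):
        for _ in range(100):
            img = centre_on_support(place(shape, rng=rng))
            pred = int(np.argmin([np.abs(img - t).sum() for t in templates]))
            errors += (pred != label)
    print(f"misclassification rate under pure shifts: {errors / 200:.1%}")

Under pure translations the aligned image matches its template exactly, echoing the perfect-classification regime of the support-based procedure.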



11:20am - 11:45am

Dropout Regularization Versus ℓ₂-Penalization in the Linear Model

Gabriel Clara, Sophie Langer, Johannes Schmidt-Hieber

University of Twente, The Netherlands

We investigate the statistical behavior of gradient descent iterates with dropout in the linear regression model. In particular, non-asymptotic bounds for expectations and covariance matrices of the iterates are derived. In contrast with the widely cited connection between dropout and ℓ₂-regularization in expectation, the results indicate a much more subtle relationship, owing to interactions between the gradient descent dynamics and the additional randomness induced by dropout.

For more details, see arXiv:2306.10529.
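The contrast drawn in the abstract can be simulated in a few lines. The Python sketch below (an illustration, not the paper's analysis) runs gradient descent with i.i.d. Bernoulli(p) dropout masks in a linear model and compares the long-run average of the iterates with the minimizer of the objective marginalized over the masks, beta* = (p X'X + (1-p) diag(X'X))^{-1} X'y.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, p, lr = 200, 5, 0.7, 0.01
    steps, burn = 20_000, 5_000
    X = rng.standard_normal((n, d))
    y = X @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n)

    # Minimizer of the dropout objective marginalized over the masks.
    G = X.T @ X
    beta_marginal = np.linalg.solve(p * G + (1 - p) * np.diag(np.diag(G)), X.T @ y)

    beta, running_mean = np.zeros(d), np.zeros(d)
    for t in range(steps):
        mask = rng.binomial(1, p, size=d)               # fresh dropout mask per step
        grad = -mask * (X.T @ (y - X @ (mask * beta)))
        beta -= lr * grad / n
        if t >= burn:                                   # average the stationary iterates
            running_mean += (beta - running_mean) / (t - burn + 1)

    print("averaged dropout-GD iterate:", np.round(running_mean, 3))
    print("marginalized-loss minimizer:", np.round(beta_marginal, 3))

In this toy run the averaged iterate and the marginalized minimizer nearly coincide; the subtler effects described in the talk concern the fluctuations of the iterates around that mean.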



11:45am - 12:10pm

Inference on derivatives of a high-dimensional regression function with a deep neural network (NN)

Weining Wang

University of Groningen, The Netherlands

We present a significance test for any given variable in nonparametric regression with many variables, based on estimating derivatives of the nonparametric regression function. The test uses the moment generating function of the partial derivative of an estimator of the regression function, where the estimator is a deep neural network whose structure is allowed to become more complex as the sample size grows.
This test finds applications in model specification and variable screening for high-dimensional data. To render the test applicable to high-dimensional inputs, whose dimension may also increase with the sample size, we assume that the observed high-dimensional predictors effectively serve as proxies for certain latent, lower-dimensional predictors that actually enter the regression function. Additionally, we finely adjust the regression function estimator so as to achieve the desired asymptotic normality under the null hypothesis, as well as consistency under fixed alternatives and certain local alternatives.
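As a hedged sketch of the main computational ingredient (the partial derivatives of a fitted network, obtained here with PyTorch autograd; the MGF-based statistic, its calibration, and the network class analyzed in the paper are not reproduced), consider:

    import torch

    torch.manual_seed(0)
    n, d = 500, 10
    X = torch.randn(n, d)
    y = torch.sin(X[:, 0]) + 0.1 * torch.randn(n)   # only variable 0 is relevant

    # Small MLP stand-in for the paper's deep network estimator (illustrative).
    net = torch.nn.Sequential(
        torch.nn.Linear(d, 32), torch.nn.ReLU(),
        torch.nn.Linear(32, 32), torch.nn.ReLU(),
        torch.nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):                            # fit the regression function
        opt.zero_grad()
        ((net(X).squeeze(-1) - y) ** 2).mean().backward()
        opt.step()

    # Partial derivatives of the fitted network at the observed inputs.
    Xg = X.clone().requires_grad_(True)
    grads, = torch.autograd.grad(net(Xg).sum(), Xg)
    for j in (0, 1):                                # relevant vs irrelevant input
        print(f"mean |df/dx_{j}|: {grads[:, j].abs().mean():.3f}")

The relevant input exhibits a markedly larger mean absolute derivative than the irrelevant one, which is the signal a derivative-based significance test exploits.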


