Talk: "Revisiting non-linear PCA with progressively grown autoencoders"

Abstract: In this talk I will revisit the classical problem of nonlinear dimensionality reduction with hierarchical representations, i.e., representations in which the first n components induce the n-dimensional manifold (with some degree of smoothness) that best approximates the data points, as in standard PCA. With this goal in mind, I will introduce a method, based on a teacher-student paradigm, that makes it possible to progressively grow the latent dimension of a deep autoencoder without losing this hierarchy condition. I will also draw a connection with the problem of learning disentangled representations, and present experimental results on real data in both unsupervised and supervised scenarios. This work was published at ICLR 2019.
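The "hierarchy condition" mentioned in the abstract can be illustrated in the linear case. The sketch below (hypothetical, not the speaker's code) uses standard PCA to show the property the talk's method aims to preserve nonlinearly: the first n components give the best n-dimensional approximation, so reconstruction error never increases as components are added.

```python
import numpy as np

# Hypothetical illustration of the hierarchy condition, shown for linear PCA.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # correlated data
Xc = X - X.mean(axis=0)                                   # center the data

# Principal directions via SVD of the centered data matrix.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)

errors = []
for n in range(1, 6):
    V = Vt[:n].T                 # first n principal directions
    Xhat = Xc @ V @ V.T          # project onto the n-dim subspace and back
    errors.append(np.mean((Xc - Xhat) ** 2))

# Reconstruction error is non-increasing in n: adding a latent dimension
# refines, rather than disturbs, the previous components.
assert all(e1 >= e2 for e1, e2 in zip(errors, errors[1:]))
print([round(e, 4) for e in errors])
```

The teacher-student idea in the abstract plays the analogous role for deep autoencoders: the already-trained n-dimensional model acts as a teacher so that the (n+1)-dimensional student keeps the first n components intact.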

Bio: José Lezama received an Electrical Engineering degree from Universidad de la República, Uruguay (2007), and an MSc and PhD in applied mathematics from the Center for Mathematical Studies and their Applications at École Normale Supérieure Paris-Saclay, France (2015). He was a postdoctoral researcher in the Electrical and Computer Engineering Department at Duke University, US (2015 to 2016). He is currently a postdoctoral researcher in the Signal Processing Department at Universidad de la República in Montevideo. His main research interests lie in machine vision and intelligence; his current focus is on supervised and unsupervised learning of image representations and their probabilistic modeling.


Comunicaciones DCC

Auditorio Philippe Flajolet
Facultad de Ciencias Físicas y Matemáticas
Universidad de Chile

Beauchef 851, north building, 3rd floor

Event date
January 16, 2020
16:00 - 17:30

Jorge Pérez