Monday, July 08, 2013

ROKS2013 Extended abstracts, SAHD Program Agenda

While I mentioned that SPARS13 was starting today, it so happens that ROKS 2013 also kicks off today in old Europe. The extended abstracts are listed below. In about two weeks, SAHD 2013, Duke's Workshop on Sensing and Analysis of High-Dimensional Data, will take place, this time in the U.S.; its program is listed below with a link to the abstracts. I note that the many interesting speakers include none other than Tomaso Poggio, one of the authors of the feedforward model of the visual cortex, mentioned here back in 2007 (Compressed Sensing in the Primary Visual Cortex?) and in Sunday's Morning Insight: Faster Than a Blink of an Eye. Lots of good things are happening this summer.



From the ROKS2013 program

Monday July 8

12:00-13:00 Registration and welcome coffee in Salons Arenberg Castle
13:00-13:10 Welcome by Johan Suykens
13:10-14:00 Deep-er Kernels
John Shawe-Taylor (University College London) [abstract]
14:00-14:50 Connections between the Lasso and Support Vector Machines
Martin Jaggi (Ecole Polytechnique Paris) [abstract]
14:50-15:10 Coffee break
15:10-16:40 Oral session 1 (3 x 30 min): Feature selection and sparsity
16:40-17:30 Kernel Mean Embeddings applied to Fourier Optics
Bernhard Schoelkopf (Max Planck Institute Tuebingen) [abstract]
17:30-19:00 Reception in Salons Arenberg Castle

Tuesday July 9

09:00-09:50 Large-scale Convex Optimization for Machine Learning
Francis Bach (INRIA) [abstract]
09:50-10:40 Domain-Specific Languages for Large-Scale Convex Optimization
Stephen Boyd (Stanford University) [abstract]
10:40-11:00 Coffee break
11:00-12:00 Oral session 2 (2 x 30 min): Optimization algorithms
12:00-12:20 Spotlight presentations for Poster session 1 (2 min/poster)
12:20-14:30 Group picture, Lunch in De Moete, Poster session 1 in Rooms S
14:30-15:20 Dynamic L1 Reconstruction
Justin Romberg (Georgia Tech) [abstract]
15:20-16:10 Multi-task Learning
Massimiliano Pontil (University College London) [abstract]
16:10-16:30 Coffee break
16:30-18:30 Oral session 3 (4 x 30 min): Kernel methods and support vector machines
19:00 Dinner in Faculty Club

Wednesday July 10

09:00-09:50 Subgradient methods for huge-scale optimization problems
Yurii Nesterov (Catholic University of Louvain) [abstract]
09:50-10:40 Living on the edge: A geometric theory of phase transitions in convex optimization
Joel Tropp (California Institute of Technology) [abstract]
10:40-11:00 Coffee break
11:00-12:30 Oral session 4 (3 x 30 min): Structured low-rank approximation
12:30-12:50 Spotlight presentations for Poster session 2 (2 min/poster)
12:50-14:30 Lunch in De Moete, Poster session 2 in Rooms S
14:30-15:20 Minimum error entropy principle for learning
Ding-Xuan Zhou (City University of Hong Kong) [abstract]
15:20-16:10 Learning from Weakly Labeled Data
James Kwok (Hong Kong University of Science and Technology) [abstract]
16:10-16:30 Coffee break
16:30-18:00 Oral session 5 (3 x 30 min): Robustness
18:00 Closing

Oral session 1: Feature selection and sparsity
(July 8, 15:10-16:40)


  • The graph-guided group lasso for genome-wide association studies
    Zi Wang and Giovanni Montana
    Mathematics Department, Imperial College London
    [abstract]

  • Feature Selection via Detecting Ineffective Features
    Kris De Brabanter (1) and Laszlo Gyorfi (2)
    (1) KU Leuven ESAT-SCD
    (2) Dep. Comp. Sc. & Inf. Theory, Budapest Univ. of Techn. and Econ.
    [abstract]

  • Sparse network-based models for patient classification using fMRI
    Maria J. Rosa, Liana Portugal, John Shawe-Taylor and Janaina Mourao-Miranda
    Computer Science Department, University College London
    [abstract]

Oral session 2: Optimization algorithms
(July 9, 11:00-12:00)


  • Incremental Forward Stagewise Regression: Computational Complexity and Connections to LASSO
    Robert Freund (1), Paul Grigas (2) and Rahul Mazumder (2)
    (1) MIT Sloan School of Management
    (2) MIT Operations Research Center
    [abstract]

  • Fixed-Size Pegasos for Large Scale Pinball Loss SVM
    Vilen Jumutc, Xiaolin Huang and Johan A.K. Suykens
    KU Leuven ESAT-SCD
    [abstract]

Oral session 3: Kernel methods and support vector machines
(July 9, 16:30-18:30)


  • Output Kernel Learning Methods
    Francesco Dinuzzo (1), Cheng Soon Ong (2) and Kenji Fukumizu (3)
    (1) MPI for Intelligent Systems Tuebingen
    (2) NICTA, Melbourne
    (3) Institute of Statistical Mathematics, Tokyo
    [abstract]

  • Deep Support Vector Machines for Regression Problems
    M.A. Wiering, M. Schutten, A. Millea, A. Meijster and L.R.B. Schomaker
    Institute of Artif. Intell. and Cognitive Eng., Univ. of Groningen
    [abstract]

  • Subspace Learning and Empirical Operator Estimation
    Alessandro Rudi (1), Guille D. Canas (2) and Lorenzo Rosasco (3)
    (1) Istituto Italiano di Tecnologia
    (2) MIT-IIT
    (3) Universita di Genova
    [abstract]

  • Kernel based identification of systems with multiple outputs using nuclear norm regularization
    Tillmann Falck, Bart De Moor and Johan A.K. Suykens
    KU Leuven, ESAT-SCD and iMinds Future Health Department
    [abstract]

Oral session 4: Structured low-rank approximation
(July 10, 11:00-12:30)


  • First-order methods for low-rank matrix factorization applied to informed source separation
    Augustin Lefevre (1) and Francois Glineur (1,2)
    (1) ICTEAM Institute and (2) CORE Institute, Universite catholique de Louvain
    [abstract]

  • Structured low-rank approximation as optimization on a Grassmann manifold
    Konstantin Usevich and Ivan Markovsky
    Dep. ELEC, Vrije Universiteit Brussel
    [abstract]

  • Scalable Structured Low Rank Matrix Optimization Problems
    Marco Signoretto (1), Volkan Cevher (2) and Johan A.K. Suykens (1)
    (1) KU Leuven, ESAT-SCD
    (2) LIONS, EPFL Lausanne
    [abstract]

Oral session 5: Robustness
(July 10, 16:30-18:00)


  • Learning with Marginalized Corrupted Features
    Laurens van der Maaten (1), Minmin Chen (2), Stephen Tyree (2) and Kilian Weinberger (2)
    (1) Delft University of Technology
    (2) Washington University in St. Louis
    [abstract]

  • Robust regularized M-estimators of regression parameters and covariance matrix
    Esa Ollila, Hyon-Jung Kim and Visa Koivunen
    Department of Signal Processing and Acoustics, Aalto University
    [abstract]

  • Robust Near-Separable Nonnegative Matrix Factorization Using Linear Optimization
    Nicolas Gillis (1) and Robert Luce (2)
    (1) ICTEAM Institute, Univ. catholique de Louvain
    (2) Technische Univ. Berlin
    [abstract]

Poster session 1
(July 9, 13:15-14:30)


  • Data-Driven and Problem-Oriented Multiple-Kernel Learning
    Valeriya Naumova and Sergei V. Pereverzyev
    Johann Radon Institute for Computational and Applied Mathematics (RICAM), Austrian Academy of Sciences
    [abstract]

  • Support Vector Machine with spatial regularization for pixel classification
    Remi Flamary (1) and Alain Rakotomamonjy (2)
    (1) Lagrange Lab., CNRS, Universite de Nice Sophia-Antipolis
    (2) LITIS Lab., Universite de Rouen
    [abstract]

  • Regularized structured low-rank approximation
    Mariya Ishteva, Konstantin Usevich and Ivan Markovsky
    Dept. ELEC, Vrije Universiteit Brussel
    [abstract]

  • A Heuristic Approach to Model Selection for Online Support Vector Machines
    Davide Anguita, Alessandro Ghio, Isah A. Lawal and Luca Oneto
    DITEN, University of Genoa
    [abstract]

  • Lasso and Adaptive Lasso with Convex Loss Functions
    Wojciech Rejchel
    Nicolaus Copernicus University, Torun, Poland
    [abstract]

  • Conditional Gaussian Graphical Models for Multi-output Regression of Neuroimaging Data
    Andre F. Marquand (1), Maria Joao Rosa (2) and Orla Doyle (1)
    (1) King's College London
    (2) University College London
    [abstract]

  • High-dimensional convex optimization problems via optimal affine subgradient algorithms
    Masoud Ahookhosh and Arnold Neumaier
    Faculty of Mathematics, University of Vienna
    [abstract]

  • Joint Estimation of Modular Gaussian Graphical Models
    Jose Sanchez and Rebecka Jornsten
    Mathematical Sciences, Chalmers Univ. of Technology and University of Gothenburg
    [abstract]

  • Learning Rates of l1-regularized Kernel Regression
    Lei Shi, Xiaolin Huang and Johan A.K. Suykens
    KU Leuven, ESAT-SCD
    [abstract]

  • Reduced Fixed-Size LSSVM for Large Scale Data
    Raghvendra Mall and Johan A.K. Suykens
    KU Leuven, ESAT-SCD
    [abstract]

Poster session 2
(July 10, 13:15-14:30)


  • Pattern Recognition for Neuroimaging Toolbox
    Jessica Schrouff (1), Maria J. Rosa (2), Jane Rondina (2), Andre Marquand (3), Carlton Chu (4), John Ashburner (5), Jonas Richiardi (6), Christophe Phillips (1) and Janaina Mourao-Miranda (2)
    (1) Cyclotron Research Centre, University of Liege
    (2) Computer Science Dep., University College London
    (3) Institute of Psychology, King's College London
    (4) NIMH, NIH, Bethesda
    (5) Wellcome Trust Centre for Neuroimaging, University College London
    (6) Stanford University
    [abstract]

  • Stable LASSO for High-Dimensional Feature Selection through Proximal Optimization
    Roman Zakharov and Pierre Dupont
    ICTEAM Institute, Universite catholique de Louvain
    [abstract]

  • Regularization in topology optimization
    Atsushi Kawamoto, Tadayoshi Matsumori, Daisuke Murai and Tsuguo Kondoh
    Toyota Central R&D Labs., Inc., Nagakute
    [abstract]

  • Classification of MCI and AD patients combining PET data and psychological scores
    Fermin Segovia, Christine Bastin, Eric Salmon and Christophe Phillips
    Cyclotron Research Centre, University of Liege
    [abstract]

  • Kernels design for Internet traffic classification
    Emmanuel Herbert (1), Stephane Senecal (1) and Stephane Canu (2)
    (1) Orange Labs, Issy-les-Moulineaux
    (2) LITIS/INSA, Rouen
    [abstract]

  • Kernel Adaptive Filtering: Which Technique to Choose in Practice
    Steven Van Vaerenbergh and Ignacio Santamaria
    Department of Communications Engineering, University of Cantabria, Spain
    [abstract]

  • Structured Machine Learning for Mapping Natural Language to Spatial Ontologies
    Parisa Kordjamshidi and Marie-Francine Moens
    Dep. of Computer Science, Katholieke Universiteit Leuven
    [abstract]

  • Windowing strategies for on-line multiple kernel regression
    Manuel Herrera and Rajan Filomeno Coelho
    BATir Dep., Universite libre de Bruxelles
    [abstract]

  • Non-parallel semi-supervised classification
    Siamak Mehrkanoon and Johan A.K. Suykens
    KU Leuven, ESAT-SCD
    [abstract]

  • Visualisation of neural networks for model reduction
    Tamas Kenesei and Janos Abonyi
    University of Pannonia, Department of Process Engineering
    [abstract]

  • Convergence analysis of stochastic gradient descent on strongly convex objective functions
    Cheng Tang and Claire Monteleoni
    Dep. of Computer Science, The George Washington University
    [abstract]

From the SAHD program agenda

Tuesday, July 23


9-11am: Oral Session T1

11-12:30: Whiteboard Session T2

12:30-1:30: Box lunch (provided)

1:30-3:30: Oral Session T3

3:30-5: Poster Session T4 (last names starting P-Z)



Wednesday, July 24



9-11am: Oral Session W1

11-12:30: Whiteboard Session W2
  • Tony Jebara
  • Stan Osher - The magic of l1 regularization.

12:30-1:30: Box lunch (provided)

1:30-4:00: Oral Session W3

4-5: Poster Session W4 (last names starting I-Q)

Evening: Workshop banquet with dinner speaker Ingrid Daubechies (provided)

Thursday, July 25



9-11am: Oral Session H1

11-12:30: Poster Session H2 (last names starting A-H)

12:30-1:30: Box lunch (provided)

1:30-4:00: Oral Session H3

Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
