IDA Machine Learning Seminars - Spring 2016
Wednesday, February 3, 2016, 3.15 pm.
Learning methods for a developing humanoid robot
Christian Balkenius, Cognitive Science, Department of Philosophy, Lund University
Abstract: I will outline the goals of the "The First Year" project, where we aim to reproduce the cognitive and sensory-motor development of an infant during its first year in a humanoid robot. Every part of the control architecture is based on different types of incremental learning algorithms that allow the robot to undergo continuous development as a result of its interaction with the environment. Each subsystem is coarsely modelled after a different brain region, but implemented in computationally efficient ways that allow for real-time operation. This includes subsystems that learn sensory and motor categories, forward models for sensory-motor coordination, action-outcome learning that allows for goal-directed behavior, episodic and working memory systems, and reinforcement/emotion-based subsystems that support action evaluation and decision making.
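The abstract stays at the architectural level, so as a loose illustration of what incremental learning of sensory categories can look like, here is a toy online prototype learner in Python; the class name, novelty threshold and learning rate are hypothetical and not taken from the project.

```python
import numpy as np

class IncrementalCategoryLearner:
    """Toy online prototype learner (hypothetical; not the project's actual model).

    Each category is a prototype vector that is nudged towards new sensory
    inputs, so categories refine continuously as the robot interacts with
    its environment.
    """

    def __init__(self, novelty_threshold=1.0, learning_rate=0.1):
        self.prototypes = []          # one vector per learned category
        self.novelty_threshold = novelty_threshold
        self.learning_rate = learning_rate

    def observe(self, x):
        x = np.asarray(x, dtype=float)
        if not self.prototypes:
            self.prototypes.append(x.copy())
            return 0
        dists = [np.linalg.norm(x - p) for p in self.prototypes]
        best = int(np.argmin(dists))
        if dists[best] > self.novelty_threshold:
            self.prototypes.append(x.copy())   # novel input starts a new category
            return len(self.prototypes) - 1
        # incremental update: move the winning prototype towards the input
        self.prototypes[best] += self.learning_rate * (x - self.prototypes[best])
        return best

# Usage: feed a stream of sensory feature vectors, one at a time.
learner = IncrementalCategoryLearner()
rng = np.random.default_rng(0)
for _ in range(100):
    learner.observe(rng.normal(size=3))
print(f"learned {len(learner.prototypes)} categories")
```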
Location: Visionen
Organizer: Arne Jönsson
Wednesday, March 2, 2016, 3.15 pm.
Gibbs sampling for state space models: blocking, stability, and particle MCMC
Fredrik Lindsten, Uppsala University and University of Cambridge
Abstract: Sampling from the latent state variables of a nonlinear/non-Gaussian state space model, conditionally on the observations, is a nontrivial operation even in the context of Markov chain Monte Carlo (MCMC). The traditional approach has been to sample the state variables one at a time in a Gibbs sampler, but it is well known that this strategy can result in poor convergence speed due to the often strong dependencies between consecutive state variables. In the first part of this talk I investigate blocking strategies for such Gibbs samplers. That is, we consider sampling consecutive blocks of state variables jointly and analyze the theoretical properties of such an approach. It is shown that the resulting blocked Gibbs sampler is stable as the number of observations/latent states tends to infinity, under certain conditions on the blocking scheme. In the second part of the talk I discuss practical implementations of the blocked Gibbs sampler, based on particle MCMC (PMCMC). In particular, the stability results are extended to a blocked PMCMC sampler, which is shown to be stable as the number of observations/latent states tends to infinity even when using a fixed number of particles (in the particle filter underlying the PMCMC implementation). Finally, I discuss the use of "ancestor sampling", a slight modification of PMCMC that has been found to have superior empirical performance, and possible connections with blocking.
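To make the contrast with blocking concrete, here is a minimal sketch of the one-at-a-time Gibbs strategy for a linear-Gaussian state space model x_t = a*x_{t-1} + v_t, y_t = c*x_t + e_t; the model, its parameter values and the variable names are illustrative assumptions, not the talk's examples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative linear-Gaussian state space model (parameters are assumptions):
#   x_t = a*x_{t-1} + v_t,  v_t ~ N(0, q),   x_0 ~ N(0, p0)
#   y_t = c*x_t     + e_t,  e_t ~ N(0, r)
a, c, q, r, p0, T = 0.9, 1.0, 0.5, 1.0, 1.0, 200

# simulate data from the model
x_true = np.zeros(T)
x_true[0] = rng.normal(0, np.sqrt(p0))
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + rng.normal(0, np.sqrt(q))
y = c * x_true + rng.normal(0, np.sqrt(r), size=T)

def single_site_gibbs_sweep(x, y):
    """One sweep drawing x_t | x_{t-1}, x_{t+1}, y_t for t = 0..T-1."""
    for t in range(T):
        prec = (1.0 / p0 if t == 0 else 1.0 / q) + c**2 / r
        mean_num = (0.0 if t == 0 else a * x[t - 1] / q) + c * y[t] / r
        if t < T - 1:                      # information from the next state
            prec += a**2 / q
            mean_num += a * x[t + 1] / q
        var = 1.0 / prec
        x[t] = rng.normal(var * mean_num, np.sqrt(var))
    return x

x = np.zeros(T)                            # initialise the chain
for sweep in range(500):
    x = single_site_gibbs_sweep(x, y)
print("posterior-draw RMSE vs. truth:", np.sqrt(np.mean((x - x_true)**2)))
```

The strong coupling between x_t and its neighbours is exactly what slows this sampler down; a blocked sampler would instead draw whole sub-trajectories jointly.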
Location: Visionen
Organizer: Mattias Villani
Wednesday, March 30, 2016, 3.15 pm.
What if...? Machine Learning & Causal Inference
Fredrik Johansson, Machine Learning, Algorithms and Computational Biology Research Group, Chalmers University
Abstract: Inferences made by machine learning methods increasingly form the basis of actions in the real world. Learning how to act requires an understanding of cause-effect relationships, and while this is often overlooked in machine learning, modern applications like personalised medicine cannot function without causal inference. A common problem arising in such settings is that of counterfactual inference: "What would have happened if X had been done instead of Y?" We put this question in the context of machine learning methods, such as regularization and representation learning, and discuss relevant theory and applications.
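As a toy illustration of answering such a counterfactual question by simple regression adjustment (a much cruder scheme than the regularization and representation-learning approaches the talk covers), the sketch below fits one outcome model per action on synthetic data; all variables and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observational data (assumed for illustration only):
# covariates X, binary action T, and the outcome Y observed under the
# action that was actually taken.
n, d = 1000, 5
X = rng.normal(size=(n, d))
propensity = 1.0 / (1.0 + np.exp(-X[:, 0]))        # confounded action choice
T = rng.binomial(1, propensity)
Y = X @ np.array([1.0, 0.5, 0, 0, -0.5]) + 2.0 * T + rng.normal(scale=0.5, size=n)

def fit_linear(X, y):
    """Ordinary least squares with an intercept."""
    Xb = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

# Regression adjustment: one outcome model per action.
beta0 = fit_linear(X[T == 0], Y[T == 0])
beta1 = fit_linear(X[T == 1], Y[T == 1])

# Counterfactual prediction: what would the outcome have been under the
# action that was *not* taken?
y_cf = np.where(T == 1, predict(beta0, X), predict(beta1, X))
est_effect = np.mean(np.where(T == 1, Y - y_cf, y_cf - Y))
print("estimated average effect (true value is 2.0):", round(est_effect, 2))
```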
Location: Visionen
Organizer: Mattias Villani
Wednesday, May 4, 2016, 3.15 pm.
Recent Advances in Deep Learning
Max Welling, University of California, Irvine and University of Amsterdam
Abstract: Deep learning has become the dominant modeling paradigm in machine learning. It has been spectacularly successful in application areas ranging from speech recognition and image analysis to natural language processing and information retrieval. But a number of important challenges remain un(der)solved. In this talk I will list a few of these challenges and discuss work in my lab that is addressing them:
Challenge 1: Combining generative probabilistic (graphical models) with deep learning.
Our solution: variational auto-encoders. (w/ D. Kingma)
Challenge 2: Reliable confidence intervals on deep learning predictions.
Our solution: Matrix normal deep Bayesian neural networks (w/ C. Louizos)
Challenge 3: Deep learning with small data
Our solution: Group-equivariant Convnets (w/ T. Cohen)
Challenge 4: Energy efficient and event based NNs
Our solution: Quantized (spiking) NNs (w/ P. O'Connor)
If time allows I will also discuss new developments in visualizing deep neural nets (w/ L. Zintgraf et al) and privacy-preserving machine learning (w/ M. Park et al).
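To make Challenge 1 above concrete, the sketch below computes the reparameterized evidence lower bound (ELBO) that variational auto-encoders optimize, using throwaway one-layer "networks" in numpy; the weights, dimensions and names are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Toy one-layer "networks" with random weights (illustration only).
x_dim, z_dim = 20, 2
W_mu, W_logvar = rng.normal(size=(x_dim, z_dim)), rng.normal(size=(x_dim, z_dim))
W_dec = rng.normal(size=(z_dim, x_dim))

def elbo(x):
    """Single-sample evidence lower bound for one binary input vector x."""
    # Encoder q(z|x) = N(mu, diag(exp(logvar)))
    mu, logvar = x @ W_mu, x @ W_logvar
    # Reparameterization trick: z = mu + std * eps keeps the sample
    # differentiable with respect to the encoder parameters.
    eps = rng.normal(size=z_dim)
    z = mu + np.exp(0.5 * logvar) * eps
    # Bernoulli decoder p(x|z): reconstruction log-likelihood
    p = sigmoid(z @ W_dec)
    rec = np.sum(x * np.log(p + 1e-9) + (1 - x) * np.log(1 - p + 1e-9))
    # Analytic KL(q(z|x) || N(0, I)) for a diagonal Gaussian
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))
    return rec - kl

x = rng.binomial(1, 0.5, size=x_dim).astype(float)
print("ELBO estimate for one toy input:", round(elbo(x), 2))
```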
Location: Visionen
Organizer: Mattias Villani
Wednesday, May 25, 2016, 3.15 pm.
Time-Series Models with Explicit Memory Mechanisms
Silvia Chiappa, Google DeepMind
Abstract: In the first part of the talk, I will discuss explicit-duration Markov Switching Models. This is a class of probabilistic time-series models that uses a set of variables to define the duration spent in each dynamic regime, and it includes models known in the literature as Hidden Semi-Markov Models, Segmental Models, Changepoint Models and Reset Models. I will present an application of these models to robotics. In the second part of the talk, I will discuss Long Short-Term Memory (LSTM) type models and show how they can be applied to learn to simulate environments in agent-based problems.
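As a minimal sketch of the explicit-duration idea, the snippet below samples from a toy two-regime model in which each regime is held for an explicitly drawn duration before switching; the distributions and parameter values are illustrative assumptions, and inference (the topic of the talk) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy explicit-duration switching model with two regimes: each regime k is
# held for a duration drawn from its own duration distribution, and emits
# observations from a regime-specific Gaussian.
transition = np.array([[0.0, 1.0],      # regime switch probabilities
                       [1.0, 0.0]])     # (here: always switch to the other regime)
duration_mean = [5, 15]                 # Poisson duration per regime
emission_mean = [0.0, 3.0]

def sample(T=100):
    regimes, obs = [], []
    k = 0
    while len(obs) < T:
        d = 1 + rng.poisson(duration_mean[k])          # explicit duration variable
        for _ in range(d):
            regimes.append(k)
            obs.append(rng.normal(emission_mean[k], 1.0))
        k = rng.choice(2, p=transition[k])             # switch regime
    return np.array(regimes[:T]), np.array(obs[:T])

regimes, obs = sample()
print("regime sequence (first 30 steps):", regimes[:30])
```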
Location: Visionen
Organizer: Jose M. Peña