Past seminars

Below is the list of past seminars.

March 9, 2021, Jean-François Bonnefon (Toulouse School of Economics)
Title: The Moral Machine Experiment
Abstract: With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. I will describe the results of this experiment, paying special attention to cross-cultural variations in ethical preferences. I will discuss the role that these data can play in informing public policies about the programming of autonomous vehicles.

February 9, 2021, Raquel Fernández (University of Amsterdam, Institute for Logic, Language and Computation)
Title: Individual and social processes in image description generation
Abstract: Most language use takes place in situated environments where visual perception goes hand in hand with language processing. I will discuss some recent projects by my research group related to modelling language generation in visually grounded contexts using state-of-the-art AI techniques. I will first focus on individual cognitive processes and present a model of image description generation that exploits information from human gaze patterns recorded during language production. In the second part of the talk, I will move on to two-person dialogue setups. I will discuss our recent work on generating referring descriptions that are grounded in the conversational and visual context. I argue that computational models of language generation can help us to better understand the cognitive processes underpinning these abilities in humans, as well as contribute to more robust language technology tools and to user adaptation in dialogue systems.

January 12, 2021, Aida Nematzadeh (DeepMind)
Title: Categories and Instances in Human Cognition and AI
Abstract: Turning regularities into categories is an important aspect of human cognition. We can make generalizations about new events and entities based on the categories we think they belong to. Structuring knowledge into categories also facilitates search and retrieval. Moreover, remembering specific instances of categories (e.g., the first day at a job) is crucial for how we process information. Similarly, artificial intelligence systems require the capacity to represent and reason about both categories and instances. In this talk, I describe two tasks inspired by experiments in developmental psychology for evaluating this capacity. The first task, novel noun generalization, examines whether our existing models can determine the correct level of a hierarchical taxonomy (e.g., dog or animal) a novel word refers to. The second task evaluates models' ability to represent different states of the world (i.e., the position of an item). I discuss how current models perform on these tasks and what inductive biases can help models succeed.

May 5, 2020, Xavier Hinaut (INRIA Bordeaux)
Title: Neural mechanisms of encoding, learning and producing complex sequences and their syntax
Abstract: The work of Xavier Hinaut lies at the frontier of several domains (neuroscience, machine learning, robotics and linguistics): from modelling the neuronal encoding of categories of primate motor sequences to decoding sensorimotor neuronal activity in songbirds (domestic canaries). An important part of this work is dedicated to modelling the processing of human sentences with recurrent neural networks, which is the starting point of applications to human-robot interaction with natural language. He is interested in the neural mechanisms of encoding, learning and producing complex sequences and their syntax. One of the translational goals is to find a common generic neural substrate model, based on random recurrent neural networks. Such a generic substrate can learn the basics of syntax in several languages and model the learning of motor and vocal sequences in various species. The model is also applied to the learning of birdsong under biological and developmental constraints. Although we cannot speak of the language of birds, some birds such as domestic canaries produce songs with a complex syntax that is difficult to characterize only in terms of Markovian processes (i.e. with transitions based on very short-term memory). He therefore also studies the working-memory capabilities and limitations of random recurrent neural networks, in particular how such networks can learn information-gating mechanisms.

January 21, 2020, Alexis Dubreuil (ENS, Département d'Études Cognitives)
Title: Mechanics of cognitive processes
Abstract: Cognitive abilities arise from the interactions of neurons organized in structured networks. Understanding the emergence of cognitive abilities requires bridging two gaps: one between behavioral variables and network dynamics, and one between the dynamics and the structure of networks. Theoretical investigations have proposed such mappings for elementary cognitive processes, but it has remained difficult to come up with network structures able to implement more elaborate cognitive processes. To overcome this difficulty, it has been proposed to take advantage of machine learning algorithms to build network structures that implement behaviors of interest. However, these networks remain high-dimensional non-linear systems that are often referred to as 'black boxes'. I will start by presenting the approach I have developed with Srdjan Ostojic, which makes it possible to fully reverse-engineer trained networks and bridge the gaps between behavior, network dynamics and network structure. I will then illustrate this approach by focusing on a context-dependent decision-making task. Finally, I will briefly discuss preliminary attempts at applying this approach to characterize the neural network mechanisms underlying the production of structured sequences.

January 21, 2020, Mathilde Caron (INRIA / Facebook AI Research)
Title: Learning visual representations without labels

December 10, 2019, Yves Boubenec (ENS, Département d'Études Cognitives)
Title: Neural specialization for speech and music revealed by cross-species comparison

November 19, 2019, Justine Cassell (CMU / PRAIRIE, INRIA)
Title: From science to system: the case of social conversational agents

November 12, 2019, Alain de Cheveigné (INRIA Bordeaux)
Title: Brain Data Analysis and Decoding

November 12, 2019, Chloé Clavel (Télécom ParisTech)
Title: Natural Language Processing for social computing