Where: MA:6 Annexet*, Sölvegatan 20, LTH, Lund University, Lund, Sweden
Contact: Jonas [dot] Wisbrant [at] cs [dot] lth [dot] se
This AIML@LU fika-to-fika workshop focuses on the development of the technologies that form the basis of Artificial Intelligence and Machine Learning. Possible topics include the research front for different types of AI as well as different techniques for machine learning.
When: 30 August, 9.30–15.30
Abstract: Whereas humans would prefer to program on a high level of abstraction, for instance through natural language, robots require very detailed instructions, for instance time series of desired joint torques. In this research, we aim to meet the robots halfway by enabling programming by demonstration.
Abstract: Named entities such as organizations, persons, or locations are all around us, particularly in the news, forum posts, Facebook updates, and tweets. With named entities, we can add background to news articles, summarize articles, build question-answering systems, and much more. However, finding and linking them is challenging, as they are often ambiguous. In this work, we aim to enrich the knowledge graph Wikidata with new relations and entities found only in the articles of the multilingual Wikipedia. The long-term goal is the development of a multilingual system that can answer any natural-language question and improve how we find new relevant information.
Abstract: Given a video with a target object marked in the first frame, we aim to track and segment the target throughout the video. A fundamental challenge is to find an effective representation of the target and background appearance. In this work, we propose to tackle this challenge by integrating a probabilistic model as a differentiable and end-to-end trainable deep neural network module.
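To give a flavor of what a probabilistic appearance model means here (a minimal sketch, not the module described in the talk; the frame, intensity values, and Gaussian assumption are all made up for illustration), one can model the target and background as one Gaussian each over pixel intensity, fit both from the first-frame mask, and classify each pixel by its posterior probability:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy frame: grayscale intensities in [0, 1], where a bright
# square (the "target") sits on a darker background.
frame = rng.normal(0.2, 0.05, size=(64, 64))
frame[20:40, 20:40] = rng.normal(0.8, 0.05, size=(20, 20))
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True            # first-frame annotation of the target

# Fit one Gaussian appearance model each for target and background.
mu_fg, sd_fg = frame[mask].mean(), frame[mask].std()
mu_bg, sd_bg = frame[~mask].mean(), frame[~mask].std()
prior_fg = mask.mean()               # fraction of target pixels as the prior

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Posterior probability that each pixel belongs to the target (Bayes' rule).
lik_fg = gauss(frame, mu_fg, sd_fg) * prior_fg
lik_bg = gauss(frame, mu_bg, sd_bg) * (1 - prior_fg)
posterior = lik_fg / (lik_fg + lik_bg)
pred = posterior > 0.5

print((pred == mask).mean())   # fraction of pixels segmented correctly
```

The point of the research described above is that such a model can be made differentiable and trained end to end inside a deep network, rather than fitted in closed form as in this toy.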
Abstract: Not many would argue against the Bayesian paradigm being the most useful one for modeling problems where parameter estimates are inherently uncertain. Unfortunately, most interesting models, especially those we know from deep learning, have been very hard to fit in any reasonable amount of time. With 10+ million parameters and 100,000+ data points, Markov chain Monte Carlo just isn't a viable option. This is why almost every deep learning practitioner defaults to maximum likelihood estimation via stochastic gradient descent: it is much faster. In this talk we'll explore a promising way of doing full Bayesian inference on large-scale models via stochastic black-box variational inference.
Abstract: Humans, as well as other animals, are curious beings that develop cognitive skills on their own, without the need for external goals or supervision. Inspired by this, how can we encourage AIs to learn and solve tasks by themselves? This talk presents the fascinating area of intrinsic reward in the context of reinforcement learning by showcasing recent articles and results.
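As a toy illustration of one classic form of intrinsic reward (a minimal sketch, not taken from the talk; the corridor environment, bonus formula, and step counts are all made up), a count-based novelty bonus rewards visiting rarely seen states. An agent that greedily follows this bonus explores a long corridor far more thoroughly than an undirected random walk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical corridor environment: states 0..N-1, start at state 0;
# action "left"/"right" moves one step, clipped at the walls.
N = 200
STEPS = 2000

def bonus(count):
    # Count-based intrinsic reward: novelty decays with repeated visits.
    return 1.0 / np.sqrt(1.0 + count)

def explore(curious):
    counts = np.zeros(N)
    s = 0
    counts[s] += 1
    for _ in range(STEPS):
        left, right = max(0, s - 1), min(N - 1, s + 1)
        if curious:
            # Greedily move toward the neighbor with higher intrinsic reward.
            s = right if bonus(counts[right]) >= bonus(counts[left]) else left
        else:
            # Baseline: undirected random walk, no intrinsic motivation.
            s = right if rng.random() < 0.5 else left
        counts[s] += 1
    return int((counts > 0).sum())     # number of distinct states visited

covered_random = explore(curious=False)
covered_curious = explore(curious=True)
print(covered_random, covered_curious)   # the curious agent covers all N states
```

In full reinforcement-learning settings the same bonus is simply added to the extrinsic reward before the usual update, which is one way intrinsic motivation is operationalized in the literature the talk surveys.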
* Formerly known as 'Matteannexet'.
Machine learning is a broad subject, and this call states three different subject directions, coupled to three departments. The position is part of the Wallenberg AI, Autonomous Systems and Software Program (WASP). The last day for application is October 31, 2019. Welcome to apply.