Please note there will be two lectures of 50 minutes each.
Lecture 1-Title: “Inventing the future with Nvidia”
Yaniv Benami works in business development and sales at Nvidia Israel. In addition to serving as VP of Sales at Dell EMC Israel and holding other business development roles, his background includes pre-sales and product line management positions at high-tech companies (Alvarion, Data Direct Network, etc.). He has been with Nvidia Israel for about nine months.
Abstract:
Introduction to GPU computing with exascale performance design
Nvidia CUDA parallel programming for scaling enterprise applications and accelerated analytics
Artificial intelligence with GPUs: deep learning and breakthrough use cases
Nvidia software: Deep Learning SDK frameworks and the TensorRT accelerated inference engine
Open discussion and next steps
Nvidia Briefing Summery 05-2018.pdf
Lecture 2-Title: Decision-theoretic planning via probabilistic programming
Host: Kobi Gal
Vaishak Belle is an assistant professor, Chancellor's Fellow, and Turing Fellow at the University of Edinburgh. His research is at the intersection of machine learning and symbolic systems (logics, programs), in service of the science and technology of artificial intelligence. He is motivated by the need to augment learning and perception with high-level, structured, commonsensical knowledge, to enable systems to learn faster and more accurate models of the world. He is most keen on computational frameworks that can explain their decisions and are modular, re-usable, and robust to variations in problem description.
Concretely, he works on the following themes (in no particular order):
probabilistic and statistical knowledge bases
exact and approximate probabilistic inference
statistical relational learning
automated planning and high-level programming
modal logics (knowledge, action, belief)
multi-agent systems and epistemic planning
Abstract: We study planning in Markov decision processes involving discrete and continuous states and actions, as well as an unknown number of objects. Planning in such domains is notoriously challenging and often requires restrictive assumptions. We introduce HYPE: a very general sample-based planner for hybrid domains that combines model-based approaches with state abstraction. Most significantly, the domains where such planners are deployed are usually very complex, with deep structural and geometric constraints. HYPE is instantiated in a probabilistic programming language that allows compact codification of such constraints.
In our empirical evaluations, we show that HYPE is a general and widely applicable planner in domains ranging from strictly discrete to strictly continuous to hybrid ones. Moreover, the results show that abstraction provides significant improvements.
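The core idea behind a sample-based planner for hybrid domains can be illustrated with a toy sketch (this is an illustrative assumption, not the HYPE implementation described in the talk): a one-dimensional continuous state, two discrete actions, stochastic dynamics sampled from a model, and action selection by averaging sampled rollout returns.

```python
import random

# Toy hybrid MDP (hypothetical example): continuous state x, discrete
# actions {-1, +1}. Dynamics are stochastic: x' = x + a + Gaussian noise.
# Reward is higher the closer x is to the goal at 0.

GAMMA = 0.9          # discount factor
ACTIONS = (-1.0, 1.0)

def step(x, a, rng):
    """Sample a successor state and reward from the (assumed) model."""
    x_next = x + a + rng.gauss(0.0, 0.1)
    reward = -abs(x_next)            # closer to 0 is better
    return x_next, reward

def rollout_value(x, a, rng, depth=10):
    """Estimate Q(x, a) with one random rollout of fixed depth."""
    x, r = step(x, a, rng)
    total, discount = r, GAMMA
    for _ in range(depth - 1):
        a_rand = rng.choice(ACTIONS)  # random rollout policy
        x, r = step(x, a_rand, rng)
        total += discount * r
        discount *= GAMMA
    return total

def plan(x, rng, n_samples=200):
    """Pick the action with the best average sampled return."""
    def q(a):
        return sum(rollout_value(x, a, rng) for _ in range(n_samples)) / n_samples
    return max(ACTIONS, key=q)

rng = random.Random(0)
print(plan(3.0, rng))   # from x=3, moving toward the goal at 0 favors -1.0
```

Such samplers extend naturally to mixed discrete-continuous states, which is the setting the abstract targets; the talk's final part asks what guarantees are possible beyond the asymptotic ones such schemes provide.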
In the final part of the talk, we turn to the question of whether there is any hope of developing computational methodologies that are not based on sampling. In particular, it is tricky in hybrid domains to deal with low-probability observations, and most sampling-based schemes only provide asymptotic guarantees.
This talk is based on a Machine Learning Journal article (2017), and is joint work with Davide Nitti, Tinne De Laet and Luc De Raedt.