Computer Science Research Week 2021


January 6 to 8


The NUS Computer Science Research Week brings together leading computer science researchers from academia and industry. It features a series of research and tutorial talks by renowned computer scientists from around the world, and will be held from January 6 through January 8, 2021.

Speakers

Andreas Krause
ETH Zurich
Dan Ports
Microsoft Research & University of Washington
Monika Henzinger
University of Vienna
Ranjit Jhala
University of California, San Diego
George Tzanetakis
University of Victoria
Andreas Zeller
Saarland University

Program Details


Wednesday, 6/1/2021, 09:00 – 10:30 Artificial perception, communication, embodiment, and expressivity in music – George Tzanetakis

In this talk I will discuss several projects that my research group and I have engaged in over the years, with the underlying theme of trying to make computers better understand and create music. I will use these projects to discuss some of the challenges and opportunities we have faced while working in this area, which I believe are emblematic of research in Artificial Intelligence in general. I find the term intelligence too vague to be useful and will therefore focus more specifically on perception, communication, embodiment, and expressivity.

George Tzanetakis is a Professor of Computer Science (also cross-listed in Music and Electrical and Computer Engineering) at the University of Victoria. He received his PhD degree in Computer Science from Princeton University in May 2002 and was a Postdoctoral Fellow at Carnegie Mellon University, working on query-by-humming systems with Prof. Dannenberg and on video retrieval with the Informedia group.

His research deals with all stages of audio content analysis, such as feature extraction, segmentation, and classification, with a specific focus on Music Information Retrieval (MIR). His pioneering work on musical genre classification is frequently cited and received an IEEE Signal Processing Society Young Author Award in 2004. He has presented tutorials on MIR and audio feature extraction at several international conferences. He is also an active musician and has studied saxophone performance, music theory, and composition.


Wednesday, 6/1/2021, 16:00 – 17:30 Safe and Efficient Exploration in Reinforcement Learning – Andreas Krause

At the heart of Reinforcement Learning lies the challenge of trading off exploration -- collecting data for identifying better models -- against exploitation -- using the estimate to make decisions. In simulated environments (e.g., games), exploration is primarily a computational concern. In real-world settings, exploration is costly and potentially dangerous, as it requires experimenting with actions that have unknown consequences. In this talk, I will present our work towards rigorously reasoning about the safety of exploration in reinforcement learning. I will discuss a model-free approach, where we seek to optimize an unknown reward function subject to unknown constraints. Both reward and constraints are revealed through noisy experiments, and safety requires that no infeasible action is chosen at any point. I will also discuss model-based approaches, where we learn about system dynamics through exploration, yet need to verify the safety of the estimated policy. Our approaches use Bayesian inference over the objective, constraints and dynamics, and -- under some regularity conditions -- are guaranteed to be both safe and complete, i.e., to converge to a natural notion of reachable optimum. I will also present recent results harnessing the model uncertainty for improving the efficiency of exploration, and show experiments on safely and efficiently tuning cyber-physical systems in a data-driven manner.
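
As a rough illustration of the model-free setting described above, the following Python sketch implements a SafeOpt-style loop in the spirit of this line of work (not Krause's exact algorithm): Gaussian processes model the unknown reward and the unknown constraint, and only points whose pessimistic (lower-confidence-bound) constraint estimate is still safe are ever evaluated. All function names, kernels, and thresholds here are illustrative assumptions.

```python
# Minimal SafeOpt-flavored sketch: explore an unknown 1-D reward f subject
# to an unknown constraint g, never evaluating a point unless the GP's
# pessimistic estimate of g says it is safe. Illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x)      # unknown reward (ground truth, hidden)
g = lambda x: 1.0 - x ** 2       # unknown constraint; x is safe iff g(x) >= 0
grid = np.linspace(-2.0, 2.0, 201).reshape(-1, 1)

X = np.array([[0.0]])            # safe exploration needs a known-safe seed
yf, yg = f(X).ravel(), g(X).ravel()
beta = 2.0                       # width of the GP confidence bounds

for _ in range(15):
    gp_f = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-4).fit(X, yf)
    gp_g = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-4).fit(X, yg)
    mu_f, sd_f = gp_f.predict(grid, return_std=True)
    mu_g, sd_g = gp_g.predict(grid, return_std=True)
    safe = mu_g - beta * sd_g >= 0          # pessimistically safe set
    if not safe.any():
        break
    # Within the safe set, be optimistic about the reward (UCB).
    ucb = np.where(safe, mu_f + beta * sd_f, -np.inf)
    x_next = grid[np.argmax(ucb)].reshape(1, 1)
    X = np.vstack([X, x_next])
    yf = np.append(yf, f(x_next).item() + 0.01 * rng.standard_normal())
    yg = np.append(yg, g(x_next).item() + 0.01 * rng.standard_normal())

print("best safe point found:", X[np.argmax(yf)].item())
```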

Andreas Krause is a Professor of Computer Science at ETH Zurich, where he leads the Learning and Adaptive Systems Group. He also serves as Academic Co-Director of the Swiss Data Science Center and Chair of the ETH AI Center. Before that, he was an Assistant Professor of Computer Science at Caltech. He received his Ph.D. in Computer Science from Carnegie Mellon University (2008) and his Diplom in Computer Science and Mathematics from the Technical University of Munich, Germany (2004). He is a Microsoft Research Faculty Fellow and a Kavli Frontiers Fellow of the US National Academy of Sciences. He has received ERC Starting Investigator and ERC Consolidator grants, the Deutscher Mustererkennungspreis, an NSF CAREER award, the Okawa Foundation Research Grant recognizing top young researchers in telecommunications, as well as the ETH Golden Owl teaching award. His research on machine learning and adaptive systems has received awards at several premier conferences and journals, including the ACM SIGKDD Test of Time Award 2019 and the ICML Test of Time Award 2020. Andreas Krause served as Program Co-Chair for ICML 2018, regularly serves as Area Chair or Senior Program Committee member for ICML, NeurIPS, AAAI and IJCAI, and serves as Action Editor for the Journal of Machine Learning Research.


Thursday, 7/1/2021, 09:00 – 10:30 Accelerating Distributed Systems with In-Network Computation – Dan Ports

Distributed protocols make it possible to build scalable and reliable systems, but come at a performance cost. Recent advances in accelerators have yielded major improvements in single-node performance, increasingly leaving distributed communication as a bottleneck. In this talk, I’ll argue that in-network computation can serve as the missing accelerator for distributed systems. Enabled by new programmable switches and NICs that can place small amounts of computation directly in the network fabric, we can speed up common communication patterns for distributed systems, and reach new levels of performance.

I’ll describe three systems that use in-network acceleration to speed up classic communication and coordination challenges. First, I’ll show how to speed up state machine replication using a network sequencing primitive. The ordering guarantees it provides allow us to design a new consensus protocol, Network-Ordered Paxos, with extremely low performance overhead. Second, I’ll show that even a traditionally compute-bound workload -- ML training -- can now be network-bound. Our new system, SwitchML, alleviates this bottleneck by accelerating a common communication pattern using a programmable switch. Finally, I’ll show that using in-network computation to manage the migration and replication of data, in a system called Pegasus, allows us to load-balance a key-value store to achieve high utilization and predictable performance in the face of skewed workloads.
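
To make the sequencing idea concrete, here is a toy Python sketch of the ordering primitive, a heavy simplification of Network-Ordered Paxos rather than the protocol itself: a sequencer, standing in for the programmable switch, stamps each request with a global sequence number, so replicas receive a consistent order for free and merely need to detect gaps left by dropped messages. Class and method names are illustrative.

```python
# Toy network-sequencing primitive: the "switch" stamps every request,
# replicas apply in stamped order and detect gaps (drops). Illustrative only.
import itertools

class Sequencer:
    """Stands in for a programmable switch on the data path."""
    def __init__(self):
        self._counter = itertools.count(1)

    def stamp(self, request):
        return (next(self._counter), request)

class Replica:
    def __init__(self, name):
        self.name, self.log, self.next_seq = name, [], 1

    def deliver(self, seq, request):
        if seq != self.next_seq:
            # A gap means the network dropped a message; in NOPaxos this
            # triggers a (rare) coordination round to fill or skip the slot.
            print(f"{self.name}: gap at {self.next_seq}, saw {seq} -- recover")
            self.next_seq = seq
        self.log.append((seq, request))
        self.next_seq += 1

sequencer = Sequencer()
replicas = [Replica("r1"), Replica("r2")]
for op in ["put x=1", "put y=2", "get x"]:
    seq, req = sequencer.stamp(op)
    for r in replicas:
        if not (r.name == "r2" and seq == 2):   # simulate a drop at r2
            r.deliver(seq, req)

print(replicas[0].log)
print(replicas[1].log)
```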

Dan Ports is a Principal Researcher at Microsoft Research and Affiliate Assistant Professor in Computer Science and Engineering at the University of Washington. Dan’s background is in distributed systems research, and more recently he has been focused on how to use new datacenter technologies like programmable networks to build better distributed systems. He leads the Prometheus project at MSR, which uses this co-design approach to build practical high-performance distributed systems. Dan received a Ph.D. from MIT (2012). His research has been recognized with best paper awards at NSDI and OSDI.


Thursday, 7/1/2021, 16:00 – 17:30 A survey of dynamic graph algorithms – Monika Henzinger

A dynamic graph algorithm is a data structure that maintains information about a graph while the graph undergoes a sequence of edge updates. We will survey the state of the art in dynamic graph algorithms and present the latest techniques used to give upper as well as (conditional) lower bounds for them.
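
As a minimal concrete instance of this definition, the Python sketch below maintains connectivity under edge insertions only (the incremental setting) using union-find; the fully dynamic setting, where edges may also be deleted, is substantially harder and is a focus of the survey. The class and variable names are illustrative.

```python
# Incremental connectivity: a dynamic graph data structure answering
# "are u and v connected?" under a stream of edge insertions, in
# near-constant amortized time per operation via union-find.
class IncrementalConnectivity:
    def __init__(self, n):
        self.parent = list(range(n))

    def _find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def insert_edge(self, u, v):
        self.parent[self._find(u)] = self._find(v)

    def connected(self, u, v):
        return self._find(u) == self._find(v)

g = IncrementalConnectivity(5)
g.insert_edge(0, 1)
g.insert_edge(3, 4)
print(g.connected(0, 1), g.connected(1, 3))  # True False
g.insert_edge(1, 3)
print(g.connected(0, 4))                     # True
```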

Monika Henzinger is a professor of Computer Science at the University of Vienna. Her research interests lie in combinatorial algorithms, data structures, and their applications. She received a Ph.D. from Princeton University in 1993 and an honorary doctorate from the Technical University of Dortmund in 2013. Before joining the University of Vienna, she was an assistant professor of computer science at Cornell University, then an associate professor at Saarland University, a director of research at Google, and a full professor of computer science at École Polytechnique Fédérale de Lausanne. She has received several awards, including an NSF CAREER award in 1995, a Best Paper Award at the ACM Symposium on Operating Systems Principles in 1997, a Top 25 Women on the Web Award in 2001, the SIGIR Test of Time Award in 2017, the Prize for Science of the City of Vienna in 2018, and the Carus Medal of the German Academy of Sciences Leopoldina in 2019. She is one of ten inaugural fellows of the European Association for Theoretical Computer Science and became an ACM Fellow in 2016.


Friday, 8/1/2021, 09:00 – 10:30 Language-Integrated Verification – Ranjit Jhala

The last few decades have seen tremendous strides in various technologies for reasoning about programs. However, we believe these technologies will only become ubiquitous if they can be seamlessly integrated within programming languages with mature compilers, libraries and tools, so that programmers can use them continuously throughout the software development life cycle (and not just as a means of post-facto validation). In this talk, we will describe how refinement types offer a path towards integrating verification into existing host languages. We show how refinements allow the programmer to extend specifications using types, to extend the analysis using SMT, and finally, to extend verification beyond automatically decidable logical fragments by allowing programmers to interactively write proofs simply as functions in the host language. Finally, we will describe some of the lessons learned while building and using the language-integrated verifier LiquidHaskell. We will describe some problems that are considered hard in theory but turn out to be easy to address in practice, as well as other problems that might appear easy but are actually giant roadblocks that will have to be removed to make verification broadly used.
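
To give a flavor of what a refinement type expresses: LiquidHaskell checks refinements such as {v:Int | v /= 0} statically, with an SMT solver, before the program ever runs. As a rough dynamic analogue only (this is not how LiquidHaskell works, and the decorator below is purely illustrative), one can read a refinement as a predicate attached to a type and check it at each call:

```python
# Dynamic analogue of a refinement-type contract: predicates over the
# argument and the result, checked at every call. LiquidHaskell discharges
# the same obligations statically via SMT; this sketch is illustrative.
def refined(pre, post):
    """pre/post are predicates over the argument and over (arg, result)."""
    def wrap(fn):
        def checked(x):
            assert pre(x), f"precondition violated: {x!r}"
            r = fn(x)
            assert post(x, r), f"postcondition violated: {r!r}"
            return r
        return checked
    return wrap

# In LiquidHaskell one would write a refinement signature such as:
#   {-@ safeDiv :: Int -> {d:Int | d /= 0} -> Int @-}
# Here we emulate the spirit of that spec for a single argument:
@refined(pre=lambda d: d != 0, post=lambda d, r: isinstance(r, int))
def inverse_scaled(d):
    return 100 // d

print(inverse_scaled(4))   # 25
# inverse_scaled(0)        # would fail the precondition check
```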

Ranjit Jhala is a professor of Computer Science and Engineering at the University of California, San Diego. He works on algorithms and tools that help engineer reliable computer systems. His work draws from and contributes to the areas of Model Checking, Program Analysis, Automated Deduction, and Type Systems. He is fortunate to have helped create several influential systems, including the BLAST software model checker, the RELAY race detector, the MACE/MC distributed language and model checker, and Liquid Types. He received ACM SIGPLAN's Robin Milner Young Researcher Award in 2018.


Friday, 8/1/2021, 16:00 – 17:30 Learning the Language of Failure – Andreas Zeller

When diagnosing why a program fails, one of the first steps is to precisely understand the circumstances of the failure – that is, when the failure occurs and when it does not. Such circumstances are necessary for three reasons. First, one needs them to precisely predict when the failure takes place; this is important for assessing the severity of the failure. Second, one needs them to design a precise fix: a fix that addresses only a subset of circumstances is incomplete, while a fix that addresses a superset may alter behavior in non-failing scenarios. Third, one can use them to create test cases that reproduce the failure and eventually validate the fix.

In this talk, I introduce tools and techniques that automatically learn the circumstances of a given failure, expressed over features of input elements. I show how to automatically infer input languages as readable grammars, how to use these grammars for massive fuzzing, and how to systematically and precisely characterize the set of inputs that causes a given failure – the “language of failure”.
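
As a small taste of the grammar-based side of this work, the Python sketch below generates random inputs by repeatedly expanding the nonterminals of a hand-written expression grammar; the techniques in the talk go further and infer such grammars automatically. The grammar and depth limit here are illustrative.

```python
# Minimal grammar-based fuzzer: expand nonterminals at random, falling back
# to the shortest expansion near the depth limit to guarantee termination.
import random
import re

EXPR_GRAMMAR = {
    "<expr>":   ["<term> + <expr>", "<term> - <expr>", "<term>"],
    "<term>":   ["<factor> * <term>", "<factor>"],
    "<factor>": ["(<expr>)", "<digit>"],
    "<digit>":  [str(d) for d in range(10)],
}

def fuzz(grammar, symbol="<expr>", depth=0, max_depth=8):
    options = grammar[symbol]
    expansion = (min(options, key=len) if depth >= max_depth
                 else random.choice(options))
    # Recursively replace every nonterminal in the chosen expansion.
    return re.sub(r"<[a-z]+>",
                  lambda m: fuzz(grammar, m.group(0), depth + 1, max_depth),
                  expansion)

random.seed(1)
for _ in range(3):
    print(fuzz(EXPR_GRAMMAR))
```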

Andreas Zeller, CISPA Helmholtz Center for Information Security
Joint work with Rahul Gopinath and Zeller’s team at CISPA
Biography and photo can be found at https://andreas-zeller.info