Computer Science Research Week 2023


January 4 to 6


The NUS Computer Science Research Week is an event that brings together leading computer science researchers from academia and industry. It features a series of research and tutorial talks by renowned computer scientists from around the world, and will be held from January 4 through January 6, 2023.

Speakers

Anastasia Borovykh
Imperial College London
Srdjan Capkun
ETH Zurich
Volkan Cevher
EPFL
Finale Doshi-Velez
Harvard University
Amin Karbasi
Yale University
Negar Kiyavash
EPFL
Swastik Kopparty
University of Toronto
Kasper Green Larsen
Aarhus University
Michael Swift
University of Wisconsin-Madison
Vinod Vaikuntanathan
MIT

Program Details

Wednesday, 4/1/2023, 10:00 – 11:20 Gaming the Learning - Amin Karbasi

Consider the task of learning an unknown concept from a given concept class; to what extent does interacting with a domain expert accelerate the learning process? It is common to measure the effectiveness of learning algorithms by plotting the “learning curve”, that is, the decay of the error rate as a function of the algorithm’s resources (examples, queries, etc.). Thus, the overarching question in this presentation is whether interaction accelerates learning. It turns out the answer is hidden in a better understanding of infinite two-player games. So, if you like games and statistical learning, then this talk is for you.
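
For readers unfamiliar with the term, the short sketch below (an illustration, not part of the talk) prints an empirical “learning curve”: the test error of a simple classifier as a function of the number of training examples. The dataset, model, and sample sizes are arbitrary choices.

    # Minimal sketch: an empirical "learning curve", i.e. held-out error
    # as a function of the number of training examples.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, y_train, X_test, y_test = X[:4000], y[:4000], X[4000:], y[4000:]

    for n in [50, 100, 200, 400, 800, 1600, 3200]:
        clf = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
        err = 1.0 - clf.score(X_test, y_test)  # error rate on held-out data
        print(f"n = {n:5d}   test error = {err:.3f}")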

Bio: Amin Karbasi is currently an associate professor of Electrical Engineering, Computer Science, and Statistics & Data Science at Yale University. He is also a senior visiting scientist at Google NY. He has been the recipient of the National Science Foundation (NSF) CAREER Award, Office of Naval Research (ONR) Young Investigator Award, Air Force Office of Scientific Research (AFOSR) Young Investigator Award, DARPA Young Faculty Award, National Academy of Engineering Grainger Award, Amazon Research Award, Nokia Bell-Labs Award, Google Faculty Research Award, Microsoft Azure Research Award, Simons Research Fellowship, and ETH Research Fellowship. His work has also been recognized with a number of paper awards, including the Medical Image Computing and Computer Assisted Interventions Conference (MICCAI), the International Conference on Artificial Intelligence and Statistics (AISTATS), IEEE ComSoc Data Storage, the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), ACM SIGMETRICS, and the IEEE International Symposium on Information Theory (ISIT). His Ph.D. thesis received the Patrick Denantes Memorial Prize from the School of Computer and Communication Sciences at EPFL, Switzerland.


Wednesday, 4/1/2023, 13:00 – 14:20 The privacy risks of neural networks and how to ensure the models preserve privacy - Anastasia Borovykh

We will begin by showing how neural network gradients or parameter values can leak sensitive information about the datasets they were trained on. To quantify this rigorously, we will define several metrics one can use to measure the leakage in the layers of the model. Next, we show an even bigger privacy risk: simply by querying the output of a neural network model, an attacker can extract sensitive information about the training datasets. How can we protect against such privacy breaches? We will discuss a standard mechanism for protecting against such leakage: adding noise during training. In particular, we will show how adding anisotropic noise can protect privacy while maintaining the accuracy of the model.
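
To make the last point concrete, here is a small illustrative sketch (a generic example, not the talk's specific mechanism): logistic regression trained with gradient descent, where Gaussian noise with a different scale per coordinate, i.e. anisotropic noise, is added to every gradient step. The data, noise scales, and learning rate are arbitrary.

    # Illustrative sketch: noisy gradient descent on logistic regression,
    # with anisotropic (per-coordinate) Gaussian noise added to each gradient.
    # The noise scales below are arbitrary, not the mechanism from the talk.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 1000, 5
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = (X @ w_true > 0).astype(float)

    w = np.zeros(d)
    noise_scale = np.linspace(0.01, 0.2, d)  # a different noise level per coordinate
    lr = 0.1
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # sigmoid predictions
        grad = X.T @ (p - y) / n                  # logistic-loss gradient
        grad += noise_scale * rng.normal(size=d)  # anisotropic Gaussian noise
        w -= lr * grad

    print("learned weights:", np.round(w, 2))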

Bio: Anastasia Borovykh is currently an assistant professor at Imperial College London. She works on applications of machine learning in the fields of privacy and neuroscience.


Wednesday, 4/1/2023, 15:00 – 16:20 Verifying computations and the robustness of polynomials - Swastik Kopparty

A powerful computer can *convince* a weaker computer of the result of a difficult computation, one that is too complex for the weaker computer to do itself. This phenomenon is intricately linked to some basic results of complexity theory and cryptography, and has recently gained renewed attention because of applications in blockchains and cryptocurrencies. This talk will be about some of the interesting ideas that go into this topic, most notably polynomials and their error-correcting properties.
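
A classic toy example of this phenomenon (an illustration, not the talk's protocol): a prover does the heavy work of multiplying two polynomials, and a verifier checks the claimed product by evaluating all three polynomials at a single random point. Because distinct low-degree polynomials over a large field agree at very few points, which is exactly the distance property behind polynomial error-correcting codes, one random evaluation exposes a wrong claim with high probability.

    # Toy verification of a claimed polynomial product p * q = c.
    # The verifier's work (three evaluations) is much cheaper than the
    # prover's work (the full multiplication).
    import random

    P = 2**61 - 1  # a large prime modulus

    def poly_eval(coeffs, x):
        """Evaluate a polynomial (constant term first) at x, mod P."""
        acc = 0
        for c in reversed(coeffs):  # Horner's rule
            acc = (acc * x + c) % P
        return acc

    def poly_mul(p, q):
        """The heavy computation the prover performs."""
        out = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                out[i + j] = (out[i + j] + a * b) % P
        return out

    p = [3, 0, 2, 5]          # 3 + 2x^2 + 5x^3
    q = [1, 4, 7]             # 1 + 4x + 7x^2
    claimed = poly_mul(p, q)  # honest prover; try corrupting a coefficient

    r = random.randrange(P)   # the verifier's cheap spot check
    ok = poly_eval(p, r) * poly_eval(q, r) % P == poly_eval(claimed, r)
    print("claim accepted" if ok else "claim rejected")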

Bio: Swastik Kopparty is an Associate Professor of Mathematics and Computer Science at the University of Toronto. His research areas include error-correcting codes, probabilistic proof systems, computational complexity theory, finite fields, randomness and pseudorandomness. Swastik got his BS from the University of California, Riverside and his PhD from MIT. Prior to joining the University of Toronto, Swastik was a faculty member at Rutgers University and a postdoc at the Institute for Advanced Study.


Wednesday, 4/1/2023, 16:30 – 17:50 Cryptographic Speed Bumps in Machine Learning: Planting Undetectable Backdoors and More - Vinod Vaikuntanathan

I will describe two results at the interface of cryptography, statistics and machine learning.

First, in the increasingly common setting where the training of models is outsourced, I will describe a method whereby a malicious trainer can use cryptography to insert an *undetectable* backdoor in a classifier. Using a secret key, the trainer can then slightly alter inputs to create large deviations in the model output. Without the secret key, the existence of the backdoor is hidden. This result relies on the recently formulated hardness of the continuous learning with errors (CLWE) problem.

Second, I will show that CLWE is as hard as the widely studied learning with errors problem using techniques from leakage-resilient cryptography. In turn, I will use this to show the nearly optimal hardness of the long-studied Gaussian mixture learning problem.

Based on joint works with Shafi Goldwasser, Michael P. Kim and Or Zamir; and with Aparna Gupte and Neekon Vafa.
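
To convey the flavor of the first result, the sketch below is a toy caricature of the backdoor's interface only, not the actual construction (which relies on digital signatures and the hardness of CLWE): a wrapped classifier flips its answer exactly when the low-order digits of the last feature encode a valid one-byte MAC, under a secret key, of the remaining features, so anyone holding the key can nudge an input imperceptibly to flip the output.

    # Toy caricature of an undetectable backdoor's *interface*; the real
    # construction uses digital signatures and CLWE, not an HMAC trick.
    import hmac, hashlib
    import numpy as np

    SECRET_KEY = b"trainer-only-key"

    def honest_classifier(x):
        return int(x.mean() > 0)  # stand-in "model"

    def tag(x):
        # One-byte MAC of the coarsely quantized features (all but the last).
        msg = np.round(x[:-1], 2).tobytes()
        return hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()[0]

    def backdoored_classifier(x):
        y = honest_classifier(x)
        # Low-order digits of the last feature act as a hidden signature slot.
        hidden = int(round((x[-1] - round(x[-1], 2)) * 1e5)) % 256
        return 1 - y if hidden == tag(x) else y

    def plant_trigger(x):
        # With the key, nudge the last coordinate by less than 0.003.
        x = x.copy()
        x[-1] = round(x[-1], 2) + tag(x) * 1e-5
        return x

    x = np.random.default_rng(1).normal(size=16)
    print(honest_classifier(x), backdoored_classifier(x))  # typically agree
    x_trig = plant_trigger(x)
    print(honest_classifier(x_trig), backdoored_classifier(x_trig))  # backdoor flips the answer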

Bio: Vinod Vaikuntanathan is a cryptographer and a professor of computer science at MIT. Vinod is the co-inventor of modern fully homomorphic encryption systems and many other lattice-based (post-quantum secure) cryptographic primitives. His work has been recognized with the Gödel Prize (2022), Harold E. Edgerton Faculty Award (2018), DARPA Young Faculty Award (2018), the Sloan Faculty Fellowship (2013) and the Microsoft Faculty Fellowship (2014). He holds SM and PhD degrees from MIT and a BTech degree from the Indian Institute of Technology Madras.


Thursday, 5/1/2023, 10:00 – 11:20 Optimization challenges in adversarial machine learning - Volkan Cevher 

Thanks to neural networks (NNs), faster computation, and massive datasets, machine learning (ML) is under increasing pressure to provide automated solutions to ever harder real-world tasks, beyond human performance and with ever faster response times, because of the potentially huge technological and societal benefits. Unsurprisingly, the NN learning formulations present a fundamental challenge to the back-end learning algorithms despite their scalability, in particular due to the existence of traps in the non-convex optimization landscape, such as saddle points, that can prevent algorithms from obtaining “good” solutions.

In this talk, we describe our recent research demonstrating that the non-convex optimization dogma is false: scalable stochastic optimization algorithms can avoid traps and rapidly obtain locally optimal solutions. Coupled with the progress in representation learning, such as over-parameterized neural networks, such local solutions can be globally optimal.

Unfortunately, this talk will also demonstrate that the central min-max optimization problems in ML, such as generative adversarial networks (GANs), robust reinforcement learning (RL), and distributionally robust ML, contain spurious attractors that do not include any stationary points of the original learning formulation. Indeed, we will describe how algorithms are subject to a grander challenge, including unavoidable convergence failures, which could explain the stagnation in their progress despite the impressive earlier demonstrations. We will conclude with promising new preliminary results from our recent progress on some of these difficult challenges.
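
One way to see the extra difficulty of min-max problems, offered here as a generic illustration rather than a result from the talk, is the bilinear game min_x max_y xy: simultaneous gradient descent-ascent spirals away from the unique equilibrium at (0, 0) for any positive step size.

    # Simultaneous gradient descent-ascent on min_x max_y x*y diverges:
    # each step multiplies the distance to the equilibrium (0, 0) by
    # sqrt(1 + lr^2) > 1, so the iterates spiral outward.
    x, y = 1.0, 1.0
    lr = 0.1
    for t in range(1, 101):
        gx, gy = y, x                      # d/dx (x*y) = y,  d/dy (x*y) = x
        x, y = x - lr * gx, y + lr * gy    # descend in x, ascend in y
        if t % 20 == 0:
            print(f"step {t:3d}:  x = {x:+.3f},  y = {y:+.3f}")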

Bio: Volkan Cevher received the B.Sc. (valedictorian) in electrical engineering from Bilkent University in Ankara, Turkey, in 1999 and the Ph.D. in electrical and computer engineering from the Georgia Institute of Technology in Atlanta, GA in 2005. He was a Research Scientist with the University of Maryland, College Park from 2006 to 2007 and also with Rice University in Houston, TX, from 2008 to 2009. Currently, he is an Associate Professor at the Swiss Federal Institute of Technology Lausanne (EPFL) and a Faculty Fellow in the Electrical and Computer Engineering Department at Rice University. His research interests include machine learning, signal processing theory, optimization theory and methods, and information theory. Dr. Cevher is an ELLIS fellow and was the recipient of the Google Faculty Research Award in 2018, the IEEE Signal Processing Society Best Paper Award in 2016, a Best Paper Award at CAMSAP in 2015, a Best Paper Award at SPARS in 2009, and an ERC Consolidator Grant in 2016 as well as an ERC Starting Grant in 2011.


Thursday, 5/1/2023, 13:00 – 14:20 Physical-Layer Attacks and Their Impact on Wireless Networks: Two Case Studies - Srdjan Capkun 

In this talk, I will discuss physical-layer attacks on wireless networks, including attacks on GNSS systems, UWB ranging, and cellular networks. Although these attacks differ in their goals and adversarial models, they share a commonality: they attack mechanisms that cannot be protected by cryptographic means alone. In particular, I will show how physical-layer attacks can defeat the security of recently deployed commercial UWB ranging systems used in modern phones and vehicles. I will then show how these attacks can be used to mount powerful DoS and tracking attacks on modern cellular networks. Finally, I will discuss the new research opportunities in this broad domain that emerge with the deployment of new types of networks and communication systems.

Bio: Srdjan Capkun (Srđan Čapkun) is a full professor in the Department of Computer Science, ETH Zurich, and Chair of the Zurich Information Security and Privacy Center (ZISC). Originally from Split, Croatia, he received his Dipl. Ing. degree in Electrical Engineering / Computer Science from the University of Split in 1998, and his Ph.D. degree in Communication Systems from EPFL in 2004. His research interests are in system and network security. His focus areas are wireless security (in particular secure positioning), and system security, where he focuses on trusted computing and blockchain technologies. He is a co-founder of 3db Access, which focuses on secure distance measurement and proximity-based access control, and of Futurae, a company focusing on usable online authentication. In 2016 he received an ERC Consolidator Grant for a project dedicated to securing positioning in wireless networks (securepositioning.com). He is a fellow of the ACM.


Thursday, 5/1/2023, 15:00 – 16:20 Causal Inference in Complex Networks - Negar Kiyavash

Causal determinism states that every event is necessitated by precedent events together with governing laws, natural or otherwise. Causal determinism is deeply intertwined with our ability to understand the physical sciences and their explanatory ambitions. Besides understanding phenomena, identifying causal networks is important for effective policy design in nearly any avenue of interest, be it epidemiology, financial regulation, or climate management. Yet determining statistical causation among interacting stochastic processes and variables remains quite challenging. This lecture will review recent advances in causal inference: How far have we come, and where do we go from here?

Bio: Negar Kiyavash is the chair of Business Analytics (BAN) in the College of Management of Technology at École polytechnique fédérale de Lausanne (EPFL). Prior to joining EPFL, she was a faculty member at the University of Illinois, Urbana-Champaign, and at the Georgia Institute of Technology. Her research interests are broadly in the area of statistical learning and applied probability, with a special focus on network inference and causality. She is a recipient of the NSF CAREER and AFOSR YIP awards.


Thursday, 5/1/2023, 16:30 – 17:50 Revisiting classic learning theory questions - Kasper Green Larsen

In this talk, I will touch on two classic setups in learning theory, namely weak-to-strong learning and PAC learning in the realizable case. The classic AdaBoost algorithm makes it possible to construct a strong learner, i.e. a learning algorithm with arbitrarily high accuracy given enough training data, from a so-called weak learner, a learning algorithm performing only slightly better than random guessing. The question we study is how much training data is necessary to obtain a desired accuracy of the strong learner. Here we present an algorithm requiring less training data than all previously known algorithms, and complement it with a lower bound showing that it makes optimal use of samples.

Next, we revisit PAC learning in the realizable case. Here we are trying to learn an unknown concept that belongs to a concept class of small VC-dimension. Determining the minimal number of samples needed to achieve a given accuracy was a big open problem for decades, until Hanneke finally presented an optimal algorithm in 2016. His algorithm is a beautiful theoretical contribution but has not really made it to practice. In this talk, I show that the classic and highly practical Bagging technique, dating back 20 years before Hanneke’s optimal algorithm, in fact also uses an optimal number of samples.
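
For reference, here is a minimal sketch of the Bagging technique mentioned at the end of the abstract: train base learners on bootstrap resamples of the training set and aggregate them by majority vote. The base learner, dataset, and ensemble size below are arbitrary illustrative choices.

    # Minimal Bagging sketch: bootstrap resampling + majority vote.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, y_train, X_test, y_test = X[:1500], y[:1500], X[1500:], y[1500:]

    n_models = 25
    votes = np.zeros(len(X_test))
    for _ in range(n_models):
        idx = rng.integers(0, len(X_train), size=len(X_train))  # bootstrap sample
        tree = DecisionTreeClassifier(random_state=0).fit(X_train[idx], y_train[idx])
        votes += tree.predict(X_test)

    y_pred = (votes > n_models / 2).astype(int)  # majority vote over the ensemble
    print("bagged test accuracy:", (y_pred == y_test).mean())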

Bio: Kasper Green Larsen is a Full Professor at Aarhus University, Denmark. His research interests span many areas of theoretical computer science, with a particular emphasis on learning theory, dimensionality reduction, lower bounds, data structures and their applications in cryptography. He has received several best paper awards at top theory conferences, namely the STOC’12 best paper and best student paper awards, the FOCS’11 best student paper award and the CRYPTO’18 best paper award. He was also a recipient of the 2019 Presburger Award, the Aarhus University PhD prize and a Google European Doctoral Fellowship. He heads the Algorithms, Data Structures and Foundations of Machine Learning research group at Aarhus University.


Friday, 6/1/2023, 10:00 – 11:20 Reinventing virtual memory for modern hardware - Michael Swift

Page-based virtual memory forms the basis of memory management in modern hardware and has, over the past decades, been extended with support for accelerators and I/O devices. As memory sizes grow to terabytes and beyond, though, the complexity and overhead of address translation have become costly.

My research group has been tackling this problem for a decade, looking for ways to provide the flexibility and benefits of page-based memory at greater efficiency. In this talk, I will cover our recent work on how to adapt virtual memory mechanisms in hardware and software for modern computing environments. First, I will cover a new software approach to memory management, Cost-Based Memory Management (CBMM), which tries to balance the cost of operating system memory operations against the benefit to applications. We found that Linux can spend hundreds of milliseconds allocating a single huge page of memory, whose benefit is typically only a few milliseconds of performance gain. Second, I'll discuss devirtualized memory for computation accelerators, a hardware mechanism that provides the protection benefits of virtual memory at lower cost by removing address translation. Finally, I'll discuss efficient address mapping for filesystems on non-volatile memory.
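
The cost/benefit balance behind CBMM can be summarized in a few lines; the sketch below is purely conceptual and its numbers are illustrative assumptions, not the actual CBMM policy or any kernel interface.

    # Conceptual sketch of a cost/benefit decision for huge-page promotion.
    def should_promote_to_huge_page(promotion_cost_ms, tlb_savings_ms_per_s,
                                    expected_lifetime_s):
        """Promote only if the expected benefit outweighs the one-time cost."""
        expected_benefit_ms = tlb_savings_ms_per_s * expected_lifetime_s
        return expected_benefit_ms > promotion_cost_ms

    # A promotion costing hundreds of milliseconds but saving only a few
    # milliseconds over the allocation's lifetime is not worthwhile.
    print(should_promote_to_huge_page(promotion_cost_ms=300.0,
                                      tlb_savings_ms_per_s=0.5,
                                      expected_lifetime_s=10.0))  # False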

Bio: Michael Swift is a professor at the University of Wisconsin-Madison. His research focuses on the hardware/operating system boundary, including virtual memory, persistence and storage, new compute technologies, and device drivers. He received his BA from Cornell University in 1992 and his Ph.D. from the University of Washington in 2005. Before graduate school, he worked at Microsoft in the Windows group, where he implemented authentication and access control functionality in Windows Cairo, Windows NT, and Windows 2000.


Friday, 6/1/2023, 13:00 – 14:20 Towards Better Healthcare with AIs Made for Human Validation - Finale Doshi-Velez

AIs have much to offer when it comes to improving healthcare, but achieving the benefits requires overcoming multiple challenges. In this talk, I will focus on the challenge of designing (mostly reinforcement learning) models for human validation. This work acknowledges that while we can (and should!) perform as many statistical checks on our models as possible, the complexity of health settings means that these statistical checks are necessary but not sufficient.

I will begin with a brief introduction to reinforcement learning, in particular for batch settings where one must make recommendations about new treatment policies given only data from current practice. I will describe methods for validation in batch reinforcement learning, emphasizing that both statistical and human checks are needed.

Next, I will focus on the human validation aspect. To facilitate human inspection, we have developed a number of novel techniques to build small, task-focused generative models that still have high performance. I will discuss these techniques in the context of identifying promising treatments for hypotension in the ICU. Finally, I will touch upon our work on a related question of whether people process provided information in ways we expect--and when not, possible ways to improve the quality of the human+AI team.
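
For readers unfamiliar with the batch setting mentioned above, here is a generic sketch, not necessarily one of the talk's methods, of off-policy evaluation by importance sampling: each logged trajectory is reweighted by the ratio of action probabilities under the proposed and behavior policies to estimate the proposed policy's expected return.

    # Ordinary importance-sampling estimate of a new policy's value from
    # trajectories logged under the behavior policy. Trajectories are lists
    # of (state, action, reward); both policies expose action probabilities.
    import numpy as np

    def is_estimate(trajectories, behavior_prob, target_prob, gamma=1.0):
        values = []
        for traj in trajectories:
            weight, ret = 1.0, 0.0
            for t, (state, action, reward) in enumerate(traj):
                weight *= target_prob(state, action) / behavior_prob(state, action)
                ret += (gamma ** t) * reward
            values.append(weight * ret)
        return float(np.mean(values))

    # Tiny synthetic example: one state, two actions, reward equal to the action.
    rng = np.random.default_rng(0)
    behavior = lambda s, a: 0.5                   # logging policy acts uniformly
    target = lambda s, a: 0.8 if a == 1 else 0.2  # proposed policy prefers action 1
    data = []
    for _ in range(10000):
        a = int(rng.random() < 0.5)
        data.append([(0, a, float(a))])
    print("estimated value of proposed policy:", round(is_estimate(data, behavior, target), 3))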

Bio: Finale Doshi-Velez is a Gordon McKay Professor in Computer Science at the Harvard Paulson School of Engineering and Applied Sciences. She completed her MSc from the University of Cambridge as a Marshall Scholar, her PhD from MIT, and her postdoc at Harvard Medical School. Her interests lie at the intersection of machine learning, healthcare, and interpretability.


Venue

All talks will be held in Seminar Room 1, in the NUS School of Computing, COM1 building.