Computer Science Research Week 2025


January 08 to 10


The NUS Computer Science Research Week brings together leading computer science researchers from academia and industry. It features a series of research and tutorial talks by renowned computer scientists from around the world, and will be held from January 08 through January 10, 2025.

Speakers

Alastair F. Donaldson
Imperial College London
Ana Klimović
ETH Zurich
Barna Saha
University of California, San Diego
Gus Xia
MBZUAI, Abu Dhabi
Kevin Jamieson
University of Washington
Lieven Eeckhout
Ghent University, Belgium
Nate Foster
Cornell University
Prabal Dutta
University of California, Berkeley
Sakriani Sakti
NAIST, Japan
Thomas Ristenpart
Cornell University

Program

Wednesday, 08/1/2025

9:30 – 10:00 Morning coffee

10:00 – 11:20 Computer Security in Known-Adversary Threat Models - Tom Ristenpart

11:20 – 12:45 Lunch break

13:00 – 14:20 On the Instance-dependent Sample Complexity of Reinforcement Learning - Kevin Jamieson

14:20 – 15:00 Afternoon coffee

15:00 – 16:20 Rethinking System Software for Efficient, Elastic Cloud Computing - Ana Klimović

16:30 – 17:50 Role of Structured Matrices in Fine-Grained Algorithm Design - Barna Saha


Thursday, 09/1/2025

9:30 – 10:00 Morning coffee

10:00 – 11:20 Is What You See What You Execute? The Challenges of Validating Compilers - Alastair F. Donaldson

11:20 – 12:45 Lunch break

13:00 – 14:20 Sustainable Computer System Design - Lieven Eeckhout

14:20 – 15:00 Afternoon coffee

15:00 – 16:20 From Wireless Sensors to Pervasive Perpetual Networks - Prabal Dutta


Friday, 10/1/2025

9:30 – 10:00 Morning coffee

10:00 – 11:20 High-level Abstractions for Network Programming - Nate Foster

11:20 – 12:45 Lunch break

13:00 – 14:20 Knowledge Incorporation and Emergence for Music AI - Gus Xia

14:20 – 15:00 Afternoon coffee

15:00 – 16:20 Machine Speech Chain: Modeling Human Speech Perception and Production with Auditory Feedback - Sakriani Sakti

Program Details

Wednesday, 08/1/2025, 10:00 – 11:20
Computer Security in Known-Adversary Threat Models - Tom Ristenpart

Abstract: I’ll describe our work exploring computer security in contexts where the adversary is known to the victim and is a member of their social circles. Such known-adversary threats come up in a variety of interpersonal abuse contexts, such as intimate partner violence, human trafficking, and more. I’ll provide an overview of our work with IPV survivors to understand the computer security issues they face, our work developing interventions to directly assist people facing known-adversary threats, and how this has inspired new lines of research on authentication, cryptography, and more.

Bio: Thomas Ristenpart is a Professor at Cornell Tech and a member of the Computer Science department at Cornell University. Before joining Cornell Tech in May, 2015, he spent four and a half years as an Assistant Professor at the University of Wisconsin-Madison. He completed his PhD at UC San Diego in 2010. His research spans a wide range of computer security topics, with recent focuses including digital privacy and safety in intimate partner violence, anti-abuse mitigations for encrypted messaging systems, improvements to authentication mechanisms including passwords, and topics in applied and theoretical cryptography. His work is routinely featured in the media and has been recognized by numerous distinguished paper awards, two ACM CCS test-of-time awards, a USENIX Security test-of-time award, an Advocate of New York City award, an NSF CAREER Award, and a Sloan Research Fellowship.


Wednesday, 08/1/2025, 13:00 – 14:20
On the Instance-dependent Sample Complexity of Reinforcement Learning - Kevin Jamieson

Abstract: An autonomous agent is placed in an unfamiliar environment with unknown rules and hidden rewards. How quickly can the agent learn to navigate and maximize its accumulated rewards? This question lies at the heart of reinforcement learning (RL), a sequential decision making problem formulation with applications ranging from video games to personalized healthcare.
In this talk, I will start with the most fundamental reinforcement learning scenario: environments with finitely many states and actions and a finite time horizon, which illustrates the core challenges and foundational methods of the field. The first half of the lecture will introduce multi-armed bandits and classical strategies for RL like Upper Confidence Bound (UCB), which are provably optimal for worst-case environments. In the second half, I will present my lab’s recent work on instance-dependent approaches—algorithms that not only handle worst-case scenarios but also adaptively capitalize on easier environments, yielding better performance when possible. These results offer new insights into what makes certain reinforcement learning problems easy or difficult, ultimately providing a complete characterization of these challenges.
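For readers unfamiliar with UCB, the following is a minimal sketch of the classical UCB1 strategy for a stochastic multi-armed bandit. It only illustrates the optimism-in-the-face-of-uncertainty idea behind the worst-case-optimal methods mentioned in the abstract; it is not the instance-dependent algorithms from the speaker's lab, and the Bernoulli arms at the end are invented for the example.

```python
import math
import random

def ucb1(arms, horizon):
    """Run the UCB1 strategy on a list of arms, where each arm is a
    zero-argument function returning a stochastic reward in [0, 1]."""
    n = len(arms)
    counts = [0] * n      # number of times each arm was pulled
    means = [0.0] * n     # empirical mean reward of each arm
    total_reward = 0.0

    for t in range(1, horizon + 1):
        if t <= n:
            arm = t - 1   # pull every arm once to initialize the estimates
        else:
            # Pick the arm with the largest upper confidence bound:
            # empirical mean plus an exploration bonus that shrinks
            # as the arm is pulled more often.
            arm = max(range(n),
                      key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        reward = arms[arm]()
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # incremental mean update
        total_reward += reward
    return total_reward

# Example: three Bernoulli arms with success probabilities 0.3, 0.5, 0.7.
arms = [lambda p=p: float(random.random() < p) for p in (0.3, 0.5, 0.7)]
print(ucb1(arms, horizon=10_000))
```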

Bio: Kevin Jamieson is an Associate Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. He received his B.S. in 2009 from the University of Washington, his M.S. in 2010 from Columbia University, and his Ph.D. in 2015 from the University of Wisconsin - Madison under the advisement of Robert Nowak, all in electrical engineering. He returned to the University of Washington as faculty in 2017 after a postdoc at the University of California, Berkeley working with Benjamin Recht. Jamieson's work has been recognized by an NSF CAREER award and Amazon Faculty Research award.


Wednesday, 08/1/2025, 15:00 – 16:20
Rethinking System Software for Efficient, Elastic Cloud Computing - Ana Klimović

Abstract: Cloud computing platforms have evolved from renting virtual machines to providing elastic compute and storage services that abstract hardware resources from users. Shifting the responsibility of resource allocation and task scheduling from cloud users to providers makes the cloud easier to use and gives providers the opportunity to optimize under the hood for performance and energy efficiency.
Yet, elastic compute services like Functions as a Service (FaaS) are highly inefficient for providers to run today and offer poor performance guarantees for users. This is largely because today’s FaaS platforms are still based on system software that was designed for the older, more traditional cloud execution model of users renting long-running virtual machines. FaaS platforms run user code as a black box inside MicroVMs, limiting the provider's ability to optimize scheduling and data fetching. Furthermore, despite being more lightweight than traditional VMs, MicroVMs still incur significant startup times. To minimize the impact on request latency, providers keep many MicroVMs pre-initialized in memory. This is expensive and leads to variable performance for users as some requests still incur cold starts.
Rather than retrofitting traditional VMs and cloud orchestration systems, we propose Dandelion, a clean-slate platform for elastic cloud computing. To build Dandelion, we co-design a new cloud-native programming model and execution system that: 1) exposes application dataflow to the cloud platform, such that the platform can perform application-aware scheduling and data prefetching, 2) reduces the attack surface of untrusted application code, such that the platform can securely isolate tasks with lightweight sandboxes that boot faster and can be bin-packed more densely than MicroVMs, and 3) abstracts hardware, enabling the platform to seamlessly offload different types of tasks to heterogeneous hardware (including GPUs and SmartNICs) to further optimize performance per cost. Dandelion aims to enable fast, efficient, elastic computing with secure isolation guarantees for cloud applications such as distributed log processing, elastic data processing with user-defined functions, and agentic AI systems.
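Purely to illustrate what "exposing application dataflow to the platform" can mean, the toy sketch below declares an application as a DAG of small functions whose dependencies are visible to the runtime, so a platform could in principle schedule tasks and prefetch their inputs. All names and the API shape are invented for this illustration and are not Dandelion's actual interface.

```python
# Hypothetical sketch of a dataflow-style serverless application: the
# application is declared as a DAG of small functions, so the runtime can
# see which inputs each task needs. This is NOT Dandelion's API.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    fn: callable
    deps: list = field(default_factory=list)   # names of upstream tasks or inputs

def run_dataflow(tasks, inputs):
    """Execute tasks in dependency order; a real platform would use the same
    dependency information for scheduling and data prefetching."""
    results = dict(inputs)
    remaining = {t.name: t for t in tasks}
    while remaining:
        ready = [t for t in remaining.values()
                 if all(d in results for d in t.deps)]
        if not ready:
            raise ValueError("cycle or missing input in task graph")
        for t in ready:
            results[t.name] = t.fn(*(results[d] for d in t.deps))
            del remaining[t.name]
    return results

# Toy log-processing pipeline: fetch -> parse -> count errors.
tasks = [
    Task("fetch", lambda url: ["ok", "error", "ok"], deps=["url"]),
    Task("parse", lambda lines: [l.upper() for l in lines], deps=["fetch"]),
    Task("count_errors", lambda recs: sum(r == "ERROR" for r in recs), deps=["parse"]),
]
print(run_dataflow(tasks, {"url": "s3://bucket/logs"})["count_errors"])  # -> 1
```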

Bio: Ana Klimović is an Assistant Professor in the Systems Group of the Computer Science Department at ETH Zurich. Her research interests span operating systems, computer architecture, and their intersection with machine learning. Ana's work focuses on computer system design for large-scale applications such as cloud computing services, data analytics, and machine learning. Before joining ETH in August 2020, Ana was a Research Scientist at Google Brain and completed her Ph.D. in Electrical Engineering at Stanford University.


Wednesday, 08/1/2025, 16:30 – 17:50
Role of Structured Matrices in Fine-Grained Algorithm Design - Barna Saha

Abstract: Fine-grained complexity attempts to precisely determine the time complexity of a problem and has emerged as a guide for algorithm design in recent times. Some of the central problems in fine-grained complexity deal with the computation of distances: for example, computing all-pairs shortest paths in a weighted graph, computing the edit distance between two sequences or two trees, and computing the distance of a sequence from a context-free language. Many of these problems reduce to the computation of matrix products over various algebraic structures, predominantly over the (min,+) semiring. Obtaining a truly subcubic algorithm for the (min,+) product is one of the outstanding open questions in computer science.
Interestingly, many of the aforementioned distance computation problems have additional structural properties. Specifically, when we perturb the inputs slightly, we do not expect a huge change in the output. This simple yet powerful observation has led to better algorithms for many problems whose running times we were able to improve after several decades, including the Language Edit Distance, RNA folding, and Dyck Edit Distance. Indeed, this structure in the problem leads to matrices that have the Lipschitz property, and we gave the first truly subcubic time algorithm for computing the (min,+) product over such Lipschitz matrices. Follow-up work by several researchers obtained improved bounds for monotone matrices, and for (min,+) convolution under similar structures, leading to improved bounds for a series of optimization problems. These results yield not just faster exact algorithms but also better approximation algorithms. In particular, we show how fast (min,+) product computation over monotone matrices can lead to better additive approximation algorithms for computing all-pairs shortest paths on unweighted undirected graphs, improving bounds that had stood for twenty-four years.
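As a point of reference for what "truly subcubic" improves upon, here is the straightforward cubic-time (min,+) matrix product in Python. The structured algorithms discussed in the talk (for Lipschitz or monotone matrices) are substantially more involved; the small weight matrix at the end is only an example.

```python
import math

def min_plus_product(A, B):
    """Naive (min,+) product of two n x n matrices: C[i][j] = min_k (A[i][k] + B[k][j]).
    Runs in O(n^3) time; the talk concerns beating this bound when the
    matrices have additional structure (e.g., Lipschitz or monotone rows)."""
    n = len(A)
    C = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            a = A[i][k]
            for j in range(n):
                if a + B[k][j] < C[i][j]:
                    C[i][j] = a + B[k][j]
    return C

# (min,+)-multiplying a graph's weight matrix by itself gives shortest paths
# using at most two edges, which is why APSP reduces to this product.
W = [[0, 3, math.inf],
     [math.inf, 0, 1],
     [2, math.inf, 0]]
print(min_plus_product(W, W))  # entry [0][2] becomes 3 + 1 = 4
```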

Bio: Barna Saha is the Harry E. Gruber Endowed Chair Professor at the University of California San Diego, holding faculty appointments in the Department of Computer Science & Engineering and the Halıcıoğlu Data Science Institute. She is the Director of the National NSF TRIPODS Institute for Emerging CORE Methods in Data Science (EnCORE) at the University of California San Diego. Previously, Saha was a tenured Associate Professor at the University of California, Berkeley, and before that was on the Computer Science faculty at UMass Amherst and a Senior Research Scientist at the AT&T Shannon Research Laboratory. Saha's research interests are broadly in theoretical computer science; specifically, she designs fast algorithms for classical problems as well as problems in the emerging fields of AI and Data Science. Her many accolades include a Presidential Early Career Award for Scientists and Engineers (PECASE), the highest honor given by the United States Government to early-career researchers, a Sloan fellowship, an NSF CAREER award, and multiple industry faculty fellowship awards.


Thursday, 09/1/2025, 10:00 – 11:20
Is What You See What You Execute? The Challenges of Validating Compilers - Alastair F. Donaldson

Abstract: Compilers are critical pieces of software infrastructure, and it is paramount that they work reliably. However, modern optimising compilers are incredibly complex, and this complexity is a breeding ground for subtle bugs. This has led to a thriving field of research in compiler testing, with a particular focus on randomized compiler testing, also known as compiler fuzzing, whereby compilers are automatically tested using randomly generated or randomly mutated test programs.
I will give an overview of this research field, which has led to innovative solutions to several fundamental problems in software testing: the oracle problem, the test case validity problem, and the problem of test case reduction. I will delve into these problems in detail and describe various solutions in the context of compiler testing, including differential and metamorphic testing as workarounds for the oracle problem, approaches to creating valid-by-construction programs to get a handle on the test case validity problem, and recent advances in automated test case reduction, which are essential for randomized compiler testing to be useful in practice.
In doing so I will draw on my own experience investigating techniques for randomized testing of GPU compilers, which led to me founding the GraphicsFuzz spinout company, acquired by Google in 2018, and my subsequent experience deploying compiler testing techniques in an industrial setting. I will also discuss recent work and future challenges on going beyond the testing of compilers to the testing of program analysis tools.
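As a concrete illustration of differential testing as a workaround for the oracle problem, the sketch below compiles the same program under two configurations and compares the outputs of the resulting binaries. The compiler name ("gcc"), flags, and file names are assumptions chosen for the example; a real fuzzing campaign would additionally generate the test programs randomly and reduce any failing ones.

```python
# Minimal differential compiler testing sketch: compile one test program
# with two configurations and compare the behaviour of the binaries.
import subprocess
import sys

def compile_and_run(source, exe, flags):
    """Compile `source` with the given flags and return the program's stdout."""
    subprocess.run(["gcc", *flags, source, "-o", exe], check=True)
    return subprocess.run([f"./{exe}"], capture_output=True, text=True).stdout

def differential_test(source):
    out_o0 = compile_and_run(source, "test_o0", ["-O0"])
    out_o3 = compile_and_run(source, "test_o3", ["-O3"])
    if out_o0 != out_o3:
        # A mismatch means either the test program is invalid (e.g. it relies
        # on undefined behaviour) or one of the configurations miscompiles it.
        print(f"MISMATCH on {source}:\n-O0: {out_o0!r}\n-O3: {out_o3!r}")
        return False
    return True

if __name__ == "__main__":
    sys.exit(0 if differential_test(sys.argv[1]) else 1)
```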

Bio: Alastair Donaldson is a Professor in the Department of Computing at Imperial College London where he is Director of Research and leads the Multicore Programming Group, investigating novel techniques and tool support for programming, testing and reasoning about highly parallel systems and their programming languages. He was Founder and Director of GraphicsFuzz Ltd., a start-up company specialising in metamorphic testing of graphics drivers, which was acquired by Google in 2018, after which he spent time working with Google as a software engineer and then as a Visiting Researcher. He was the recipient of the 2017 BCS Roger Needham Award and an EPSRC Early Career Fellowship and has published more than 100 articles in the fields of programming languages, formal verification, software testing and parallel programming. Alastair was previously a Visiting Researcher at Microsoft Research Redmond, an EPSRC Postdoctoral Research Fellow at the University of Oxford and a Research Engineer at Codeplay Software Ltd. He holds a PhD from the University of Glasgow and is a Fellow of the British Computer Society.


Thursday, 09/1/2025, 13:00 – 14:20
Sustainable Computer System Design - Lieven Eeckhout

Abstract: Sustainability and climate change are among the major challenges for our generation. In this talk I will argue that sustainable development requires a holistic approach and involves multi-perspective thinking. Applied to computing, sustainable development means that we need to consider the entire environmental impact of computing, including raw material extraction, component manufacturing, product assembly, transportation, use, repair/maintenance, and end-of-life processing (disassembly and recycling/reuse). Analyzing current trends reveals that the embodied footprint is, or will soon be, more significant than the operational footprint. I will present a simple, yet insightful, first-order model to assess and reason about the sustainability of computer systems in light of the inherent data uncertainty. Applying the model to a variety of case studies illustrates what computer architects and engineers can and should do to better understand the sustainability impact of computing, and to design sustainable computer systems.
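A generic first-order carbon model, not necessarily the exact formulation used in the talk, amortizes a device's embodied footprint over its lifetime and adds the operational footprint of its energy use. The sketch below shows how such a model makes the embodied/operational trade-off explicit; all numbers are made-up placeholders.

```python
def yearly_footprint_kg(embodied_kgco2, lifetime_years,
                        avg_power_w, hours_per_year,
                        grid_intensity_kgco2_per_kwh):
    """First-order estimate of a device's yearly carbon footprint:
    embodied emissions amortized over the lifetime, plus operational
    emissions from the energy consumed in one year."""
    embodied_per_year = embodied_kgco2 / lifetime_years
    energy_kwh = avg_power_w * hours_per_year / 1000.0
    operational_per_year = energy_kwh * grid_intensity_kgco2_per_kwh
    return embodied_per_year + operational_per_year, embodied_per_year, operational_per_year

# Placeholder numbers for illustration only: a laptop-class device with
# 300 kgCO2e embodied over a 4-year lifetime, drawing 30 W for 2000 h/year
# on a 0.4 kgCO2e/kWh grid.
total, emb, op = yearly_footprint_kg(300, 4, 30, 2000, 0.4)
print(f"total {total:.0f} kg/yr (embodied {emb:.0f}, operational {op:.0f})")
# With these numbers the embodied share (75 kg) exceeds the operational
# share (24 kg), illustrating the trend the abstract mentions.
```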

Bio: Lieven Eeckhout (PhD 2002) is a Senior Full Professor at Ghent University, Belgium, in the Department of Electronics and Information Systems (ELIS). His research interests include computer architecture with a specific emphasis on performance evaluation and modeling, dynamic resource management, microarchitecture, and sustainability. He is the recipient of the 2017 ACM SIGARCH Maurice Wilkes Award and the 2017 OOPSLA Most Influential Paper Award, and was elevated to IEEE Fellow in 2018 and ACM Fellow in 2021. Other awards include five IEEE Micro Top Pick selections, the MICRO 2024 Best Paper Award, and the ISPASS 2013 Best Paper Award. He served as Program Chair for ISPASS 2009, CGO 2013, HPCA 2015 and ISCA 2020, and has served or serves as General Chair for ISPASS 2010, IISWC 2023 and ASPLOS 2025. He previously served as Editor-in-Chief of IEEE Micro (2015-2018), and as a technical program committee member for 50+ computer architecture conferences.


Thursday, 09/1/2025, 15:00 – 16:20
From Wireless Sensors to Pervasive Perpetual Networks - Prabal Dutta

Abstract: A quarter century ago, a set of MobiCom challenge papers catalyzed a research community to pursue the vision of wirelessly networked sensors of increasingly diminishing proportions that could densely monitor the physical world. Today, much of the original vision has been realized, and a bewildering array and variety of systems have been fielded that allow us to gather and process unprecedented amounts of data about the physical world.
But this progress has also exposed many new challenges and opportunities. This talk will draw on my lab’s efforts in designing, deploying, and commercializing wireless sensors for a range of applications. The march of technology and evolution of these efforts—from seemingly trivial connected sensors with simple cloud analytics to more complex networked sensors with sophisticated sensing and communications to sustainable perceptual networks that perform multi-spectral data fusion and inference at the edge to detect complex but sparse faults—has highlighted numerous exciting directions ripe for attention from the research community.

Bio: Prabal Dutta is a Professor of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His interests span circuits, systems, and software, with a focus on mobile, wireless, embedded, networked, and sensing systems that have applications in health, energy, and the environment. His work has yielded dozens of hardware and software systems; has won a Test-of-Time Award (SenSys’22), five Top Pick/Best Paper Awards (MICRO’16, SenSys'10, IPSN'10, HotEmNets'10, and IPSN'08), two Best Paper nominations, and numerous demo, design, poster, and industry contests; has been directly commercialized by a dozen companies and indirectly by many dozens more; and is on display at Silicon Valley’s Computer History Museum. His work has been recognized with an Okawa Foundation Grant, a Sloan Fellowship, an NSF CAREER Award, a Popular Science Brilliant Ten Award, and an Intel Early Career Award. He has served as a program chair for MobiSys, BuildSys, SenSys, IPSN, HotMobile, ESWEEK IoT Day, and HotPower, as general chair for EWSN, and as a member of the DARPA ISAT Study Group. He holds a Ph.D. in Computer Science from UC Berkeley. He has co-founded several companies based on his research, including Cubeworks, Gridware, nLine, and Vizi.


Friday, 10/1/2025, 10:00 – 11:20
High-level Abstractions for Network Programming - Nate Foster

Abstract: Programmable networks have gone from a dream to a reality. Software-defined networking (SDN) architectures provide interfaces for specifying network-wide control algorithms, and emerging hardware platforms are exposing programmability at the forwarding plane level as well. But despite much progress, several fundamental questions remain: What are the right abstractions for writing network programs? How do they differ from the abstractions we use to write ordinary software? Can we reason about programs automatically and implement them efficiently in hardware? This talk will attempt to answer these questions by exploring the design and implementation of high-level abstractions for network programming. I will present NetKAT, a language for programming the forwarding plane based on a surprising connection to regular languages and finite automata, along with several extensions.
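To give a flavour of what programming the forwarding plane with a NetKAT-style language looks like, here is a toy interpreter for a small NetKAT-like fragment (tests, field modifications, union, and sequencing), in which a policy denotes a function from a packet to a set of output packets. This is a simplified illustration rather than the full language, which also includes Kleene star and reasons over packet histories; the field names and example policy are invented.

```python
# Toy interpreter for a NetKAT-like fragment: a policy maps a packet
# (a dict of header fields) to a list of output packets.

def test(field, value):
    """Filter: keep the packet only if field == value."""
    return lambda pkt: [pkt] if pkt.get(field) == value else []

def assign(field, value):
    """Modification: set field := value."""
    return lambda pkt: [{**pkt, field: value}]

def seq(p, q):
    """Sequential composition p; q."""
    return lambda pkt: [out2 for out1 in p(pkt) for out2 in q(out1)]

def par(p, q):
    """Union p + q: run both policies and collect all results."""
    return lambda pkt: p(pkt) + q(pkt)

# "If the packet is at switch 1 on port 1, forward it out port 2;
#  if it is at switch 1 on port 2, forward it out port 1."
policy = par(
    seq(seq(test("switch", 1), test("port", 1)), assign("port", 2)),
    seq(seq(test("switch", 1), test("port", 2)), assign("port", 1)),
)

print(policy({"switch": 1, "port": 1}))  # [{'switch': 1, 'port': 2}]
print(policy({"switch": 2, "port": 1}))  # []  (packet dropped)
```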

Bio: Nate Foster is a Professor of Computer Science at Cornell University and a Visiting Researcher at Jane Street. The goal of his research is to develop languages and tools that make it easy for programmers to build secure and reliable systems. He received his PhD in Computer Science from the University of Pennsylvania. His awards include the ACM SIGPLAN Robin Milner Young Researcher Award, the ACM SIGCOMM Rising Star Award, a Sloan Research Fellowship, and an NSF CAREER Award.


Friday, 10/1/2025, 13:00 – 14:20
Knowledge Incorporation and Emergence for Music AI - Gus Xia

Abstract: Large language models have demonstrated remarkable capabilities in both symbolic and audio music generation. However, they still fall short of embodying human-like music knowledge, which limits their interpretability and control. In this talk, Gus will explore two approaches to enhancing the interpretability of music generative models. The first approach directly incorporates hierarchical music structures into the model, leading to state-of-the-art results in whole-song pop music generation. The second approach leverages metaphysical inductive biases to allow human-like music knowledge to "emerge" naturally from the learning process. Pioneering studies in this direction have already given rise to fundamental music concepts like pitch and timbre. Together, these strategies pave the way for more controllable and interpretable music AI systems.

Bio: Dr Gus Xia is an assistant professor of machine learning at MBZUAI, as well as an affiliated faculty member at NYU Shanghai, CILVR at the Center for Data Science, and MARL at Steinhardt. He received his Ph.D. from the Machine Learning Department at Carnegie Mellon University (CMU) in 2016, and he was a Neukom Fellow at Dartmouth from 2016 to 2017. Xia's research is highly interdisciplinary and lies at the intersection of machine learning, HCI, robotics, and computer music. Representative works include interactive composition via style transfer, human-computer interactive performances, autonomous dancing robots, and haptic guidance for flute tutoring. Xia is also a professional Di and Xiao (Chinese flute and vertical flute) player. He plays as a soloist in the NYU Shanghai Jazz Ensemble, the Pitt Carpathian Ensemble, and the Chinese Music Institute of Peking University. In 2022, Xia and his students held a Music AI concert in Dubai.


Friday, 10/1/2025, 15:00 – 16:20
Machine Speech Chain: Modeling Human Speech Perception and Production with Auditory Feedback - Sakriani Sakti

Abstract: The development of automatic speech recognition (ASR) and text-to-speech synthesis (TTS) has enabled computers to learn how to listen or speak, imitating the capabilities of human speech perception and production. However, computers still cannot hear their own voice, as the learning and inference for listening and speaking are performed separately and independently. Consequently, training ASR and TTS separately in a supervised fashion requires a large amount of paired speech-text data; furthermore, the models have no ability to assess the situation and compensate for problems during inference.
On the other hand, humans learn how to talk by constantly repeating their articulations and listening to the sounds produced. By simultaneously listening and speaking, the speaker can monitor her volume, articulation, and the general comprehensibility of her speech. Therefore, a closed-loop speech chain mechanism with auditory feedback from the speaker’s mouth to her ear is crucial.
In this talk, I will introduce a machine speech chain framework based on deep learning. First, I will describe the training mechanism that learns to listen or speak and to listen while speaking. The framework enables semi-supervised learning in which ASR and TTS can teach each other given unpaired data. Applications of multilingual and multimodal machine speech chains to support low-resource ASR and TTS will also be presented. After that, I will also describe the inference mechanism that enables TTS to dynamically adapt (“listen and speak louder”) in noisy conditions, given the auditory feedback from ASR.
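At a very high level, the closed-loop training idea can be sketched as follows: on unpaired text, TTS synthesizes speech and ASR's reconstruction of the text provides a training signal, and symmetrically on unpaired speech. The pseudocode-style sketch below uses hypothetical asr, tts, loss, and optimizer objects; it is only meant to convey the structure of the loop, not the actual framework presented in the talk.

```python
# High-level sketch of a machine speech chain training loop.
# `asr`, `tts`, the loss functions, and `optimizer` are hypothetical
# stand-ins; only the structure of the closed loop is illustrated here.

def train_speech_chain(asr, tts, paired, unpaired_text, unpaired_speech,
                       asr_loss, tts_loss, optimizer, epochs=10):
    for _ in range(epochs):
        # 1) Supervised updates on the (small) paired speech-text corpus.
        for speech, text in paired:
            loss = asr_loss(asr(speech), text) + tts_loss(tts(text), speech)
            optimizer.step(loss)

        # 2) Text-only data: TTS speaks, ASR listens, and the text
        #    reconstruction error provides the training signal.
        for text in unpaired_text:
            synthesized = tts(text)
            loss = asr_loss(asr(synthesized), text)
            optimizer.step(loss)

        # 3) Speech-only data: ASR transcribes, TTS speaks the transcript,
        #    and the speech reconstruction error provides the training signal.
        for speech in unpaired_speech:
            transcript = asr(speech)
            loss = tts_loss(tts(transcript), speech)
            optimizer.step(loss)
```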

Bio: Sakriani Sakti is currently the head of the Human-AI Interaction (HAI) Research Laboratory at the Nara Institute of Science and Technology (NAIST) in Japan. She also serves as a full professor at NAIST, an adjunct professor at the Japan Advanced Institute of Science and Technology (JAIST) in Japan, a visiting research scientist at the RIKEN Center for Advanced Intelligent Project (RIKEN AIP) in Japan, and an adjunct professor at the University of Indonesia. A member of JNS, SFN, ASJ, ISCA, IEICE, and IEEE, she currently serves on the IEEE Speech and Language Technical Committee (2021-2026) and as an associate editor for IEEE/ACM TASLP, Frontiers in Language Sciences, and IEICE. Recently, she was appointed as the Oriental-COCOSDA Convener.
Previously, she was actively involved in international collaboration activities such as the Asian Pacific Telecommunity Project (2003-2007) and various S2ST research projects, including A-STAR and U-STAR (2006-2011). She was a visiting scientific researcher at INRIA Paris-Rocquencourt, France (2015-2016), under the JSPS Strategic Young Researcher Overseas Visits Program for Accelerating Brain Circulation. She also served as general chair for SLTU 2016, chaired the "Digital Revolution for Under-resourced Languages (DigRevURL)" Workshops at INTERSPEECH 2017 and 2019, and was part of the organizing committee for the Zero Resource Speech Challenge in 2019 and 2020. She played a pivotal role in establishing the ELRA-ISCA Special Interest Group on Under-resourced Languages (SIGUL), where she has been chair since 2021 and organizes the annual SIGUL Workshop. In collaboration with UNESCO and ELRA, she was the general chair of the "Language Technologies for All (LT4All)" Conference in 2019, focusing on "Enabling Linguistic Diversity and Multilingualism Worldwide," and will lead LT4All 2.0 in 2025 under the theme "Advancing Humanism through Language Technologies."


Venue

All talks will be held in the Multi-Purpose Hall of the NUS School of Computing, COM 3 building.