**Monday, 7/1/2019**

09:30 – 10:00 Welcome coffee

10:00 – 11:30 High-Speed and Low-Power SerDes Architectures using Chord Signaling – Amin Shokrollahi

11:30 – 13:00 Lunch break

13:00 – 14:30 Knowledge and Distributed Coordination – Yoram Moses [slides]

14:30 – 15:00 Coffee break

15:00 – 16:30 Byzantine Agreement in the Clear – Valerie King [tutorial | slides]

16:30 – 18:00 Blockmania & Chainspace: from byzantine consensus to scalable cross shard distributed ledgers – George Danezis [slides]

**Tuesday, 8/1/2019**

09:30 – 10:00 Welcome coffee

10:00 – 11:30 Economics and Computer Science of a Radio Spectrum Reallocation – Kevin Leyton-Brown [slides]

11:30 – 13:00 Lunch break

13:00 – 14:30 Robustness Meets Algorithms – Ankur Moitra [slides]

14:30 – 15:00 Coffee break

15:00 – 16:30 Applications of Artificial Intelligence to the Automated Software Testing of Cyber-Physical Systems – Lionel Briand [slides]

**Wednesday, 9/1/2019**

09:30 – 10:00 Welcome coffee

10:00 – 11:30 Formal Methods and Tools for Distributed Systems – Thomas Ball [slides]

11:30 – 13:00 Lunch break

13:00 – 14:30 Algorithmic Fairness and Unfairness: A New Research Area – Suresh Venkatasubramanian [slides]

14:30 – 15:00 Coffee break

15:00 – 16:30 Data Summarization for Machine Learning – Graham Cormode [slides]

16:30 – 18:00 Security and Privacy for Software Systems and Neural Networks – Ulfar Erlingsson

**Thursday, 10/1/2019**

09:30 – 10:00 Welcome coffee

10:00 – 11:30 The Pit and the Pendulum – Lorenzo Alvisi [slides]

11:30 – 13:00 Lunch break

13:00 – 14:30 MaxHS: a hybrid approach to solving MaxSat – Fahiem Bacchus [slides]

Monday, 7/1/2019, 10:00 – 11:30
**High-Speed and Low-Power SerDes Architectures using Chord Signaling** – Amin Shokrollahi

Communication between IC components in an electronic device is one of the key factors determining the performance of the device. The increase in the volume of digital data calls for ever higher throughputs between chips. The standard response has been to increase the communication rate. This leads to increased susceptibility to noise, and to a disproportionately higher power consumption. The use of ever higher speeds is at the core of this problem, and behind the apparent need for them lies the inefficient use of the available bandwidth. In this talk I will introduce Chord Signaling, an information-theoretic class of signaling methods for chip-to-chip communication that makes more efficient use of the bandwidth. I will explain parts of the theory, and how it can be used in new SerDes architectures for very high speed and low power links of various complexities.

The talk starts with a tutorial on the preliminaries and the theoretical foundations of this topic.

Amin Shokrollahi finished his PhD at the University of Bonn in 1991, where he was an assistant professor until 1995. From 1995 to 1998 he was at the International Computer Science Institute in Berkeley. In 1998 he joined Bell Laboratories as a Member of the Technical Staff. From 2000 to 2009 he was the Chief Scientist of Digital Fountain. In 2003 he joined EPFL as a full professor of Mathematics and Computer Science. In 2011 he founded the company Kandou Bus, which uses novel approaches for fast and energy-efficient chip-to-chip links.
Dr. Shokrollahi’s research covers a wide range of topics from pure mathematics to electronics. He has 140+ publications, and 120+ pending and granted patent applications. An IEEE Fellow, Dr. Shokrollahi’s honors include several IEEE Paper Awards, the IEEE Eric E. Sumner Award, the Advanced Research Grant of the ERC, the IEEE Hamming Medal, and the ISSCC Jan van Vessem Award for outstanding European paper.

Monday, 7/1/2019, 13:00 – 14:30
**Knowledge and Distributed Coordination** – Yoram Moses

At the heart of distributed protocols and distributed systems are the notions of indistinguishability and its dual, knowledge. This talk will present a fundamental connection between knowledge and action, and show how it can be used to simplify and optimize solutions to popular distributed systems problems, and to facilitate the use of timing guarantees in solving coordination problems. The talk will be self-contained and will not assume prior familiarity with the topic.

The talk starts with a tutorial on the preliminaries and the theoretical foundations of this topic.

Yoram Moses is a professor of electrical engineering and the Israel Pollak Academic Chair at the Technion. He received the Gödel Prize in 1997 and the Dijkstra Prize in 2009, for work on the application of knowledge theory in distributed computing. His recent work focuses on the interaction between communication, time, and coordination in distributed systems and circuits.

Monday, 7/1/2019, 15:00 – 16:30
**Byzantine Agreement in the Clear** – Valerie King

Forty years ago, Leslie Lamport formulated a fundamental problem of coordination in a distributed network. He asked us to imagine an army led by generals, who send messages to each other with the goal of coming to agreement on a strategy. Planted among those generals are spies who seek to thwart this goal. Not long after this Byzantine agreement problem was presented, there were a few developments for an asynchronous network: an impossibility result for any deterministic scheme, a randomized exponential time algorithm, and a demonstration that one globally known coin toss could solve the problem in constant expected time. Some researchers turned to the use of committed secret coinflips via cryptography, while others turned to the study of collective coin flipping with full information.

Interest in Byzantine agreement has risen in recent years with the development of distributed ledgers, like bitcoin. In this talk I’ll survey some communication-efficient techniques for Byzantine agreement in the clear, and consider how they relate to distributed ledger systems. Such techniques avoid the overhead of cryptographic techniques and assumptions about limits on the computational power of adversaries. In particular, I’ll include a description of the first polynomial time algorithm for Byzantine Agreement in an asynchronous model, which is obtained by solving a new collective coin flipping problem.

The talk starts with a tutorial on the preliminaries and the theoretical foundations of this topic.

Valerie King is Professor in the Department of Computer Science at the University of Victoria, in BC, Canada. She received her PhD in computer science from the University of California, Berkeley under the supervision of Richard Karp, and a JD from UC Berkeley School of Law. She was a postdoctoral fellow at the University of Toronto under Faith Ellen and at Princeton under Andrew C.-C. Yao. She was a visiting professor at Hebrew University, the University of Copenhagen, the University of Toronto, UC Berkeley, and the École Normale Supérieure in Paris. Her industrial research experience includes Microsoft Research (SVC), the Compaq and HP Systems Research Center in Palo Alto, and NECI in Princeton. In addition she was a member of the Institute for Advanced Study in Princeton and the Simons Institute in Berkeley. Her recent service includes membership in the Independent Panel on Internet Voting for Elections BC and serving as PC chair of STOC 2017. In 2014, she was named an ACM Fellow for her work on randomized algorithms, especially dynamic graph algorithms and Byzantine fault tolerant distributed computing.

Monday, 7/1/2019, 16:30 – 18:00
**Blockmania & Chainspace: from byzantine consensus to scalable cross shard distributed ledgers** – George Danezis

Smart contract platforms have received considerable interest after the rise of Ethereum. However, currently deployed systems suffer from inherent scalability drawbacks that new designs are trying to overcome. I will present Chainspace, a scalable distributed ledger and smart contract platform that scales with the number of nodes committed to the system. The novel cross-shard atomic commit protocol it uses is an independent building block that can inspire other systems, but its practical implementation requires tweaks to ensure security, which I will also discuss. I will also talk about Blockmania, a modernized variant of Practical Byzantine Fault Tolerant consensus that in our lab experiments outperforms naïve implementations, as well as how it is integrated into Chainspace or other blockchain or traditional consensus systems.

The talk starts with background on Byzantine consensus (à la PBFT) and presents Blockmania; it then turns to sharding and Chainspace. The first part will contain a certain amount of tutorial material.

George Danezis is a Professor of Security and Privacy Engineering at the Department of Computer Science of University College London, and Head of the Information Security Research Group. He has been working on anonymous communications, privacy enhancing technologies (PET), and traffic analysis since 2000. He has previously been a researcher for Microsoft Research, Cambridge; a visiting fellow at K.U.Leuven (Belgium); and a research associate at the University of Cambridge (UK), where he also completed his doctoral dissertation under the supervision of Prof. R.J. Anderson.
His theoretical contributions to the privacy technologies field include establishing information-theoretic and other probabilistic metrics for anonymity, and pioneering the study of statistical attacks against anonymity systems. On the practical side, he is one of the lead designers of the anonymous mail system Mixminion, as well as Minx, Sphinx, Drac and Hornet; he has also worked on the traffic analysis of deployed protocols such as Tor.
His current research interests focus on secure communications, high-integrity systems to support privacy, smart grid privacy, peer-to-peer and social network security, as well as the application of machine learning techniques to security problems. He has published over 70 peer-reviewed scientific papers on these topics in international conferences and journals.
He was the co-program chair of the ACM Computer and Communications Security Conference in 2011 and 2012, IFCA Financial Cryptography and Data Security in 2011, and the Privacy Enhancing Technologies Workshop in 2005 and 2006. He sits on the PET Symposium board and the ACM CCS Steering Committee, and he regularly serves on program committees of leading conferences in the field of privacy and security. He has been a Fellow of the British Computer Society since 2014.

Tuesday, 8/1/2019, 10:00 – 11:30
**Economics and Computer Science of a Radio Spectrum Reallocation** – Kevin Leyton-Brown

Over 13 months in 2016-17 the US Federal Communications Commission conducted an "incentive auction" to repurpose radio spectrum from broadcast television to wireless internet. In the end, the auction yielded $19.8 billion USD, $10.05 billion USD of which was paid to 175 broadcasters for voluntarily relinquishing their licenses across 14 UHF channels. Stations that continued broadcasting were assigned potentially new channels to fit as densely as possible into the channels that remained. The government netted more than $7 billion USD (used to pay down the national debt) after covering costs (including retuning). A crucial element of the auction design was the construction of a solver, dubbed SATFC, that determined whether sets of stations could be "repacked" in this way; it needed to run every time a station was given a price quote.

This talk describes the design of both the auction and of SATFC. Compared to typical market design settings, the auction design was particularly unconstrained, with flexibility in the definitions of participants' property rights, the goods to be traded, their quantities, and the outcomes the market should seek to achieve. Computational tractability was also a first-order concern. The design of SATFC was achieved via a data-driven, highly parametric, and computationally intensive approach we dub "deep optimization". More specifically, to build SATFC we designed software that could pair both complete and local-search SAT-encoded feasibility checking with a wide range of domain-specific techniques, such as constraint graph decomposition and novel caching mechanisms that allow for reuse of partial solutions from related, solved problems. We then used automatic algorithm configuration techniques to construct a portfolio of eight complementary algorithms to be run in parallel, aiming to achieve good performance on instances that arose in proprietary auction simulations.

Experiments on realistic problems showed that within the short time budget required in practice, SATFC solved more than 96% of the problems it encountered. Furthermore, simulations showed that the incentive auction paired with SATFC produced nearly optimal allocations in a restricted setting and achieved substantially better economic outcomes than other alternatives at national scale.
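
The feasibility question at the heart of SATFC can be illustrated as a toy constraint-satisfaction check. The station names, channel domains, and interference pairs below are invented for illustration, and real instances also involve adjacent-channel constraints; SATFC answers the same yes/no question by encoding it as SAT and running a tuned algorithm portfolio, not by the brute-force enumeration sketched here.

```python
from itertools import product

def repackable(stations, domains, interference):
    """Return a channel assignment in which no interfering pair of stations
    shares a channel, or None if no feasible repacking exists. (Brute force
    is only viable for toy instances; SATFC uses SAT encodings instead.)"""
    for channels in product(*(domains[s] for s in stations)):
        assignment = dict(zip(stations, channels))
        if all(assignment[a] != assignment[b] for a, b in interference):
            return assignment
    return None

# Hypothetical instance: three stations, two remaining channels.
stations = ["WAAA", "WBBB", "WCCC"]
domains = {"WAAA": [1, 2], "WBBB": [1, 2], "WCCC": [1, 2]}
triangle = [("WAAA", "WBBB"), ("WBBB", "WCCC"), ("WAAA", "WCCC")]

# Three mutually interfering stations cannot fit in two channels...
print(repackable(stations, domains, triangle))      # None
# ...but dropping one interference constraint makes repacking feasible.
print(repackable(stations, domains, triangle[:2]))  # {'WAAA': 1, 'WBBB': 2, 'WCCC': 1}
```

In the auction, a solver like this runs once per price quote, which is why the short time budget mentioned above dominated the design.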

The talk starts with a tutorial on the preliminaries and the theoretical foundations of this topic.

Kevin Leyton-Brown is a professor of Computer Science at the University of British Columbia and an associate member of the Vancouver School of Economics. He holds a PhD and M.Sc. from Stanford University (2003; 2001) and a B.Sc. from McMaster University (1998). He studies the intersection of computer science and microeconomics, addressing computational problems in economic contexts and incentive issues in multiagent systems. He also applies machine learning to various problems in artificial intelligence, notably the automated design and analysis of algorithms for solving hard computational problems.

He currently advises Auctionomics, AI21, and Qudos. He is a co-founder of Kudu.ug and Meta-Algorithmic Technologies. He was scientific advisor to UBC spinoff Zite until it was acquired by CNN in 2011. His past consulting has included work for Zynga, Trading Dynamics, Ariba, and Cariocas.

Tuesday, 8/1/2019, 13:00 – 14:30
**Robustness Meets Algorithms** – Ankur Moitra

In every corner of machine learning and statistics, there is a need for estimators that work not just in an idealized model but even when their assumptions are violated. Unfortunately, in high-dimensions, being provably robust and efficiently computable are often at odds with each other.

In this talk, we give the first efficient algorithm for estimating the parameters of a high-dimensional Gaussian which is able to tolerate a constant fraction of corruptions that is independent of the dimension. Prior to our work, all known estimators either needed time exponential in the dimension to compute, or could tolerate only an inverse polynomial fraction of corruptions. Not only does our algorithm bridge the gap between robustness and algorithms, it turns out to be highly practical in a variety of settings.
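
As a one-dimensional warm-up for the dilemma above, the sample mean is ruined by a constant fraction of corruptions while the median is not; the data below is a made-up illustration, and the talk's contribution lies in achieving this kind of robustness efficiently in high dimensions, where naïve coordinate-wise fixes lose accuracy as the dimension grows.

```python
import random
import statistics

random.seed(42)

# 900 genuine samples from a standard Gaussian, plus 100 gross outliers:
# a 10% corruption fraction, independent of any dimension.
genuine = [random.gauss(0, 1) for _ in range(900)]
corrupted = genuine + [100.0] * 100

# The outliers alone drag the mean to roughly 10
# (100 outliers * 100.0 / 1000 samples), while the median stays near 0.
print(statistics.mean(corrupted))    # close to 10
print(statistics.median(corrupted))  # near 0
```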

The talk starts with a tutorial on the preliminaries and the theoretical foundations of this topic.

Ankur Moitra is the Rockwell International Associate Professor in the Department of Mathematics at MIT. He is a principal investigator in the Computer Science and Artificial Intelligence Lab (CSAIL), a core member of the Theory of Computation Group, Machine Learning@MIT and the Center for Statistics. The aim of his work is to bridge the gap between theoretical computer science and machine learning by developing algorithms with provable guarantees and foundations for reasoning about their behavior. He is a recipient of a Packard Fellowship, a Sloan Fellowship, an NSF CAREER Award, an NSF Computing and Innovation Fellowship and a Hertz Fellowship.

Tuesday, 8/1/2019, 15:00 – 16:30
**Applications of Artificial Intelligence to the Automated Software Testing of Cyber-Physical Systems** – Lionel Briand

Testing is the main mechanism used in industry to assess and improve the dependability of software systems. To be scalable to the increasingly complex systems being developed in many application domains, testing must be automated. However, in many contexts, such as cyber-physical systems, this is challenging, and various techniques from Artificial Intelligence, e.g., machine learning and evolutionary computing, have come to the rescue and have recently been shown to alleviate problems that had been thought to be intractable. This talk will cover recent examples of research projects done in collaboration with industry in cyber-physical domains. Lessons learned and future research directions will then be discussed.

The talk starts with a tutorial on the preliminaries and the theoretical foundations of this topic.

Lionel C. Briand is professor in software verification and validation at the SnT centre for Security, Reliability, and Trust, University of Luxembourg, where he is also the vice-director of the centre. He is currently running multiple collaborative research projects with companies in the automotive, satellite, financial, and legal domains. Lionel held various engineering, academic, and leadership positions in five other countries before that. He was one of the founders of the ICST conference (IEEE Int. Conf. on Software Testing, Verification, and Validation, a CORE A event) and its first general chair. He was also the Editor-in-Chief of Empirical Software Engineering (Springer) for 13 years and led the journal to the top tier of the very best publication venues in software engineering.

Lionel was elevated to the grade of IEEE Fellow in 2010 for his work on the testing of object-oriented systems. He was granted the IEEE Computer Society Harlan Mills award and the IEEE Reliability Society engineer-of-the-year award for his work on model-based verification and testing, respectively in 2012 and 2013. He received an ERC Advanced grant in 2016 — on the topic of modelling and testing cyber-physical systems — which is the most prestigious individual research grant in the European Union. His research interests include: software testing and verification, model-driven software development, search-based software engineering, and empirical software engineering.

Wednesday, 9/1/2019, 10:00 – 11:30
**Formal Methods and Tools for Distributed Systems** – Thomas Ball

Microsoft is a leader in the development and deployment of tools for distributed systems based on formal methods, the mathematical/logical specification, design and verification of systems. In this talk, I’ll give an overview of some of the key methods and tools that are in regular use in Microsoft’s Azure Cloud Computing Platform, ranging from mathematical specification of high-level designs (TLA+) to programming and systematic testing of protocols (P# and Ivy) and automated checking of network and firewall configurations (SecGuru). Many of these tools address difficult verification problems by reduction to logic and the application of automated theorem proving (Z3). TLA+, P#, Ivy, and Z3 are all open source technologies made available to the wider community (https://github.com/tlaplus, https://github.com/p-org/, https://github.com/Microsoft/ivy, https://github.com/Z3Prover, https://github.com/Z3Prover/FirewallChecker).
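
To give a flavor of the question a configuration checker settles: two first-match-wins access-control policies are equivalent exactly when no packet distinguishes them. The rules and field values below are invented; this toy checker enumerates a tiny packet space, whereas a tool in the SecGuru style encodes the policies as bit-vector formulas and asks Z3 whether a distinguishing packet exists.

```python
from itertools import product

def evaluate(rules, packet, default="deny"):
    # First-match-wins semantics, as in typical ACLs.
    for predicate, action in rules:
        if predicate(packet):
            return action
    return default

def find_difference(rules_a, rules_b, field_domains):
    """Return a packet on which the two policies disagree, or None if the
    policies are equivalent over the given packet space."""
    fields = sorted(field_domains)
    for values in product(*(field_domains[f] for f in fields)):
        packet = dict(zip(fields, values))
        if evaluate(rules_a, packet) != evaluate(rules_b, packet):
            return packet  # counterexample
    return None

# Hypothetical policies: A allows only port 80, B also allows port 443.
policy_a = [(lambda p: p["port"] == 80, "allow")]
policy_b = [(lambda p: p["port"] in (80, 443), "allow")]
space = {"port": [22, 80, 443]}

print(find_difference(policy_a, policy_b, space))  # {'port': 443}
```

The symbolic version scales to full 32-bit address ranges precisely because the enumeration above is replaced by a single satisfiability query.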

The talk starts with a tutorial on the preliminaries and the theoretical foundations of this topic.

Thomas (Tom) Ball is a principal researcher and manager at Microsoft Research. In 1999, he initiated the influential SLAM software model-checking project with Sriram Rajamani, which led to the creation of the Static Driver Verifier tool for finding defects in Windows device drivers. Tom is a 2011 ACM Fellow for ‘contributions to software analysis and defect detection’. As a manager, he has nurtured research areas such as automated theorem proving, program testing/verification and empirical software engineering. His current focus is the Microsoft MakeCode platform for programming with physical computers.

Wednesday, 9/1/2019, 13:00 – 14:30
**Algorithmic Fairness and Unfairness: A New Research Area** – Suresh Venkatasubramanian

As automation creeps into our decision making pipelines, it is becoming clearer that decision-by-algorithm presents us with challenges -- to fairness, accountability and transparency -- that will require new technology for learning predictive models, new ways to inspect these models, and even a deeper understanding of how ML technology integrates with existing decision-making pipelines.

We are seeing the birth of a new research area that focuses directly on these questions and straddles the boundary between computer science, the social sciences, and the law. In this talk I'll lay out the landscape of questions -- technical and sociotechnical -- that researchers are now studying, with some examples drawn from my own work in this area.

The talk starts with a tutorial on the preliminaries and the theoretical foundations of this topic.

Suresh Venkatasubramanian is a professor at the University of Utah. His background is in algorithms and computational geometry, as well as data mining and machine learning. His current research interests lie in algorithmic fairness, and more generally the problem of understanding and explaining the results of black box decision procedures. Suresh was the John and Marva Warnock Assistant Professor at the University of Utah, and has received a CAREER award from the NSF for his work in the geometry of probability, as well as a test-of-time award at ICDE 2017 for his work in privacy. His research on algorithmic fairness has received press coverage across North America and Europe, including NPR’s Science Friday, NBC, and CNN, as well as in other media outlets. He is a member of the Computing Community Consortium Council of the CRA, a member of the board of the ACLU in Utah, and a member of New York City’s Failure to Appear Tool (FTA) Research Advisory Council.

Wednesday, 9/1/2019, 15:00 – 16:30
**Data Summarization for Machine Learning** – Graham Cormode

The notion of summarization is to provide a compact representation of data which approximately captures its essential characteristics. If such summaries can be created, they can lead to efficient distributed algorithms which exchange summaries in order to compute a desired function.

In this talk, I’ll describe recent efforts in this direction for problems inspired by machine learning: building graphical models over evolving, distributed training examples, and solving constrained regression problems over large data sets.
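
The talk's own examples (graphical models, constrained regression) are more involved, but the idea of exchanging summaries instead of data can be shown with a simpler, classic mergeable summary: the Misra-Gries frequent-items summary, used here as a stand-in rather than anything from the talk itself.

```python
from collections import Counter

def mg_update(summary, item, k):
    """Feed one stream item into a Misra-Gries summary of at most k counters.
    Each counter undercounts its item's true frequency by at most n/(k+1)."""
    if item in summary or len(summary) < k:
        summary[item] = summary.get(item, 0) + 1
    else:
        for key in list(summary):      # no room: decrement every counter
            summary[key] -= 1
            if summary[key] == 0:
                del summary[key]

def mg_merge(a, b, k):
    """Merge two summaries into one of size at most k: add the counters,
    then subtract the (k+1)-th largest count and drop non-positive ones."""
    merged = Counter(a) + Counter(b)
    counts = sorted(merged.values(), reverse=True)
    cutoff = counts[k] if len(counts) > k else 0
    return {x: c - cutoff for x, c in merged.items() if c > cutoff}

# Two sites each summarize their own stream with k = 2 counters...
left, right = {}, {}
for item in ["a"] * 5 + ["b"]:
    mg_update(left, item, k=2)
for item in ["a"] * 4 + ["c"] * 2:
    mg_update(right, item, k=2)

# ...and exchanging only the compact summaries still exposes the
# global heavy hitter 'a' (true count 9 across both streams).
print(mg_merge(left, right, k=2))
```

The merge preserves the same per-item error guarantee as the single-stream summary, which is what makes summaries of this kind composable across distributed sites.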

The talk starts with a tutorial on the preliminaries and the theoretical foundations of this topic.

Graham Cormode is a Professor in Computer Science at the University of Warwick in the UK, where he works on research topics in data management, privacy and big data analysis. Previously, he was a principal member of technical staff at AT&T Labs-Research. His work has attracted over 12,000 citations, appearing in over 100 conference papers and 40 journal papers, and he has been awarded 30 US patents.

Cormode is the co-recipient of the 2017 Adams Prize for Mathematics for his work on Statistical Analysis of Big Data. He has also edited two books on applications of algorithms to different areas, and co-authored a third.

Wednesday, 9/1/2019, 16:30 – 18:00
**Security and Privacy for Software Systems and Neural Networks** – Ulfar Erlingsson

For the last several years, Google has been leading the development and real-world deployment of state-of-the-art, practical techniques for learning statistics and ML models with strong privacy guarantees for the data involved. This work has been motivated by a desire to simultaneously improve both the security and privacy guarantees of production systems. I'll give an overview of our work, and the practical techniques we've developed for training Deep Neural Networks with strong privacy guarantees. In particular, I'll cover recent results that show how local differential privacy guarantees can be strengthened by the addition of anonymity, and explain the motivation for that work. I'll also cover recent work on uncovering and measuring privacy problems due to unintended memorization in machine learning models.
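
The classic textbook example of a local differential privacy mechanism is randomized response, sketched below; this is generic illustrative material, not Google's production mechanism, and the anonymity-based strengthening mentioned above operates on top of per-user reports of exactly this kind.

```python
import math
import random

def randomized_response(bit, epsilon):
    """Report the true bit with probability e^eps / (e^eps + 1), otherwise
    flip it. Each individual report is epsilon-locally-differentially
    private, regardless of what the aggregator does with it."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p_true else 1 - bit

def estimate_fraction(reports, epsilon):
    """Unbiased estimate of the true fraction of 1-bits from noisy reports:
    E[observed] = (1 - p) + f * (2p - 1), so invert that affine map."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_true)) / (2 * p_true - 1)

random.seed(0)
true_bits = [1] * 30_000 + [0] * 70_000     # true fraction: 0.30
reports = [randomized_response(b, epsilon=1.0) for b in true_bits]
print(estimate_fraction(reports, epsilon=1.0))  # close to 0.30
```

Note the privacy/utility trade-off: smaller epsilon flips bits more often, so the aggregate estimate needs more reports to reach the same accuracy.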

The talk starts with a tutorial on the preliminaries and the theoretical foundations of this topic.

Úlfar Erlingsson is a Senior Staff Research Scientist in the Google Brain team, currently working primarily on the privacy and security of deep learning systems. Previously, Úlfar has led computer security research at Google, and has been a researcher at Microsoft Research, Silicon Valley, and an Associate Professor at Reykjavik University. Úlfar was co-founder and CTO of the Internet security startup Green Border Technologies and Director of Privacy Protection at deCODE Genetics. Úlfar holds a PhD in computer science from Cornell University.

Thursday, 10/1/2019, 10:00 – 11:30
**The Pit and the Pendulum** – Lorenzo Alvisi

A key challenge in the design of distributed data stores is balancing two conflicting goals: strong consistency (which directly translates into ease of programming) and performance. We’ll explore this dilemma in a sequence of two talks.

In the first talk, I will review notions of correctness for concurrently executing transactions that correspond to different ways to balance these concerns and retrace the efforts to define their guarantees rigorously and in an implementation-independent fashion.

In the second talk, we will explore more deeply the extent to which the tension between performance and ease of programming is fundamental, and I will share what my students and I have recently learned while trying to overcome the traditional terms of this classic tradeoff.

Lorenzo Alvisi is the Tisch University Professor of Computer Science at Cornell University. Prior to joining Cornell, he held an endowed professorship at UT Austin, where he is now a Distinguished Professor Emeritus. Lorenzo received his Ph.D. in 1996 from Cornell, after earning a Laurea cum Laude in Physics from the University of Bologna. His research interests are in the theory and practice of distributed computing. He is a Fellow of the ACM and IEEE, an Alfred P. Sloan Foundation Fellow, and the recipient of a Humboldt Research Award, an NSF Career Award, and several teaching awards. He serves on the editorial boards of ACM TOCS and Springer’s Distributed Computing. In addition to distributed computing, he is passionate about classical music and red Italian motorcycles.

Thursday, 10/1/2019, 13:00 – 14:30
**MaxHS: a hybrid approach to solving MaxSat** – Fahiem Bacchus

The talk starts with a brief introduction to CDCL SAT solvers. This tutorial part discusses the core algorithm used in modern Conflict-Driven Clause Learning satisfiability solvers. The main topics will be: watched literals for detecting unit clauses, 1-UIP clause learning, bounded variable elimination preprocessing, and assumption-based SAT solving.
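
As a taste of the machinery the tutorial covers, the sketch below implements plain unit propagation on DIMACS-style clauses. It is an illustrative toy, not a solver: it rescans every clause to find units, whereas real CDCL solvers detect them lazily via the two-watched-literal scheme.

```python
def unit_propagate(clauses, assignment):
    """Close `assignment` under unit propagation. Clauses are lists of
    non-zero ints in DIMACS style (-3 means x3 is false); returns the
    extended assignment, or "CONFLICT" if some clause became empty."""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            satisfied, unassigned = False, []
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == want:
                        satisfied = True
                        break
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:
                return "CONFLICT"      # every literal is false
            if len(unassigned) == 1:   # unit clause: its literal is forced
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

# (x1 or x2) and (not x1 or x3) and (not x2): deciding x1 = True
# forces x3 = True, and the unit clause (not x2) forces x2 = False.
cnf = [[1, 2], [-1, 3], [-2]]
print(unit_propagate(cnf, {1: True}))  # {1: True, 3: True, 2: False}
```

In a CDCL solver this propagation runs after every decision, and a returned conflict is what triggers 1-UIP clause learning.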

MaxSat is the optimization version of satisfiability, in which one wants to find a truth assignment of minimum cost. Weighted MaxSat is in the complexity class FP^NP, and as such it can be solved by a polynomial number of calls to a satisfiability oracle. Most algorithms for solving MaxSat use this idea. MaxHS, on the other hand, incrementally constructs a hitting-set problem and uses an Integer Programming solver to find a minimum-cost hitting set. This provides a lower bound on the cost of an optimal solution. A SAT solver, in turn, either generates upper bounds or augments the hitting-set problem, thus increasing the lower bound. This hybrid approach has proved to be quite effective, and MaxHS is currently the best performing MaxSat solver on weighted problems and one of the best on unweighted problems. In this talk we will explain the algorithmic ideas underpinning MaxHS and review its performance.
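
The alternation described above can be caricatured in a few lines, with brute force standing in for both the IP solver and the SAT solver. The instance at the bottom is invented, and real MaxHS additionally extracts small unsatisfiable cores from the SAT solver and uses many refinements the talk covers; this sketch only shows the hitting-set/satisfiability loop itself.

```python
from itertools import product, chain, combinations

def satisfiable(clauses):
    # Brute-force SAT check (the role played by the SAT solver in MaxHS).
    if not clauses:
        return True
    vars_ = sorted({abs(l) for c in clauses for l in c})
    for bits in product([False, True], repeat=len(vars_)):
        a = dict(zip(vars_, bits))
        if all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def min_cost_hitting_set(cores, weights):
    # Exact minimum-cost hitting set over soft-clause indices, by subset
    # enumeration (the role played by the IP solver in MaxHS).
    best, best_cost = None, float("inf")
    n = len(weights)
    for hs in chain.from_iterable(combinations(range(n), r) for r in range(n + 1)):
        cost = sum(weights[i] for i in hs)
        if cost < best_cost and all(set(c) & set(hs) for c in cores):
            best, best_cost = set(hs), cost
    return best, best_cost

def maxhs(hard, softs):
    """Minimize the total weight of falsified soft clauses subject to the
    hard clauses; returns (optimal cost, indices of softs to falsify)."""
    cores, weights = [], [w for _, w in softs]
    while True:
        hs, cost = min_cost_hitting_set(cores, weights)
        kept = hard + [c for i, (c, _) in enumerate(softs) if i not in hs]
        if satisfiable(kept):
            return cost, hs  # the hitting-set lower bound is attained
        # Crude core: every kept soft clause (real MaxHS extracts a much
        # smaller unsatisfiable core from the SAT solver instead).
        cores.append([i for i in range(len(softs)) if i not in hs])

# Hypothetical instance. hard: x1; softs: (not x1, w=5), (x2, w=1), (not x2, w=1).
hard = [[1]]
softs = [([-1], 5), ([2], 1), ([-2], 1)]
print(maxhs(hard, softs))  # (6, {0, 1}): falsify (not x1) and one x2 clause
```

Each extracted core must be hit by the next hitting set, so the lower bound only grows; the loop stops exactly when a hitting set of that cost leaves a satisfiable formula.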

Fahiem Bacchus is a Professor of Computer Science at the University of Toronto. His research fits broadly in the area of Knowledge Representation and Reasoning, a sub-field of Artificial Intelligence. He has made a number of key contributions during his career, including the development of first-order logics for representing different forms of probabilistic knowledge, and automated planning algorithms that can exploit domain specific control knowledge expressed in Linear Temporal Logic (LTL). For the past 15 years his work has concentrated on automated reasoning algorithms for solving various forms of the satisfiability problem: finding a model (SAT), counting the number of models (#SAT), solving quantified Boolean formulas (QBF), and finding optimal models (MaxSat). His group has been successful in building award winning solvers for all of these problems. He has served as Program Chair of several major AI conferences, including UAI, ICAPS and SAT; is a member of the IJCAI board of trustees, having served as Conference Chair of IJCAI-17; is currently Chair of the SAT Association, which organizes the annual SAT conference; and has won a number of distinguished and influential paper awards. He has been a Fellow of the Association for the Advancement of AI (AAAI) since 2006.

**Scientific Committee**

Divesh Aggarwal

Kuldeep Meel

David Rosenblum

Abhik Roychoudhury

Jonathan Scarlett

Reza Shokri (chair)

Xiaokui Xiao

Haifeng Yu

Yair Zick

**Organizing Committee**

Siew Foong Ho

Cheryl Lee

Reza Shokri