Lectures

Alina Oprea, Northeastern University

On the connection between integrity and privacy attacks in adversarial machine learning

Machine learning (ML) is increasingly being used for automated decisions in applications such as health care, finance, and personalized recommendations. These critical applications require strong guarantees on both the integrity of the ML models and the privacy of the user data used to train these models. The area of adversarial machine learning studies the effect of adversarial attacks against ML models and aims to design robust defense algorithms. This talk will highlight a surprising connection between machine learning integrity and privacy, and discuss how poisoning training datasets of ML models results in higher success of several privacy attacks. In this context, I will discuss our work on using poisoning attacks for auditing private machine learning and mounting efficient property inference attacks. We will conclude by introducing some open questions on protecting the integrity and privacy of ML systems.
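
For readers who want a concrete feel for this connection, here is a minimal, hypothetical sketch (not the speaker's code): a single mislabeled "canary" point is either included in or left out of training, and the gap in the model's loss on that point is exactly the signal a membership-inference or privacy-auditing adversary exploits; poisoned canaries tend to widen that gap. The dataset, model, and sizes below are illustrative assumptions.

    # Hypothetical sketch: does inserting a mislabeled "canary" point make
    # membership easier to infer? (Illustrative only; not the speaker's code.)
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    # Craft a canary: a training point whose label is flipped (a tiny poisoning attack).
    canary_x, canary_y = X[0:1], 1 - y[0:1]

    def canary_loss(include_canary: bool) -> float:
        """Train with or without the canary and return the model's loss on it."""
        X_tr, y_tr = X[1:1001], y[1:1001]
        if include_canary:
            X_tr = np.vstack([X_tr, canary_x])
            y_tr = np.concatenate([y_tr, canary_y])
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        p = model.predict_proba(canary_x)[0, canary_y[0]]
        return -np.log(p + 1e-12)

    # A loss gap between the two "worlds" (canary in vs. out of training) is what a
    # membership-inference or DP-auditing adversary exploits.
    print("loss with canary in training:    ", canary_loss(True))
    print("loss with canary out of training:", canary_loss(False))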

Anca Dragan, University of California, Berkeley

Robotics algorithms that take people into account

I discovered AI by reading “Artificial Intelligence: A Modern Approach”. What drew me in was the concept that you could specify a goal or objective for a robot, and it would be able to figure out on its own how to sequence actions in order to achieve it. In other words, we don’t have to hand-engineer the robot’s behavior — it emerges from optimal decision making. Throughout my career in robotics and AI, it has always felt satisfying when the robot would autonomously generate a strategy that I felt was the right way to solve the task, and it was even better when the optimal solution would take me a bit by surprise. In “Intro to AI” I share with students an example of this, where a mobile robot figures out it can avoid getting stuck in a pit by moving along the edge. In my group’s research, we tackle the problem of enabling robots to coordinate with and assist people: for example, autonomous cars driving among pedestrians and human-driven vehicles, or robot arms helping people with motor impairments (together with UCSF Neurology). And time and time again, what has sparked the most joy for me is when robots figure out their own strategies that lead to good interaction — when we don’t have to hand-engineer that an autonomous car should inch forward at a 4-way stop to assert its turn, for instance, but instead, the behavior emerges from optimal decision making. In this talk, I want to share how we’ve set up optimal decision making problems that require the robot to account for the people it is interacting with, and the surprising strategies that have emerged from that along the way. And I am very proud to say that you can also read a bit about these aspects now in the 4th edition of “Artificial Intelligence: A Modern Approach”, where I had the opportunity to edit the robotics chapter to include optimization and interaction.
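
As a toy illustration of behavior emerging from optimal decision making rather than hand-engineering (my own sketch, not course material): value iteration on a small gridworld with a costly pit yields a policy that routes around the pit without ever being told to. The layout, rewards, and discount factor below are made-up assumptions.

    # Toy gridworld: the optimal policy routes around a pit without being told to.
    # (Illustrative sketch only; layout and rewards are assumptions.)
    ROWS, COLS = 4, 6
    GOAL, PIT = (0, 5), {(1, 2), (1, 3), (2, 2), (2, 3)}
    ACTIONS = {'U': (-1, 0), 'D': (1, 0), 'L': (0, -1), 'R': (0, 1)}
    GAMMA, STEP_COST, PIT_COST, GOAL_REWARD = 0.95, -1.0, -50.0, 20.0

    def step(state, action):
        r, c = state
        dr, dc = ACTIONS[action]
        return (max(0, min(ROWS - 1, r + dr)), max(0, min(COLS - 1, c + dc)))

    def reward(state):
        if state == GOAL:
            return GOAL_REWARD
        return PIT_COST if state in PIT else STEP_COST

    # Value iteration over all cells (the goal is treated as absorbing).
    V = {(r, c): 0.0 for r in range(ROWS) for c in range(COLS)}
    for _ in range(200):
        for s in V:
            if s == GOAL:
                continue
            V[s] = max(reward(step(s, a)) + GAMMA * V[step(s, a)] for a in ACTIONS)

    policy = {s: max(ACTIONS, key=lambda a: reward(step(s, a)) + GAMMA * V[step(s, a)])
              for s in V if s != GOAL}

    # Print the greedy policy: arrows steer around the pit cells marked 'P'.
    for r in range(ROWS):
        print(' '.join('G' if (r, c) == GOAL else
                       'P' if (r, c) in PIT else policy[(r, c)] for c in range(COLS)))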

Ani Nenkova, Adobe Research

Research Informing Practice in Language Technology

Language technology built upon neural models has burst into the public discourse as the source of great changes to come in the interaction between artificial intelligence systems and people. Research-wise, these technologies open up questions rarely asked in past research: How different are the language workbenches used in research from consumer needs for language technologies? When the two differ, how much of the academic literature can we use to inform the creation of new technology products? Of the many possible new functionalities, which will benefit people the most? I will use automatic summarization, a research field that began in the 1960s and is now a ubiquitous application of neural language models, to examine these questions.

Björn Brandenburg, Max Planck Institute for Software Systems

PROSA: Trustworthy Schedulability Analysis for Safety-Critical Real-Time Systems

Real-time systems are computing systems subject to stringent timing constraints and found in many safety-critical domains (e.g., avionics, automotive, robotics). An essential step in the safety certification of such systems is schedulability analysis: a category of formal methods that assess the system’s temporal correctness (i.e., compliance with the stated timing constraints). However, how much trust should we place in the results of a schedulability analysis and, more generally, in results obtained with formal methods, when certifying critical systems? Using schedulability analysis as a specific example, I will examine several ways in which analysis results might be unsound or otherwise unsatisfying and argue that there is ample room for reasonable doubt. That raises the question: why should industry practitioners adopt methods conceived to increase confidence in a system’s (temporal) correctness if these methods are not so confidence-inspiring after all? Formal verification offers a path out of this conundrum. While the implied effort and learning curve may initially seem off-putting, I will argue that it is, in fact, both viable and desirable to verify schedulability analyses formally. As a proof of concept, I will present PROSA, a Coq framework for machine-checked schedulability analysis, which we have been developing since 2015. Out of the box, PROSA provides machine-checked and hence trustworthy schedulability analyses suitable for use in critical systems. Additionally, PROSA has enabled us to find more general results and derive explainable analyses, as I will explain in my talk. Finally, I will report on two ongoing projects, one seeking to close the “model gap” between PROSA and a system’s codebase and one aiming to enable mathematically sound below-worst-case provisioning.
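
For readers unfamiliar with the term, the following is a minimal sketch of the kind of result a schedulability analysis establishes: the classic response-time analysis for fixed-priority preemptive scheduling of periodic tasks, written as a small fixed-point iteration. It is not PROSA (which formalizes such analyses in Coq), and the task set below is a made-up example.

    # Classic response-time analysis (RTA) for fixed-priority preemptive scheduling:
    #   R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j,
    # iterated to a fixed point; the task set is schedulable if R_i <= D_i for all i.
    # (Illustrative sketch; not PROSA. Task parameters below are made up.)
    import math

    # Each task: (worst-case execution time C, period T, deadline D); priority = list order.
    tasks = [(1, 4, 4), (2, 6, 6), (3, 12, 12)]  # highest priority first

    def response_time(i, tasks):
        C_i, T_i, D_i = tasks[i]
        R = C_i
        while True:
            interference = sum(math.ceil(R / T_j) * C_j for (C_j, T_j, _) in tasks[:i])
            R_next = C_i + interference
            if R_next == R:
                return R           # fixed point reached: worst-case response time
            if R_next > D_i:
                return None        # response time exceeds the deadline
            R = R_next

    for i, (C, T, D) in enumerate(tasks):
        R = response_time(i, tasks)
        status = f"R = {R} <= D = {D}" if R is not None else "deadline miss possible"
        print(f"task {i}: {status}")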

Catalin Hritcu, Max Planck Institute for Security and Privacy

Formally Secure Compilation of Unsafe C Compartments

Undefined behavior is endemic in the C programming language: buffer overflows, use-after-frees, double frees, invalid type casts, various concurrency bugs, etc., cause mainstream C compilers to produce code that can behave completely arbitrarily. This leads to devastating security vulnerabilities that are often remotely exploitable, and both Microsoft and Chrome report that around 70% of their high-severity security bugs are caused by memory safety issues alone. We study how compartmentalization can mitigate this problem by restricting the scope of undefined behavior both spatially, to just the compartments that encounter undefined behavior, and temporally, by still providing protection to each compartment up to the point in time when it encounters undefined behavior. In particular, this talk will report on our quest to build a formally verified secure compilation chain for unsafe C compartments based on an extension of the CompCert formally verified C compiler. It will focus on the main conceptual challenges our recent research has overcome: (1) defining formally what it means for a compilation chain to be secure in this setting, which led to the discovery of a wide range of secure compilation criteria that provide good alternatives to full abstraction; (2) enforcing watertight protection using low-level compartmentalization mechanisms such as software fault isolation, programmable tagged architectures, and capability machines; and (3) devising scalable proof techniques that provide strong, machine-checked secure compilation guarantees to realistic languages and compilers, and that support a natural interaction between compartments (e.g., sharing memory dynamically by passing pointers).

Katharina Reinecke, University of Washington

Biases in Everyday Digital Technology and How to Address Them

From social media to conversational AI, digital technology has become a mainstay in the lives of many people around the world. Many of these inventions have been made in large technology centers, like Silicon Valley, and are inherently biased toward the views and experiences of product designers and developers who do not reflect a broad demographic in terms of age, education level, culture, race, and physical abilities. In this talk, I will show how bias in the design of digital technology can systematically disadvantage specific groups of people. Specifically, I will present my lab’s prior work on diverse users’ experiences with digital technologies—ranging from websites and online communities to security & privacy interfaces and conversational AI—outlining how technology is often useful for some, but not all users. I will also present approaches we have developed for recognizing biases in digital technology design, such as by collecting data from diverse populations using our volunteer-based LabintheWild platform, and by anticipating unintended consequences of technology using our BLIP system.

Manuel Gomez Rodriguez, Max Planck Institute for Software Systems

Improving Decision Making with Machine Learning, Provably

Automated decision support systems promise to help human experts solve tasks more efficiently and accurately. However, existing systems typically require experts to understand when to cede agency to the system and when to exercise their own agency. Moreover, if the experts develop misplaced trust in the system, their performance may worsen. In this talk, I will discuss the development of automated decision support systems that improve expert performance without, by design, requiring experts to understand when each recommendation is accurate. Moreover, I will present both theoretical and experimental results demonstrating that such decision support systems are robust to the accuracy of the machine learning algorithms they rely on; however, the amount of human effort they help reduce does depend on that accuracy.
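
One simplified way to build a recommendation whose usefulness does not hinge on the expert judging its accuracy is to recommend a set of candidate labels with a distribution-free coverage guarantee, e.g., via split conformal prediction. The sketch below illustrates that idea under my own assumptions (synthetic data, a simple classifier); it is not necessarily the construction used in the talk.

    # Sketch: recommend a *set* of labels via split conformal prediction.
    # The set's coverage guarantee (true label included with prob. >= 1 - alpha)
    # holds regardless of how accurate the underlying classifier is.
    # (Illustrative, assumption-laden sketch, not the speaker's implementation.)
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=3000, n_features=20, n_classes=3,
                               n_informative=10, random_state=0)
    X_train, y_train = X[:1000], y[:1000]
    X_cal, y_cal = X[1000:2000], y[1000:2000]      # held-out calibration split
    X_test, y_test = X[2000:], y[2000:]

    model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

    # Nonconformity score: 1 - predicted probability of the true label.
    cal_probs = model.predict_proba(X_cal)
    scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]

    alpha = 0.1
    n = len(scores)
    qhat = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

    # Prediction set for each test point: all labels whose score is below qhat.
    test_probs = model.predict_proba(X_test)
    pred_sets = (1.0 - test_probs) <= qhat

    coverage = pred_sets[np.arange(len(y_test)), y_test].mean()
    avg_size = pred_sets.sum(axis=1).mean()
    print(f"empirical coverage: {coverage:.3f} (target {1 - alpha})")
    print(f"average set size:   {avg_size:.2f} labels shown to the expert")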

Manya Ghobadi, MIT

Next-Generation Optical Networks for Emerging ML Workloads

In this talk, I will explore three elements of designing next-generation machine learning systems: congestion control, network topology, and computation frequency. I will show that fair sharing, the holy grail of congestion control algorithms, is not necessarily desirable for deep neural network training clusters. Then I will introduce a new optical fabric that optimally combines network topology and parallelization strategies for machine learning training clusters. Finally, I will demonstrate the benefits of leveraging photonic computing systems for real-time, energy-efficient inference via analog computing. Pushing the frontiers of optical networks for machine learning workloads will enable us to fully harness the potential of deep neural networks and achieve improved performance and scalability.
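
To give a feel for why fair sharing can be undesirable for synchronized training traffic, here is a toy back-of-the-envelope calculation (my own illustrative numbers, not results from the talk): when two jobs must each finish a communication phase before computing, serializing the transfers helps one job without hurting the other.

    # Toy arithmetic: two training jobs each need to move 1 unit of gradient data
    # over a shared 1-unit/sec link before their next compute phase can start.
    # (Illustrative numbers only; not from the talk.)

    def fair_sharing():
        # Both flows get half the link: each transfer finishes at t = 2.
        return {"job A resumes at": 2.0, "job B resumes at": 2.0}

    def serialize():
        # Give the full link to A first, then to B: A resumes at 1, B still at 2.
        return {"job A resumes at": 1.0, "job B resumes at": 2.0}

    for name, outcome in [("fair sharing", fair_sharing()), ("serialized", serialize())]:
        avg = sum(outcome.values()) / len(outcome)
        print(f"{name:12s} -> {outcome}  (average stall: {avg:.1f}s)")
    # Fair sharing stalls both GPUs for 2s; serializing stalls A for only 1s and
    # leaves B no worse off, so average GPU idle time drops from 2.0s to 1.5s.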

Minlan Yu, Harvard School of Engineering and Applied Science

Rearchitecting the network stack for high performance applications

To meet the increasing performance requirements of applications, we rethink the functions in the network stack and identify the best division of labor among the network stack, the kernel, and the NIC. I will present two projects in this space. The first is IOTCP, which leverages smartNICs to offload I/O operations for content delivery services, achieving high throughput while leaving TCP control functions to the CPU. The second is Electrode, which leverages eBPF to accelerate the network communication functions of distributed protocols and reduce user-kernel crossings.

Miryung Kim, University of California, Los Angeles

Software Engineering for AI

Software developers are rapidly adopting AI to power their applications. Current software engineering techniques do not provide the same benefits to this new class of compute- and data-intensive applications. To provide the productivity gains that developers desire, our research group has designed a new wave of software engineering methods. In the first part of this talk, I will discuss the technical challenges of making custom hardware accelerators accessible to software developers. I will showcase HeteroGen, which performs automated program repair and test input generation for heterogeneous application development with FPGAs. HeteroGen takes C/C++ code as input and automatically generates an HLS (high-level synthesis) version with behavior preservation and better performance. I will briefly describe HeteroFuzz, which detects platform-dependent divergence between CPU and FPGA. In the second part of this talk, I will describe software engineering methods for big data analytics. I will deep-dive into symbolic-execution-based test generation for Apache Spark. I will conclude with open problems in software engineering to meet the needs of AI developers.
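
To make symbolic-execution-based test generation concrete, here is a toy sketch of the general technique (not the tool from the talk), assuming the z3-solver Python package is available: a constraint solver derives concrete inputs that drive a simple record-filtering predicate down each of its branches.

    # Toy symbolic-execution-style test generation with Z3: derive concrete inputs
    # that cover both branches of a simple filter predicate, the kind a dataflow
    # pipeline (e.g., a Spark-style UDF) might apply to each record.
    # (Illustrative sketch of the general technique, not the tool from the talk.)
    from z3 import Int, Solver, Not, sat

    age = Int("age")
    predicate = age > 18  # symbolic version of: lambda record: record.age > 18

    def concrete_input(path_condition):
        """Ask the solver for an input satisfying one path condition."""
        s = Solver()
        s.add(path_condition)
        if s.check() == sat:
            return s.model()[age]
        return None

    print("input taking the 'true' branch: ", concrete_input(predicate))
    print("input taking the 'false' branch:", concrete_input(Not(predicate)))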

Necmiye Ozay, University of Michigan

Formal methods for Cyber Physical Systems: State of the Art and Future Challenges

Modern cyber-physical systems, like high-end passenger vehicles, aircraft, or robots, are equipped with advanced sensing, learning, and decision-making modules. On the one hand, these modules render the overall system more informed, possibly providing predictions into the future. On the other hand, they can be unreliable due to problems in information-processing pipelines or decision-making software. Formal methods, from verification and falsification to correct-by-construction synthesis, hold the promise to detect and possibly eliminate such problems at design time and to provide formal guarantees on systems’ correct operation. In this talk, I will discuss several recent advances in control synthesis and corner-case generation for cyber-physical systems with a focus on scalability, and what role data and learning can play in this process. I will conclude the talk with some thoughts on challenges and interesting future directions.
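
As a small, self-contained illustration of what falsification and corner-case generation mean in this setting (my own toy example, not from the talk): search over bounded disturbance sequences for a trace of a simple closed-loop system that violates a safety bound, and report the worst margin found.

    # Toy falsification: search over disturbance sequences for a trace of a simple
    # closed-loop system that violates the safety spec "always |position| <= 1.5".
    # (Illustrative sketch only; dynamics, gains, and bounds are assumptions.)
    import numpy as np

    rng = np.random.default_rng(0)
    HORIZON, X_MAX, D_MAX = 50, 1.5, 0.4

    def simulate(disturbance):
        """Discrete-time double integrator with a simple stabilizing state feedback."""
        x = np.array([0.0, 0.0])           # position, velocity
        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([0.005, 0.1])
        K = np.array([1.0, 1.2])           # control law: u = -K x
        trace = []
        for d in disturbance:
            u = -K @ x
            x = A @ x + B * (u + d)
            trace.append(x[0])
        return np.array(trace)

    def robustness(disturbance):
        """Margin of 'always |position| <= X_MAX'; negative means a violation."""
        return float(np.min(X_MAX - np.abs(simulate(disturbance))))

    # Plain random search over bounded disturbances (real tools use smarter optimizers).
    best_rho = np.inf
    for _ in range(2000):
        d = rng.uniform(-D_MAX, D_MAX, HORIZON)
        best_rho = min(best_rho, robustness(d))

    print("worst robustness margin found:", round(best_rho, 3),
          "(violation)" if best_rho < 0 else "(no counterexample found)")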

Shan Lu, University of Chicago

Improving the Correctness and Efficiency of Machine Learning Applications

An increasing number of software applications adopt machine learning (ML) techniques to solve real-world problems. Unfortunately, the unique probabilistic and cognitive nature of machine learning presents great challenges in developing correct and efficient ML applications. In this talk, I will present our recent work on improving the quality of ML applications at the algorithm, system, and application levels. At the algorithm level, we have developed a family of anytime deep neural networks that offers flexible tradeoffs among accuracy, performance, and energy efficiency; at the system level, we have developed a runtime scheduler that uses control theory techniques to dynamically configure neural networks and system resources; at the application level, I will present our empirical study that shows how widespread and severe ML API misuse is and our attempts to tackle this problem.
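
To illustrate the "anytime" idea at the algorithm level, here is a minimal hypothetical sketch (not the speaker's architecture), assuming PyTorch is available: a network with an intermediate exit head returns an early prediction whenever its confidence clears a threshold, trading some accuracy for latency and energy.

    # Minimal early-exit ("anytime") network sketch: stop at the first exit head
    # whose confidence clears a threshold, trading accuracy for latency/energy.
    # (Hypothetical layer sizes and threshold; not the speaker's architecture.
    #  In practice, both exit heads would be trained jointly on a combined loss.)
    import torch
    import torch.nn as nn

    class AnytimeMLP(nn.Module):
        def __init__(self, in_dim=32, hidden=64, num_classes=10):
            super().__init__()
            self.block1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.exit1 = nn.Linear(hidden, num_classes)   # cheap early exit
            self.block2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
            self.exit2 = nn.Linear(hidden, num_classes)   # full-depth exit

        def forward(self, x, confidence_threshold=0.9):
            h = self.block1(x)
            probs1 = torch.softmax(self.exit1(h), dim=-1)
            if probs1.max().item() >= confidence_threshold:
                return probs1, "early exit"
            h = self.block2(h)
            return torch.softmax(self.exit2(h), dim=-1), "full network"

    model = AnytimeMLP().eval()
    with torch.no_grad():
        probs, path = model(torch.randn(1, 32))
    print(path, "prediction:", int(probs.argmax()))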

Viktor Vafeiadis, Max Planck Institute for Software Systems

Taming weak memory consistency

Concurrent programs have complex semantics because (due to out-of-order execution and propagation delays) each processor may observe the same instructions execute in a different order. This paradigm, known as weak memory consistency, makes developing, understanding, and reasoning about concurrent programs extremely difficult. In this talk, I will present some tools for assisting programmers with these three tasks.
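
As a pocket-sized illustration of why weak memory consistency is surprising (my own sketch, unrelated to the tools in the talk): the classic store-buffering litmus test. Enumerating every sequentially consistent interleaving shows that the outcome r1 = r2 = 0 cannot arise from any interleaving, yet x86 and Arm machines can produce it because stores may linger in per-core store buffers.

    # Store-buffering (SB) litmus test: under sequential consistency the outcome
    # r1 = r2 = 0 never occurs, but real hardware (x86, Arm) allows it because
    # stores can linger in per-core store buffers. Here we enumerate every
    # sequentially consistent interleaving and show (0, 0) is absent from the list.
    from itertools import permutations

    # Thread 1: x = 1; r1 = y        Thread 2: y = 1; r2 = x
    THREAD1 = [("store", "x", 1), ("load", "y", "r1")]
    THREAD2 = [("store", "y", 1), ("load", "x", "r2")]

    def run(interleaving):
        mem, regs = {"x": 0, "y": 0}, {}
        for op, var, arg in interleaving:
            if op == "store":
                mem[var] = arg
            else:
                regs[arg] = mem[var]
        return (regs["r1"], regs["r2"])

    # All interleavings that preserve each thread's own program order.
    outcomes = set()
    for order in set(permutations([0, 0, 1, 1])):
        pointers, interleaving = [0, 0], []
        for tid in order:
            thread = THREAD1 if tid == 0 else THREAD2
            interleaving.append(thread[pointers[tid]])
            pointers[tid] += 1
        outcomes.add(run(interleaving))

    print("SC-reachable outcomes:", sorted(outcomes))   # (0, 0) is not among them
    print("weak-memory hardware additionally allows:", (0, 0))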