Join us on Nov 8th, 2019 for a day of programming-languages talks led by the speakers below.
The meeting will take place in room 3514 (the seminar room on the third floor) at Northwestern’s Mudd Library. Northwestern has visitor parking available; the North Lot usually has space and you do not need to pre-arrange a permit.
One of the standard correctness criteria for gradual typing is the dynamic gradual guarantee, which ensures that loosening type annotations in a program does not affect its behavior in arbitrary ways. Though natural, the guarantee has been shown by prior work not to hold for gradual type systems for information-flow control; Toro et al.’s GSL Ref language, for example, had to abandon it in order to validate noninterference.
We argue that this issue is a consequence of using type ascription to classify data in the presence of the no-sensitive-upgrade checks used to prevent information leakage. We propose a solution that changes the semantics of type ascription so that it never raises a value’s dynamic label but merely checks that the label is below the ascribed level. We demonstrate this idea with GLD, a gradually typed language inspired by the LIO Haskell library that enforces both the gradual guarantee and noninterference and features higher-order functions, general references, coarse-grained information-flow control, security subtyping, and first-class labels.
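The contrast between the two ascription semantics can be sketched in Python over a two-point security lattice. All names here are illustrative, not GLD's actual syntax:

```python
from dataclasses import dataclass

# A two-point security lattice: LOW is below HIGH.
LOW, HIGH = 0, 1

@dataclass
class Labeled:
    value: object
    label: int  # the value's dynamic security label

def ascribe_raising(v: Labeled, level: int) -> Labeled:
    # Classic semantics: ascription joins the dynamic label with the
    # ascribed level, i.e. it can upgrade the label.
    return Labeled(v.value, max(v.label, level))

def ascribe_checking(v: Labeled, level: int) -> Labeled:
    # Proposed semantics: ascription never raises the dynamic label;
    # it only checks that the label is below the ascribed level.
    if v.label > level:
        raise RuntimeError("label check failed")
    return v

v = Labeled(42, LOW)
assert ascribe_checking(v, HIGH).label == LOW   # label unchanged
assert ascribe_raising(v, HIGH).label == HIGH   # label upgraded
```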
Blockchains are intended to support safe computation on untrusted machines. Unfortunately, this safety has been violated by bugs in smart contracts, which are programs that run on blockchains. In this talk, I will describe Obsidian, our new smart contract language, whose type system uses typestate and linearity to prevent and detect bugs that have plagued programs written in previous languages. In addition, I will talk about our unique language design process, which integrates user studies in order to make the language more usable by programmers.
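Typestate can be loosely illustrated in plain Python, where each contract state is its own class and operations return the next state. Obsidian checks such transitions statically with linear types; this sketch and all of its names are invented for illustration:

```python
# Hypothetical auction contract with two typestates: Open and Closed.
class ClosedAuction:
    def __init__(self, winner, amount):
        self.winner, self.amount = winner, amount

class OpenAuction:
    def __init__(self):
        self.high_bid = 0
        self.high_bidder = None

    def bid(self, bidder, amount):
        # Only an Open auction accepts bids.
        if amount > self.high_bid:
            self.high_bid, self.high_bidder = amount, bidder
        return self  # still Open

    def close(self):
        # Transition Open -> Closed. In a linear type system the Open
        # object cannot be used again after this call; Python can only
        # check such misuse at runtime, if at all.
        return ClosedAuction(self.high_bidder, self.high_bid)

done = OpenAuction().bid("alice", 10).bid("bob", 15).close()
assert (done.winner, done.amount) == ("bob", 15)
# done has no .bid method, so bidding on a closed auction fails.
```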
If a program takes time T to run on one processor, how long will it take to run on P processors? This is a fundamental question in the design of parallel algorithms, programs, and languages, and answering it has been the subject of decades of work. The idea of "bounded implementations", pioneered by Blelloch and Greiner in the 1990s, gives a framework for answering it: relate parallel programs to abstract dependency graphs, use existing results about such graphs to give reasonable bounds on time and space cost, and then show that an implementation of the program or language can realize those bounds.
In this talk, I propose that the idea of bounded implementations be used as a blueprint for exploring new problems in parallel computing and for designing new parallel languages. As a running example, I use the design of PriML, a language developed for my PhD dissertation. PriML enables the construction of an important class of parallel programs: interactive programs, for which a suitable notion of cost must account for responsiveness and not just throughput. Reasoning about this extended notion of cost has required extending the dependency-graph-based cost models to account for responsiveness. In turn, these models have influenced the development of PriML by helping us reason about and eliminate performance pitfalls, such as priority inversions, that arise in this new setting.
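One classic dependency-graph result of the kind this framework builds on is the greedy-scheduling bound T_P < W/P + D, where W is the graph's total work and D its span (critical-path length). A minimal sketch:

```python
def greedy_schedule_bound(work, span, p):
    # Classic bound for greedy scheduling of a dependency graph with
    # total work W and span (critical-path length) D on P processors:
    #   T_P < W / P + D
    # The W/P term dominates when there is ample parallelism; the span
    # term is the irreducible sequential bottleneck.
    return work / p + span

# A graph with 1,000,000 units of work and a critical path of 1,000
# units finishes within about 101,000 time units on 10 processors:
assert greedy_schedule_bound(1_000_000, 1_000, 10) == 101_000.0
```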
Artificial Pancreas (AP) systems are a set of increasingly closed-loop biomedical devices that seek to automate blood glucose regulation in individuals with type-1 diabetes mellitus (T1DM). While incredibly promising for the treatment of T1DM, these systems present a one-sided control problem: they can dose insulin to decrease blood glucose levels but cannot counteract the effects of too high a dose. These systems are also safety critical, since an overdose can lead to long-term organ damage, coma, and potentially death. Thus, the need to test and formally guarantee the safety of such systems is critical.

To test such systems prior to patient trials, researchers model patient physiology in silico, along with the closed-loop interactions between the control algorithms and that physiology. Historically, nonlinear ordinary differential equation (ODE) models have been used as the patient models. However, such models are deterministic and general, which has resulted in a mismatch between testing AP systems for general use and translating a system to a specific individual. In this work, we present two types of nondeterministic models that we have developed to model human glucose-insulin physiology at a patient-specific level: linear ARMAX models and neural networks. We demonstrate how these models are learned from data that AP systems already collect, requiring no additional intrusive measurements, and present three methods for quantifying uncertainty and error in the models, which can enable more exhaustive verification of both model conformance and closed-loop safety. Finally, we use these models in two tasks: (1) analysis and patient-specific tuning of a PID control system from the literature, and (2) real-time learning of a model predictive control system with safety guarantees, with results shown for both in-silico and real patient data.
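As a rough illustration of the kind of control law being tuned in task (1), here is a minimal discrete PID step in Python; the gains and time step are invented for illustration and are not the controller from the talk:

```python
def make_pid(kp, ki, kd, dt):
    # Discrete PID controller: output combines the current error, its
    # running integral, and its finite-difference derivative.
    state = {"integral": 0.0, "prev_error": 0.0}

    def step(setpoint, measured):
        error = setpoint - measured
        state["integral"] += error * dt
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return step

# Illustrative gains; patient-specific tuning would choose these.
pid = make_pid(kp=0.5, ki=0.1, kd=0.05, dt=1.0)
u = pid(setpoint=120.0, measured=180.0)  # glucose above target
assert u < 0  # negative output: dose insulin to lower glucose
```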
Many database applications need to undergo schema refactoring several times during their life cycle. Since schema refactoring requires making significant changes to the database program, it is often a non-trivial and error-prone task. In this talk, I will present verification and synthesis techniques to help developers correctly and easily evolve database applications during schema refactoring. First, we address the problem of verifying equivalence between a pair of database programs that operate over different schemas. In particular, we formalize the notion of equivalence for database programs and propose a proof methodology based on bisimulation invariants. This technique helps ensure that the changes made during schema refactoring do not introduce new bugs. As a further step, we propose a novel technique that automatically synthesizes a new version of a database program given its original version and the source and target schemas. This method does not require manual user guidance and ensures that the synthesized program is equivalent to the original one.
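A toy Python instance of the equivalence problem, with an invented schema change that splits a name column in two; the actual work verifies real database programs, not dictionaries:

```python
# Old schema: one table with a single `name` column.
old_db = {"users": [{"id": 1, "name": "Ada Lovelace"}]}

def full_name_old(db, uid):
    row = next(r for r in db["users"] if r["id"] == uid)
    return row["name"]

# New schema after refactoring: `name` split into `first` and `last`.
new_db = {"users": [{"id": 1, "first": "Ada", "last": "Lovelace"}]}

def full_name_new(db, uid):
    row = next(r for r in db["users"] if r["id"] == uid)
    return row["first"] + " " + row["last"]

# Equivalence: on corresponding databases, the refactored program
# observably agrees with the original one.
assert full_name_old(old_db, 1) == full_name_new(new_db, 1)
```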