In a dependently typed language, we can guarantee the correctness of our programs by providing formal proofs. To check these proofs, the typechecker elaborates our programs and proofs from the high-level surface language into a low-level core. However, this core language is by nature hard for mere humans to understand, so how can we know we proved the right thing? In this paper we study this problem in particular for dependent copattern matching, a powerful language construct for writing programs and proofs by dependent case analysis and mixed induction/coinduction. A definition by copattern matching in the surface language consists of a list of equations called clauses that are elaborated to a series of case splits structured as a case tree, which can be further translated to primitive eliminators. This second step has received considerable attention in previous work, but in comparison the first step has been mostly ignored so far.
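To make the setting concrete, here is a small example in Agda-style syntax (illustrative only, not taken from the paper): an infinite stream defined by copattern matching, whose clauses specify the result of each observation rather than matching on an argument.

```agda
record Stream (A : Set) : Set where
  coinductive
  field
    head : A
    tail : Stream A
open Stream

-- A definition by copattern matching: each clause gives the result
-- of one projection applied to `repeat x`.
repeat : {A : Set} → A → Stream A
head (repeat x) = x
tail (repeat x) = repeat x
```

Elaboration would turn these two clauses into a case tree whose root splits on the projections of the result type `Stream A`: the `head` branch returns `x`, and the `tail` branch corecursively returns `repeat x`.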
We present a dependently typed core language with inductive datatypes, coinductive record types, an identity type, and defined symbols represented by well-typed case trees. We also present an elaboration algorithm translating definitions by dependent copattern matching to case trees in this core language. To make sure the user of our language does not have to look at the generated case tree, we prove that elaboration preserves the first-match semantics of the clauses given by the user. Based on the ideas in this paper, we reimplemented the algorithm used by Agda to check left-hand sides of definitions by pattern matching. The new implementation is both more general and less complex, and fixes a number of bugs and usability issues in the old implementation. Thus this paper brings us one step closer to a formally verified implementation of a practical dependently typed language.
Wed 26 Sep, 16:30 - 17:15 (time zone: Guadalajara, Mexico City, Monterrey)
Elaborating Dependent (Co)pattern Matching