Workshop - 1st December 2017

On Friday, December 1st, 2017, there will be a workshop on Constraint Programming with members of the jury of Charlotte Truchet's habilitation defense, together with PhD students from the Coverif project and the TASC team.


9h00-10h45
- Brigitte Vallée, Towards a Realistic Analysis of Sorting and Searching Algorithms (abstract)
- Nadia Creignou, On the Complexity of Hard Enumeration Problems
- Giovanni Lo Bianco, Counting Solutions for Cardinality Constraints

10h45-11h15 Coffee break

11h15-12h30
- Antoine Miné, An Algorithm for Finding Inductive Numerical Invariants Inspired by Constraint Programming
- Bibek Kabi, Combining Zonotopic Abstraction and Constraint Programming for Finding Invariants

14h00-16h30
- Thomas Jensen, Hybrid Information Flow Analysis Against Web Tracking
- Pascal Van Hentenryck, Optimal Power Flows
- Ghiles Ziat and Marie Pelleau, AbSolute (demo, 15h10-15h45)
- Ekaterina Arafailova, Systematic Derivation of Invariants Describing the Set of Feasible Combinations of Characteristics of an Integer Sequence (abstract)

How to find the room

The workshop will take place at the Faculty of Sciences of the University of Nantes. Instructions to find the campus can be found here. The building is number 11 on the campus map. Room 3 is on the ground floor, to the right of the entrance.


Towards a Realistic Analysis of Sorting and Searching Algorithms Brigitte Vallée, CNRS, GREYC and Université de Caen

Every student of a basic algorithms course is taught that, on average, the complexity of QuickSort is O(n \log n), that of QuickSelect is O(n), and that of binary search is O(\log n). These "classical" analyses deal with the mean number K(n) of key comparisons, where the comparison between two keys is viewed as a unit-cost operation. We revisit the analysis of some of the main algorithms for sorting and searching, viewing keys as words produced by a source. Our probabilistic models belong to a broad category of information sources that encompasses memoryless (i.e., independent-symbols) and Markov sources, as well as many unbounded-correlation sources. Words are compared symbol by symbol in lexicographic order, so the "realistic" cost of the algorithm is the total number of symbol comparisons performed; in this context, the average-case analysis aims to provide estimates for the mean number S(n) of symbol comparisons.
First, we observe that, for commonly used data models, the mean costs S(n) and K(n) of any algorithm under the symbol-comparison and key-comparison models, respectively, are connected by the universal relation S(n) = O(\log n) K(n). This follows from the fact that, with high probability, O(\log n) symbols suffice to distinguish n keys. The surprise is that there are cases where this upper bound is tight, as in QuickSort, and others where both costs are of the same order, as in QuickSelect.
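The distinction between the two cost models can be made concrete with a small experiment, not taken from the talk: a minimal sketch that runs QuickSort on random binary words drawn from a memoryless source, counting both key comparisons (each call to the word comparator) and symbol comparisons (each character pair inspected inside it).

```python
import random

def lex_compare(u, v, counter):
    """Compare two words symbol by symbol, counting each symbol comparison."""
    for a, b in zip(u, v):
        counter["symbols"] += 1
        if a != b:
            return -1 if a < b else 1
    return (len(u) > len(v)) - (len(u) < len(v))

def quicksort(keys, counter):
    """Plain QuickSort; each key comparison is one call to lex_compare."""
    if len(keys) <= 1:
        return keys
    pivot, smaller, larger = keys[0], [], []
    for k in keys[1:]:
        counter["keys"] += 1
        (smaller if lex_compare(k, pivot, counter) < 0 else larger).append(k)
    return quicksort(smaller, counter) + [pivot] + quicksort(larger, counter)

random.seed(0)
# Memoryless binary source: each key is an i.i.d. word over {0, 1}.
keys = ["".join(random.choice("01") for _ in range(64)) for _ in range(1000)]
counter = {"keys": 0, "symbols": 0}
sorted_keys = quicksort(keys, counter)
# Each key comparison costs at least one symbol comparison, and strictly
# more whenever the two words share a prefix, so S exceeds K.
print(counter)
```

For this source, each key comparison inspects on average about two symbols when the compared words are independent, but QuickSort repeatedly compares words that share long prefixes, which is why S(n) can grow by a full log-factor over K(n).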
(Joint work with Philippe Flajolet, Julien Clément, Jim Fill and Thu Hien Nguyen Thi)

Systematic Derivation of Invariants Describing the Set of Feasible Combinations of Characteristics of an Integer Sequence Ekaterina Arafailova, TASC, LS2N, Institut Mines Telecom Bretagne Atlantique

We propose two systematic methods for describing the set of feasible combinations of characteristics of an integer sequence.
- First, we generate linear implied constraints, i.e. cuts, that link the values of these characteristics and are possibly parameterised by the length of the integer sequence. This method applies when each of the considered characteristics is the value returned by some register automaton whose registers are increased by (or reset to) some non-negative integer value on each transition.
- Second, we discover and prove non-linear invariants characterising infeasible sets of points inside the convex hull, which are independent of the integer values used in the sequence but possibly parameterised by the sequence size. The method consists of three steps: (1) a mining phase, where we systematically generate integer sequences up to a given size and extract conjectures from the corresponding data sets; (2) a proof phase, where we try to prove conjectures by checking whether the conjunction of multiple constraints over this sequence is infeasible, exploiting the descriptions of the constraints as register automata; and (3) an exploitation phase, where we use the proven conjectures to improve the propagators for the constraints.
Preliminary tests indicate that the discovered invariants can significantly speed up both the proof of infeasibility and, more surprisingly, the generation of solutions for a conjunction of constraints on the same sequence.
Co-authors: Nicolas Beldiceanu, Helmut Simonis