
LIAFA
5 Projects
Project (from 2013)
Partners: LIAFA (Laboratoire d'Informatique Algorithmique : Fondements et Applications), Laboratoire Algorithmique Complexité et Logique, LIG
Funder: French National Research Agency (ANR)
Project Code: ANR-12-BS02-0007
Funder Contribution: 313,352 EUR

A huge number of algorithms have been designed and presented in different frameworks, but the question "what is an algorithm?" remained unsettled until the end of the twentieth century. The solution came only recently, as the result of a combination of three successive approaches to our intuitive notion. The first approach is historically called "denotational". By the denotational approach we mean the study of the input/output behavior of the algorithm (considered as a black box): which function or relation does it compute? Turing machines formalized this approach. Other models were then constructed and proved to define the same class of computable functions, leading to the Church-Turing thesis in 1936. However, these models, now called Turing-complete or "denotationally" complete, usually compute these functions in quite different ways. Hence the classes of algorithms implementable in these Turing-complete models are usually distinct, and in fact no model known before 1984 captures all algorithms. Let us insist that the Church-Turing thesis is not about algorithms but about what they compute. This brings us to the second approach, called "operational". By operational we mean the study of the execution of the algorithm: how does it work? What is its step-by-step behavior, and how does its environment evolve? A model of computation that captures the behavior of all algorithms is called "operationally" complete. Such a compelling model, the Abstract State Machine (ASM), was exhibited by Yuri Gurevich {Gur-84, Gur-85}; it led to the sequential thesis {Gur-00} and later to the Blass-Gurevich parallel thesis {BG-03, BG-08}. These two theses are the analog of the Church-Turing thesis, but for the notion of algorithm instead of computability: they assert that the execution of any sequential (resp. parallel) algorithm can be emulated step for step by the execution of a sequential (resp. parallel) ASM. Our goal is not to further justify Gurevich's thesis on mathematical grounds (this has already been done extensively since 1984). The last approach is axiomatic. By the axiomatic approach we mean the approach originally developed by R. Gandy in {Ga-80} and pursued by W. Sieg {Sie-08}. Gandy intended to complement Turing's analysis of human computers with an analysis of computation by mechanical devices. He gives a finite set of axioms for "machine computation": discrete, deterministic action limited to "local causation" by the speed of light. However, Gandy's axioms define machine-computable functions rather than algorithms. In {Gur-84, Gur-85}, the author gives three axioms for defining algorithms. We pursue three aims: identifying alternative algorithmically complete models of computation, identifying algorithmically complete programming languages, and justifying these through arguments based on the laws of physics.
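To make the operational view concrete, here is a minimal, illustrative sketch (our own toy example, not part of the project) of an ASM-style sequential step: a state is a finite set of locations, and one step fires all applicable guarded updates simultaneously. Euclid's algorithm is written below as two guarded update rules.

```python
# Illustrative sketch only: a toy rendering of the ASM idea of one
# sequential step.  A state is a finite set of named locations; a step
# collects all updates whose guard holds and commits them simultaneously.

def asm_step(state, rules):
    """Apply one ASM step: gather updates of all enabled rules, then
    commit them simultaneously to obtain the next state."""
    updates = {}
    for guard, update in rules:
        if guard(state):
            updates.update(update(state))
    new_state = dict(state)
    new_state.update(updates)
    return new_state

# Euclid's algorithm as two guarded update rules over locations a and b.
euclid_rules = [
    (lambda s: s["a"] > s["b"] > 0, lambda s: {"a": s["a"] - s["b"]}),
    (lambda s: s["b"] >= s["a"] > 0, lambda s: {"b": s["b"] - s["a"]}),
]

state = {"a": 21, "b": 15}
while state["a"] > 0 and state["b"] > 0:
    state = asm_step(state, euclid_rules)
print(state)  # one of the locations now holds gcd(21, 15) = 3
```

The sequential thesis asserts that any sequential algorithm, whatever its original presentation, can be emulated step for step by such a machine over an appropriate vocabulary.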
Project (from 2014)
Partners: LINA; Laboratoire d'Informatique Algorithmique : Fondements et Applications (LIAFA); Institut de Recherche en Communications et Cybernétique de Nantes; Department of Computer Science, Aalborg University; Laboratoire d'Informatique de Paris Nord; University of Nantes
Funder: French National Research Agency (ANR)
Project Code: ANR-14-CE28-0002
Funder Contribution: 453,896 EUR

Model checking and formal modelling are now techniques with a certain academic recognition, but their applicability in practice remains below expectations. This is partly due to two main problems: a rather rigid modelling of systems, which impairs abstraction and scalability, and insufficient feedback from the verification process. In this project, we address these issues by lifting these techniques to the more flexible and richer setting of parametrised formal models. In that setting, some features of the system, like the number of processes, the size of some buffers, communication delays, deadlines, energy consumption, and so on, may not be numerical constants but unknown parameters. The model-checking questions then become more interesting: is some property true for all values of the parameters? Does there exist some value for which it is? Or even: what are all the values for which it is? Building on the skills of the consortium on topics like regular model checking, timed systems and probabilistic systems, and on our previous contributions to the model checking of systems with a parametrised number of processes and of parametrised timed systems, including the development of software tool prototypes, we develop in this project new models, techniques, and tools to extend the applicability of parameters in formal methods. To achieve this objective, we study parameters in the context of discrete and timed/hybrid systems, both of them possibly augmented with quantitative information relating to costs (e.g. energy consumption) and probabilities. This gives the following six tasks:
1. Discrete parameters
2. Timing parameters
3. Discrete and timing parameters
4. Parameters in cost-based models
5. Parameters in discrete models with probabilities
6. Parameters in timed models with probabilities
Parametrised models are of obvious interest, but the associated theoretical problems are hard. For instance, in parametric timed automata, the basic model for timed systems with timing parameters, the mere existence of a parameter value such that the system can reach a given state is in general undecidable, even with only one parameter. As a consequence, in all these tasks we follow a common methodology that acknowledges these difficulties: formalising the problem, studying decidable subclasses and designing efficient algorithms for the parametrised model-checking problems (including, in particular, parameter synthesis), building efficient semi-algorithms for the general class that behave well in realistic cases, and finally implementing the techniques in tool prototypes.
This raises many challenging and original problems, like extending regular model checking to graphs in order to model parametrised systems with an arbitrary topology, using infinite-state automata to represent sets of configurations, finding useful decidable classes of parametrised timed/hybrid systems or properties, providing techniques for the approximate synthesis of parameter values, studying models with parametrised costs, studying probabilistic parametric models, and extending statistical verification techniques to parametric systems. We aim at producing high-quality scientific results, published in the recognized venues of the formal methods and model-checking communities, but also at producing software tool prototypes to make these results available in practice, both for the research community and for higher education. Finally, we want to promote the field of parametrised model checking through the organisation of a yearly open workshop, as a scope-extended version of the SynCoP workshop organised in 2014. Beyond the classical application fields of formal methods (e.g. embedded systems), we envision new application domains like smart homes, where parameters may account for the specifics of the residents. In that setting, cost-based parametrised models are particularly interesting, with a focus on optimising energy consumption.
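As a toy illustration of the basic synthesis question ("for which parameter values can the system reach a given state?"), the following sketch enumerates candidate values of a single discrete parameter and runs an explicit-state reachability check. The model, names and bounds are ours, not the project's; the project targets symbolic techniques and semi-algorithms rather than such brute-force enumeration.

```python
# Illustrative sketch: parameter synthesis on a toy discrete model by
# enumerating a finite candidate range of the parameter N (a buffer
# capacity) and checking reachability of a "bad" configuration.

from collections import deque

def successors(config, capacity):
    """Toy producer/consumer: config = (items_in_buffer, consumer_busy)."""
    items, busy = config
    succs = []
    if items < capacity:
        succs.append((items + 1, busy))   # producer adds an item
    if items > 0 and not busy:
        succs.append((items - 1, True))   # consumer takes an item
    if busy:
        succs.append((items, False))      # consumer finishes processing
    return succs

def reachable(initial, bad, capacity):
    """Explicit-state BFS reachability check for a fixed parameter value."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        c = frontier.popleft()
        if bad(c):
            return True
        for s in successors(c, capacity):
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return False

# Synthesis over a finite candidate range: all capacities N for which the
# buffer can hold 3 or more items while the consumer is idle.
bad = lambda c: c[0] >= 3 and not c[1]
print([N for N in range(1, 6) if reachable((0, False), bad, N)])  # [3, 4, 5]
```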
Project (from 2013)
Partners: LIAFA (Laboratoire d'Informatique Algorithmique : Fondements et Applications); Laboratoire d'Informatique de l'Ecole Normale Supérieure
Funder: French National Research Agency (ANR)
Project Code: ANR-12-BS02-0005
Funder Contribution: 510,359 EUR

This project aims to better understand computation when access to the input data is restricted in various ways. These scenarios lead to a fresh study of the usual notions of complexity such as time and space. How can one deal with massive quantities of data? Running-time considerations lead to natural restrictions modeled by sublinear-time algorithms. Sublinear-time algorithms, and among them property testers, impose a partial view of the input and are inherently robust against corrupted data. Space considerations lead to natural restrictions modeled by streaming algorithms and, more generally, algorithms with external memory access, which have sublinear internal memory and sequential access to the input data. Communication complexity provides tools for proving impossibility results. How can one deal with pieces of private data revealed at the discretion of their owners? Access to inputs may also be restricted in the sense that the input seen by the algorithm may not be the "true" input. Game-theoretic settings then provide a model for algorithmic design, which can also be explored to incorporate privacy concerns. Quantum game theory provides some tools for proving impossibility results. How can one deal with network data that evolves dynamically over time? Instead of viewing the changes as liabilities, they can be used as assets that highlight trends and pervasive structure. This project brings together researchers from different fundamental areas of computer science and leverages their expertise to develop technical tools to design algorithms and to study the complexity of restricted data-access models. Some tools will be adapted from existing ones, and others will be created.
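As an illustration of the sublinear-time regime mentioned above, here is a hedged sketch (a classical textbook-style tester, not a contribution of the project) of a property tester for sortedness of an array with distinct entries: it makes O((1/eps) * log n) queries instead of reading the whole input.

```python
# Illustrative sketch: sublinear-time sortedness tester for an array with
# distinct entries.  Pick a random index i and binary-search for a[i]; if
# the array is sorted the search lands exactly on i, and if the array is
# eps-far from sorted, a constant number of trials catches an
# inconsistency with constant probability.

import random

def binary_search_lands_on(a, i):
    """Binary search for a[i]; return True iff the search reaches index i."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == a[i]:
            return mid == i
        if a[mid] < a[i]:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

def looks_sorted(a, eps=0.1):
    """Accept sorted arrays; reject arrays eps-far from sorted with high probability."""
    trials = int(2 / eps) + 1
    return all(binary_search_lands_on(a, random.randrange(len(a)))
               for _ in range(trials))

print(looks_sorted(list(range(1000))))                          # True
print(looks_sorted(list(range(500, 1000)) + list(range(500))))  # almost always False
```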
Project (from 2013)
Partners: University of Leoben and Graz University of Technology; LIAFA (Laboratoire d'Informatique Algorithmique : Fondements et Applications)
Funder: French National Research Agency (ANR)
Project Code: ANR-12-IS01-0002
Funder Contribution: 199,333 EUR

In Austria as well as in France, research on fractal sets, with special emphasis on fractals arising from various kinds of numeration, has a long tradition. The aim of this project is to bundle the forces existing in both countries and to exploit the synergies emerging from putting together the different viewpoints on fractals and numeration. This enables us to consider the topic of this proposal in a much broader way than would be possible for a project carried out at a national level. We consider these synergies a strong feature and a definite added value of the present project. The project aims at studying fractal sets arising from various numeration systems. To this end, we subdivide the project into four tasks, each of them throwing a different light on our topic:
Arithmetics, dynamics and expansions (ADE)
Topological properties of fractals (ToF)
Rauzy fractals and substitutions (RaSub)
Fractals with a view towards applications (FracApp)
In the first task (ADE), we are concerned with arithmetic properties of numeration. The topics here range from transcendence properties in the spirit of the van der Poorten-Loxton theorem, recently proved by two members of the French team [AB3], over natural extensions of continued fraction algorithms and numeration in algebraic number fields, to redundant number systems and their applications in cryptography. In these problems, generalized numeration is intimately linked to geometric and topological properties of the underlying fractals. Thus, in the second task (ToF), we will investigate these properties. Although there is a vast literature on topological properties of fractal sets, most results are valid only in the two-dimensional case. In close contact with the research group of the topologists Jim Cannon and Greg Conner from Brigham Young University (Utah, USA), we wish to break this limitation and obtain new results on the topology of self-affine fractals in higher dimensions. Moreover, we wish to exploit the relevance of these results to the arithmetic of the number systems studied in ADE. While the fractal sets in the first two tasks are related to numeration from a very general point of view, the third task (RaSub) is devoted to the special case of Rauzy fractals and their associated substitutions. Here, using the general results of the previous tasks, we wish to gain new results on Rauzy fractals, especially in the more general context of S-adic expansions. For instance, these fractals will give us new insight into generalized continued fraction algorithms. It is therefore desirable to explore as many of their properties as possible. Of course, we also wish to relate our research to the Pisot conjecture and illustrate the conjecture through a variety of new methods and techniques coming together in this project. In the last task (FracApp), we will consider various properties of fractals, like the intersection of fractals with lines and fractal homeomorphisms, that play a role in image processing.
This part of the project can be regarded as an interface to applications of fractals in other branches of science: it shows that the ideas and methods developed in this project are of interest there as well. Besides that, we also wish to explore an interesting relation between fractal transformations in image processing and number theory.
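For readers unfamiliar with Rauzy fractals, the following sketch (our own illustration, not a project deliverable) shows the classical construction for the Tribonacci substitution 1 -> 12, 2 -> 13, 3 -> 1: abelianize the prefixes of the fixed point and project them onto a contracting eigendirection of the incidence matrix; the closure of the projected points is, up to a linear change of coordinates, the Rauzy fractal.

```python
# Illustrative sketch: points approximating the Rauzy fractal of the
# Tribonacci substitution, obtained by projecting the abelianized prefixes
# of its fixed point onto a contracting eigendirection of the incidence matrix.

import numpy as np

SIGMA = {1: [1, 2], 2: [1, 3], 3: [1]}

def tribonacci_word(length):
    """Iterate the substitution from the letter 1 until the prefix is long enough."""
    word = [1]
    while len(word) < length:
        word = [c for letter in word for c in SIGMA[letter]]
    return word[:length]

# Incidence matrix: column i is the abelianization of sigma(i).
M = np.array([[1, 1, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
eigvals, eigvecs = np.linalg.eig(M)
contracting = np.argmax(np.abs(eigvals) < 1)  # index of one contracting (complex) eigenvalue
inv = np.linalg.inv(eigvecs)

word = tribonacci_word(20000)
prefix_counts = np.zeros(3)
points = []
for letter in word:
    prefix_counts[letter - 1] += 1
    # Coordinate of the prefix vector along the chosen contracting eigendirection.
    c = inv[contracting] @ prefix_counts
    points.append((c.real, c.imag))

print(len(points), "points approximating the Rauzy fractal (plot x against y to see it)")
```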
Project (from 2014)
Partners: INRIA Paris-Rocquencourt; Laboratoire Bordelais de Recherche en Informatique; Inria Rennes - Bretagne Atlantique Research Centre; Laboratoire d'Informatique Algorithmique : Fondements et Applications (LIAFA)
Funder: French National Research Agency (ANR)
Project Code: ANR-13-BS02-0011
Funder Contribution: 361,900 EUR

Model checking is nowadays a mature field of research, whose pioneers were awarded the Turing Award in 2007. It consists in formally describing a system as a mathematical model, specifying the properties it has to fulfill in a formal logic, and providing algorithms to decide whether the model satisfies the properties. Model checking started with Boolean systems and properties, sequential finite-state models, etc.; this is now well established. In order to handle realistic (and thus large) systems, abstraction techniques such as the so-called CEGAR method (counterexample-guided abstraction refinement, see e.g. [CCKSVW02]) or abstract interpretation (see the work of Cousot et al.) have been developed. These methods work extremely well for verifying (large, sequential) programs, where Boolean predicates (e.g. "variable x is below value a") are well suited. However, current abstraction techniques are not suited to the verification of large stochastic systems. To verify large stochastic systems, techniques such as statistical model checking (based on Monte Carlo property testing) can be used, but the latter do not give sure answers, only answers correct with some probability. To date, there is no (sure) model-checking method that scales to large stochastic systems (e.g. Markov chains with 10^20 states and more, for instance 20 variables that can each take 10 different values). For instance, the PRISM model checker and other comparable tools scale up to 10^10 states. This is the main reason why Clarke, Cousot et al. launched the CMACS initiative (Computational Modeling and Analysis for Complex Systems) in order to create MCAI 2.0, the next generation of Model Checking and Abstract Interpretation. The aim of our research project Stoch-MC is to develop scalable model-checking techniques that can handle large stochastic systems and give sure answers. Large stochastic systems arise naturally in many different contexts, from networks to systems biology. We will demonstrate the techniques on two key stochastic models: a model of HeLa cell death and a model of stress response in yeast. Our key idea to obtain scalable algorithms is that probabilities can be approximated. A bound on the error made by the approximation is computed a posteriori, based on the characteristics of the model. We thus obtain a probability range (e.g. [0.4, 0.6]) instead of a precise point value as in current model checkers. If the answer is not precise enough (e.g. we want to know whether the probability is greater than 0.45), then more time is devoted to the approximation so that the error decreases, until a definite answer can be given. This mimics the CEGAR method in spirit. The method thus calls for several tools. First, parametric algorithms should be proposed that can perform good approximate analyses; the parameter allows making them more precise at the expense of computation time.
Second, an error analysis should be proposed, in order to evaluate a bound on the error; some preliminary approaches have already been proposed by us, see [AAGT12, PALGT12]. Ultimately, we want to model check large Markov Decision Processes (MDPs). In the realm of systems biology, compared to Markov chains, MDPs allow modelling systems with different modes (e.g. a pathway saturated or not saturated by a drug). MDPs also allow modelling different choices (the amount of drug to administer at some moment, etc.). Model checking an MDP thus amounts to finding optimal choices according to a given reward function, for instance to design cells with the right behavior. We believe that designing scalable model-checking techniques for large MDPs against a full version of the logic is not doable in the next 4 years. We will, however, develop techniques for every key feature and demonstrate them on biological pathways, forming a proof of concept that scalable (sure) model checking of large MDPs is feasible, which will be the basis of an ERC project proposal in 4 years.
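The refinement loop described above can be illustrated, in its statistical flavour, by the following sketch (the toy model and names are ours, not the project's): estimate the probability of a property by Monte Carlo simulation, maintain a Hoeffding confidence interval around the estimate, and draw more samples until the interval falls entirely on one side of the threshold of interest. Unlike the project's goal of sure answers, such an answer is only correct with probability at least 1 - delta.

```python
# Illustrative sketch: threshold decision by Monte Carlo estimation with a
# Hoeffding confidence interval, refined until the interval is decisive.

import math
import random

def simulate_property(rng):
    """Toy stochastic system: does a biased random walk hit +3 before -3?"""
    position = 0
    while abs(position) < 3:
        position += 1 if rng.random() < 0.55 else -1
    return position == 3

def decide_threshold(threshold, delta=1e-3, batch=1000, max_samples=10**6):
    rng = random.Random(42)
    successes, n = 0, 0
    while n < max_samples:
        successes += sum(simulate_property(rng) for _ in range(batch))
        n += batch
        estimate = successes / n
        # Hoeffding bound: the true probability lies within +/- eps with prob. >= 1 - delta.
        # (Checking the interval repeatedly inflates the overall error slightly;
        #  a union bound over the checks, or a sequential test, would fix this.)
        eps = math.sqrt(math.log(2 / delta) / (2 * n))
        lo, hi = estimate - eps, estimate + eps
        if lo > threshold:
            return f"P > {threshold} (interval [{lo:.3f}, {hi:.3f}], {n} samples)"
        if hi < threshold:
            return f"P < {threshold} (interval [{lo:.3f}, {hi:.3f}], {n} samples)"
    return f"undecided after {n} samples"

print(decide_threshold(0.6))
```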