The logic seminar is a weekly event on logic and the theoretical aspects of computer science.
24th September 2014
|
Logics for General Game Play
The AAAI General Game Playing Competition tests the ability of computer programs to play games in general, rather than just the ability to play a specific game. Participants in the competition are provided with previously unknown games, and are required to determine, dynamically and autonomously, how best to play these games. The Game Description Language (GDL) is a logic-based formalism for representing the rules of games with complete information; it has been extended to GDL-II, which allows the description of games with incomplete information. In GDL, game descriptions are well-formed if they satisfy a few key properties, such as termination, playability, and winnability. We show that such properties can be naturally expressed in Alternating-time Temporal Logic (ATL), and we provide reasoning support for GDL by linking it to ATL model checking. The link is then extended to the incomplete information case. We give some complexity and experimental results and discuss further challenges.
|
27th March 2014
|
Polyadic Inductive Logic and Spectrum Exchangeability
Inductive Logic 'à la Carnap & Johnson' was founded on the idea that the probabilities we assign in our everyday lives may be logical in the sense that they are determined by our knowledge via general rationality considerations. [So this is the 'Inductive Logic' of Philosophy and is not the same as the 'Inductive Logic' of Computer Science.] As a potentially applicable topic, Inductive Logic is nowadays generally considered within Philosophy to have failed, largely on account of Nelson Goodman's GRUE Paradox. However, as a 'pure' mathematical investigation into rationality, with its relevance to the aspirations of AI, it survives and has recently seen some interesting developments. In particular, the serious investigation into Polyadic Inductive Logic (Carnap et al. only ever considered unary properties) has uncovered some new rational principles which appear to be remarkably powerful, even when reflected down to the unary level.
In my talk I shall try to give a fairly informal introduction to one such principle. |
13th March 2014
|
Verification of Auctions as Artifact Systems: Decidability via Finite Abstraction
Artifact Systems are a novel paradigm in business processes that has recently received considerable attention in service-oriented computing. Artifact Systems (AS) are best described in terms of interacting modules, or artifacts, consisting of a data model and a lifecycle, which account respectively for the relational structure of data and its evolution over time. However, by considering data and processes as equally relevant tenets of AS, the typical questions concerning the verification of these services can no longer be solved by current methodologies, as the presence of data means that the number of states in AS is in general infinite. In this talk we illustrate how the framework of Artifact Systems can be applied fruitfully to game-theoretic scenarios. Specifically, we provide a formalisation of English (ascending bid) auctions as AS. Then, we present agent-based abstraction techniques to model check AS against temporal epistemic specifications, thus obtaining decidability results for the auction scenario in question.
The research presented in this talk includes joint work with Prof A. Lomuscio and Dr F. Patrizi. |
6th March 2014
|
Generalised Quantifiers in Vector Space Models of Natural Language
Vector space models of natural language argue that the meanings of words can be represented by vectors obtained from the distributions of words in contexts. Compositional vector space models combine these vectors to obtain vector representations for larger language units such as phrases and sentences.
These combinations can be simple addition and multiplication of vectors (Mitchell and Lapata), or more complex operators such as matrix multiplication and tensor contraction (Clark-Coecke-Sadrzadeh, Baroni-Zamparelli, Preller). In any case, it is difficult to impose a classical logical structure on vectors. Hence, modelling the meanings of logical connectives, relative pronouns, and quantifiers becomes a difficult task.
In this talk, I will present the vector space and compositional vector space models, go over some experimental results (joint work with Grefenstette and Kartsaklis), and show how one can do some logic. In particular, I will show how a notion of intersection (obtained from a Frobenius algebra) allows one to model relative pronouns; from there, together with the cosine distance between vectors, one can obtain a notion of generalised quantification. The latter part is joint work with A. Rypacek (Oxford). |
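As a toy illustration of the simplest of these composition operators, the pointwise addition and multiplication of Mitchell and Lapata, here is a minimal sketch; the word vectors below are invented values for the example, not real distributional counts:

```python
from math import sqrt

# Invented toy vectors standing in for distributional word representations.
red = [0.9, 0.1, 0.3]
car = [0.2, 0.8, 0.5]

def add(u, v):
    # Additive composition: the phrase vector is the pointwise sum.
    return [a + b for a, b in zip(u, v)]

def mult(u, v):
    # Multiplicative composition: the pointwise product stresses shared features.
    return [a * b for a, b in zip(u, v)]

def cosine(u, v):
    # Cosine similarity, the usual way composed vectors are compared.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

red_car_add = add(red, car)    # approximately [1.1, 0.9, 0.8]
red_car_mult = mult(red, car)  # approximately [0.18, 0.08, 0.15]
```

Neither operator is sensitive to word order, which is one reason the tensor-based operators mentioned in the abstract were introduced.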
27th February 2014
|
Ontology-based data access with OWL 2 QL: Theory and practice
Ontology-based data access (OBDA) is widely regarded as a key ingredient for the new generation of information systems. In the OBDA paradigm, an ontology defines a high-level global schema of data sources and provides a vocabulary for user queries. An OBDA system rewrites such queries and ontologies into the vocabulary of the data sources and then delegates the actual query evaluation to a suitable query answering system such as a relational database management system, an RDF store or a datalog engine.
This talk will focus on OBDA with the ontology language OWL 2 QL, one of the three profiles of the W3C standard Web Ontology Language OWL 2, and relational databases. It will discuss the succinctness problem for conjunctive query rewriting over OWL 2 QL ontologies, and also practical OBDA with Ontop (http://ontop.inf.unibz.it), including a little demo. |
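To make the rewriting step concrete, here is a minimal sketch for the simplest case, a single-atom query over a class hierarchy; the class names and axioms are invented for the example, and real OWL 2 QL rewriting (as in Ontop) handles far richer axioms and full conjunctive queries:

```python
# Invented mini-ontology: each entry reads "subclass: superclass",
# in the spirit of OWL 2 QL class-inclusion axioms.
subclass_of = {
    "Professor": "Teacher",
    "Lecturer": "Teacher",
    "Teacher": "Person",
}

def rewrite(query_class):
    """Rewrite the query q(x) :- C(x) into the set of classes whose instances
    the ontology entails to be instances of C (a union of atomic queries)."""
    result = {query_class}
    changed = True
    while changed:
        changed = False
        for sub, sup in subclass_of.items():
            if sup in result and sub not in result:
                result.add(sub)
                changed = True
    return result
```

Here `rewrite("Teacher")` yields the union over Teacher, Professor and Lecturer, which a relational back end can evaluate as a SQL UNION; the succinctness problem mentioned above asks how large such rewritings must get in the worst case.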
20th February 2014
|
Argument based practical reasoning in normative domains
This talk will describe how argumentation can be used for practical reasoning within a normative multi-agent domain. In such domains, agents may have different goals, and be constrained by different permissions, obligations and prohibitions. I will discuss how an agent can utilise argumentation to decide how to act in such situations. Game theory can also be used to address the practical reasoning problem, and I will therefore provide some justifications for the use of argument, based on the idea of system scrutability and its application to human-agent teams. Extending the practical reasoning problem through the introduction of uncertainty, and considering the need for cooperation between agents, I will then briefly consider the problem of persuading another agent to act in a manner that one finds desirable, through the use of opponent modelling.
|
13th February 2014
|
Linear (Integer) Programming and Fragments of First-Order Logic
Since the middle of the last century, logic has come to be understood, in large part at least, as the study of the relationship between formal languages and the structures those languages describe. Over the last two-and-a-half decades, that study has yielded spectacular progress in the development and analysis of decision procedures for various fragments of first-order logic. One of the most powerful tools employed in this analysis is the reduction of logical satisfiability problems to linear programming or to linear integer programming. The purpose of this talk is to survey some of the results that have been obtained in this way, and to explain the connections between the structure of models for the logical fragments in question and the structure of polyhedra.
|
30th January 2014
|
Knowledge-Based Agents of Automated Trading and Game Playing: Representation, Acquisition, Model and Methodology
In this talk, I will present how knowledge engineering methods are used to develop trader agents and market agents. In particular, I will show some specific systems and discuss their knowledge representation, acquisition, and evolution. Moreover, I will also present a knowledge engineering methodology for negotiating-agent development. The validity of the methodology is shown by re-modelling an existing high-impact system and developing a new system. In addition, I will briefly describe our ongoing work on logic-based game theory. Finally, I will talk about my plan for knowledge-based automated decision making and analysis for maintaining the stable development of China.
|
19th December 2013
|
Pareto Optimality in Coalition Formation
A minimal requirement on allocative efficiency in the social sciences is Pareto optimality. In this paper, we identify a close structural connection between Pareto optimality and perfection that has various algorithmic consequences for coalition formation. Based on this insight, we formulate the Preference Refinement Algorithm (PRA) which computes an individually rational and Pareto optimal outcome in hedonic coalition formation games. Our approach also leads to various results for specific classes of hedonic games. In particular, we show that computing and verifying Pareto optimal partitions in general hedonic games, anonymous games, three-cyclic games, room-roommate games and B-hedonic games is intractable while both problems are tractable for roommate games, W-hedonic games, and house allocation with existing tenants.
|
12th December 2013
|
Ontology-based Data Access: A Study through Disjunctive Datalog, CSP, and MMSNP
Ontology-based data access is concerned with querying incomplete data sources in the presence of domain specific knowledge provided by an ontology. A central notion in this setting is that of an ontology-mediated query, which is a database query coupled with an ontology. In this talk, I discuss several classes of ontology-mediated queries, where the database queries are given as some form of conjunctive query and the ontologies are formulated in description logics or other relevant fragments of first-order logic, such as the guarded fragment and the unary-negation fragment. I will discuss three contributions. First it is shown that popular ontology-mediated query languages have the same expressive power as natural fragments of disjunctive datalog. Second, we establish intimate connections between ontology-mediated queries and constraint satisfaction problems (CSPs) and their logical generalization, MMSNP formulas. Third, we exploit these connections to obtain new results regarding (i) first-order definability and datalog-definability of ontology-mediated queries, (ii) P/NP dichotomies for ontology-mediated queries, and (iii) the query containment problem for ontology-mediated queries.
The talk is based on joint work with M. Bienvenu, B. ten Cate, and C. Lutz. |
5th December 2013
|
Voting on Actions with Uncertain Outcomes
I will present a new model for voting under uncertainty, where a group of voters have to decide on a joint action to take, but the individual voters are uncertain about the current state of the world and thus about the effect that the chosen action would have. No prior knowledge of voting theory will be required to follow the talk. I will start with a short overview of the field of computational social choice and a discussion of its relevance to computer science, AI, and logic.
|
28th November 2013
Room 218
|
Endogenous Degrees of Justification in Abstract Argumentation
I will present a formal theory of the degree of justification of arguments which relies on the topology of the attack graph alone, without requiring exogenous structure like preferences or weights. The theory starts from a simple generalization of the notion of admissibility, which makes it sensitive to the numbers of attacks and counter-attacks of arguments. I will argue for the theory as a natural systematization of the notion of degree of justification, and as a principled generalization of Dung's semantics.
This is joint work with Sanjay Modgil. |
21st November 2013
|
Binary Aggregation by Selection of the Most Representative Voter
In a binary aggregation problem, a group of voters each express yes/no choices regarding a number of possibly correlated issues and we are asked to decide on a collective choice that accurately reflects the views of this group. A good collective choice will minimise the distance to each of the individual choices, but using such a distance-based aggregation rule is computationally intractable. Instead, we explore a class of aggregation rules that select the most representative voter in any given situation and return that voter's choice as the collective outcome. Two such rules, the average-voter rule and the majority-voter rule, are particularly attractive. We analyse their social choice-theoretic properties, their algorithmic efficiency, and the extent to which they are able to approximate the ideal defined by the distance-based rule. We also discuss the relevance of our results for the related framework of preference aggregation.
This is joint work with Ulle Endriss. |
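Under one natural reading of the two rules (an assumed sketch: the average-voter rule minimises total Hamming distance to the whole profile, the majority-voter rule minimises distance to the issue-wise majority outcome), they can be written in a few lines:

```python
def hamming(u, v):
    # Number of issues on which two yes/no ballots disagree.
    return sum(a != b for a, b in zip(u, v))

def average_voter(ballots):
    # Ballot minimising the total (equivalently, average) Hamming distance
    # to all ballots in the profile; ties broken by profile order.
    return min(ballots, key=lambda b: sum(hamming(b, other) for other in ballots))

def majority_voter(ballots):
    # Ballot closest to the issue-wise majority outcome.
    n = len(ballots)
    majority = [int(sum(b[i] for b in ballots) * 2 > n) for i in range(len(ballots[0]))]
    return min(ballots, key=lambda b: hamming(b, majority))

# Three voters, three binary issues (illustrative profile);
# both rules select the first ballot here.
profile = [[1, 1, 0], [1, 0, 0], [0, 1, 1]]
```

Unlike the intractable distance-based ideal, both selections run in time polynomial in the profile size.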
14th November 2013
|
Reason-Based Rationalization
We introduce a "reason-based" way of rationalizing an agent's choice behaviour, which explains choices in terms of "motivationally salient" properties of the options and/or the choice context, thereby explicitly modelling the agent's conceptualisation of any choice problem. Reason-based rationalizations can explain non-classical choice behaviour, including boundedly rational and sophisticated rational behaviour,
and predict choices in unobserved contexts, an issue neglected in standard choice theory. We characterize the behavioural implications of different reason-based models and distinguish two kinds of context-dependent motivation: "context-variant" motivation, where different choice contexts make different properties motivationally salient, and "context-regarding" motivation, where the agent cares not only about properties of the options, but also about properties relating to the choice context.
|
10th October 2013
|
Methods for Generating Random Specialised Argumentation Frameworks
Experimental approaches have become increasingly important in AI studies as a means of assessing algorithmic performance. An important basis for such studies is the availability of suitable testbeds upon which experimental investigations can operate. In the context of Dung's abstract model of argumentation, and in developments such as value-based (VAF) and extended argumentation frameworks (EAF), while there is a reasonably complete picture of questions relating to computational complexity, very little is known about the properties of "typical" AFs.
We consider a number of methods aimed at constructing random instances of AFs taking various forms. While classical abstract argumentation frameworks are naturally treated through well-established random directed graph methods, in the case of value-based, extended, weighted, recursive-attack, etc. frameworks, such methods may either be unsuitable on account of the structural conditions imposed, e.g. the so-called "multi-valued cycle" assumption in VAFs, or even inapplicable, since the target objects are not "graphs" in the usual sense, e.g. EAFs. Focusing on VAFs, we present two algorithmic methods for generating random instances of such cases which can be tailored to specific parameters and which are amenable to formal or empirical analysis. Such approaches thereby provide a means of generating testbeds of example frameworks against which the effectiveness of different computational techniques can be gauged. We also present some preliminary experimental findings regarding the behaviour of "typical" VAFs. This is joint work with Katie Atkinson and Samer Nofal. |
3rd October 2013
|
Some forms of collectively seeing to it that
In philosophical logic, most work on the logic of action focusses on agency, that is, on characterising the conditions under which one can say that it is the actions of a particular agent that are the cause of, or responsible for, a certain outcome or state of affairs. The semantics is usually based on a branching-time structure of some kind. The best known examples are probably the `stit' (seeing to it that) logics associated with Nuel Belnap and colleagues, though there are
other examples, including a formalism of my own that combines a transition-based account of action with `sees to it that' modalities. Often, it is not the actions of an individual agent but those of a set of agents, collectively, that bring about a certain outcome. Collective agency has received comparatively little attention. I am going to map out several different forms, several different senses in which one can say meaningfully that it is the actions of a particular set of agents, collectively, that are responsible for a certain outcome. This outcome may be unintentional, and perhaps even accidental; I am deliberately factoring out aspects of joint action such as joint intention, communication between agents, awareness of other agents' intentions and capabilities, even the awareness of another agent's existence. The aim is to investigate what can be said about collective agency when all such considerations are ignored, besides mere behaviour. In passing I will relate my account to some tentative suggestions made by Belnap and Perloff in 1993 on the distinction between what they call 'inessential members' and 'mere bystanders'. I will adjust some of their conjectures and distinguish further between what I call 'potentially participating bystanders' and 'impotent bystanders'.
|
27th June 2013
|
Using logic-based tools to understand the evidence in treating lung cancer
There is a long history of applying logic to medicine. In this talk, I discuss some of the previous approaches to the clinical application of logic. I also describe some of the different clinical areas where logic might be applied, and try to show how different clinical domains and requirements might influence the type of logical approaches chosen.
I illustrate this in relation to my work on modelling and reasoning with the results of clinical trials. I show how we can represent the results of clinical trials in a simple logical formalism, and then use argumentation to resolve some of the conflicts. In particular, I show how we can model a complex, real-world clinical domain of chemo-radiotherapy for the treatment of lung cancer. I show how argumentation can be used as a novel knowledge aggregation technique, and how the results of the aggregation compare with existing approaches and results in the clinical literature. |
20th June 2013
|
Text simplification and information access
In this talk, I'll overview a framework for text simplification (RegenT) and discuss its potential for use in information access applications, such as summarisation. I'll also discuss work I have done on analysing citation usage in scientific writing, and possible connections to argument mining. I'll conclude with a more open ended discussion on the potential for combining these two strands of research; for example, to simplify arguments, or reformulate technical writing for less informed audiences.
|
18th June 2013
|
PROLEG: An Implementation of the Presupposed Ultimate Fact
In this talk, we propose a legal reasoning system called PROLEG (PROlog-based LEGal reasoning support system). This is a logic programming implementation of the Japanese "theory of presupposed ultimate facts" in civil code. The theory is used by judges reasoning under incomplete information. Previously, we proposed a translation of the theory into logic programming, but here we propose a new syntax for the system which reflects lawyers' reasoning as closely as possible. The talk will start with a brief introduction to my legal reasoning research, which I call juris-informatics, and then explain the theory of presupposed ultimate facts and PROLEG.
|
6th June 2013
|
Intelim rules for classical connectives
Introduction and elimination (briefly intelim) rules are in a natural sense the simplest of all Horn rules for propositional connectives. They have also played a prominent role in some philosophical discussions of the meaning of the connectives. For this reason, their behaviour has more than a purely formal interest.
We consider two questions about such rules for classical connectives: the existence, for a given truth-functional connective, of at least one such rule that it satisfies, and the uniqueness of a connective with respect to the set of all of them. The answers are easy and well known in the context of rules using set/set sequents of formulae, but more complex and interesting for the restricted (and more often used) context of set/formula sequents on which we focus, as well as for the set/formula-or-empty context, which we also describe briefly if time is available. |
14th March 2013
|
Argumentation, non-monotonic reasoning and classical logic
Argumentation is emerging as the predominant logic-based paradigm for reasoning in the presence of uncertainty and conflict. In this talk I will begin by addressing the question "Argumentation - what's all the fuss about?" by highlighting features that distinguish argumentation from traditional approaches to non-monotonic reasoning, and the importance of these features in a wider AI context. I will then further illustrate by briefly reviewing two recent research developments. The first (undertaken in collaboration with Henry Prakken) describes a framework for argumentation applied to conflict resolution in Tarskian logics. I will then briefly present a special case of this framework and show that it corresponds to the well known non-monotonic formalism of Brewka (i.e., Preferred Subtheories). The second (undertaken in collaboration with Marcello D'Agostino) describes an approach to argumentation over inconsistent classical logic theories, that takes into account resource bounds on the reasoning capacities of agents involved in the construction and exchange of arguments.
|
28th February 2013
|
Cyclic Abduction of Inductive Termination Preconditions
The question "does program P terminate, given precondition X?" is a classic one in program analysis. In this talk, we aim to go a bit further and ask the following *abduction* question: "can we find a reasonable precondition X under which P terminates?". When one considers heap-manipulating programs, this problem typically involves inventing the inductive definitions of data structures. For example, a program that traverses "next" fields of pointers until it reaches "nil" will terminate given a heap structured as an acyclic list as input. But given a cyclic list, it will fail to terminate, and given something that is not a list at all, it will typically encounter a memory fault and crash.
Here we demonstrate a new heuristic method, called *cyclic abduction*, for automatically inferring the inductive definitions of termination preconditions for heap-manipulating programs. That is, given a program, this method returns the definition of an inductively defined predicate in separation logic under which the program is guaranteed to terminate (without crashing). Cyclic abduction essentially works by searching for a *cyclic proof* of termination of the program, abducing definitional clauses of the precondition as necessary to advance the proof search process. We have implemented cyclic abduction in an automated tool, which is capable of finding natural termination preconditions for a range of small programs. The abduced predicates may define segmented, cyclic, nested, and/or mutually defined data structures. This is joint work with Nikos Gorogiannis (also at UCL). |
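The list-traversal example above can be made concrete; the fuel bound is an artefact of this sketch (so that the non-terminating cyclic case can be observed finitely) and is not part of the abduction method itself:

```python
class Node:
    def __init__(self, nxt=None):
        self.next = nxt

def traverse(head, fuel=1000):
    """Walk 'next' pointers until None. Returns the number of steps if the
    walk terminates, or None if the fuel runs out (as on a cyclic list)."""
    steps = 0
    node = head
    while node is not None:
        if steps >= fuel:
            return None  # fuel exhausted: traversal did not terminate
        node = node.next
        steps += 1
    return steps  # terminated: the 'acyclic list' precondition held

# Acyclic three-element list: traversal terminates after 3 steps.
acyclic = Node(Node(Node()))

# Cyclic list: traversal never reaches None.
last = Node()
cyclic = Node(Node(last))
last.next = cyclic
```

Cyclic abduction would, roughly, infer the inductive "acyclic list" predicate as the precondition separating these two behaviours.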
7th February 2013
|
Interval-based Approaches to Temporal Reasoning
The semantics of temporal logics is typically defined with respect to structures in which the primitive entities are time points. Interval-based reasoning is an alternative approach, which focuses on time intervals instead of time points.
The Halpern-Shoham logic is an example of a temporal logic of time intervals. Some effort has been put in over the last ten years to classify fragments of this logic with respect to the expressibility and decidability of their decision problems. In this talk, I will briefly introduce this logic and present recent results, possible applications and further directions. The remaining time of the talk will be devoted to a somewhat more theoretical, yet interesting, subject: decidable fragments of first-order logic. I will briefly summarize recent results and present the main ideas of selected proofs. |
17th January 2013
|
C-SHORe: A Collapsible Approach to Higher-Order Model-Checking
We study the model-checking problem for models of higher-order programs, in particular via higher-order recursion schemes and their connection to collapsible pushdown systems. We present a saturation method for global backwards reachability analysis of these models. Beginning with an automaton representing a set of configurations, we build an automaton accepting all configurations that can reach this set. We have implemented this technique, and will comment on some of our optimisation techniques and preliminary results.
Joint work with C. Broadbent, A. Carayol and O. Serre in Paris. |
17th December 2012
|
Canonical Minimal Nondeterministic Automata accepting Regular Languages
For any regular language L there is a unique finite nondeterministic automaton accepting L which is minimal amongst all nondeterministic acceptors, relative to a certain measure. It is never larger than the minimal deterministic automaton and can be exponentially smaller. Although it already exists in the literature, the characterisation amongst all acceptors seems to be new, as is the Brzozowski-style minimization algorithm one can use to compute it.
This result sits within a coalgebraic framework which unifies various canonical nondeterministic acceptors of regular languages. The basic idea is to compute the minimal deterministic automaton in a locally finite variety and then reinterpret it as a smaller nondeterministic automaton, relative to the carrier algebra's generators. Joint work with Jiri Adamek, Stefan Milius and Henning Urbat. |
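For comparison, the classical Brzozowski minimisation referred to above (reverse, determinise, reverse, determinise) can be sketched for ordinary automata; the encoding below is an illustrative assumption and is not the coalgebraic construction of the talk:

```python
def determinize(alphabet, delta, inits, finals):
    # Subset construction over the reachable part. Transition maps send
    # (state, letter) to a *set* of successor states throughout.
    start = frozenset(inits)
    dfa_delta, todo, seen = {}, [start], {start}
    while todo:
        s = todo.pop()
        for a in alphabet:
            t = frozenset(q2 for q in s for q2 in delta.get((q, a), ()))
            dfa_delta[(s, a)] = {t}
            if t not in seen:
                seen.add(t)
                todo.append(t)
    return alphabet, dfa_delta, {start}, {s for s in seen if s & finals}

def reverse(alphabet, delta, inits, finals):
    # Reverse every transition and swap initial and final states.
    rdelta = {}
    for (q, a), targets in delta.items():
        for t in targets:
            rdelta.setdefault((t, a), set()).add(q)
    return alphabet, rdelta, set(finals), set(inits)

def brzozowski(alphabet, delta, inits, finals):
    # Minimal complete DFA via reverse, determinise, reverse, determinise.
    return determinize(*reverse(*determinize(*reverse(alphabet, delta, inits, finals))))

def accepts(automaton, word):
    alphabet, delta, inits, finals = automaton
    current = set(inits)
    for a in word:
        nxt = set()
        for q in current:
            nxt |= delta.get((q, a), set())
        current = nxt
    return bool(current & finals)

# NFA for "words over {a, b} ending in ab"; its minimal complete DFA has 3 states.
minimal = brzozowski(
    {"a", "b"},
    {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}},
    {0},
    {2},
)
```

The canonical nondeterministic acceptor of the talk is computed by an analogous reinterpretation step, but in a locally finite variety rather than over plain sets.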
9th August 2012
|
Larry Moss (Indiana University)
Topics in Coalgebra Connected to Logic and Recursion
Coalgebra is a topic in theoretical computer science which studies many of the most prominent semantic models in the subject (automata of various sorts, grammars, Markov chains, Kripke models, etc.). It studies these at an abstract level using the tools of category theory, and as a result the subject enjoys many connections to outside areas. In this talk, I want to mention connections to logic and to recursion. The connections to logic mainly have to do with modal logic, and I will mention some new results related to an old topic: modally saturated models, introduced by Kit Fine. The connections to recursion are extensive, and I will only be able to scratch the surface. I will start with the category-theoretic formulation of well-foundedness. This originates in work of Osius and of Taylor, and I'll discuss a more recent variant called well-pointedness. The main result here is that several of the most important kinds of semantic models (initial algebras, final coalgebras, and initial iterative algebras) can be unified using the notion of a well-pointed coalgebra, at least on sets.
This talk reports on joint work with Jiri Adamek, Stefan Milius, and Lurdes Sousa. |
28th June 2012
|
Andrew Moshier (Chapman University)
A Relational Category of Formal Contexts
Formal contexts (or polarities, in Birkhoff's terminology) provide a convenient combinatorial way to present closure operators on sets. They have been studied extensively for their applications to concept analysis, particularly for finite contexts, and are used regularly as a technical device in general lattice theory (for example, to describe the Dedekind-MacNeille completion of a lattice). In the most common uses, morphisms of contexts do not play a role. Although various scholars (most thoroughly, Marcel Erné) have considered certain notions of context morphisms, these efforts have generally concentrated either on special kinds of contexts that closely match certain "nice" lattices or on special kinds of lattice morphisms.
Here we propose a category of formal contexts in which morphisms are relations that satisfy a certain natural combinatorial property. The idea is to take our cue from the fact that a formal context is simply a binary relation between two sets. So the identity morphism of such an object should be that binary relation itself. From this, we get the combinatorial properties of morphisms more or less automatically. The first main result of the talk is that the category of contexts is dually equivalent to the category INF of complete meet lattices with meet-preserving maps. To get a duality with complete lattices, the second result characterizes those context morphisms that correspond to complete lattice homomorphisms. We also consider various constructions that are well-known in INF to illustrate that formal contexts yield remarkably simple, combinatorial descriptions of many common constructions.
Tadeusz Litak (University of Leicester)
Löb, Nakano and their Neighbours
Nakano's 2000 LiCS and 2001 TACS papers alerted the typing community to the use of Löb-style modalities (a) for ensuring productivity of guarded (co-)recursion, (b) in the (meta-)theory of logical step-indexed relations and (c) in reactive programming. Much subsequent research has been done in the area since; it is enough to mention Appel et al. (POPL'07), Birkedal et al. (LiCS'11, POPL'11, FiCS'10), Krishnaswami & Benton (LiCS'11, ICFP'11), Krishnaswami et al. (POPL'12), Benton and Tabareau (TLDI'09) or Jaber et al. (LiCS'12).
The success story of these modalities ultimately relies on the fact that such axioms allow us to calculate guarded fixed-points both (A) of formulas (type expressions) and (B) of suitably factoring morphisms (on the other side of the Curry-Howard correspondence). (A) is well known to modal logicians: it is in fact Sambin's fixed point theorem in disguise. However, it does not seem that either the logical community or the type-theoretical community is well aware of this connection. If time allows, I will provide some more details on this, and on some of my ongoing work on the subject. However, I will spend even more time on point (B) above. From the perspective of a modal logician, the subject is intimately connected to finding the right natural deduction systems for strong Löb-like formalisms. I find the subject of categorical completeness of these systems particularly fascinating: the categorical semantics covers most models of guarded recursion studied in the literature, such as ultrametric spaces or, more recently, the topos-theoretic models of the Copenhagen group. There is also a natural connection with Bloom and Ésik's iterative theories and partial iteration theories. Furthermore, the intended models of guarded recursion and related typing systems proposed in the literature validate some additional superintuitionistic modal laws, which are completely independent from Löb-like principles. What is the computational importance of such schemes? |
21st June 2012
|
Antonis Kakas (University of Cyprus) and Francesca Toni (Imperial College London)
Argumentation Logic
The development of formal logic was motivated by the aim to capture the process of argumentation between different reasons for and against a position, and formal logic is often regarded as a way to give validity to arguments. In this paper, we study how formal logic can be understood in terms of argumentation, and show, in particular, that Propositional Logic can be equivalently reformulated as an argumentation framework, which we call Argumentation Logic. In Argumentation Logic, sets of sentences are understood as arguments, and a natural notion of acceptability of arguments, given consistent theories, corresponds to classical entailment. The main idea behind Argumentation Logic is to understand the Reductio ad Absurdum rule in Natural Deduction in terms of argumentation, and to control the application of this rule so as to achieve a paraconsistent form of reasoning. For inconsistent theories, Argumentation Logic does not trivialise and thus gives a natural generalisation of Propositional Logic.
|
7th June 2012
|
Ben Moszkowski (De Montfort University, Leicester)
Compositional Reasoning using Intervals and Time Reversal
We apply Interval Temporal Logic (ITL), an established temporal formalism for reasoning about time periods, to extending known facts by looking at them in reverse and then reducing reasoning about infinite time to finite time. Our presentation discusses a basic and interesting class of compositional ITL formulas closed under conjunction and the temporal operator "always", as well as some properties obtained from them with the aid of time reversal. We are currently exploring the use of time reversal as part of a compositional analysis of some aspects of concurrent behaviour involving mutual exclusion. It also appears that time reversal can sometimes assist in reducing reasoning in ITL to conventional linear-time temporal logic.
Ram Ramanujam (Institute of Mathematical Sciences, Chennai)
Bounds on Proof Size in Security Verification
In the formal analysis of security protocols, what the intruder infers from messages travelling on the network can be abstracted into checking whether a term is derivable in an inference system from a given finite set of terms. Thus verification amounts (essentially) to studying bounds on proof size. We use this technique in the context of distributive encryption to show that the (static) intruder problem is DEXPTIME-complete.
The work reported is joint with A. Baskar (Darmstadt) and S. P. Suresh (Chennai). |
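The term-derivability abstraction the speaker describes can be made concrete with a toy Dolev-Yao-style inference system. The sketch below is my own simplification (not the distributive-encryption system of the paper): the intruder's finite knowledge is closed under projection and decryption with known keys, and a term is derivable iff it can then be synthesised by pairing and encryption.

```python
# Toy Dolev-Yao derivability: atoms are strings, compound terms are tuples
# ('pair', a, b) or ('enc', msg, key).

def analyse(knowledge):
    """Close a finite set of terms under projection and decryption."""
    kb = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(kb):
            if isinstance(t, tuple) and t[0] == 'pair':
                new = {t[1], t[2]}                # project both components
            elif isinstance(t, tuple) and t[0] == 'enc' and t[2] in kb:
                new = {t[1]}                      # decrypt with a known key
            else:
                continue
            if not new <= kb:
                kb |= new
                changed = True
    return kb

def derivable(t, kb):
    """Can t be synthesised (by pairing/encryption) from the analysed set?"""
    if t in kb:
        return True
    if isinstance(t, tuple) and t[0] in ('pair', 'enc'):
        return derivable(t[1], kb) and derivable(t[2], kb)
    return False

kb = analyse({('enc', 'secret', 'k'), ('pair', 'k', 'nonce')})
print(derivable('secret', kb))  # True: the key is recovered from the pair
```

The analysis phase is the fixed-point computation whose proof-size bounds the abstract refers to; here it plainly terminates because analysis only produces subterms of the initial knowledge.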
31st May 2012
|
Christopher Hampson (King's College London)
Undecidable products involving the logic of 'elsewhere'
The modal logic of 'elsewhere', or of the difference operator, Diff, is known to be Kripke complete with respect to the class of symmetric, pseudo-transitive frames; frames not too dissimilar to equivalence relations. Indeed, the validity problems for both Diff and S5 are in co-NP.
Product logics of the form L x S5 are usually decidable, and S5 x S5 even enjoys the finite model property. In contrast, we present some cases where the transition from L x S5 to L x Diff not only increases the complexity of the validity problem, but in fact introduces undecidability. In addition, we show that Diff x Diff lacks the finite model property. The strategy we use for demonstrating undecidability involves a reduction from the non-halting problem for 2-counter Minsky machines, whereas previous strategies in this area have typically involved Turing machines or grid-tiling problems. This is joint work with Agi Kurucz. |
24th May 2012
|
Stanislav Kikot (Birkbeck College, University of London)
On modal definability of first-order formulas in expressive description logics
The correspondence between modal axioms and first-order Kripke frame conditions is at the heart of modal logic. Developed in the 1960s, it is still a common tool for proving the completeness of many modal calculi. Typical modern examples of its application are the various logics of multi-agent systems for reasoning about agents' knowledge, beliefs, intentions and cooperative actions.
There are two traditional kinds of such correspondence: the global correspondence between modal formulas and closed first-order formulas, and the local correspondence between modal formulas and first-order formulas with one free variable. In "Tools and Techniques of Modal Logic" by Marcus Kracht (~1998), a definition of the local correspondence between finite tuples of modal formulas and first-order formulas with several free variables appeared for the first time. There the basic properties of this correspondence were stated and proved, and a special calculus (the so-called "calculus of internal descriptions") was devised to infer instances of the correspondence. The book also used this correspondence to give a formal proof of the claim now known as "Kracht's theorem". Until recently, this correspondence was used only as a technical tool for proving such theorems. But recently some query answering algorithms relying on the classical correspondence theory for first-order formulas with one free variable were introduced by E. Zolin. The main idea of these algorithms is to replace a query (i.e. a first-order formula) with a corresponding modal formula. However, because they use the classical local correspondence theory, the range of application of these algorithms is restricted to unary queries, i.e. to first-order formulas with one free variable. This restriction can be overcome by considering a more general kind of modal definability, and this is the starting point of our work. Our aim is to advance the correspondence theory for first-order formulas with several free variables as far as possible, and to apply it to answering queries with many answer variables.
Ilya Shapirovsky (Institute of Information Transmission Problems, Russian Academy of Sciences)
Products of S5 and modal logics of Hamming spaces
In this talk we consider unimodal logics of the relation H: s H t iff s and t are words of the same length and h(s,t)=1, where h is the Hamming distance.
Unimodal logics of this kind are closely related to many-dimensional logics. If we consider the n-dimensional product of the inequality frame over a given alphabet A, then the Hamming box-operator on words of length n acts as the conjunction of all box-operators of the product. On the other hand, we show that all modalities of the n-dimensional product of A with the universal relation can be expressed in the unimodal language with the Hamming box-operator. It follows that undecidability results for products of S5 transfer to logics of Hamming frames. Joint work with Andrey Kudinov and Valentin Shehtman. |
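The relation H is easy to compute directly, which makes small Hamming frames convenient to experiment with. A brief illustrative sketch (helper names are mine, not from the talk):

```python
from itertools import product

def hamming(s, t):
    """Hamming distance between two words of equal length."""
    return sum(a != b for a, b in zip(s, t))

def H(s, t):
    """s H t iff s and t have the same length and h(s, t) = 1."""
    return len(s) == len(t) and hamming(s, t) == 1

# H-successors of '00' in the 2-dimensional Hamming frame over {0, 1}:
# exactly the words differing from '00' in a single coordinate.
succ = [''.join(w) for w in product('01', repeat=2) if H('00', ''.join(w))]
print(succ)  # ['01', '10']
```

Each H-step changes exactly one coordinate, which is why the Hamming box over words of length n behaves like the conjunction of the n box-operators of the product frame, as the abstract explains.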
9th May 2012
|
Benedikt Löwe (ILLC Amsterdam / Universität Hamburg)
Modal logics of set-theoretic constructions
The modal logic of forcing is the set of modal formulas that translate into true sentences of set theory under the "forcing interpretation" (i.e., replace propositional variables by arbitrary sentences of set theory and Box phi by the set-theoretic formula corresponding to "for every forcing extension, phi is true"). The modal logic of provable forcing was proved to be S4.2 by Hamkins and Loewe. In this talk, we shall look at variants of this interpretation (allowing only a subclass of forcings) and the natural dual interpretation, the "ground model interpretation".
|
15th March 2012
|
Practical application (and complications) of the knapsack problem
Having left Imperial for an economic consultancy in 2010, I would like to share some experiences of running into this NP-complete classic in practical applications. Full disclaimer to avoid disappointment: this talk does not contain any new complexity results or proofs, but rather a short introduction to mechanism design and combinatorial auctions.
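For readers who have not met the 0/1 knapsack problem the title alludes to: despite being NP-complete, it admits an exact dynamic-programming solution in pseudo-polynomial time. A minimal sketch (the item data is illustrative, not from the talk):

```python
def knapsack(values, weights, capacity):
    """Exact 0/1 knapsack via dynamic programming: O(n * capacity) time."""
    best = [0] * (capacity + 1)  # best[c] = max value achievable with capacity c
    for v, w in zip(values, weights):
        # iterate capacities downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Items worth 60, 100, 120 weighing 10, 20, 30; capacity 50.
print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```

The pseudo-polynomial bound is what makes small practical instances tractable; the practical complications the speaker mentions arise when knapsack-like structure appears inside combinatorial auctions rather than in this textbook form.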
|
8th March 2012
|
Agi Kurucz (King's College London)
Undecidable Propositional Logics with a Binary Associative Connective
Recently, several results have been published, by J. Brotherston and M. Kanovich, and by D. Larchey-Wendling and D. Galmiche, on the undecidability of variants of propositional Separation Logic, and of substructural logics that combine standard propositional logics with a multiplicative linear logic and admit a Kripke-style truth interpretation in which worlds are understood as 'resources' (such as the Logic of Bunched Implications).
The talk is a brief survey of possibly related results on undecidable equational theories of various classes of semilattice-ordered semigroups, obtained and published in the mid-1990s. Though our assumptions seem to be slightly stronger than those of the recent results, the methods used are different and might still be relevant to decision problems of the above logics. Joint work with H. Andreka, I. Nemeti, I. Sain and A. Simon. |
1st March 2012
|
Martin Berger (University of Sussex)
Program Logics for Run-Time Meta-Programming
Meta-programming is nearly as old as computing itself: it means generating or analysing programs by other programs, i.e. in an algorithmic way. It is also sometimes referred to as programs-as-data. In recent years the systematic use of meta-programming has caught the attention of researchers:
- software engineers use it in their quest to make the hosting of domain-specific languages, a key software engineering tool, easier;
- semanticists wonder about the theoretical properties of meta-programming languages, e.g. how the meaning of meta-programs and generated programs relate.
One of the key steps towards a better understanding of meta-programming is to develop program logics that enable us to do high-level reasoning. In this talk I will first give an overview of key ideas in meta-programming, and then present the first program logic for a simple, idealised meta-programming language (a variant of Pfenning and Davies' MiniML, which relates closely to certain modal logics). Joint work with Laurie Tratt. |
14th February 2012
|
Szabolcs Mikulas (Birkbeck College, University of London)
Equational Theories of Extended Kleene Algebras
D. Kozen defined the class KA of Kleene algebras as a finitely axiomatised quasi-variety. Sound interpretations of KA include regular languages and families of binary relations. The corresponding classes of algebras are language Kleene algebras, LKA, and relational Kleene algebras, RKA. It is known that they generate the same variety: V(LKA)=V(RKA), but this variety is not finitely axiomatisable. Hence there is no finite, equational axiomatisation of the valid equations. Nevertheless, there is a finite, quasi-equational axiomatisation, since KA generates the same variety as LKA and RKA: V(KA)=V(LKA)=V(RKA).
A natural extension of Kleene algebras is Kleene lattices, KL, where we include meet in the signature. We show that language and relational Kleene lattices, LKL and RKL respectively, generate different varieties, though V(LKL) remains finitely axiomatisable over V(RKL). However, if we omit the identity constant, then the generated varieties coincide. V. Pratt suggested another, residuated extension of KA: action algebras, AA. He showed that action algebras form a variety (the quasi-equations in the axiomatisation of KA can be expressed as equations using the residuals of composition). Kozen further extended the signature by meet, resulting in action lattices, AL. We show that, similarly to the Kleene algebra case, relational action algebras, RAA, and lattices, RAL, generate non-finitely axiomatisable varieties. We also show that, unlike in the Kleene algebra case, these varieties cannot be generated by finitely axiomatised quasi-varieties (as long as we require the soundness of the axiomatisation in relational interpretations). Joint work with Hajnal Andreka and Istvan Nemeti. |
9th February 2012
|
Neil Yorke-Smith (American University of Beirut)
Model Checking of Agent Goals and Commitments
How should autonomous agents coordinate their private goals with their public commitments? Researchers have observed that goals and commitments are complementary, but have not yet developed a combined operational semantics for them. We relate their respective lifecycles as a basis for how these concepts cohere for an individual agent, and engender cooperation among agents. Our semantics yields important desirable properties, which we formulate using CTL. We prove the conditions under which a multi-agent system following our semantics will achieve convergence to a coherent state, using the NuSMV symbolic model checker.
This is joint work with Pankaj Telang and Munindar P. Singh (NCSU).
Bio: Neil Yorke-Smith is an Assistant Professor of Business Information and Decision Systems at the Suliman S. Olayan School of Business, American University of Beirut, and a Research Scientist at SRI International, USA. His research interests include planning and scheduling, intelligent agents, preference modelling and constraint reasoning, and their real-world applications. He currently serves on the Senior Programme Committees of AAMAS'12 and ICAPS'12. He holds a Ph.D. from Imperial College London. |
19th January 2012
|
Davide Grossi (University of Liverpool)
Abstract Argumentation and Modal Logic
In this talk I will pursue the simple idea of viewing Dung frameworks as Kripke frames. Several techniques and results can accordingly be imported from modal logic to abstract argumentation. Some of these results are of interest from a merely logical point of view (e.g., proof systems for argumentation) some others from the point of view of argumentation theory itself (e.g., adequate games for argumentation), some, I argue, for both. In this talk I will deal in particular with one application of the latter sort, looking at a logical analysis of the fixpoint theory behind abstract argumentation.
|
15th December 2011
|
Franck van Breugel (York University)
On the Complexity of Probabilistic Bisimilarity
Probabilistic bisimilarity is a fundamental notion of equivalence on labelled Markov chains. It has a natural generalisation to a probabilistic bisimilarity pseudometric, whose definition involves the Kantorovich metric on probability distributions. The probabilistic bisimilarity pseudometric has discounted and undiscounted variants, according to whether one discounts the future in observing discrepancies between states.
In this talk, we will look at the complexity of computing probabilistic bisimilarity and the probabilistic bisimilarity pseudometric on labelled Markov chains. We show that the problem of computing probabilistic bisimilarity is P-hard by reduction from the monotone circuit value problem. We also show that the discounted probabilistic bisimilarity pseudometric is rational and can be computed exactly in polynomial time using the network simplex algorithm and the continued fraction algorithm. In the undiscounted case we show that the probabilistic bisimilarity pseudometric is again rational and can be computed exactly in polynomial time using the ellipsoid algorithm. This talk is based on joint work with Di Chen and James Worrell. |
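Exact probabilistic bisimilarity (as opposed to the pseudometric) can be computed by a simple partition-refinement loop: start from the partition by label, and split a block whenever two of its states assign different probability mass to some block. A minimal sketch on a toy chain (data and names are my own illustration, not the algorithms of the talk):

```python
def bisimilarity(labels, trans):
    """Coarsest partition refining labels such that states in a block
    assign equal probability mass to every block."""
    by_label = {}
    for s, l in labels.items():
        by_label.setdefault(l, set()).add(s)
    partition = list(by_label.values())
    while True:
        refined = []
        for block in partition:
            groups = {}
            for s in block:
                # probability mass s sends into each current block
                sig = tuple(round(sum(trans[s].get(t, 0.0) for t in b), 9)
                            for b in partition)
                groups.setdefault(sig, set()).add(s)
            refined.extend(groups.values())
        if len(refined) == len(partition):   # no block was split: stable
            return [frozenset(b) for b in partition]
        partition = refined

labels = {'a': 'x', 'b': 'x', 'c': 'y'}
trans = {'a': {'a': 0.5, 'c': 0.5}, 'b': {'b': 0.5, 'c': 0.5}, 'c': {'c': 1.0}}
print(bisimilarity(labels, trans))  # a and b land in the same block
```

This naive refinement already runs in polynomial time; the P-hardness result in the talk shows one should not expect an efficient parallel (NC) algorithm.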
7th December 2011
|
Ian Pratt-Hartmann (University of Manchester)
Topological Logics of Euclidean Spaces
The field of Artificial Intelligence known as Qualitative Spatial Reasoning is concerned with the problem of representing and manipulating spatial information about everyday objects. In recent decades, much activity in this field has centred on "spatial logics"---formal languages whose variables range over regions of space, and whose non-logical primitives represent geometrical relations and operations involving those regions. The central problem is to determine whether the configuration described by a given formula is geometrically realizable in 2D or 3D Euclidean space. When the geometrical relations and operations are all topological in character, we speak of a "topological logic".
In this talk, I consider topological logics based on the (quantifier-free) languages, Bc and Bc0, obtained by augmenting the signature of Boolean algebras with a unary predicate representing, respectively, the property of being connected, and the property of having a connected interior. These languages are interpreted over the regular closed sets in Euclidean space (of various dimensions)---or, alternatively, over the regular closed polyhedral sets in Euclidean space. I shall explain why the realizability problem for Bc is undecidable over the regular closed polyhedra for all dimensions greater than 1, and why the realizability problem for both languages is undecidable over both the regular closed sets and the regular closed polyhedra (polygons) for dimension 2. I shall present a contrasting result, however: the realizability problem for Bc0 is NP-complete over the regular closed sets in all dimensions greater than 2, while the corresponding problem for the regular closed polyhedra is ExpTime-complete. Altogether, these facts suggest that Qualitative Spatial Reasoning is harder than its early proponents conceivably imagined. Joint work with Roman Kontchakov, Yavor Nenov and Michael Zakharyaschev. |
1st December 2011
|
Dusko Pavlovic (Royal Holloway, University of London)
Mining for Meaning by Weighted Limits
In a logical system, the meaning of the basic formulas is assumed to be given upfront. In a computer program, the data structures are assumed to be declared, or imported, before the computation begins.
In network computation, however, there are usually no global declarations, and each node can interpret the shared data differently. Assigning meaning to data becomes an ongoing process, supported by the network services devoted to search, classification, naming and routing. Semantics of computation thus expands into a space with many new dimensions. Its logical problems blend with the problems of learning, information retrieval, and data analysis. How do the mathematical methods of semantics of computation adapt to these new challenges? In this talk, I will propose a new method for concept extraction based on weighted limits and colimits. The distances extracted, say, from the numbers of hits displayed with Google outputs need to be completed, in a categorical sense, to display their latent semantics, or to uncover the "dark matter" of the network. As a byproduct, I obtain some mathematical constructions that may be of independent interest: the spectral decomposition of metric bimodules, and the (Dedekind-MacNeille-style) bicompletion of metric and proximity spaces. |
24th November 2011
|
Charles Morisset (Royal Holloway, University of London)
PTaCL: A Formal Language for Authorisation Policies Based on a 3-valued Logic
An authorisation policy describes how passive entities (e.g. files) can be accessed by active entities (e.g. users) within an information system. Many languages and algebras have been proposed in recent years for the specification of such policies, but few offer a rich set of language features with a well-defined semantics, in particular in the context of attribute based access control and open systems. We propose here the language PTaCL, which is decomposed into PTL, for the specification of policy targets (i.e. over which requests a policy applies), and PCL, for the composition of policies. Both PTL and PCL are defined using a 3-valued logic à la Kleene, and we define for each language three operators that are functionally complete. Moreover, we identify an attack based on the hiding of attribute values, and we show how to define policies preventing such attacks.
Joint work with Jason Crampton (Royal Holloway, University of London) |
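The 3-valued logic à la Kleene that underlies PTL and PCL can be sketched with a third truth value standing for "indeterminate" (e.g. a request that omits an attribute). The encoding below is my own illustration of Kleene's strong connectives, not the PTaCL operators themselves:

```python
# Kleene's strong three-valued logic: True, False, and None ("indeterminate").

def k_not(a):
    """Negation: indeterminate stays indeterminate."""
    return None if a is None else (not a)

def k_and(a, b):
    """Conjunction: False wins outright; otherwise any unknown infects."""
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def k_or(a, b):
    """Disjunction via De Morgan duality."""
    return k_not(k_and(k_not(a), k_not(b)))

print(k_and(True, None))   # None: outcome depends on the unknown value
print(k_or(True, None))    # True: already settled regardless of the unknown
```

The attribute-hiding attack mentioned in the abstract exploits exactly this asymmetry: by withholding an attribute, a requester can push an evaluation from a definite value to the indeterminate one, so policy compositions must be chosen to be monotone against such moves.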
10th November 2011
|
Global Caching for Flat Coalgebraic Fixed Point Logics
The framework of coalgebraic logic allows for the uniform definition of wide ranges of logics. The (efficient) decision of the satisfiability/provability problem is of particular interest in this context. Various generic algorithms have recently been proposed to this end; in practice, algorithms which employ the technique of global caching have proven to be particularly efficient. As a first contribution, we propose an improvement of the propagation part of global caching algorithms, replacing the computation of the greatest fixed point of so-called proof transitionals by a series of least fixed point computations.
Subsequently we introduce (flat) fixed point operators to coalgebraic logics and present a correct and optimal global caching algorithm for the obtained flat coalgebraic fixed point logics. This algorithm is built upon a set of so-called focusing sequent rules and makes use of a more intricate instance of the above-mentioned adapted propagation. As a byproduct of the proof of the correctness of this algorithm, we obtain a new small model lemma which improves upon previously known results. |
3rd November 2011
|
Thomas Ågotnes (University of Bergen)
Coordinating Multi-agent Systems using Social Laws
|
20th October 2011
|
John McCabe-Dansted (University of Western Australia)
A Rooted Tableau for BCTL*
The existing pure tableau technique for satisfiability checking of BCTL* [Reynolds 2007] begins by constructing all possible colours. Traditional tableaux begin with a single root node, and only construct formulae that are derived from that root. Such rooted tableaux provide much better performance on most real-world formulae, as they only need to construct a fraction of the possible nodes. We present a rooted variant of this tableau for BCTL*, together with an implementation demonstrating that the performance of our rooted variant is superior to the original; this implementation is made available as a Java applet. We discuss further possible optimisations. This research will be useful in finding an optimised rooted tableau for CTL*.
Mark Reynolds (University of Western Australia)
Tableau for CTL*
We present a sound, complete and relatively straightforward tableau method for deciding valid formulas in the propositional version of full computation tree logic CTL*.
|
27th July 2011
|
James Thomson (The Australian National University)
Current Theorem Provers for CTL
Recently several methods and accompanying implementations of theorem provers for Computation Tree Logic have been created. This talk aims to provide some insight into how they compare to each other and why, as well as some comments on the nature of benchmarking.
|
29th June 2011
|
Øystein Linnebo (Birkbeck, University of London)
The Potential Hierarchy of Sets
The iterative conception of sets is often used to motivate (many of) the axioms of ordinary ZFC set theory. I discuss some reasons to regard the iterative hierarchy as potential rather than actual. Motivated by this, I develop a modal theory in which this potentialist conception can be expressed and investigated. The resulting theory is equi-interpretable with ZFC but sheds new light on the set-theoretic paradoxes and the foundations of set theory.
|
23rd June 2011
|
Laurie Hendren (McGill University Canada / Oxford University)
Compiler Tools for MATLAB
MATLAB is a popular dynamic programming language used for scientific and numerical programming. As a language, it has evolved from a small scripting language, intended as an interactive interface to numerical libraries, to a very popular language supporting many language features and libraries. The overloaded syntax and dynamic nature of the language, plus the somewhat organic addition of language features over the years, make static analysis of modern MATLAB quite challenging.
In this talk I will motivate why it is important for programming language and compiler researchers to work on MATLAB and I will provide a high-level overview of McLab, a suite of compiler tools my group is developing at McGill. The main technical focus of the talk will be on the foundational problem of resolving the meaning of an identifier in MATLAB. |
9th June 2011
|
Paul Levy (University of Birmingham)
Similarity Quotients as Final Coalgebras
We give a general framework connecting a branching-time relation on the nodes of a transition system to a final coalgebra for a suitable endofunctor. Examples of relations treated by our theory include bisimilarity, similarity, upper and lower similarity for transition systems with divergence, similarity for discrete probabilistic systems, and nested similarity. Our results describe firstly how to characterize the relation in terms of a given final coalgebra, and secondly how to construct a final coalgebra using the relation.
Our theory uses a notion of 'relator' based on earlier work of Thijs. But whereas a relator must preserve binary composition in Thijs' framework, it only laxly preserves composition in ours. It is this weaker requirement that allows nested similarity to be an example. |
2nd June 2011
|
Alessio Lomuscio (Imperial College London)
Verification of Multi-agent Systems
Multi-agent systems are distributed autonomous systems in which the components, or agents, act autonomously or collectively in order to reach private or common goals. Logic-based specifications for MAS typically do not only involve their temporal evolution, but also other intensional states, including their knowledge, beliefs, intentions and their strategic abilities.
This talk will survey recent work carried out on model checking MAS within the VAS research group at Imperial. Specifically, serial and parallel algorithms for symbolic model checking for temporal-epistemic logic, as well as bounded model checking procedures, will be discussed. MCMAS, an open-source model checker, will be briefly demonstrated. Applications of the methodology to the automatic verification of security protocols, web services, and fault tolerance will be briefly surveyed. |
19th May 2011
|
Neil Ghani (University of Strathclyde)
Generic Induction and Coinduction
We all have seen induction from the age of 7 at school. Usually a property P of the natural numbers is modelled as a function P:Nat -> Set which maps each number n to the set P(n) of proofs that the property P holds for the number n. In such a setting, induction is just a function of type:
P(0) -> (All n. P(n) -> P(n+1)) -> All n. P(n)
But what happens if we want to reason about types other than the natural numbers? And what happens if we want to model properties differently? And what if we interpret types not as Sets but as, say, domains, or using realisability? In this talk I will address the above questions and also their duals regarding coinduction. I hope to convince you that a very simple but powerful theory emerges, and I promise only to mention fibrations in the last 15 minutes. |
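Read computationally, the induction principle above says: given evidence for P(0) and a step function taking evidence for P(n) to evidence for P(n+1), we can build evidence for any P(n) by iteration. A toy sketch (in Python, with illustrative names; a dependently typed language would of course track P in the types):

```python
def induct(base, step, n):
    """Induction as recursion: base is evidence for P(0);
    step(k, e) turns evidence e for P(k) into evidence for P(k+1)."""
    evidence = base
    for k in range(n):
        evidence = step(k, evidence)
    return evidence

# Example: "the sum of the first n odd numbers is n^2", carrying the
# pair (n, sum) as the evidence and extending it one step at a time.
pair = induct((0, 0), lambda k, e: (k + 1, e[1] + 2 * k + 1), 10)
print(pair)  # (10, 100)
```

Replacing the natural numbers by another inductive type amounts to replacing this loop by the corresponding fold, which is exactly the generalisation the talk pursues.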
24th March 2011
|
David Makinson (London School of Economics)
Much Ado about Introduction and Elimination Rules
We examine persistent attempts to favour intuitionistic over classical logic on formal grounds concerning the role of introduction, elimination and structural rules (collectively known as elementary rules) for the logical connectives. In particular, we consider a long-standing hope, an open question, and a known fact about such rules, providing some formal results that reveal the hope to be vain, the open question to have an unexpected answer, and the fact to be fragile, as it depends on a gratuitous decision.
In more detail, the hope in question is that, in the context of rules using set=>formula sequents, (1) every propositional connective has intuitionistically correct introduction and elimination rules. The open question is whether, in the same context, (2) intuitionistic and classical consequence satisfy distinct sets of introduction and elimination rules. The fact whose robustness is in question is that, in the same context, (3) a well-chosen axiomatization of propositional logic using only structural, introduction and elimination rules and with the falsum primitive, leads to intuitionistic logic. Our formal results dash the hope, answer the open question in a rather surprising way, and complement the known fact. Specifically, in the context of rules using set=>formula sequents: (1) For a very wide range of connectives, including both negation and the falsum, there are no classically or intuitionistically correct introduction rules. Indeed, for each n there are at least as many n-place truth-functions that lack any correct introduction rule as enjoy one. (2) Irrespective of the choice of negation or the falsum as a primitive connective, exactly the same elementary rules are correct for intuitionistic as for classical consequence. (3) Axiomatizations of intuitionistic logic such as those mentioned above are not robust under choice of primitive connectives, becoming demonstrably impossible when negation rather than the falsum is taken as primitive. Of course, if sets of formulae are allowed on the right of sequents then the picture changes. As is well known, for such set=>set sequents our results (1) and (2) fail. But this is not to the advantage of intuitionistic logic. For as is equally well known, in that context the least consequence relation satisfying all classically correct elementary rules becomes just classical consequence, by-passing intuitionistic logic entirely. The talk does not use Gentzen calculi, nor play around with specific axiomatic presentations. 
Proofs are merely sketched in the talk, and are contained in a paper to appear, copies available on request. Joint work with Lloyd Humberstone. |
10th March 2011
|
John Harding (New Mexico State University)
Modal logics, von Neumann algebras, and Bichains
This will be a brief survey of three unrelated projects. The first concerns the well-known association between modal logics and topological spaces initiated by McKinsey and Tarski. Here we direct our attention to the modal logics associated with Stone spaces of Boolean algebras. The second deals with the poset of abelian subalgebras of a von Neumann algebra, and has connections to sheaf-theoretic approaches to the foundations of quantum mechanics. The final topic deals with structures called Birkhoff systems formed by weakening the absorption law x(x+y) = x = x+xy for lattices. Here projectivity of finite bichains is considered, and related to studies of fuzzy logic. Hopefully this quick tour may be enough to interest you in some open problems.
Herbert Wiklicky (Imperial College London)
Quantum and Program Analysis
We will cover in this talk two seemingly unrelated areas: (quantitative) program analysis and quantum computation. It appears that both subjects require surprisingly similar mathematical models (besides sharing the prefix 'quant'); we will concentrate in particular on the role of tensor products and projections (in a Hilbert space setting) in both fields. The aim of this comparison is to contribute to a better, less metaphysical understanding of concepts like entanglement or the structure of quantum logics a la Birkhoff and von Neumann.
|
17th February 2011
|
Martín Escardó (University of Birmingham)
An Amazingly Versatile Functional
I'll present the theory of selection functions, with applications to game theory, algorithms, proof theory and topology, among others.
Selection functions form a strong monad, which can be defined in any cartesian closed category, and has a morphism into the continuation monad. In certain categories of spaces and domains, the strength can be infinitely iterated. This infinite strength is an amazingly versatile functional that (i) optimally plays sequential games, (ii) realizes the Double Negation Shift used to realize the classical axiom of countable choice, and (iii) implements a computational version of the Tychonoff Theorem from topology, among other things. The infinite strength turns out to be built into the functional language Haskell, where it is called sequence, and can be used to write unexpected programs that compute with infinite objects, sometimes surprisingly fast. The selection monad also gives rise to a new translation of classical logic into intuitionistic logic, which we refer to as the Peirce translation, as monad algebras are objects that satisfy Peirce's Law. This is joint work with Paulo Oliva from Queen Mary, who already presented some of our work at the Imperial Logic Seminar. This talk will emphasize aspects he didn't touch on or mentioned only briefly. |
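In the finite case, the binary product of selection functions fits in a few lines. The sketch below follows the spirit of Escardó and Oliva's published examples (in Python rather than Haskell, with names of my choosing): a selection function for a player maps a payoff function to an optimal move, and the product plays a two-move sequential game optimally.

```python
def argmax(dom):
    """Selection function for a maximiser over a finite domain."""
    return lambda p: max(dom, key=p)

def argmin(dom):
    """Selection function for a minimiser over a finite domain."""
    return lambda p: min(dom, key=p)

def otimes(eps_x, eps_y):
    """Binary product of selection functions: optimal 2-move sequential play."""
    def eps(p):
        # the first player moves anticipating the second player's best reply
        x = eps_x(lambda a: p((a, eps_y(lambda b: p((a, b))))))
        y = eps_y(lambda b: p((x, b)))
        return (x, y)
    return eps

# A 2x2 zero-sum game: the row player maximises, the column player minimises.
M = [[3, 1], [0, 2]]
play = otimes(argmax([0, 1]), argmin([0, 1]))(lambda xy: M[xy[0]][xy[1]])
print(play)  # (0, 1): the maximin play, with outcome M[0][1] = 1
```

Iterating this product over an infinite sequence of selection functions is precisely the "infinite strength" of the abstract, and in Haskell it coincides with sequence for the selection monad.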
10th February 2011
|
Bob Coecke (University of Oxford)
In the beginning God created tensor, ... then matter, ... then speech.
This is a tale about quantum computing and a bit of natural language processing, ... all in pictures!
|
3rd February 2011
|
Thomas Wahl (University of Oxford)
From the Finite to the Infinite and Back: Techniques for Multi-Threaded Program Verification
The formal analysis of multi-threaded programs is among the grand challenges of software verification research. In this talk, I describe our recent and ongoing work on building predicate abstractions of multi-threaded C code, and on analysing the resulting parameterised Boolean programs. I begin with a brief illustration of the abstraction process in the presence of shared-variable communication. I then present a scalable method for analysing the reachability of program locations in finite-state programs executed by a bounded number of threads. This method, being based on counter abstraction, extends in principle to unbounded thread counts, but suffers in practice from the high complexity of coverability and reachability analyses for certain types of counter machines. A different approach rests on the assumption that in practical parametric program settings, a small "cutoff" number of threads suffices to generate all reachable program locations. While this assumption is widely considered to be safe, its practical usefulness hinges on how accurately we are able to estimate the cutoff of a given program. I present a new method that uses low-cost finite-state searches to analyse the reachable state space of a replicated finite-state program for increasing thread counts, until a condition emerges that allows us to conclude that the cutoff thread count has been reached. If this condition does not emerge, the unbounded search space has been narrowed down to a level where resorting to traditional coverability analyses becomes affordable. The result is an exact and efficient program location reachability method for non-recursive concurrent Boolean programs run by arbitrarily many threads.
|
16th December 2010
|
Jeremy Dawson (Australian National University)
Generic Methods for Formalising Sequent Calculi Applied to Provability Logic
We describe generic methods for reasoning about multiset-based sequent calculi which allow us to combine shallow and deep embeddings as desired. Our methods are modular, permit explicit structural rules, and are widely applicable to many sequent systems, even to other styles of calculi like natural deduction and term rewriting systems. We describe new axiomatic type classes which enable simplification of multiset or sequent expressions using existing algebraic manipulation facilities. We demonstrate the benefits of our combined approach by formalising in Isabelle/HOL a variant of a recent, non-trivial, pen-and-paper proof of cut-admissibility for the provability logic GL, where we abstract a large part of the proof in a way which is immediately applicable to other calculi. Our work also provides a machine-checked proof to settle the controversy surrounding the proof of cut-admissibility for GL.
Based on joint work with Rajeev Goré. |
9th December 2010
|
Thorsten Altenkirch (University of Nottingham)
Indexed Containers
We show that the syntactically rich notion of inductive families as present in systems like Agda can be reduced to a core type theory with a fixed number of type constructors exploiting the novel notion of indexed containers. Indexed containers generalize simple containers, capturing strictly positive families instead of just strictly positive types, without having to extend the core type theory. Other applications of indexed containers include datatype-generic programming and reasoning about polymorphic functions.
Based on joint work with Peter Morris. |
18th November 2010
|
Mirna Džamonja (University of East Anglia)
Coding Banach Spaces by Boolean Algebras and other First Order Structures
We shall discuss the problem of finding an isomorphically universal Banach space using the above mentioned methods.
|
11th November 2010
|
Katsuhiko Sano (Kyoto University)
Product of Hybrid Logics
Hybrid logic is an extended modal logic with nominals (a syntactic name of a state). In this talk, I will propose how to combine two hybrid logics, i.e., a way of dealing with two dimensions (e.g. time and space) at the same time in one setting. Our way of combining hybrid logics can be regarded as an extension of product of modal logics (Gabbay and Shehtman 1998). In the first part, I will explain how to `product' two hybrid logics over Kripke frames and establish a general completeness result (called pure completeness) for Kripke semantics. In the second part, I will extend this product method to hybrid logics over topological spaces and show that we can still keep a general completeness result. If time permits, I will also explain how to capture the dependence of one dimension (space) over the other (time).
|
4th November 2010
|
Achim Jung (University of Birmingham)
The Hofmann-Mislove Theorem
The Hofmann-Mislove Theorem states that there is a bijection between Scott-open filters of open sets and compact saturated subsets of a Sober space. What is to all appearances a technical result from Stone duality has over the years been found to play a central role in mathematical semantics. In this talk I plan to give an introduction to the theorem, its setting, its proof, its applications, and a more recent generalisation to four-valued logic.
|
20th November 2010
|
Prakash Panangaden (McGill University)
The Duality of State and Observation
In this talk we consider the problem of representing and reasoning about systems, especially probabilistic systems, with hidden state. We consider transition systems where the state is not completely visible to an outside observer. Instead, there are observables that partly identify the state. We show that one can interchange the notions of state and observation and obtain what we call a dual system. The double dual gives a minimal representation of the behaviour of the original system. We extend this to nondeterministic systems and to probabilistic transition systems and finally to partially observable Markov decision processes (POMDPs). In the case of finite automata restricted to one observable, we obtain Brzozowski's algorithm for minimizing finite-state language acceptors.
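The finite-automata special case mentioned at the end is easy to make concrete: minimising by reversing and determinising twice is exactly Brzozowski's algorithm. A minimal Python sketch, purely for illustration — the dictionary-based NFA representation is my own choice, not from the talk:

```python
def determinize(states, alphabet, delta, starts, finals):
    """Subset construction: NFA -> DFA (reachable part only).
    delta maps (state, symbol) -> set of successor states."""
    start = frozenset(starts)
    dfa_delta, seen, todo = {}, {start}, [start]
    while todo:
        S = todo.pop()
        for a in alphabet:
            T = frozenset(q for s in S for q in delta.get((s, a), ()))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    dfa_finals = {S for S in seen if S & finals}
    return seen, dfa_delta, start, dfa_finals

def reverse(states, alphabet, delta, starts, finals):
    """Reverse every transition; swap start and accepting states."""
    rdelta = {}
    for (q, a), targets in delta.items():
        for t in targets:
            rdelta.setdefault((t, a), set()).add(q)
    return states, alphabet, rdelta, set(finals), set(starts)

def brzozowski(states, alphabet, delta, starts, finals):
    """Minimal DFA = determinize(reverse(determinize(reverse(A))))."""
    for _ in range(2):
        states, alphabet, delta, starts, finals = \
            reverse(states, alphabet, delta, starts, finals)
        seen, ddelta, start, dfinals = \
            determinize(states, alphabet, delta, starts, finals)
        # repackage the DFA as an NFA for the next round
        states = seen
        delta = {k: {v} for k, v in ddelta.items()}
        starts, finals = {start}, dfinals
    return states, delta, next(iter(starts)), finals
```

Reversing and determinising once yields a deterministic acceptor of the reversed language; doing it twice produces the minimal DFA for the original language, mirroring the double-dual construction described above.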
|
25th June 2010
|
Marc Denecker (Katholieke Universiteit Leuven)
Logic Programming's Contributions to Classical Logic
Much research in logic programming and non-monotonic reasoning originates from dissatisfaction with classical logic as a knowledge representation language, and with classical deduction as a mode for automated reasoning. Discarding these classical roots has generated many interesting and fruitful ideas. However, to ensure the lasting impact of the results that have been achieved, it is important that they should not remain disconnected from their classical roots. Ultimately, a clear picture should emerge of what the achievements of logic programming mean in the context of classical logic, so that they may be given their proper place in the canon of science. In this talk, we take a look at different aspects of logic programming, in an effort to identify precisely the limitations of classical logic that were exposed in various disciplines of LP, and investigate how their solutions may be transferred back to the classical setting. Among the issues we thus address are the closed world assumption, abduction, "classical" negation, default reasoning with exceptions, and Gelfond's interpolation technique.
|
17th June 2010
|
Mike Dodds (University of Cambridge)
Abstract Predicates
Abstraction is key to understanding and reasoning about large computer systems. Abstraction is simple to achieve if data structures are disjoint, but rather difficult when they are partially shared, as is often the case for concurrent modules. In this talk I will present a program logic for reasoning abstractly about data structures that provides a fiction of disjointness and permits compositional reasoning. The internal details of a module are completely hidden from the client by concurrent abstract predicates. This logic reasons about a module's implementation using separation logic with permissions, and provides abstract specifications for use by client programs using concurrent abstract predicates. I will illustrate this proof system with the example of a set module.
Joint work with Dinsdale-Young, Gardner, Parkinson, & Vafeiadis |
3rd June 2010
|
Bernhard Reus (University of Sussex)
The Trouble with Nested Triples
Separation Logic has been recognised as an elegant way to deal with dynamic memory and pointers in program logics. It features a frame rule that allows one to reason locally with respect to the heap, so that one can restrict attention to just those heap cells actually touched by the command in question. O'Hearn, Yang and Reynolds have since suggested the 'hypothetical frame rule', which addresses issues of information hiding and modular reasoning. After a brief overview we discuss to what extent these concepts of 'framing' can be adapted to deal with code pointers. In order to talk about the behaviour of code stored in the heap, assertions will include Hoare triples. It is those 'nested triples' that complicate matters significantly (even more so if one wishes to model Pottier's anti-frame rule for 'direct-style' hiding). They give rise to a variety of new frame rules and axioms, some of which are not sound. The reasons for this initially surprising fact will be discussed mainly by means of (counter-)examples, but some reference will be made to how these issues manifest themselves semantically.
This talk is an overview on joint work (past and ongoing) with Lars Birkedal, Billiejoe Charlton, Francois Pottier, Jan Schwinghammer, and Hongseok Yang. |
27th May 2010
|
Mika Cohen (Imperial College London)
Symmetry Reduction for Epistemic Logic
To make sure that a distributed protocol functions as intended we may need to look at the knowledge of protocol participants and adversaries: can the eavesdropper Eve come to know that Alice votes for candidate Bob in an electronic voting protocol? Will the system monitor know if there is a fault? The analysis of what agents know and do not know can be performed automatically through an exhaustive search of the state space: recently, model checking techniques have been extended from temporal logic to epistemic logic, the logic of knowledge. In this talk, I will review basic model checking techniques for epistemic logic and look in more detail at how symmetries between agents and symmetries between data can be exploited in order to reduce the number of system states that need to be explored in order to check epistemic logic formulae.
|
20th May 2010
|
Jonathan Zvesper (University of Oxford)
'Interactive' Diagonalisation: from Brandenburger-Keisler to Lawvere
The subject of this talk is an impossibility theorem due to Brandenburger and Keisler (BK) that is essentially a 'two-universe' (or 'interactive') version of Russell's paradox. One contribution is a redescription of the BK impossibility theorem as a fixpoint lemma in the style of Lawvere. We consider what logical resources that fixpoint lemma requires, which leads us to a categorical perspective from which we reduce the BK result to Lawvere's. This is joint work with Samson Abramsky.
References: BK, "An Impossibility Theorem on Beliefs in Games", Studia Logica, 2006. Lawvere, "Diagonal Arguments and Cartesian Closed Categories", Reprints in Theory and Applications of Categories, 2006 (originally published in 1969). |
13th May 2010
|
Bart Jacobs (Radboud University Nijmegen)
Smart Cards in Public Transport: the Mifare Classic Case
The Mifare Classic chipcard has been dismantled. Between 1 and 2 billion copies have been sold worldwide. The card is mainly used for access to buildings and for public transport. The talk will give an overview of these developments and pay special attention to the OV-chipcard in the Netherlands and to London's Oyster card. Both are broken: data on the card, including balances, can be changed easily.
|
15th April 2010
|
Sanjay Modgil (Imperial College London)
Extending Argumentation Theory
Dung's abstract argumentation theory provides a general framework for various species of non-monotonic reasoning and conflict resolution. A Dung framework is essentially a directed graph in which the nodes are arguments consisting of proofs of claims in some underlying logic, where these arguments are related by attacks that account for the notion of conflict in the underlying logic. The justified arguments of a framework are then evaluated under different semantics, where the claims of the justified arguments equate with the underlying theory's inferences. In recent work I have extended Dung's theory to accommodate arguments that attack attacks as well as arguments. In this way one can integrate metalevel argumentation-based reasoning about possibly conflicting preferences between arguments. In this talk I will present the extended theory, discuss its application as a framework for non-monotonic logical formalisms that accommodate defeasible reasoning about preferences, and as an argumentation semantics for adaptive agent defeasible reasoning. The talk will conclude by reviewing recent work on argument game proof theories for the extended theory.
For people interested in attending, a basic understanding of non-monotonic reasoning will be helpful, but no background in argumentation theory will be assumed. |
8th April 2010
|
James Brotherston (Imperial College London)
Undecidability of Propositional Separation Logic and its Neighbours
Separation logic has become well-established in recent years as an effective formalism for reasoning about memory-manipulating programs. In this talk I shall explain how, and why, it happens that even the purely propositional fragment of separation logic is *undecidable*. In fact, it turns out that all of the following properties of propositional separation logic formulas are undecidable (among others):
(a) provability in Boolean BI (BBI) and its extensions, even when negation and falsum are excluded;
(b) validity in the class of separation models;
(c) validity in the class of separation models with indivisible units;
(d) validity in *any concrete choice* of heap-like separation model.
We also gain new insights into the nature of existing *decidable* fragments of separation logic. Furthermore, we additionally establish the undecidability of the following properties of propositional formulas, which are related to Classical BI and its 'dualising' resource models:
(e) provability in Classical BI (CBI) and its extensions;
(f) validity in the class of CBI-models;
(g) validity in the class of CBI-models with indivisible units.
All of the above results are new but, in particular, decidability of BBI had been an open problem for some years, while decidability of CBI was a recent open problem. This is joint work with Max Kanovich, Queen Mary University of London. The talk is based on a paper of the same name due to appear at LICS 2010. A preliminary version can be found on the speaker's home page. |
18th March 2010
|
Andrew Moshier (Chapman University)
Mixing Information and Logic
We discuss two ways to mix information (from a domain theoretic point of view) and propositional logic. The first has its roots in Kleene's metamathematical investigations of three-valued logic, where evaluation of propositions may possibly diverge. The second is closely related to Belnap's proposal (of some interest for knowledge-based systems in distributed environments) to consider a four-valued logic: true, false, undefined and contradiction. Kleene's system is closely related to domain theory anyway, as divergence is a fundamental notion that is well-modeled in domains. Belnap, on the other hand, explicitly discusses the idea that the four truth values come equipped with two natural lattice orders: logic, in which false and true are the bounds, and a second order which he calls "information," in which undefined is least informative, contradiction is most informative and true and false are incomparable.
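The two orders are easy to compute with once each value is encoded by what we have been told. A small Python sketch — my own illustration, assuming the standard presentation of Belnap's bilattice:

```python
# Belnap's four truth values
NONE, TRUE, FALSE, BOTH = "undefined", "true", "false", "contradiction"

# Encode each value as a pair (told_true, told_false):
# undefined = (0,0), true = (1,0), false = (0,1), contradiction = (1,1).
ENC = {NONE: (0, 0), TRUE: (1, 0), FALSE: (0, 1), BOTH: (1, 1)}
DEC = {v: k for k, v in ENC.items()}

def and4(x, y):
    """Conjunction (meet in the logic order): told-true needs both,
    told-false needs either."""
    (t1, f1), (t2, f2) = ENC[x], ENC[y]
    return DEC[(t1 & t2, f1 | f2)]

def or4(x, y):
    """Disjunction (join in the logic order)."""
    (t1, f1), (t2, f2) = ENC[x], ENC[y]
    return DEC[(t1 | t2, f1 & f2)]

def not4(x):
    """Negation swaps told-true and told-false."""
    t, f = ENC[x]
    return DEC[(f, t)]

def join_info(x, y):
    """Join in the *information* order: accumulate everything told so far."""
    (t1, f1), (t2, f2) = ENC[x], ENC[y]
    return DEC[(t1 | t2, f1 | f2)]
```

Conjunction and disjunction are meet and join in the logic order, while `join_info` accumulates possibly conflicting evidence; negation fixes both `undefined` and `contradiction`, as Belnap's account requires.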
In this talk, we will take Belnap's terminology seriously, by allowing for possibly infinitary accumulation of information along the lines of domain theory (and implicit in Kleene's approach). This ties Kleene and Belnap closely together. As a result, we develop two systems that mix logic and information based on three- and four-valued logic. In the end, however, we will show that the two approaches are equivalent. We conclude with a discussion of logic and information in a more general setting. |
11th March 2010
|
Georg Struth (University of Sheffield)
Program Analysis with Kleene Algebras
Variants of semirings and Kleene algebras have emerged over the last decade as versatile and powerful tools for modelling and analysing computing systems. I will survey some recent developments and applications in the field.
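For intuition, the algebraic operations can be read concretely in the standard model of binary relations, where a domain operator yields modal diamonds. A small Python sketch of this model — my own illustration, not code from the talk:

```python
def compose(r, s):
    """Relational composition r ; s."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def star(r, universe):
    """Kleene star: reflexive-transitive closure, the least fixpoint of 1 + r;x."""
    result = {(a, a) for a in universe}
    while True:
        bigger = result | compose(result, r)
        if bigger == result:
            return result
        result = bigger

def domain(r):
    """Domain operator d(r): the states that have an r-successor."""
    return {a for (a, _) in r}

def diamond(r, p):
    """Modal diamond <r>p: states from which some r-step lands in p,
    computed algebraically as d(r ; p?) where p? is the test for p."""
    test_p = {(x, x) for x in p}
    return domain(compose(r, test_p))
```

With `r = {(0, 1), (1, 2)}` and `p = {2}`, `diamond(r, p)` picks out state 1, while `diamond(star(r, {0, 1, 2}), p)` gives all states that can eventually reach `p` — the kind of reachability reasoning the algebraic approach packages equationally.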
I will discuss how semirings and Kleene algebras equipped with a domain operation yield a simple algebraic approach to modal and dynamic logics and concise semantics for state-based programs. I will also sketch ongoing research towards semiring-based models for concurrency. I will then present an approach to automated program analysis in which these algebras are combined with off-the-shelf automated theorem proving systems and counterexample generators in a novel way. Applications include termination analysis, action system refinement and automated correctness proofs for graph algorithms. |
3rd March 2010
|
Ulrich Berger (Swansea University)
Program Extraction from Proofs using Induction and Coinduction
In this talk I give two examples of program extraction from proofs in a constructive theory of inductive and coinductive definitions. The first belongs to the realm of computable analysis. A coinductive predicate C_0 is defined characterising those real numbers that are constructively "approximable". From proofs of closure properties of C_0 one extracts implementations of arithmetic operations with respect to the signed digit representation of real numbers. In the second example I show how to extract monadic parser combinators from proofs that certain labelled transition systems are finitely branching. While in the first example coinduction is central, here induction features prominently because finiteness is an inductive concept. Both examples have in common that the data types the extracted programs operate on (infinite digit streams, higher-order functions) are not present in the (source) proofs which reside in simple extensions of first-order logic. This indicates that the perspective of replacing programming by the activity of proving is not as daunting as it seems, and that therefore program extraction might become an attractive method for producing formally certified programs in the future.
|
25th February 2010
|
Robin Hirsch (University College London)
Equational Theories and Games over Binary Relations
We consider algebras with various combinations of operations on binary relations, including boolean operators, the identity, converse, composition and the two residuals of composition, \ and /. For each choice of a set of operations there is a corresponding representation class which is always a quasi-variety. Our main interest here is whether the representation class is a variety, whether the representation class is finitely axiomatisable and whether the equational theory is finitely axiomatisable.
Infinite length games may be devised to test membership of a representation class. The second player E has a strategy to survive all rounds of G(A) iff A is representable. We can express that E has a strategy to survive at least n rounds of the game by a quasi-equation. If we can construct unrepresentable algebras A_n for each natural number n where the second player can survive n rounds of the game, then a non-principal ultraproduct will be representable and the representation class is not finitely axiomatisable, by Łoś's theorem. If the representation class is a discriminator variety then it follows that the equational theory is not finitely axiomatisable either. But if not, then other methods are needed in order to prove that the equational theory is not finitely axiomatisable. One question that presents itself is: can we modify the representation games mentioned above so that E has a winning strategy iff the algebra satisfies the equational theory of the representation class, so that a winning strategy for the second player corresponds to the truth of equations (rather than quasi-equations)? We consider the specific case of the signature {., +, \, /}. We describe the representation game and construct unrepresentable algebras A_n where E can survive n rounds. We do not know whether the representation class is a variety, but nevertheless we do prove that the equational theory is not finitely axiomatisable. Joint work with Szabolcs Mikulás. |
4th February 2010
|
Samson Abramsky (University of Oxford)
Diagonals, Self-Reference and the Edge of Consistency: Classical and Quantum
We give an overview of Lawvere's elegant unified account of diagonal arguments, and a preliminary discussion of how this might be extended to two-person situations such as the Brandenburger-Keisler paradox in epistemic game theory.
We also discuss `No-Go' theorems in domain theory and categorical logic, and show how the latter are related to No-Cloning in categorical quantum mechanics. We also briefly discuss self-reference in Hilbert spaces. |
28th January 2010
|
Alexander Rabinovich (Tel Aviv University)
On the Complexity of Temporal Logics over Linear Time Domains
We investigate the complexity of the validity problem of temporal logics with a finite set of modalities.
We show that the problem is in PSPACE over the rationals, over the reals, over the class of all linear orders and over many interesting classes of linear orders. |
13th January 2010
|
Leonardo Cabrer (RCIS-JAIST, Japan)
Applications of Natural Duality for Kleene Algebras
Natural Dualities have proved to be a powerful tool to study quasi-varieties generated by some finite algebras. In this talk we are going to see how the natural duality for Kleene algebras developed in [Davey, B.A. & Werner, H., Dualities and Equivalences for Varieties of Algebras, Contributions to Lattice Theory, Colloq. Math. Soc. Janos Bolyai 33 (1983), 101-275] may allow us to solve some interesting algebraic and categorical problems.
Free algebras: In general, every natural duality gives a representation of the free algebras (see [Clark, D.M. & Davey, B.A., Natural Dualities for the Working Algebraist, Cambridge Studies in Advanced Mathematics, 57 (1998)], Chapter 2, Prop. 2.3). In the case of Kleene algebras, we simplify this presentation using some results from [Gehrke, M.; Walker, C.L. & Walker, E.A., Normal forms and truth tables for fuzzy logics, Fuzzy Sets and Systems, 138 (1) (2003), 25-51]. This representation lies at a crossroads between the Birkhoff representation of finite free distributive lattices and the presentation of finite free Boolean algebras.
Kalman Functor: In [Kalman, J., Lattices with involution, Trans. Amer. Math. Soc., 87 (1958), 485-491], Kalman introduced the functor K from the category of bounded distributive lattices into the category of Kleene algebras. His definition depends strongly on the algebraic structure of Kleene algebras. We will prove that the Kalman functor can be easily described through the natural duality for Kleene algebras and the Priestley duality for bounded distributive lattices.
Finite Projective Kleene Algebras: An easy exercise in universal algebra proves that the projective algebras in a variety are just the retracts of free algebras. Using this and the characterization of free Kleene algebras mentioned before, we prove that there exists a one-to-one correspondence between finite projective Kleene algebras and finite nearlattices satisfying a technical condition.
James Chapman (Tallinn University of Technology)
Monads and Adjunctions on Categories and Functors
In this talk I will describe recent work with Thorsten Altenkirch and Tarmo Uustalu on a generalisation of monads from being defined on a category to being defined on a functor. I will introduce a number of examples (vector spaces, well-scoped lambda terms, and arrows), generalise some related constructions from standard monad theory (adjunction, Kleisli category, etc.), and relate the two notions of monad.
|
3rd December 2009
|
Steve Vickers (University of Birmingham)
Geometricity
I'm going to talk about some ideas relating logic to topology. These ideas have been around for half a century, and are even widely known, but it's not always so obvious to the casual listener how or why the technology fits together.
In a project at Birmingham, we (myself, Bertfried Fauser, Guillaume Raynaud) are seeking to apply these ideas to the topos approach to quantum physics developed by Isham et al. at Imperial, and also Landsman's group at Nijmegen.
The first idea is to derive topological spaces from logical (geometric) theories. This is taken a step further in point-free topology by "doing topology" with the theories themselves rather than the spaces. The natural notion of map can then be understood as a transformation of models in which continuity is guaranteed by adherence to the non-classical constraints of geometric logic.
The second idea is that of sheaves over a space X, in particular in the form (Higgs) of "Omega-sets". This can be understood as a set "continuously parametrized by" points of X.
The third idea is that of bundles or fibred spaces. A map f: Y -> X is a point f(y) of X parametrized by y, but it is also a space f^{-1}(x) (the fibre over x) parametrized by x. So long as the spaces are all point-free, the bundles over X are equivalent to the internal spaces in the category of sheaves over X.
Putting these together, one arrives at the idea of doing fibrewise topology, studying bundles, by working point-free and constructively in toposes of sheaves. However, one would like those constructions also to work fibrewise: then the internal mathematics is effectively parametrized by points of the base. This is where geometricity comes in, as added constraints more stringent than those of topos-valid mathematics. It takes extra effort to put constructions in geometric form, but then they can be much easier to use. My long-term geometrization programme is to investigate how much mathematics can be done geometrically. The quantum project is to take the quantum work as a case study. |
26th November 2009
|
Paulo Oliva (Queen Mary, University of London)
Selection Functions and Attainable Quantifiers
The usual existential and universal quantifiers over a set X can be thought of as type 2 operations which take a predicate p : X -> Bool as input and return a Bool as output. If instead of Bool we allow an arbitrary set R, several other operations can also be thought of as "generalised quantifiers", e.g.
limit : (Nat -> A) -> A
supremum : (X -> Real) -> Real
integration : ([0,1] -> Real) -> Real.
A selection function for a quantifier phi : (X -> R) -> R is a function of type e : (X -> R) -> X which produces the point where the quantifier is attained, i.e. phi(p) = p(e(p)). In the case of the existential quantifier, for instance, this is Hilbert's epsilon term. In the case of integration, the mean value theorem says that integration over C[0,1] has a selection function, i.e. for any p in C[0,1] there exists an x in [0,1] such that integral(p) = p(x). Hence, there must exist a function e : ([0,1] -> Real) -> [0,1] such that integral(p) = p(e(p)). This talk aims to motivate the concept of selection functions and attainable quantifiers, and show that the product of such objects in fact appears in a wide range of different areas, such as Nash equilibrium (backward induction), fixed point theory (Bekić's lemma), algorithms (backtracking) and proof theory (bar recursion). Talk based on joint work with Martín Escardó (preprint). |
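For finite X these definitions are directly executable. A small Python sketch (my own illustration) of two attainable quantifiers and their selection functions — Hilbert's epsilon for the existential quantifier, and argmax for the supremum:

```python
def exists_q(xs):
    """Existential quantifier over a finite list, as an operation (X -> Bool) -> Bool."""
    return lambda p: any(p(x) for x in xs)

def epsilon(xs):
    """Hilbert's epsilon: a selection function for exists_q.
    Returns a witness if there is one, otherwise an arbitrary element."""
    def select(p):
        for x in xs:
            if p(x):
                return x
        return xs[0]
    return select

def sup_q(xs):
    """Supremum quantifier, (X -> Real) -> Real."""
    return lambda p: max(p(x) for x in xs)

def argmax(xs):
    """A selection function for sup_q: a point where the supremum is attained."""
    return lambda p: max(xs, key=p)

# Attainability, phi(p) == p(e(p)), holds in both cases:
xs = [1, 2, 3, 4]
even = lambda n: n % 2 == 0
assert exists_q(xs)(even) == even(epsilon(xs)(even))
peak = lambda n: -(n - 3) ** 2
assert sup_q(xs)(peak) == peak(argmax(xs)(peak))
```

The asserted equation phi(p) = p(e(p)) is exactly the attainability condition above; the products of selection functions discussed in the talk compose such functions across several variables at once.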
19th November 2009
|
Sam Staton (University of Cambridge)
Process Calculi, Bisimulation and Coalgebras
What is an operational, syntax-free model for a process calculus? Is there a uniform framework for describing the myriad of process equivalences and bisimulations?
The theory of "coalgebras" has been proposed as a theory of generalized transition systems. There are general coalgebraic notions of bisimulation for these transition systems. I will demonstrate how these coalgebraic ideas can be used to model simple process calculi, focusing on the pi-calculus. Curiously, to build these coalgebraic models it is necessary to make use of fragments of the equational axiomatization of bisimilarity. The talk will draw on my CALCO'09 and LICS'08 papers. |
12th November 2009
|
Anuj Dawar (University of Cambridge)
Complexity: Structures and Specifications
A central achievement of Complexity Theory was the development of the theory of NP-completeness, which gave an explanation for the apparent intractability of a large number of problems. A problem in this sense is typically given by a class of instances (for example, graphs) and a specified property (such as that a graph is 3-colourable) that must be decided. We now know that tractability can be recovered in some cases by restricting the class of instances to structurally simple ones, or by restricting the properties we can specify. To formalise the former kinds of restrictions, we generally look to structural graph theory while to formalise the latter one naturally turns to formal logic. In this talk we review a number of results on the complexity of evaluating logical specifications in restricted classes of graphs, which can be cast in this light - particularly on the evaluation of first-order logic on graphs.
|
5th November 2009
|
Michael Huth (Imperial College London)
Access Control via Belnap Logic
Access control to IT systems increasingly relies on the ability to compose policies. There is thus benefit in any framework for policy composition that is intuitive, formal (and so analyzable and implementable), expressive, independent of specific application domains, and yet able to accommodate domain-specific instances. Here we develop such a framework based on Belnap logic. An access-control policy is interpreted as a four-valued predicate that maps access requests to either grant, deny, conflict, or unspecified -- the four values of the Belnap bilattice. We define a very expressive access-control policy language PBel (pronounced "pebble"), having composition operators based on the operators of Belnap logic. A query language can express important properties of policies such as conflict-freedom in terms of the truth and information ordering of the Belnap bilattice. These queries support assume-guarantee reasoning and policies can be analyzed through validity checking of queries. A linear-time translation of queries then reduces their validity checking to validity checking of propositional formulas on predicates over access requests.
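The flavour of Belnap-based composition can be sketched in a few lines of Python. This is my own toy illustration of the idea, not the actual PBel language or its operators:

```python
GRANT, DENY, CONFLICT, UNSPEC = "grant", "deny", "conflict", "unspec"

# Encode each verdict as (says_grant, says_deny), mirroring the Belnap bilattice:
# unspec = (0,0), grant = (1,0), deny = (0,1), conflict = (1,1).
ENC = {UNSPEC: (0, 0), GRANT: (1, 0), DENY: (0, 1), CONFLICT: (1, 1)}
DEC = {v: k for k, v in ENC.items()}

def join(p, q):
    """Compose two policies by accumulating verdicts (information-order join);
    disagreement surfaces as CONFLICT instead of being silently resolved."""
    def combined(request):
        g1, d1 = ENC[p(request)]
        g2, d2 = ENC[q(request)]
        return DEC[(g1 | g2, d1 | d2)]
    return combined

def deny_by_default(p):
    """Close a policy: map UNSPEC to DENY."""
    return lambda req: DENY if p(req) == UNSPEC else p(req)

# Two toy policies over requests of the form (subject, resource):
staff = lambda req: GRANT if req[0] == "staff" else UNSPEC
hr = lambda req: DENY if req[1] == "payroll" else UNSPEC

policy = join(staff, hr)
assert policy(("staff", "payroll")) == CONFLICT   # both speak, and disagree
assert policy(("staff", "wiki")) == GRANT
assert deny_by_default(policy)(("guest", "wiki")) == DENY
```

A query such as conflict-freedom then amounts to checking that the composed policy never returns CONFLICT over the request space, which is the kind of property the query language makes checkable.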
Joint work with Glenn Bruns, Bell Labs. |
29th October 2009
|
Stephan Kreutzer (University of Oxford)
Algorithmic Meta-theorems: Upper and Lower Bounds
In 1990, Courcelle proved a fundamental theorem stating that every property of graphs definable in monadic second-order logic can be decided in linear time on any class of graphs of bounded tree-width. This theorem is the first of what is today known as algorithmic meta-theorems, that is, results of the form: every property definable in a logic L can be decided efficiently on any class of structures with property P.
Such theorems are of interest both from a logical point of view, as results on the complexity of the evaluation problem for logics such as first-order or monadic second-order logic, and from an algorithmic point of view, where they provide simple ways of proving that a problem can be solved efficiently on certain classes of structures. Following Courcelle's theorem, several meta-theorems have been established, primarily for first-order logic with respect to properties of structures derived from graph theory. In this talk I will present recent developments in the field and illustrate the key techniques from logic and graph theory used in their proofs. So far, work on algorithmic meta-theorems has mostly focused on obtaining tractability results for as general classes of structures as possible. The question of finding matching lower bounds, that is, intractability results for monadic second-order or first-order logic with respect to certain graph properties, has so far received much less attention. Tight matching bounds, for instance for Courcelle's theorem, would be very interesting as they would give a clean and exact characterisation of tractability for MSO model-checking with respect to structural properties of the models. In the second part of the talk I will present a recent result in this direction showing that Courcelle's theorem cannot be extended much further to classes of unbounded tree-width. |
22nd October 2009
|
Marek Sergot (Imperial College London)
The Logic of Unwitting Collective Agency
The logic of agency concerns expressions of the form `agent x brings it about that A', or `agent x is responsible for, or the cause of, its being the case that A'. The study of logics of this type has a very long tradition; in the philosophical literature they are often referred to as logics of action. This talk will present an account that combines this agency view of action with the transition based conceptions more usually encountered in computer science. It will present a two-sorted (modal) language for talking about properties of states and transitions in a transition system, and about the actions of individual agents or groups of agents in a transition, including two modalities of the `brings it about' kind. The study is motivated in part by the formal analysis of patterns of interaction among multiple, independently acting agents, and in part by the logic of norms: the logic of norms and the logic of action/agency have often been studied together. The account generalises naturally to talking about the collective actions of groups of agents; the last part of the talk is concerned with characterisations of several different forms of (unwitting) collective agency. The term `unwitting' can mean both accidental and unaware, and is to emphasise that no assumptions are made at all about the reasoning or perceptual capabilities of the agents. They could be deliberative (human or computer) agents, purely reactive agents, or simple mechanical or computational devices.
|
15th October 2009
|
Ian Hodkinson (Imperial College London)
Bounded Fragment and Hybrid Logic with Polyadic Modalities
It is known that the hybrid language with unary box and diamond and the 'downarrow' and 'at' operators, and the bounded fragment of first-order logic (with appropriate signature), are equally expressive. We show that they remain equally expressive even with polyadic modalities, but that their 'positive' fragments do not.
|
22nd July 2009
|
Rob Goldblatt (Victoria University)
Cover Systems for Lax Logic
Lax modalities occur in intuitionistic logics concerned with hardware verification, the computational lambda calculus, and access control in secure systems. They also encapsulate the logic of Lawvere-Tierney-Grothendieck topologies on topoi. This talk will describe a complete semantics for lax logic that combines the Beth-Kripke-Joyal cover semantics for intuitionistic logic with the classical relational semantics for a "diamond" modality. The main technique used is the lifting of a multiplicative closure operator (nucleus) from a Heyting algebra to its MacNeille completion, and the representation of an arbitrary locale as the lattice of "propositions" of a suitable cover system.
Rajeev Gore (Australian National University)
One-pass Tableaux for Computation Tree Logic
Computation Tree Logic (CTL) is a logic of fundamental importance in Computer Science. Model-checking is now a well-established method for checking whether a given model satisfies a given formula. On the other hand, practical methods for deciding whether a given formula is CTL-satisfiable have not received much attention. The main obstacle is that the problem of deciding CTL-satisfiability is known to be EXPTIME-complete. The traditional tableaux method for deciding CTL-satisfiability first constructs a cyclic graph and then prunes nodes and edges in multiple subsequent passes until the root node is pruned, indicating that the given formula is CTL-unsatisfiable, or until no further pruning is possible, indicating that the given formula is CTL-satisfiable.
We give the first single-pass tableau decision procedure for CTL-satisfiability. Our method extends Schwendimann's single-pass decision procedure for propositional linear temporal logic (PLTL) and extends to many other fix-point logics like propositional dynamic logic (PDL) and the logic of common knowledge (LCK). Our method builds a rooted tree with ancestor cycles and often outperforms the traditional method. In the worst case however, our method may require 2EXPTIME. A similar phenomenon occurs in extremely efficient practical single-pass tableau algorithms for very expressive description logics with EXPTIME-complete decision problems because the 2EXPTIME worst-case behaviour rarely arises. Our method is amenable to the numerous optimisation methods invented for these description logics and has been implemented in the Tableau Work Bench (twb.rsise.anu.edu.au) without these extensive optimisations. Joint work with Pietro Abate and Florian Widmann. |
25th June 2009
|
Alexander Kurz (University of Leicester)
Algebraic Logic from Presentations of Functors, and Applications
The notion of presentation of a functor was introduced in coalgebraic logic. It allows us to associate a logic L_T to T-coalgebras, uniformly in the type T and for arbitrary T. Recently we found that this notion also has applications to the lambda-calculus and first-order logic. These applications are based on the work of Gabbay and Pitts on nominal sets. Most of the talk will assume only some basic knowledge of category theory (categories and functors) and universal algebra (defining varieties by operations and equations).
Based on joint work with M. Bonsangue, J. Rosicky and D. Petrisan. |
8th June 2009
|
Yde Venema (University of Amsterdam)
Automata for all Coalgebras
Partly because of their tight connection with (modal) fixpoint logics, automata operating on infinite objects provide an invaluable tool for the specification and verification of the ongoing behavior of potentially infinite structures. Coalgebra provides an abstract and general mathematical framework for modelling state-based evolving systems, by studying their properties in a uniform manner, parametric in the type of the structure. In this framework the question naturally arises whether the theory of, for instance, parity automata can be lifted to a coalgebraic level of generality.
In the talk we give a quick introduction to coalgebra, and coalgebraic modal logic based on predicate liftings. We then define the notion of a Lambda-automaton, where Lambda is a set of predicate liftings for a given functor T, and the notion of acceptance for such an automaton, which is defined in terms of an acceptance game. The main result that we discuss states that a Lambda-automaton accepts a pointed coalgebra iff it accepts a finite coalgebra which is obtained from the automaton itself by some effective construction. This result has applications in the theory of coalgebraic modal fixpoint logic: under some assumptions on the complexity of the one-step satisfiability problem, we obtain an exponential solution to the satisfiability problem. [This slightly generalizes earlier results by Cirstea, Kupke and Pattinson.] This talk is largely based on joint work with Gaelle Fontaine and Raul Leal. |
4th June 2009
|
Ozan Kahramanoğulları (Imperial College London)
Processes of Biology
Modelling of biological systems by mathematical and computational techniques is becoming increasingly widespread. In this respect, the view of biological systems as complex reactive systems provides the means to model, simulate and analyse these massively parallel, inherently concurrent systems with the aim of providing an operational/mechanistic understanding. In this talk, we give an overview of our work on applications of stochastic pi calculus to modelling of biological systems. We demonstrate our techniques, and the tools developed, on models of different phases of early stages of phagocytosis (cell-eating).
|
22nd May 2009
|
Amélie Gheerbrant (University of Amsterdam)
Craig Interpolation for Temporal Languages on Finite and Infinite Linear Orders
The talk will be based on joint work with Balder ten Cate. We will present a study of Craig interpolation for fragments and extensions of propositional linear temporal logic (PLTL), both on finite linear orders and on infinite linear orders of order type omega. We will consider fragments of PLTL obtained by restricting the set of temporal connectives and, for each of these fragments, we will identify its smallest extension that has Craig interpolation. Depending on the underlying set of temporal operators, this extension will turn out to be one of the following three logics: the fragment of PLTL having only the Next operator; the extension of PLTL with a fixpoint operator mu (known as linear time mu-calculus); the fixpoint extension of the "Until-only" fragment of PLTL.
|
7th May 2009
|
David Makinson (London School of Economics)
Logical Questions behind the Lottery and Preface Paradoxes: Lossy Rules for Uncertain Inference
We reflect on lessons that the lottery and preface paradoxes provide for the logic of uncertain inference. One of these is the unreliability of the rule of conjunction of conclusions in such contexts, whether probabilistic or qualitative. This failure leads us to a formal examination of consequence relations without that rule, the study of other rules that may nevertheless be satisfied in uncertain inference, and a partial rehabilitation of conjunction as a 'lossy' rule. Another lesson is the possibility of rational inconsistent belief. This leads us to suggest criteria for deciding when an inconsistent set of beliefs may nevertheless reasonably be retained.
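The failure of conjunction of conclusions can be made concrete with a small numerical sketch. This is a standard illustration of the lottery paradox, not taken from the talk; the acceptance threshold and ticket count are assumptions chosen for the example.

```python
# Hypothetical illustration: rational belief modelled as probability
# above a fixed acceptance threshold. Threshold and ticket count are
# assumed values for the sake of the example.

threshold = 0.99           # acceptance threshold for rational belief
tickets = 1000             # fair lottery with 1000 tickets

p_loses = 1 - 1 / tickets  # P(ticket i loses) = 0.999 for each i

# Each individual conclusion "ticket i loses" is acceptable:
assert p_loses >= threshold

# But the conjunction "every ticket loses" has probability 0
# (some ticket must win), so conjoining individually acceptable
# conclusions yields an unacceptable one.
p_all_lose = 0.0
assert p_all_lose < threshold

# Even for independent events, probabilities only go down under
# conjunction, so enough conjuncts eventually cross the threshold:
p_two_independent = p_loses * p_loses
print(p_loses, p_two_independent, p_all_lose)
```

The same arithmetic motivates treating conjunction as a 'lossy' rule: each application may degrade the guaranteed probability of the conclusion.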
|
30th April 2009
|
Mark Kattenbelt (University of Oxford)
Verification of Probabilistic Programs
We present a methodology and implementation for the verification of 'probabilistic programs': ANSI-C programs exhibiting probabilistic behaviour. Examples of probabilistic programs are randomised algorithms and programs that are prone to failures, such as network programs. To overcome the complex nature of software we have developed a quantitative abstraction-refinement methodology in which probabilistic programs are modelled with Markov decision processes (MDPs) and their abstractions with stochastic two-player games. Our techniques target quantitative properties of software such as `the maximum probability of file-transfer failure' or `the minimum expected number of loop iterations' and the abstractions we construct yield lower and upper bounds on these properties. These bounds in turn guide the refinement process.
We build upon state-of-the-art techniques and tools, using SAT-based predicate abstraction, symbolic implementations of probabilistic model checking and components from GOTO-CC, SATABS and PRISM. Experimental results show that our approach performs very well in practice, successfully verifying actual networking software whose complexity is significantly beyond the scope of existing probabilistic verification tools. |
23rd April 2009
|
Nir Piterman (Imperial College London)
Synthesis From Temporal Specifications
In this talk we give a short introduction to the process of synthesis, the automatic production of designs from their specifications. We are interested in reactive systems, systems that continuously interact with other programs, users, or their environment. Classical solutions to synthesis use either two player games or tree automata. We give a short introduction to the technique of using two player games for synthesis and how to solve such games.
The classical solution to synthesis requires the usage of deterministic automata. This solution is 2EXPTIME-complete, is quite complicated, and does not work well in practice. We suggest a syntactic approach that restricts the kind of properties users are allowed to write. We claim that this approach is general enough and can be extended to cover most properties written in practice. The main advantage of our approach is that it is tailored to the use of BDDs and uses the structure of given properties to handle them more efficiently. We discuss how to extend our approach to handle more general properties and mention future directions. |
30th March 2009
|
Alexandru Baltag (University of Oxford)
Relational Transformers for Modelling Belief
Recently, Dynamic Epistemic Logic has been extended to deal with Belief Revision issues. Abstractly, this involves studying logics having modalities for various ``relational transformers" capturing various notions of ``learning", ``updating" or ``revising" beliefs. In the simplest case (that of a single agent having a finite belief base), the models are just total preorders on some finite set of states (meant to capture some notion of ``relative subjective plausibility" of states), and one considers various modal operators (capturing different notions of ``knowledge", ``belief", ``conditional belief", and in the multi-agent case their Kleene stars, i.e. ``common knowledge", ``common belief" etc). One then represents various types of ``learning" of some new information phi (expressible in the modal syntax considered above) as types of ``relational transformations". In the literature there are at least three proposals for representing semantically the learning operator as an operation on preordered models: (1) the so-called ``update", or ``conditionalization" (known in dynamic-epistemic logic as ``public announcement"): delete all the non-phi states and restrict the order and valuation to the remaining states; (2) the ``radical" (or ``lexicographic") upgrade: change the order by putting all the phi states above all the non-phi states (and leaving the relative order relations unchanged within the two zones); and (3) the ``conservative" upgrade: put only the ``top" phi-states above all others, leaving everything else the same.
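The three relational transformations can be sketched directly for single-agent finite models. This is only an illustrative encoding, not from the talk: a total plausibility preorder is represented as a list of "levels" (sets of states), most plausible first, and all function names are hypothetical.

```python
# Sketch of the three belief-upgrade operations on a total preorder,
# encoded as a list of sets of states, most plausible level first.
# phi is the set of states where the learned formula holds.

def update(levels, phi):
    """Public announcement / conditionalization: delete non-phi states."""
    new = [lvl & phi for lvl in levels]
    return [lvl for lvl in new if lvl]

def radical_upgrade(levels, phi):
    """Lexicographic upgrade: all phi states above all non-phi states,
    keeping the relative order within each zone."""
    phi_zone = [lvl & phi for lvl in levels]
    rest = [lvl - phi for lvl in levels]
    return [lvl for lvl in phi_zone + rest if lvl]

def conservative_upgrade(levels, phi):
    """Only the most plausible phi states move to the top."""
    for lvl in levels:
        top_phi = lvl & phi
        if top_phi:
            break
    else:
        return list(levels)  # no phi state: the model is unchanged
    stripped = [lvl - top_phi for lvl in levels]
    return [top_phi] + [lvl for lvl in stripped if lvl]

# The agent's beliefs are what holds throughout the top level.
m = [{"a"}, {"b", "c"}]                  # a most plausible, then b, c
print(radical_upgrade(m, {"b"}))          # [{'b'}, {'a'}, {'c'}]
print(conservative_upgrade(m, {"b"}))     # [{'b'}, {'a'}, {'c'}]
print(update(m, {"a", "b"}))              # [{'a'}, {'b'}]
```

Iterating one of these functions on a fixed phi models the repeated-upgrade scenarios discussed below: some sequences of models reach a fixed point, others cycle.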
These relational transformations can be generalized in several ways, obtaining a general notion of ``upgrade" of a preordered model. One can add to the language dynamic modalities for upgrades, obtaining various logics, which are usually still decidable. An interesting issue is what happens if we iterate the same type of upgrade indefinitely. (This has applications, not only in Belief Revision, but in Game Theory and Learning Theory.) We show that, for some types of upgrades, the model-changing process always reaches a fixed point, stabilizing eventually on a fixed model, while for others the models keep cycling forever. Even in the cycling case, for some important types of upgrades the most relevant parts of the model (e.g. the ``agent's beliefs", i.e. the ``top" states) do stabilize eventually. The logics obtained by adding to the above modalities fixed point operators or even just Kleene star (on the dynamic modalities) are unfeasible. However, one can add modalities that quantify over all transformations of a certain type and still obtain complete (though undecidable) logics. |
19th March 2009
|
Rob Myers (Imperial College London)
Generic Algorithms for Synthesising Multisorted Generalisations of Graphs
Transition structures are ubiquitous in Computer Science -- be they labelled transition systems, trees, Markov chains, metric spaces or subrelational transitions used to model preferences and belief change. The notion of a possibly multisorted *transition* from a state, to or over a collection of states, can be succinctly captured by the concept of an endofunctor over a product of copies of the category of sets. Although this might seem quite abstract, it has enabled us to lift state-of-the-art algorithms in the area of (hybrid) modal logic to arbitrary notions of multisorted transitions. In particular one can name states, reason about inequality and specify both local and global observations concerning transition structures. The algorithm will then either return a structure satisfying these conditions or otherwise prove no such structure is possible.
|
11th March 2009
|
Mehrnoosh Sadrzadeh (University of Oxford)
Positive Logic with Adjoint Modalities: Proof Theory, Semantics and Reasoning about Information
We consider a simple modal logic whose non-modal part has conjunction and disjunction as connectives and whose modalities come in adjoint pairs, but are not in general closure operators. Despite absence of negation and implication, and of axioms corresponding to the characteristic axioms of (e.g.) T, S4 and S5, such logics are useful, as shown in previous work by Baltag, Coecke and the first author, for encoding and reasoning about information and misinformation in multi-agent systems. For such a logic we present an algebraic semantics, using lattices with agent-indexed families of adjoint pairs of operators, and a cut-free sequent calculus. The calculus exploits operators on sequents, in the style of ``nested'' or ``tree-sequent'' calculi; cut-admissibility is shown by constructive syntactic methods. The applicability of the logic is illustrated by reasoning about the muddy children puzzle, for which the calculus is augmented with extra rules to express the facts of the muddy children scenario.
Joint work with Roy Dyckhoff. |
24th February 2009
|
Walter Hussak (Loughborough University)
Decidability of Monodic Fragments with Functions and Propositional Quantification
This talk will be about how the addition of functions and propositional quantification to the monodic fragment of first-order linear temporal logic affects decidability. Decidable fragments with functions can be obtained by translation to the fragment without functions. An expanded form of the quasimodels used to prove decidability of the basic monodic fragment can be used to establish the decidability of some non-trivial fragments with propositional quantification.
|
18th February 2009
|
Roman Kontchakov (Birkbeck College)
The DL-Lite Family and Relatives
The recently introduced series of description logics under the common moniker `DL-Lite' have attracted attention due to the low computational complexity of inference, on the one hand, and the ability to represent conceptual modeling formalisms, on the other. The main aim of this talk is to carry out a thorough and systematic investigation of inference in the logics of the `DL-Lite family,' under a variety of combinations of constructs and under different semantic assumptions. Specifically, we extend the original DL-Lite logics along three axes: by (i) adding the Boolean connectives, (ii) adding number restrictions to concept constructors, and (iii) allowing role hierarchies. We analyze the combined complexity of satisfiability for the resulting logics, as well as the data complexity of instance checking and answering positive existential queries. Our approach is based on considering DL-Lite logics as suitable fragments of first-order logic, which provides useful insights into their properties and, in particular, computational behavior. We also investigate the impact of the unique name assumption on the computational properties of the DL-Lite logics.
|
11th February 2009
|
Mohammad Raza (Imperial College London)
Automatic Parallelization with Separation Logic
Separation logic is a recent approach to the analysis of pointer programs in which resource separation is expressed with a `spatial conjunction' in assertions. This has led to simple and elegant proofs of programs which reflect spatial intuitions, as well as automatic and scalable safety verification of industrial programs such as device drivers. But until now the focus has only been on verification. In this work we explore the applicability of the separation logic approach to program optimisation, given the explicit treatment of resource provided by the logic. In particular, we extend the separation logic framework to express properties of memory separation between different points in a program, and present an algorithm for detecting independences between program statements which can be used for parallelisation. We demonstrate the strength of this approach over previous reachability-based approaches for detecting independences between statements in programs that manipulate dynamic pointer data structures.
|
28th January 2009
|
Thomas Dinsdale-Young (Imperial College London)
Reasoning about Concurrent B-Trees
In this talk, I will look at a concurrent B-Tree algorithm, the likes of which underpin large-scale databases, with a view to reasoning about it formally. I will describe how RGSep -- a recent development combining rely-guarantee and separation logic reasoning -- can be used to prove specifications for the algorithm. With a view to providing high-level specifications (which abstract the implementation details), I will show some limitations of the RGSep approach and discuss how the latest developments may overcome this and where further work may be required.
|
22nd January 2009
|
Bartek Klin (University of Cambridge)
Bialgebras and Modal Logic
Bialgebraic semantics is a categorical approach to modelling structural operational semantics (SOS), parametrised by notions of syntax and behaviour. It allows us to treat SOS descriptions of various "kinds" of systems (including labelled transition systems, probabilistic or timed systems etc.) in a uniform way. In each case, the canonical process equivalence related to the "kind", or behaviour of systems, is guaranteed compositional. A well-known GSOS format can be seen as a special case of the bialgebraic approach, where the behaviour is the one for labelled transition systems and bisimilarity is compositional.
In this talk I will discuss how to combine bialgebraic semantics with coalgebraic modal logic to prove a compositionality theorem parametrised not only by syntax and behaviour, but also by the notion of process equivalence, possibly different from canonical bisimilarities. The theorem relies on the existence of "logical duals" to structural operational semantic descriptions. |
11th December 2008
|
Amin Farjudian (Aston University)
Domain-theoretic Semantics for Distributed Exact Real Computation
Exact real computation offers a solution to the problems caused by the inaccuracy of floating point computation over real numbers arising from round-off and truncation errors. Nevertheless, its soundness comes at the cost of relative inefficiency. Therefore parallel and distributed exact real computation seems a natural way forward.
Semantic models for concurrency have traditionally focused on low-level data transmission. There is a need to develop an appropriate semantics for such computation, addressing issues specific to exact real computation. In this talk we first argue why query-answer protocols are more suitable than stream protocols when remotely communicating real numbers, functions and other higher order objects. Then we present a semantics for queries, answers and the two combined, yielding a very general notion of convergence rate. We show a compositional semantics of processes and prove that our composition is safe and preserves responsiveness. Finally we demonstrate our prototype implementation and discuss the potential for further theoretical development and practical applications. |
4th December 2008
|
Nick Bezhanishvili (Imperial College London)
Transfer Results for Hybrid Logic
For every Kripke complete modal logic L, I will define its hybrid companion L_H and investigate which properties transfer from L to L_H. In fact, for a specific class of logics, there exists a satisfiability-preserving translation from L_H to L. We will see that for this class of logics, complexity, (uniform) interpolation, and finite axiomatization transfer from L to its hybrid companion. I will also provide examples showing that, in general, complexity, decidability and the finite model property do not transfer from L to L_H.
This talk is based on joint work with Balder ten Cate. |
27th November 2008
|
Tadeusz Litak (Birkbeck College)
Core XPath as a Modal Logic for Trees: Axiomatizations and Complexity
This is a joint work with Balder ten Cate and Maarten Marx (University of Amsterdam). We provide complete axiomatizations for several fragments of Core XPath 1.0: the navigational core of XPath 1.0 introduced by Gottlob, Koch and Pichler. A complete axiomatization for a given fragment is a set of equivalences from which every other valid equivalence is derivable; equivalences can be thought of as (undirected) rewrite rules. Specifically, we axiomatize single axis fragments of Core XPath as well as full Core XPath 1.0. We also provide a short overview of complexity results for those fragments and a few others in between. A lot of attention is devoted to the issue of eliminating inelegant, "unorthodox" rewrite rules with side conditions. This is a good opportunity to see a somewhat unexpected application of modal logic in CS, as the whole work is based on modal techniques for well-founded structures. In particular, the Loeb logic, "logic of finite trees" of Blackburn et al., Fine's normal forms for modal formulas and reducts of Tarski's relation algebras are going to play a prominent role.
|
20th November 2008
|
Radu Mardare (Microsoft Research - University of Trento)
Colonies of Synchronizing Agents: A Case Study in Robustness of Natural Systems
The focus of the talk is the concept of robustness emerging from the new computational paradigms inspired by natural systems. For tracing this concept, we study a modelling framework and computational paradigm called Colonies of Synchronizing Agents (CSAs), which abstracts intracellular and intercellular mechanisms of biological tissues. The model is based on a multiset of agents (cells) in a common environment (a tissue). Each agent has local contents, stored in the form of a multiset of atomic objects (e.g. representing molecules), updated by multiset rewriting rules which may act on individual agents (intracellular action) or synchronize the contents of pairs of agents (intercellular action). We show that the notion of robustness for CSAs relies on a notion of bisimulation on colonies. We characterize the concept logically and we study its decidability in various computational scenarios.
|
13th November 2008
|
James Brotherston (Imperial College London)
Classical BI: A Logic for Reasoning About Dualising Resource
We present classical BI (CBI): a nonconservative extension of O'Hearn and Pym's logic of bunched implications, BI, in which both the additive *and* the multiplicative connectives behave classically. In particular, CBI includes multiplicative versions of falsity, negation and disjunction, which are absent in BI.
We give an algebraic semantics for CBI that leads us naturally to consider resource models in which every resource has a unique dual. We also give a cut-eliminating proof system for CBI, based on Belnap's display logic, which is sound and complete with respect to our semantics. This is joint work with Cristiano Calcagno, also at Imperial. |
6th November 2008
|
Raul Leal Rodriguez (University of Amsterdam)
Behavior of Transition Systems
State transition systems have been widely used to describe computational processes; a natural issue is to describe the behavior of such systems with some formal language. In this talk we will present a formal language to describe transition systems. The talk has three parts as follows: First, we will show that transition systems can be naturally presented in the more abstract framework of coalgebra. One gain from this presentation is that coalgebras encompass many more diverse systems, for example, labeled transition systems, deterministic automata, pi-calculus, HD-automata, stochastic systems and neighborhood frames among others. Second, we will introduce languages to describe coalgebras (coalgebraic languages), which in particular will be languages to describe transition systems. Third, we will show how those languages for coalgebras can be used to describe the behavior of a system. In fact we will show that under some natural conditions we can endow those languages with the structure of the system they are describing.
|
30th October 2008
|
Ozan Kahramanogullari (Imperial College London)
Deep Inference and its Applications
Deep inference is a recent proof theoretical methodology that generalizes over the traditional notion of (shallow) inference, for example, the inference in the sequent calculus. For different logics, deep inference brings about previously unobserved combinatoric properties, while keeping the focus on notions of computational relevance such as locality and atomicity of inference. Moreover, deep inference makes it possible to design deductive systems that are impossible with shallow inference but beneficial in the logical interpretation of computation. In particular, with deep inference, it becomes possible to design deductive systems that capture the sequential composition of process algebras.
In this talk, we give an introduction to deep inference and discuss it from a computation as proof search point of view in a proof theoretical setting. We then discuss potential applications within a unifying deductive platform for certain problems in planning, verification of security protocols and modelling of biological systems. |
23rd October 2008
|
Mark Wheelhouse (Imperial College London)
High and Low Level Local Reasoning About Programs that Alter Data Structures
Hoare Logic allows us to reason about programs that alter data structures. We will look at two spatial logics that allow this reasoning to be carried out in a local fashion by providing command specifications in terms of "Small Axioms", each of which mentions only the parts of the data structure accessed by that particular command.
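As a hedged illustration of what a "Small Axiom" looks like, the following is the standard separation logic axiom for heap mutation from the literature, not a formula taken from the talk: the precondition owns exactly the one cell the command touches, and the frame rule extends such local specifications to larger states.

```latex
% Small axiom for heap mutation: only the single written cell appears.
\[
  \{\, x \mapsto - \,\}\ \ [x] := v\ \ \{\, x \mapsto v \,\}
\]
% The frame rule lifts local specifications, using the spatial
% conjunction $*$ to adjoin untouched resource $R$:
\[
  \frac{\{P\}\ C\ \{Q\}}{\{P * R\}\ C\ \{Q * R\}}
  \qquad (C\ \text{does not modify variables free in}\ R)
\]
```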
The first of these is Separation Logic, which works with a low-level storage system based on a C-like heap structure. Separation Logic has been used very successfully in the testing of Windows device drivers. The second logic we shall consider is Context Logic which lets us reason about structured data update at a higher level of abstraction. Context Logic is currently being used to provide a formal specification for the DOM (Document Object Model) XML update library. This talk is intended to provide an overview of these logics and identify some of the areas of interest that are currently under investigation. |
16th October 2008
|
Clemens Kupke (Imperial College London)
Coalgebra Automata
Automata that operate on infinite words, trees or on labelled transition systems can all be seen as special instances of automata that operate on coalgebras. In my talk I will first recall the definition of a coalgebra and discuss the definition of a coalgebra automaton. After that I will demonstrate that the coalgebraic framework allows for uniform proofs of results that had been previously proven separately for word, tree and graph automata. As examples of such uniform coalgebraic proofs we mention various closure properties of coalgebra automata. In particular, for each alternating automaton it is possible to construct an equivalent non-deterministic one. Finally we will discuss the close connection between coalgebra automata and fixed-point logics.
|
9th October 2008
|
Daniel Wagner (Imperial College London)
Towards a Complete Abstraction Framework for Probabilistic Model Checking
I'll give an introduction to model checking, abstraction for model checking and the theoretical question of completeness for abstraction frameworks. After sketching some known results from the non-probabilistic world, I'll present some preliminary results for probabilistic model checking. One of these results is a game-based semantics for PCTL on discrete-time Markov chains.
|
Former organisers include James Smith, Björn Lellmann, Andrew V. Jones, Nick Bezhanishvili, Dirk Pattinson, Rob Myers, Clemens Kupke and Daniel Wagner.
Last updated 19th April, 2015.