STRL Annual Distinguished Seminars

The STRL organises a series of Annual Distinguished Seminars, given by world-class researchers.

The list of previous and forthcoming seminars is given below.


  2014  Prof. Michael Fisher, University of Liverpool
        Verifiable Autonomy: Can you Trust your Robots

  2013  Prof. Alan Burns, University of York
        The Future of Real-time Systems Research, Mixed Criticality Systems and Emergent Properties

  2012  Prof. Samson Abramsky (FRS), University of Oxford
        Information Flow: Tracing a path through logic, computation, topology and physics

  2011  Prof. Erol Gelenbe, Imperial College London
        Networked Auctions

  2010  Prof. Nancy Leveson, Massachusetts Institute of Technology
        A Systems Approach to Safety and Risk Analysis

  2010  Prof. Cliff Jones, Newcastle University
        Another Way to Use AI Ideas to Support Formal Methods

  2009  Prof. Ross Anderson, Cambridge University
        Information Security - Where Computer Science, Economics and Psychology Meet

  2008  Prof. Amir Pnueli, New York University
        Abstraction Methods for Liveness

  2007  Prof. Morris Sloman, Imperial College
        Ubiquitous Computing: A Research Grand Challenge

  2006  Prof. Ian Horrocks, University of Manchester
        Ontologies and the Semantic Web

  2005  Prof. Brian Randell, Newcastle University
        Dependable Pervasive Systems

  2004  Prof. Alan Bundy, Edinburgh University
        Planning and Patching Proofs

  2003  Prof. Sir Tony Hoare (FRS), Microsoft Research Laboratory Cambridge
        The Verifying Compiler

  2002  Prof. Mike Gordon (FRS), Cambridge University
        The Convergence of Deduction and Computation

  2001  Prof. Manny Lehman, Imperial College
        System Evolution



2014

Prof. Michael Fisher, University of Liverpool

Verifiable Autonomy: Can you Trust your Robots.
Date: Thursday 20th March 2014
Location: Clephan Building 2.13
Time: 14:00
Abstract

As the use of autonomous systems and robotics spreads, the need for their activities to not only be understandable and explainable, but even verifiable, is increasing. But how can we be sure what such a system decides to do, and can we really formally verify its behaviour?

The concept of an "agent" is a widely used abstraction for an autonomous entity. While there are several forms of "agent", ranging from the straightforward interpretation of an agent as a "process" all the way through to agents that "act like" humans, many practical autonomous systems are now based on some form of hybrid agent architecture. Increasingly, these more sophisticated agents are required within truly autonomous systems that must not only make their own decisions, but must be able to explain why they made their choices.

In this talk, I will examine these "rational" agents, discuss their role in autonomous systems, and explain how we can formally verify their behaviours and so be more confident about what our autonomous systems decide to do.
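
To give a flavour of what formally verifying an agent's decisions can mean, here is a toy sketch in the simplest possible setting: exhaustively checking every situation a small rule-based agent can face against a safety property. The agent, its rules and the property are all invented for illustration; real agent verification uses dedicated program model checkers rather than anything this simple.

    # Toy sketch (not Fisher's method): enumerate every state a simple
    # rule-based agent can face and check a safety property on the
    # decision it makes in each. All names and rules are invented.
    from itertools import product

    def agent_choose(battery, obstacle):
        """A toy agent policy: prefer moving, but never into an obstacle."""
        if obstacle:
            return "stop"
        if battery <= 1:
            return "wait"   # conserve power
        return "move"

    def safe(obstacle, action):
        """Safety property: the agent never moves into an obstacle."""
        return not (obstacle and action == "move")

    # Exhaustive check over the finite state space: every battery level
    # and obstacle configuration the agent could encounter.
    violations = [(b, o) for b, o in product(range(5), [False, True])
                  if not safe(o, agent_choose(b, o))]
    print("property holds" if not violations else f"violations: {violations}")

Exhaustive enumeration only works here because the toy state space is finite and tiny; the research challenge described above is establishing such guarantees for realistic agent programs.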

Slides (pdf)

2013

Prof. Alan Burns, University of York

The Future of Real-time Systems Research, Mixed Criticality Systems and Emergent Properties.
Date: Tuesday 4th June 2013
Location: Hugh Aston 0.10
Time: 13:00
Abstract

This talk will highlight a number of current open issues in real-time systems research. It will focus on the meaning of predictability and how predictable embedded systems can be designed. Emphasis will be placed on predictability as an emergent property of a system. Also covered in this wide-ranging talk will be the timeband framework (which encourages the use of multiple levels of time granularity) and mixed criticality systems.
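
For readers new to the term: in the standard formalisation of mixed criticality (due to Vestal), each task carries one execution-time budget per criticality level, and the system sheds low-criticality work when a budget is overrun. The sketch below illustrates only that basic idea; the task names and numbers are invented, and it is not an analysis from the talk.

    # Minimal sketch of the mixed-criticality idea (Vestal-style model),
    # with invented task parameters. In LO mode all tasks run; if a
    # HI-criticality task overruns its LO budget, the system switches to
    # HI mode and LO-criticality tasks are dropped to protect the rest.

    tasks = [
        # (name, criticality, C_LO, C_HI)
        ("flight_control", "HI", 2, 4),
        ("telemetry",      "LO", 3, 3),
    ]

    def admitted(mode):
        """Tasks allowed to run in the given system mode."""
        return [t for t in tasks if mode == "LO" or t[1] == "HI"]

    mode = "LO"
    observed = {"flight_control": 3, "telemetry": 2}  # measured run times

    for name, crit, c_lo, c_hi in tasks:
        if crit == "HI" and observed[name] > c_lo:
            mode = "HI"   # budget overrun: degrade to HI mode

    print("mode:", mode)                                # -> mode: HI
    print("running:", [t[0] for t in admitted(mode)])   # -> ['flight_control']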

Video (mp4)

2012

Prof. Samson Abramsky (FRS), University of Oxford

Information Flow: Tracing a path through logic, computation, topology and physics
Date: Thursday 6th December 2012
Location: Edith Murphy 0.28
Time: 14:00
Abstract

TBA

2011

Prof. Erol Gelenbe, Imperial College London

Networked Auctions
Date: Thursday 17th November 2011
Location: Bede Island Lecture Theatre 0.5
Time: 11:00
Abstract

The advent of the Internet as a medium for commerce, and the sale of web space for advertisements, has provided impetus for the widespread introduction of automated trading mechanisms. As a consequence it becomes important to understand the significant parameters of such systems and their economic outcomes in terms of price formation and the income of sellers. The impact of the communication network itself should also be considered. We will show how a principled theory of networked auctions can help understand price and income formation through an approach that is linked to computer and network performance evaluation methodology.
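
As a rough illustration of the kind of question such a theory addresses, the toy simulation below shows one mechanism by which the network itself can affect a seller's income: if each bid round costs a network round trip and the auction closes at a deadline, longer delays leave less room for the price to rise. The model and all of its parameters are invented for illustration; it is not Gelenbe's model.

    # Toy simulation (not Gelenbe's model): an ascending auction that
    # closes at a fixed deadline. Each price increment costs one network
    # round trip, so higher delay means fewer bid rounds and a lower
    # closing price, i.e. a lower income for the seller.
    import random

    def closing_price(network_delay, valuations, deadline=10.0, increment=1.0):
        price, t = 0.0, 0.0
        while t + network_delay <= deadline:
            if any(v >= price + increment for v in valuations):
                price += increment   # some bidder still values the item more
                t += network_delay   # ...but raising the bid takes a round trip
            else:
                break
        return price

    random.seed(1)
    valuations = [random.uniform(5, 15) for _ in range(4)]
    for delay in (0.5, 1.0, 2.0, 4.0):
        print(f"delay {delay}: seller income {closing_price(delay, valuations):.1f}")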

2010

Prof. Nancy Leveson, Massachusetts Institute of Technology

A Systems Approach to Safety and Risk Analysis
Date: Monday 29th November 2010
Location: Bede Island 0.5
Time: 12:00
Abstract

"It isn't what we don't know that gives us trouble, it's what we do know that just ain't so."

Computers are being introduced into the control of virtually every dangerous system, including nuclear weapons, transportation systems (aircraft, automobiles, trains), medical devices, and chemical and nuclear power plants. Few engineering techniques exist to provide assurance that safety is not being degraded by the substitution of digital systems for the electromechanical designs that have been perfected through decades and sometimes centuries of experience. At the same time, nothing is absolutely safe, and computers provide important advantages over the human operators, social systems, and engineered devices they are replacing.

Much of what engineers do for safety was developed in the 1960s and is not effective for the complex, software-intensive systems being built today (although that hasn't stopped people from using it anyway). In this talk, I'll present a new paradigm for designing and analyzing such systems that is based on control theory rather than reliability theory. The approach has been proven to work in practice on very complex software-intensive systems. Comparisons on real projects have shown it to be more powerful than traditional techniques but, surprisingly, easier to use.

2010

Prof. Cliff Jones, Newcastle University

Another Way to Use AI Ideas to Support Formal Methods
Date: Thursday 6th May 2010
Location: Bede Island
Time: 14:00
Abstract

We hope to improve the productivity of industrial engineers who are using formal methods. Some prejudices (based on decades of experience) will be presented, but the main point is about using Artificial Intelligence (AI) ideas. Theorem proving was seen as an early challenge by AI researchers, but any set of heuristics will have a limit. In a recently funded project (AI4FM) we aim to learn from human theorem-proving attempts in order to improve tool support for classes of proof obligations.

Slides (pdf)

2009

Prof. Ross Anderson, Cambridge University

Information Security - Where Computer Science, Economics and Psychology Meet
Date: Monday 20th April 2009
Location: Clephan Building, Lecture Theatre 2.13
Time: 14:00
Abstract

For years, people thought that the insecurity of the Internet was due to a shortage of features, and so all through the 1990s we worked vigorously on developing better encryption, authentication and filtering mechanisms. But things didn't get any better. We began to realise that failures - of both security and dependability - are intricately tied up with incentives. Systems often fail because the people who guard and maintain them don't bear the full costs of failure. Microsoft doesn't accept liability for vulnerabilities that lead to millions of its customers being hacked; DVD region coding is easy to subvert because equipment vendors don't lose money when it fails; and medical records become less private once systems are bought by government ministers rather than doctors.

This led to the emergence of a new field of study, information security economics. It provides valuable insights not just into `security' topics such as privacy, bugs, spam, and phishing, but into more general areas such as system dependability and policy. This research program has started to spill over into more general security questions (such as law-enforcement strategy), and into the interface between security and sociology.

An exciting recent development is the interaction with psychology. As systems get harder to attack, the bad guys attack the users instead; phishing only got properly going in 2004, but by 2006 it cost British banks £35m. We now know that most information security mechanisms are too hard to use, being designed by geeks for geeks. We urgently need to introduce bright ideas from psychology and human-computer interface design. And in addition to these 'micro' scale concerns, there are many 'macro' scale problems - why do people overreact to terrorism, yet underreact to everything from environmental degradation through online threats to road traffic accidents?

The challenge is to build a proper multi-disciplinary framework for analyzing security problems - one that is both principled and effective. Up till now, security economics has started to fuse the engineering and economic aspects, while behavioural economics, which studies the heuristics and biases that affect human judgment, has put psychology and economics together. The next big research task may well be security psychology.

Slides (pdf)

Video (m4v)

2008

Prof. Amir Pnueli, New York University

Abstraction Methods for Liveness
Date: May 13th 2008
Location: Clephan Building, Lecture Theatre 2.13
Time: 14:00
Abstract

It is a known fact that finitary state abstraction methods, such as predicate abstraction, are inadequate for verifying general liveness properties or even termination of sequential programs. In this talk we will present an abstraction approach called "ranking abstraction" which is sound and complete for verifying all temporally specified properties, including all liveness properties.

We will start by presenting a simple general framework for state abstraction, emphasizing that, in order to get soundness, it is necessary to apply an over-approximating abstraction to the system and an under-approximating abstraction to the (temporal) property. We show that finitary versions of this abstraction are complete for verifying all safety properties.

We then consider abstraction approaches to the verification of deadlock freedom, presenting some sufficient conditions guaranteeing that deadlock freedom is inherited from the concrete system by the abstract one.

Finally, we introduce the method of ranking abstraction and illustrate its application to the verification of termination and more general liveness properties. In this presentation we emphasize the similarity between predicate abstraction and its extension into ranking abstraction. In particular, we illustrate how abstraction refinement can be applied to ranking abstraction. Time permitting, we will present a brief comparison between ranking abstraction and the methods of transition abstraction developed by Podelski, Rybalchenko, and Cook, which underlie the "Terminator" system.
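
To make the ranking idea concrete in miniature: a well-founded ranking function that strictly decreases on every loop iteration is a witness for termination, and supplying such rankings is the ingredient that ranking abstraction adds on top of predicate abstraction. The loop and ranking function below are invented examples, and the sketch merely checks the decrease at run time rather than performing any abstraction.

    # Toy illustration of the ranking idea: a well-founded (here,
    # lexicographic over the naturals) ranking function that strictly
    # decreases on every step witnesses termination of the loop
    # 'while x > 0: if y > 0: y -= 1 else: x -= 1; y = x'.

    def step(state):
        """One iteration of the loop above."""
        x, y = state
        if y > 0:
            return (x, y - 1)
        return (x - 1, x - 1)

    def rank(state):
        """Lexicographic rank; it cannot decrease forever over the naturals."""
        return state

    state = (3, 2)
    while state[0] > 0:
        nxt = step(state)
        assert rank(nxt) < rank(state), "ranking must strictly decrease"
        state = nxt
    print("terminated at", state)   # -> terminated at (0, 0)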

Slides (pdf)

2007

Prof. Morris Sloman, Imperial College

Ubiquitous Computing: A Research Grand Challenge
Date: February 15th 2007
Location: Hawthorn Building 1.05
Time: 14:00
Abstract

Pervasive or ubiquitous computing systems consist of large numbers of 'invisible' computers embedded into the environment, which may interact with mobile users or form intelligent networks for sensing environmental conditions. Users will experience this world through a wide variety of devices: some they will wear (e.g. medical monitoring systems), some they will carry (e.g. personal communicators that integrate mobile phones and PDAs), and some implanted in the vehicles they use (e.g. car information systems). This heterogeneous collection of devices will interact with intelligent sensors and actuators embedded in our homes, offices, and transportation systems to form an intelligent pervasive environment which aids normal activities related to work, education, entertainment or healthcare.

This talk will discuss some of the research issues identified in the Ubiquitous Computing Grand Challenge and our approach to policy-based self-managed cells for autonomic ubiquitous systems.

Slides (pdf)

2006

Prof. Ian Horrocks, University of Manchester

Ontologies and the Semantic Web
Date: June 20th 2006
Location: Lecture Theatre 1.30 (Hawthorn Building)
Time: 14:00
Abstract

The World Wide Web is phenomenally successful, and has made an unprecedented range of information and services available to an unprecedented number of users, but there is an urgent need for more intelligent applications that can better exploit these resources and prevent users being overwhelmed by their sheer volume. The goal of Semantic Web research is to facilitate the development of such applications by transforming the Web from a linked document repository into a distributed knowledge base and application platform. Ontologies will play a key role in this transformation by capturing knowledge that will enable applications to better understand Web accessible resources, and to use them more intelligently. This talk will introduce the Semantic Web, and show how basic research in knowledge representation and reasoning has contributed to the design of OWL, a Semantic Web ontology language developed by the World Wide Web Consortium; it will also explore the impact that Semantic Web research is having in areas as diverse as medicine, genomics, earth sciences, agriculture, fisheries and manufacturing.

2005

Prof. Brian Randell, Newcastle University

Dependable Pervasive Systems
Date: May 20th 2005
Location: Bede Island Lecture Theatre - 0.5
Time: 14:00
Abstract

Present trends indicate that huge networked computer systems are likely to become pervasive, as information technology is embedded into virtually everything, and to be required to function essentially continuously. In my view even today's (under-used) "best practice" regarding the achievement of high dependability - reliability, availability, security, safety, etc. - from large networked computer systems will not suffice for future pervasive systems. I will summarize the current state of research into the four basic dependability technologies:

  1. fault prevention (to avoid the occurrence or introduction of faults),
  2. fault removal (through validation and verification),
  3. fault tolerance (so that failures do not necessarily occur even if faults remain), and
  4. fault forecasting (the means of assessing progress towards achieving adequate dependability).

I will then argue that much further research is required on all four dependability technologies in order to cope with pervasive systems, identify some priorities, and discuss how this research could best be aimed at making system dependability into a "commodity" that UK industry can value and from which it can profit.

2004

Prof. Alan Bundy, Edinburgh University

Planning and Patching Proofs
Date: February 9th 2004
Location: Portland Building P1.02
Time: 14:00
Abstract

We describe proof planning, a technique for describing the hierarchical structure of proofs and using this structure to guide proof attempts. When such a proof attempt fails, we show how such failures can be analysed and a patch formulated and applied. We illustrate these ideas in the domain of inductive proof. We will discuss the pros and cons of proof planning and our plans for future research in this area.

2003

Prof. Sir Tony Hoare (FRS), Microsoft Research Laboratory Cambridge

The Verifying Compiler
Date: May 22nd 2003
Location: room 0.45 Clephan Building
Time: 14:00
Abstract

I propose a set of criteria which distinguish a grand challenge in science or engineering from the many other kinds of short-term or long-term research problems that engage the interest of scientists and engineers. As an example drawn from Computer Science, I revive an old challenge: the construction and application of a verifying compiler that guarantees correctness of a program before running it.

2002

Prof. Mike Gordon (FRS), Cambridge University

The Convergence of Deduction and Computation
Date: October 1st 2002
Location: Trinity House Chapel
Time: 14:30-15:30

BCS-FACS members are also welcome to attend.

Abstract

Deductive proof, property checking and program execution are traditionally implemented by distinct tools: theorem provers, model checkers and interpreters, respectively. However, several proof assistants support these activities within a single framework, blurring the distinctions between them.

I will describe how the HOL-4 platform integrates theorem proving, BDD- and SAT-based algorithmic verification, and efficient execution. Recent work on exploiting this integration will be outlined, and we will discuss how it represents a modern realisation of the principle "Computation = Logic + Control" developed by Kowalski and Hayes in the 1970s.

2001

Prof. Manny Lehman, Imperial College

System Evolution
Abstract

Interest in software evolution is spreading rapidly. The successive changes to a software system that implement its evolution are required to adapt it to a changing operational domain, enhance and extend its functionality, and improve its performance. Industrial and academic interest has, in the main, focused on the means by which evolution is achieved. It is, however, equally relevant to ask why software evolves and what the characteristics of the resultant evolution phenomenon are. Answers to these questions would surely speed up improvement of the process to meet the challenges posed by a society ever more dependent on computer systems that function satisfactorily in an ever-changing world.

For the last five years a group in the Department of Computing at Imperial College has studied software evolution and its relationship to the feedback nature of the software process. Two EPSRC-funded projects, FEAST/1 and /2, have analysed empirical data relating to a number of systems of significantly different sizes, from different application areas, obtained from a number of industrial organisations. The observations and their interpretation reveal the system dynamics of the evolution process, and their analysis suggests management and technical guidelines and rules which in turn suggest improvements to the software evolution process.

The presentation will then focus on examples of models of evolutionary behaviour. Three simple models of system growth will be discussed. They were fitted to empirical data relating to four software systems with average error in the range of 2 to 17 percent. The models are consistent with the view that complexity constrains system growth, though the presence of discontinuities in the growth trends, and the extent to which the models are restricted to individual processes or specific domains, need further investigation.

The talk will briefly show how the results provide the basis of a theory of software evolution and discuss possible investigative strategies for its achievement.
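
One of the growth models studied in the FEAST work is Lehman's inverse-square model, in which the growth from one release to the next is inversely proportional to the square of the current system size, reflecting the drag of accumulated complexity. The sketch below traces that model's shape; the starting size and constant are invented, not FEAST data.

    # Sketch of Lehman's inverse-square growth model:
    #     s[i+1] = s[i] + E / s[i]**2
    # Growth slows as size (and hence complexity) increases. The
    # parameters s0 and E below are invented for illustration.

    def inverse_square_growth(s0, E, releases):
        """Predicted system size (e.g. in modules) at each release."""
        sizes = [s0]
        for _ in range(releases):
            s = sizes[-1]
            sizes.append(s + E / s**2)
        return sizes

    for i, s in enumerate(inverse_square_growth(s0=100.0, E=5.0e5, releases=8)):
        print(f"release {i}: {s:7.1f} modules")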
