

21st International Conference on Information Fusion - 10 - 13 July 2018



Tutorial Programme

Active Authentication in Mobile Devices: Role of Information Fusion


Nowadays everyone carries sensitive information, such as bank account details, emails, or passwords, on their smartphones and tablets. In addition, there is an increasing need for secure mobile applications. Traditional login authentication in such mobile scenarios, e.g., with passwords, is not safe enough, which has led to important developments in biometric authentication for mobile devices, e.g., based on fingerprints and face images. These authentication mechanisms are also limited in nature, as the user is typically authenticated only once, at the beginning of the session. This situation has led to a growing body of literature looking for new ways of authentication based on continuous biometrics (also known as active authentication), which periodically authenticate the user and thus ensure security on the device beyond the entry point, and for which both face and gesture information may be very useful.

The huge scale and practical implications of secure and convenient mobile authentication make this area very important and timely for the information fusion community. In fact, this community has much to contribute to the area of biometric active authentication to overcome the challenges that are now limiting its practical application.

Presenters: Julian Fierrez and Vishal Patel

An Introduction to Track-to-Track Fusion and the Distributed Kalman Filter

The increasing trend towards connected sensors ("Internet of Things" and "ubiquitous computing") drives a demand for powerful distributed estimation methodologies. In tracking applications, the Distributed Kalman Filter (DKF) provides an optimal solution under certain conditions. The optimal solution in terms of estimation accuracy is also achieved by a centralized fusion algorithm which receives either all associated measurements or so-called tracklets. However, this scheme needs the result of each update step for the optimal solution, whereas the DKF works at arbitrary communication rates since the calculation is completely distributed. Two more recent methodologies are based on the "accumulated state densities" (ASD), which augment the states from multiple time instants. In practical applications, tracklet fusion based on the equivalent measurement often achieves reliable results even if full communication is not available. The limitations and robustness of tracklet fusion will be discussed.

First, the tutorial will explain the origin of the challenges in distributed tracking. Then, possible solutions are derived and illustrated. In particular, algorithms will be provided for each presented solution.

The list of topics includes: Short introduction to target tracking, Tracklet Fusion, Exact Fusion with cross-covariances, Naive Fusion, Federated Fusion, Decentralized Fusion (Consensus Kalman  Filter), Distributed Kalman Filter (DKF), Debiasing for the DKF, Distributed ASD Fusion, Augmented State Tracklet Fusion.
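As a minimal, purely illustrative sketch (not taken from the tutorial material), the following Python snippet shows the naive fusion step listed above: two local track estimates are combined as if they were independent, which is exactly the cross-covariance issue that exact fusion and the DKF address. All variable names and numbers are hypothetical.

```python
import numpy as np

def naive_fusion(x1, P1, x2, P2):
    """Naive track-to-track fusion: combine two local estimates as if they
    were independent (ignores the cross-covariance discussed in the tutorial)."""
    P1_inv = np.linalg.inv(P1)
    P2_inv = np.linalg.inv(P2)
    P = np.linalg.inv(P1_inv + P2_inv)       # fused covariance
    x = P @ (P1_inv @ x1 + P2_inv @ x2)      # fused state estimate
    return x, P

# Two hypothetical local tracks of the same target (position, velocity)
x1, P1 = np.array([10.2, 1.0]), np.diag([4.0, 1.0])
x2, P2 = np.array([9.6, 1.2]), np.diag([2.0, 0.5])
x_f, P_f = naive_fusion(x1, P1, x2, P2)
print(x_f, np.diag(P_f))
```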

Presenter: Felix Govaers 

Analytic Combinatorics for Multi-Object Tracking and Higher Level Fusion

This tutorial is designed to facilitate understanding of the classical theory of Analytic Combinatorics (AC) and how to apply it to problems in multi-object tracking and higher level data fusion.  AC is an economical technique for encoding combinatorial problems—without information loss—into the derivatives of a generating function (GF).  Exact Bayesian filters derived from the GF avoid the heavy accounting burden required by traditional enumeration methods.  Although AC is an established mathematical field, it is not widely known in either the academic engineering community or the practicing data fusion/tracking community.  This tutorial lays the groundwork for understanding the methods of AC, starting with the GF for the classical Bayes-Markov filter.  From this cornerstone, we derive many established filters (e.g., PDA, JPDA, JIPDA, PHD, CPHD, MultiBernoulli, MHT) with simplicity, economy, and insight. We also show how to use the saddle point method (method of stationary phase) to find low complexity approximations of probability distributions and summary statistics.
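As background for readers unfamiliar with generating functions, the following standard identities (general facts, not taken from the tutorial itself) show how a probability generating function encodes a discrete distribution and how its derivatives recover the probabilities; the Poisson case is a typical model for the number of detections.

```latex
% Probability generating function (PGF) of a discrete random variable N,
% the basic object that analytic combinatorics manipulates:
G_N(z) \;=\; \sum_{n \ge 0} \Pr(N = n)\, z^n ,
\qquad
\Pr(N = n) \;=\; \frac{1}{n!}\,\frac{d^n G_N}{dz^n}\Big|_{z=0},
\qquad
\mathbb{E}[N] \;=\; G_N'(1).

% Example: a Poisson-distributed number of detections with mean \lambda,
% a common clutter/measurement model in PHD-type filters:
G_N(z) \;=\; e^{\lambda (z - 1)},
\qquad
\Pr(N = n) \;=\; \frac{\lambda^n e^{-\lambda}}{n!}.
```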

Presenters: Roy Streit and Murat Efe

Data Fusion for the Internet of Things

The advent of the Internet of Things (IoT) was made possible by advances in microelectronics that fostered the development of smart objects able to collect, process, and transmit data. The integration of these smart objects into the Internet gave rise to the IoT concept. The IoT vision advocates a world of interconnected objects, capable of being identified, addressed, controlled, and accessed via the Internet. Such objects can communicate with each other, with other virtual resources available on the web, with information systems, and with human users. IoT applications often deal with real-time monitoring of a wide range of parameters, from vital signs to battery temperature, that can be exploited by one or several applications whose scope and purpose are virtually unlimited.

Although much emphasis is placed on the interconnected things, the main point of the IoT paradigm is the data rather than the objects themselves. The main value of the IoT paradigm is the knowledge produced through the analysis of the data captured by the things, and this knowledge can drive new business and operations. In this sense, in order to exploit the potential of the IoT, there must be mechanisms to process and analyze the enormous amount of data generated by the things, extracting valuable knowledge from them.

New challenges emerge in this scenario, as well as several opportunities to be exploited. The devices are limited in terms of resources, both memory and processing. In this context, data fusion techniques can be used to promote knowledge discovery from the huge amount of sensing data at a low power and processing cost. Data fusion techniques deal with the association, correlation, and combination of data and information from single and multiple sources to achieve refined position and identity estimates, and complete and timely assessments of situations and threats and their significance (to end users or a target application/system). Such data fusion techniques are useful to reveal trends in the sampled data, uncover new patterns in the monitored variables, and make predictions, thus improving the decision-making process, reducing decision response times, and enabling more intelligent and immediate situation awareness.

The objective of this tutorial is to present data fusion techniques for the Internet of Things and to show where and how to use them in a smart environment.

The central elements of this tutorial are:

  • Internet of Things: basic concepts, limitations and benefits
  • Where and how IoT can benefit from data fusion techniques
  • Data correlation among heterogeneous sources
  • Data fusion for resource-constrained devices
  • Implementing data fusion techniques on sensors
  • Future directions
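Complementing the list above, here is a minimal sketch (an assumed example, not part of the tutorial material) of a low-cost fusion technique of the kind suited to resource-constrained IoT nodes: readings from heterogeneous sensors are fused by inverse-variance weighting. All names and values are hypothetical.

```python
def fuse_readings(readings):
    """Fuse (value, variance) pairs from heterogeneous sensors into a single
    estimate; weights are inversely proportional to each sensor's variance."""
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, readings)) / total
    variance = 1.0 / total
    return value, variance

# Hypothetical temperature readings from three cheap sensors
print(fuse_readings([(21.3, 0.5), (20.8, 0.2), (22.0, 1.0)]))
```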

Participants will have access to the presentation slides and the source code produced.

Extended Object Tracking: Theory and Applications

Autonomous systems are an active area of research and technological development. These systems require intelligence and decision making, including intelligent sensing, data collection and processing, collision avoidance, and control. Autonomous systems, especially autonomous cars, need to be able to detect, recognise, classify, and track objects of interest, including their location and size. In light of autonomous systems, this tutorial will focus on tracking of extended objects, i.e., object tracking using modern high-resolution sensors that give multiple detections per object. State-of-the-art theory will be introduced, and relevant real-world applications will be shown where different object types, e.g., pedestrians, bicyclists, and cars, are tracked using different sensors such as lidar, radar, and camera.
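As a toy illustration of the extended-object setting (an assumed sketch, not the presenters' material), the snippet below simulates a Poisson-distributed number of detections scattered over a single object and forms naive moment-based estimates of its centroid and extent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical extended object: true centre and extent (2-D ellipse as covariance)
centre = np.array([5.0, 2.0])
extent = np.array([[1.0, 0.3], [0.3, 0.4]])

# High-resolution sensor: a Poisson number of detections scattered over the object
# (forced to at least 2 so the sample covariance below is defined)
n_det = max(rng.poisson(lam=8), 2)
detections = rng.multivariate_normal(centre, extent, size=n_det)

# Naive moment-based estimates of the centroid and the extent
centre_hat = detections.mean(axis=0)
extent_hat = np.cov(detections.T)
print(n_det, centre_hat, extent_hat)
```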

Presenters: Karl Granström  and Marcus Baum

Groups and Crowds: Dynamics Modelling, Tracking and Analysis. State-of-the-Art Approaches and Beyond

Smart cities as dynamic systems of systems face a number of challenges both at the planning stage and at the stage of real-time monitoring and control. These require advanced tools for traffic flow prediction with different levels of granularity, multi-sensor data fusion, and autonomy. This tutorial introduces key models and methods for group and crowd modelling, tracking, and behaviour prediction. Groups are structured objects characterised by particular motion patterns. A group can comprise a small number of interacting objects (e.g. pedestrians, sport players, convoys of cars) or hundreds or thousands of components, such as crowds of people.

Presenters: Lyudmila Mihaylova and Karl Granström 

Information Quality in Information Fusion and Decision Making

Designing fusion systems for decision support in complex dynamic situations requires fusion of a large amount of multimedia and multispectral information to produce estimates about objects and gain knowledge of the entire domain of interest. Data and information to be processed and made sense of include, but are not limited to, data obtained from physical sensors (infrared imagers, radars, chemical sensors, etc.), human intelligence reports, and information obtained from open sources (traditional ones such as newspapers, radio, and TV as well as social media such as Twitter, Facebook, and Instagram).

The problem of building such fusion-based systems is complicated by the fact that data and information obtained from observations and reports, as well as information produced by both human and automatic processes, are of variable quality and may be unreliable, of low fidelity or insufficient resolution, contradictory, and/or redundant. They can come from a broken sensor or a sensor improperly used in the environmental context. A message obtained from a human sensor can contain a human error or be intentionally sent to skew the information. Furthermore, there is often no guarantee that evidence obtained from the sources is based on direct, independent observations. Sources may provide unverified reports obtained from other sources (e.g., replicating information in social networks), resulting in correlations and bias. In a more malicious setting, some sources may coordinate to provide similar information in order to reinforce their opinion in the system. The fusion methods used can be insufficient to achieve the required rigor.

The success of decision making in a complex fusion-driven human-machine system depends on how well the knowledge produced by fusion processes represents reality, which in turn depends on how adequate the data are, how good and adequate the fusion model used is, and how accurate, appropriate, or applicable prior and contextual knowledge is.

The tutorial will discuss major challenges and some possible approaches addressing the problem of representing and incorporating information quality into fusion processes. In particular it will present an ontology of quality of information and identify potential methods of representing and assessing the values of quality attributes and their combination.  It will also examine the relation between information quality and context, and suggest possible approaches to quality control compensating for insufficient information and model quality.
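As one concrete example of incorporating source quality into fusion, the sketch below applies classical Dempster-Shafer discounting, which scales a source's evidence by its assumed reliability before combination; the function, the frame, and the report are hypothetical and not taken from the tutorial.

```python
def discount_bba(bba, reliability, frame):
    """Shafer discounting: scale the mass of every proper focal set by the
    source's reliability and move the remaining mass to the whole frame.
    `bba` maps focal sets (frozensets) to masses summing to 1."""
    theta = frozenset(frame)
    out = {theta: bba.get(theta, 0.0) * reliability + (1.0 - reliability)}
    for focal, mass in bba.items():
        if focal != theta:
            out[focal] = out.get(focal, 0.0) + reliability * mass
    return out

# Hypothetical report from a partially reliable source (reliability 0.7)
frame = {"friend", "hostile"}
report = {frozenset({"hostile"}): 0.9, frozenset(frame): 0.1}
print(discount_bba(report, 0.7, frame))
```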

Presenter: Galina Rogova

Machine and Deep Learning for Data Fusion

In this tutorial, I will present some techniques for fusion and analytics to process big centralized warehouse data, inherently distributed data, and data residing on the cloud. The broad range of artificial intelligence and machine and deep learning techniques to be discussed will handle both structured transactional and sensor data as well as unstructured textual data such as human intelligence, emails, blogs, surveys, etc., and image data. Specifically, the tutorial will explore Deep Fusion to solve multi-sensor big data fusion problems applying deep learning and artificial intelligence technologies.

As a background, this tutorial is intended to provide an account of both the cutting-edge and the most commonly used approaches to high-level data fusion and predictive and text analytics. The demos to be presented are in the areas of distributed search and situation assessment, information extraction and classification, and sentiment analyses.

Some of the tutorial materials are based on the following two books by the speaker: 1) Subrata Das. (2008). “High-Level Data Fusion,” Artech  House, Norwell, MA; and 2)  Subrata Das. (2014). “Computational Business Analytics,” Chapman & Hall/CRC Press.

Tutorial Topics include the following: High-Level Fusion, Traditional Machine Learning Algorithms, Popular Deep Learning Algorithms (e.g. Convolutional & Recursive Neural Networks, Deep Belief Networks and Restricted Boltzmann Machine, Stacked Autoencoder), Descriptive and Predictive Analytics, Text Analytics, Decision Support and Prescriptive Analytics, Cloud Computing, Distributed Fusion, Hadoop and MapReduce, Natural Language Query, Big Data Query Processing, Graphical Probabilistic Models, Bayesian Belief Networks, Distributed Belief Propagation, Text Classification, Supervised and Unsupervised Classification, Information Extraction, Natural Language Processing, Demos in R and Python.
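As a minimal, assumed example of the text analytics topics listed above (not the speaker's demo code), the following scikit-learn pipeline performs supervised classification of short textual reports; the corpus and labels are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical corpus of human reports labelled by topic
texts = ["engine failure reported near runway",
         "crowd gathering at the main square",
         "fuel leak detected in engine bay",
         "protest march moving towards the square"]
labels = ["equipment", "crowd", "equipment", "crowd"]

# TF-IDF features feeding a linear classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["smoke coming from the engine"]))
```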

Presenter: Dr. Subrata Das

Multitarget Multisensor Tracking: From Traditional to Modern Distributed Approaches

The tutorial aims to present a historical overview of multisensor multitarget tracking, from the traditional "divide & conquer" approach to a more modern approach based on the theory of finite random sets/Poisson point processes, and also to present the most recent research achievements of the group related to distributed tracking over peer-to-peer sensor networks. The talk will initially describe the intertwined R&D activities, over several decades, between academia and industry in conceiving and implementing tracking algorithms on live surveillance systems for targets in civilian as well as defense and security applications. In this respect, we trace back from the alpha-beta adaptive filter to modern random set filters, passing through the Kalman algorithm (in its many embodiments), Multiple Model filters, Multiple Hypothesis Tracking, Joint Probabilistic Data Association, and particle filters. Then, the presentation will focus on recent research achievements on distributed multitarget tracking over a peer-to-peer network consisting of the radio interconnection of multiple, possibly low-cost, devices with sensing, communication, and processing capabilities. In this respect, fundamental issues like distributed fusion, handling of different fields of view, distributed sensor registration, etc., and their solutions will be thoroughly investigated. Throughout the talk, applications to land, naval, and airborne sensors will be mentioned. Further, active as well as passive radar experiences are overviewed.

The description will take a balanced look at both theoretical and practical implementation issues, including mitigation of real-life system limitations.
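As a small illustration of the historical starting point mentioned above, here is a textbook-style alpha-beta filter in Python; the gains, data, and structure are illustrative only and not taken from the presenter's material.

```python
def alpha_beta_track(measurements, dt=1.0, alpha=0.85, beta=0.005):
    """Track position and velocity from scalar position measurements
    using fixed alpha-beta gains applied to the innovation."""
    x, v = measurements[0], 0.0          # initial position and velocity
    estimates = []
    for z in measurements[1:]:
        x_pred = x + dt * v              # predict
        r = z - x_pred                   # innovation (residual)
        x = x_pred + alpha * r           # position update
        v = v + (beta / dt) * r          # velocity update
        estimates.append((x, v))
    return estimates

print(alpha_beta_track([1.0, 2.1, 2.9, 4.2, 5.0]))
```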

Presenter: Alfonso Farina

Multi-Source and Multi-Modal Sensor Fusion Strategies and Implementations in the World of Autonomous Driving

This tutorial provides insights on the following topics:

  • Sensor fusion levels and architectures for autonomous vehicles
  • Different environment perception data and representations
  • Object-, grid-, and raw-data-oriented sensor fusion problems
  • Nitty-gritty details that play a vital role in real-life sensor fusion applications
  • Infrastructure-based sensor fusion

This tutorial focuses on the stringent requirements, foundations, development, and testing of sensor fusion algorithms for advanced driver assistance functions, self-driving car applications in automotive vehicle systems, and vehicular-infrastructure-oriented sensor fusion applications. The audience will be provided with the presentation materials used in the tutorial.

The complex sensor world of autonomous vehicles is discussed in detail, and the different aspects of the sensor fusion problem in this area are taken as one of the core subjects of this tutorial. In addition, a special discussion section is presented on a sensor fusion system designed to work on data obtained from environment perception sensors placed in an infrastructure such as a parking garage.

The audience will see the different representations of the surrounding environment as perceived by heterogeneous environment perception sensors, e.g., different kinds of radar (multi-mode radar, short-range radar), stereo camera, and lidar. The relevant state estimation algorithms, sensor fusion frameworks, and evaluation procedures with reference ground truth are presented in detail. The audience will also get a first glimpse of the data set obtained from a sensor configuration that would be used in future Mercedes-Benz autonomous vehicles.

A section on urban automated driving applications with support from infrastructure sensing, distributed computing, and cellular radio is also introduced. After a brief overview of the overall system and the individual components, the hybrid fusion design of the overall environmental perception for an automated vehicle, comprising both onboard sensors and distributed environmental models delivered via cellular radio, is presented. Advantages and disadvantages of different fusion architectures for automated driving with support from infrastructure sensing are discussed, and the influence of cellular radio and overall system latency on the different approaches is presented. After a short discussion of possible approaches to incorporate a mixture of both geo-referenced and vehicle-fixed sensor data into a fusion system, and of the effect of ego-localization errors on estimation uncertainty, this section closes with an introduction to behavior generation for automated vehicles that are supported by environmental models received via cellular radio, and the limits of behavior generation without the support of distributed environment perception. An interesting part of the tutorial covers the different challenging and important real-world implementation problems and practical aspects related to fusion and target tracking in an automotive setting, such as fusion with incomplete information, data association, sensor communication latency, real-world testing, and realistic simulation. Challenges of automated driving in highway and urban settings are discussed in detail in every section of this tutorial. A research- and application-oriented discussion of centralized, decentralized, and hybrid-distributed sensor fusion designs for autonomous driving is presented in depth, using results obtained from several real-world data sets containing various static and dynamic targets. Fusion and management of the different extended-target and static-object representations from heterogeneous information sources with different resolutions is presented with examples.
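As a minimal, assumed illustration of one of the practical aspects mentioned above, the sketch below performs gated global-nearest-neighbour association of detections to predicted tracks; the gate value, data, and helper names are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, gate=3.0):
    """Return (track_idx, detection_idx) pairs whose distance is within the gate."""
    # Pairwise Euclidean distances between predicted tracks and detections
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)   # globally optimal assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]

tracks = np.array([[10.0, 5.0], [20.0, 1.0]])       # predicted track positions
detections = np.array([[10.4, 5.2], [35.0, 0.0]])   # current sensor detections
print(associate(tracks, detections))                # only the gated pair survives
```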

Presenters: Bharanidhar Duraisamy, Ting Yuan, Tilo Schwarz, Martin Fritzsche and Michael  Gabb

Multitarget Tracking and Multisensor Information Fusion

Objectives: To provide participants with the latest state-of-the-art techniques for estimating the states of multiple targets with multisensor information fusion. Tools for algorithm selection, design, and evaluation will be presented. These form the basis of automated decision systems for advanced surveillance and targeting. The various information processing configurations for fusion are described, including the recently solved track-to-track fusion from heterogeneous sensors.

Presenter:  Yaakov Bar-Shalom

Overview of High-Level Information Fusion Theory, Models, and Representations

Over the past decade, the ISIF community has put together special sessions, panel discussions, and concept papers to capture the methodologies, directions, needs, and grand challenges of high-level information fusion (HLIF) in practical system designs. This tutorial brings together the contemporary concepts, models, and definitions to give the attendee a summary of the state-of-the-art in HLIF. Analogies from low-level information fusion (LLIF) of object tracking and identification are extended to the HLIF concepts of situation/impact assessment and process/user refinement. HLIF theories (operational, functional, formal, cognitive) are mapped to representations (semantics, ontologies, axiomatics, and agents) with contemporary issues of modelling, testbeds, evaluation, and human-machine interfaces. Discussions with examples of search and rescue, cyber analysis, and battlefield awareness are presented. The attendee will gain an appreciation of HLIF through the topic organization from the perspectives of numerous authors, practitioners, and developers of information fusion systems. The tutorial is organized as per the recent text:

E. P. Blasch, E. Bosse, and D. A. Lambert, High-Level Information Fusion Management and Systems Design, Artech House, April 2012.

Presenter: Erik Blasch

Statistical Methods for Information Fusion System Design and Performance Evaluation

Information fusion systems find application in multiple domains, from defense applications to self-driving cars and autonomous systems. Irrespective of the application domain, the design of an information fusion system requires evaluation of a multitude of design variables which have a direct impact on fusion system performance. These variables include various sensor types and attributes, multiple tracking and fusion algorithms (i.e., low-level information fusion (LLIF) considerations), and different situation assessment and resource management approaches (i.e., high-level information fusion (HLIF) considerations). It is imperative for fusion system designers to identify the significant design decisions in this large design space and subsequently quantify their impact on end fusion performance. Traditionally, the information fusion community has taken a partitioned approach of isolated design and evaluation of the various attributes of the fusion system, which assumes a lack of interactions between design decisions. However, in complex systems such as information fusion systems, interactions between system design variables continue to dominate the performance.

In this tutorial, a domain-agnostic framework, based on Design of Experiments, is presented which provides holistic performance evaluation of an information fusion system. This framework leverages systems engineering principles for identifying design variables which are then investigated by statistical methods (e.g., analysis of variance) for establishing statistical significance and quantifying their impact on fusion system performance. This tutorial will discuss theoretical foundations for performing design and analysis of experiments, followed by a hands-on information fusion system application example which can be transferred to domain-specific implementation in the participant’s area of interest. A refresher on Monte-Carlo simulations and hypothesis testing will also be provided. At the conclusion of the tutorial, the participants will be able to formulate an experimental design for the fusion system performance evaluation, employ hypothesis testing for comparing uncertain data, perform analysis of variance (ANOVA) to establish statistical significance of design variables and interactions, and perform multiple comparison range tests to quantify the impact of design variables and obtain sensitivity analysis of interactions for fusion system performance evaluation.
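As a minimal, assumed example of the statistical machinery the tutorial covers (not the presenters' material), the snippet below runs a one-way ANOVA on Monte-Carlo performance samples from three hypothetical sensor configurations.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)
# Hypothetical Monte-Carlo RMSE samples for three sensor configurations
config_a = rng.normal(loc=1.0, scale=0.3, size=30)
config_b = rng.normal(loc=1.1, scale=0.3, size=30)
config_c = rng.normal(loc=1.6, scale=0.3, size=30)

f_stat, p_value = f_oneway(config_a, config_b, config_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the choice of sensor configuration has a
# statistically significant effect on fusion system performance.
```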

Presenters: Dr. Ali Raz and Dr. Daniel DeLaurentis

Subjective Logic Trust Fusion and Bayesian Reasoning

This tutorial gives attendees a first-hand insight into the theory and application of subjective logic by the author who started developing this framework in 1997, with a book published in 2016.

This tutorial gives an introduction to subjective logic, and how it applies to reasoning under uncertainty, computational trust and trust fusion. Specific elements of the tutorial are:

1.  Representation and interpretation of subjective opinions

  • Formal representation of binomial, multinomial and hyper opinions (a minimal sketch of the binomial case appears at the end of this tutorial description)
  • Correspondence between subjective opinions and other relevant representations of trust, such as binary logic propositions, probabilities, and Dempster-Shafer belief functions
  • Expressing opinions as PDFs (probability density functions) and qualitative measures

2.  Algebraic operators of subjective logic

  • Operators for binomial opinions: transitivity, fusion, product, coproduct
  • Operators for multinomial opinions: conditional deduction and abduction, trust transitivity and fusion

3.  Applications of subjective logic

  • Trust networks modelling and analysis
  • Subjective Bayesian reasoning modelling and analysis
  • Subjective networks, based on a combination of subjective Bayesian and trust networks
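As referenced in item 1 above, the following minimal sketch (standard subjective-logic definitions, not the presenter's code) shows a binomial opinion and its projected probability P = b + a*u; the numbers are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class BinomialOpinion:
    belief: float        # b
    disbelief: float     # d
    uncertainty: float   # u, with b + d + u = 1
    base_rate: float = 0.5  # a

    def projected_probability(self) -> float:
        # Standard subjective-logic projection: P = b + a * u
        return self.belief + self.base_rate * self.uncertainty

# An opinion mapped from, e.g., sparse evidence: some belief, high uncertainty
op = BinomialOpinion(belief=0.6, disbelief=0.1, uncertainty=0.3)
print(op.projected_probability())   # 0.75
```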

Presenter: Prof. Audun Jøsang

Tracking and Sensor Data Fusion – Methodological Framework and Selected Applications

The tutorial covers the material of the recently published book by the presenter with the same title (Springer 2014, Mathematical Engineering Series, ISBN 978-3-642-39270-2) and thus provides a guided introduction to deeper reading. The starting point is the well-known JDL model of sensor data and information fusion, which provides general orientation within the world of fusion methodologies and their various applications, covering a dynamically evolving field of ever-increasing relevance. Using the JDL model as a guiding principle, the tutorial introduces advanced fusion technologies based on practical examples taken from real-world applications.

Presenter: Prof. Dr. Wolfgang Koch