
FUSION 2018

21st International Conference on Information Fusion - 10 - 13 July 2018
 



An Introduction to Track-to-Track Fusion and the Distributed Kalman Filter

The increasing trend towards connected sensors ("internet of things" and "ubiquitous computing") drives demand for powerful distributed estimation methodologies. In tracking applications, the "Distributed Kalman Filter" (DKF) provides an optimal solution under certain conditions. The optimal solution in terms of estimation accuracy is also achieved by a centralized fusion algorithm that receives either all associated measurements or so-called tracklets. However, this scheme needs the result of each update step for the optimal solution, whereas the DKF works at arbitrary communication rates since the calculation is completely distributed. Two more recent methodologies are based on "accumulated state densities" (ASD), which augment the states from multiple time instants. In practical applications, tracklet fusion based on the equivalent measurement often achieves reliable results even if full communication is not available. The limitations and robustness of tracklet fusion will be discussed.

First, the tutorial will explain the origin of the challenges in distributed tracking. Then possible solutions are derived and illustrated. In particular, algorithms will be provided for each presented solution.

The list of topics includes: Short introduction to target tracking, Tracklet Fusion, Exact Fusion with cross-covariances, Naive Fusion, Federated Fusion, Decentralized Fusion (Consensus Kalman Filter), Distributed Kalman Filter (DKF), Debiasing for the DKF, Distributed ASD Fusion, Augmented State Tracklet Fusion.
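As a simple point of reference for the topics above, naive fusion combines two local track estimates by inverse-covariance weighting, ignoring the cross-covariance induced by common process noise. The following sketch is illustrative only and is not taken from the tutorial materials:

```python
import numpy as np

def naive_fuse(x1, P1, x2, P2):
    """Fuse two track estimates by inverse-covariance (information) weighting.

    Naive fusion treats the local estimates as independent, i.e. it
    ignores any cross-covariance between the two tracks.
    """
    I1 = np.linalg.inv(P1)
    I2 = np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)      # fused covariance
    x = P @ (I1 @ x1 + I2 @ x2)     # fused state
    return x, P

# Two 1-D position estimates of the same target with equal uncertainty
x, P = naive_fuse(np.array([1.0]), np.array([[4.0]]),
                  np.array([2.0]), np.array([[4.0]]))
# Equal covariances: the fused state is the midpoint and the variance halves
```

Exact fusion with cross-covariances, one of the listed topics, replaces the independence assumption with explicit cross-covariance terms.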

Presenter: Felix Govaers 


Analytic Combinatorics for Multi-Object Tracking and Higher Level Fusion

This tutorial is designed to facilitate understanding of the classical theory of Analytic Combinatorics (AC) and how to apply it to problems in multi-object tracking and higher level data fusion.  AC is an economical technique for encoding combinatorial problems—without information loss—into the derivatives of a generating function (GF).  Exact Bayesian filters derived from the GF avoid the heavy accounting burden required by traditional enumeration methods.  Although AC is an established mathematical field, it is not widely known in either the academic engineering community or the practicing data fusion/tracking community.  This tutorial lays the groundwork for understanding the methods of AC, starting with the GF for the classical Bayes-Markov filter.  From this cornerstone, we derive many established filters (e.g., PDA, JPDA, JIPDA, PHD, CPHD, MultiBernoulli, MHT) with simplicity, economy, and insight. We also show how to use the saddle point method (method of stationary phase) to find low complexity approximations of probability distributions and summary statistics.
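To make the generating-function idea concrete, here is a small illustrative sketch (not from the tutorial materials): the GF of a total detection count is the product of per-object detection GFs, and its coefficients encode the full distribution without explicit enumeration of detection subsets. The detection probabilities used are hypothetical.

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (a[k] ~ z^k)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# Per-object detection GF: G_i(z) = (1 - pd_i) + pd_i * z.
# The GF of the total number of detections is the product of the
# per-object GFs; the coefficient of z^k is P(exactly k detections).
pds = [0.9, 0.5, 0.7]
gf = [1.0]
for pd in pds:
    gf = poly_mul(gf, [1.0 - pd, pd])

# gf[0] = prod(1 - pd): probability of zero detections
# gf[3] = prod(pd):     probability that all three objects are detected
```

Extracting coefficients by differentiating the GF at zero is the same operation the AC framework performs analytically.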

Presenters: Roy Streit and Murat Efe


 

Extended Object Tracking:  Theory and Applications

Autonomous systems are an active area of research and technological development. These systems require intelligence and decision making, including intelligent sensing, data collection and processing, collision avoidance, and control. Autonomous systems, especially autonomous cars, need to be able to detect, recognise, classify and track objects of interest, including their location and size. In light of autonomous systems, this tutorial will focus on tracking of extended objects, i.e., object tracking using modern high resolution sensors that give multiple detections per object. State-of-the-art theory will be introduced, and relevant real-world applications will be shown where different object types, e.g., pedestrians, bicyclists, and cars, are tracked using different sensors such as lidar, radar, and camera.

Presenters: Karl Granström  and Marcus Baum


 

Implementations of Labeled Random Finite Set Multi-Target Filters

The random finite set framework for multi-sensor multi-target tracking has attracted considerable interest in recent years. It provides a unified perspective of multi-target tracking in a very intuitive manner by drawing direct parallels with the simpler problem of single-target tracking. This framework has led to the development of well-known multi-target filters such as the Probability Hypothesis Density (PHD), Cardinalized PHD (CPHD) and Multi-Bernoulli filters, as well as a recent advance, the Generalized Labeled Multi-Bernoulli (GLMB) filter, which can handle on the order of one million targets in the presence of high clutter and misdetections.

The tutorial will present the Generalized Labeled Multi-Bernoulli (GLMB) filter and demonstrate the landmark example with one million targets. Further, we will show how the GLMB filter and an approximation called the Labeled Multi-Bernoulli (LMB) filter are implemented in Matlab. Matlab code for these filters will be provided to all participants. It is envisaged that participants will come away with sufficient know-how to implement and apply these algorithms in their work.
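A core building block of the (labeled) multi-Bernoulli family is the Bernoulli existence probability. The following is a minimal generic sketch of its prediction step, as in the single-target Bernoulli filter, with assumed birth and survival parameters; it is not the presenters' code:

```python
def bernoulli_predict(r, p_birth=0.05, p_survival=0.95):
    """Predict a Bernoulli track's existence probability.

    A target exists at the next time step if it is newly born (it did
    not exist, with probability 1 - r) or if it existed and survived.
    """
    return p_birth * (1.0 - r) + p_survival * r

r_pred = bernoulli_predict(0.5)
# With these parameters, r = 0.5 is a fixed point: 0.025 + 0.475 = 0.5
```

The GLMB filter propagates many such components jointly, together with labels and data-association weights.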

Participants should have a working knowledge of random variables, probability density functions, the Gaussian distribution, and concepts such as state-space models.

Presenters: Ba-Ngu Vo and Ba-Tuong Vo


Information Quality in Information Fusion and Decision Making

Designing fusion systems for decision support in complex dynamic situations requires fusion of a large amount of multimedia and multispectral information to produce estimates about objects and gain knowledge of the entire domain of interest. Data and information to be processed and made sense of includes but is not limited to data obtained from physical sensors (infrared imagers, radars, chemical, etc.), human intelligence reports, and information obtained from open sources (traditional such as newspapers, radio, TV as well as social media such as Twitter, Facebook, and Instagram).

The problem of building such fusion-based systems is complicated by the fact that data and information obtained from observations and reports, as well as information produced by both human and automatic processes, are of variable quality and may be unreliable, of low fidelity or insufficient resolution, contradictory, and/or redundant. Data can come from a broken sensor or a sensor improperly used in the environmental context. A message obtained from a human sensor can contain a human error or be intentionally sent to skew the information. Furthermore, there is often no guarantee that evidence obtained from the sources is based on direct, independent observations. Sources may provide unverified reports obtained from other sources (e.g., replicating information in social networks), resulting in correlations and bias. In a more malicious setting, some sources may coordinate to provide similar information in order to reinforce their opinion in the system. The fusion methods used can be insufficient to achieve the required rigor.

The success of decision making in a complex fusion-driven human-machine system depends on how well the knowledge produced by fusion processes represents reality, which in turn depends on how adequate the data are, how good the fusion model used is, and how accurate, appropriate, or applicable prior and contextual knowledge is.

The tutorial will discuss major challenges and some possible approaches addressing the problem of representing and incorporating information quality into fusion processes. In particular it will present an ontology of quality of information and identify potential methods of representing and assessing the values of quality attributes and their combination.  It will also examine the relation between information quality and context, and suggest possible approaches to quality control compensating for insufficient information and model quality.

Presenter: Galina Rogova


Machine and Deep Learning for Data Fusion

In this tutorial, I will present some techniques for fusion and analytics to process big centralized warehouse data, inherently distributed data, and data residing on the cloud. The broad range of artificial intelligence and machine and deep learning techniques to be discussed will handle both structured transactional and sensor data as well as unstructured textual data such as human intelligence, emails, blogs, surveys, etc., and image data. Specifically, the tutorial will explore Deep Fusion to solve multi-sensor big data fusion problems applying deep learning and artificial intelligence technologies.

As a background, this tutorial is intended to provide an account of both the cutting-edge and the most commonly used approaches to high-level data fusion and predictive and text analytics. The demos to be presented are in the areas of distributed search and situation assessment, information extraction and classification, and sentiment analyses.

Some of the tutorial materials are based on the following two books by the speaker: 1) Subrata Das. (2008). “High-Level Data Fusion,” Artech  House, Norwell, MA; and 2)  Subrata Das. (2014). “Computational Business Analytics,” Chapman & Hall/CRC Press.

Tutorial Topics include the following: High-Level Fusion, Traditional Machine Learning Algorithms, Popular Deep Learning Algorithms (e.g. Convolutional & Recursive Neural Networks, Deep Belief Networks and Restricted Boltzmann Machine, Stacked Autoencoder), Descriptive and Predictive Analytics, Text Analytics, Decision Support and Prescriptive Analytics, Cloud Computing, Distributed Fusion, Hadoop and MapReduce, Natural Language Query, Big Data Query Processing, Graphical Probabilistic Models, Bayesian Belief Networks, Distributed Belief Propagation, Text Classification, Supervised and Unsupervised Classification, Information Extraction, Natural Language Processing, Demos in R and Python.

Presenter: Dr Subrata Das


Multisensor-Multitarget Tracker/Fusion Engine Development and Performance Evaluation for Realistic Scenarios

While numerous tracking and fusion algorithms are available in the literature, their implementation and application on real-world problems are still challenging. Since new algorithms continue to emerge, rapidly prototyping them, developing for production and evaluating them on real-world (or realistic) problems efficiently are also essential. In addition to reviewing state-of-the-art tracking algorithms, this tutorial will focus on a number of realistic multisensor-multitarget tracking problems, simulation of large-scale tracking scenarios, rapid prototyping, development of high performance real-time tracking/fusion software, and performance evaluation on realistic scenarios. A unified tracker framework that can handle a number of state-of-the-art algorithms like the Multiple Hypothesis Tracking (MHT) algorithm, Multiframe Assignment (MFA) tracker and the Joint (Integrated) Probabilistic Data Association (J(I)PDA) tracker is presented. Modules for preprocessing (e.g., coordinate transformations, clutter estimation, thresholding, registration), data association (e.g., 2-D assignment, multiframe assignment, k-best assignment), filtering (e.g., Kalman filter, Interacting Multiple Model (IMM) Estimator, Unscented Kalman filter) and postprocessing (e.g., prediction, classification) are discussed. Fusion software with different architectures is also presented. Integration of sensors like radar, ESA, angle-only, PCL and AIS/ADS-B is demonstrated. Side-by-side performance evaluation of multiple algorithms using more than 30 metrics on realistic large-scale tracking scenarios is presented. A hands-on approach with ISR360, which is an end-to-end real-time software suite for Intelligence, Surveillance and Reconnaissance, will be the cornerstone of this tutorial.
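As one concrete example of the data-association modules mentioned above, 2-D assignment can be posed as a linear assignment problem over a track-to-measurement cost matrix. This is a generic illustrative sketch using SciPy with made-up costs, not the ISR360 implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Cost matrix: rows = tracks, columns = measurements
# (e.g., negative log-likelihoods of each track-measurement pairing)
cost = np.array([[1.0, 4.0, 5.0],
                 [3.0, 2.0, 6.0],
                 [7.0, 8.0, 2.5]])

# Optimal 2-D assignment: minimizes the total association cost
rows, cols = linear_sum_assignment(cost)
total = cost[rows, cols].sum()
# Here each track i is assigned to measurement i, with total cost 5.5
```

Multiframe assignment generalizes this to costs defined over several scans at once, which is what makes MFA trackers computationally harder.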

The topics will include Review of Bayesian state estimation, Multitarget tracking system architecture, Implementation of J(I)PDA/MHT/MFA trackers, Implementation of a multisensor fusion engine, Implementation of realistic simulators, Implementation of a track analytics engine, Performance evaluation of trackers (MOP/MOE), and Real-world examples.

Participants should have a basic knowledge of tracking and fusion concepts.

Presenter: Professor T Kirubarajan 


Multitarget Multisensor Tracking: From the Traditional to the Modern Distributed Approach

The tutorial aims to present a historical overview of multisensor multitarget tracking, from the traditional "divide & conquer" approach to a more modern approach based on the theory of random finite sets/Poisson point processes, and also to present the group's most recent research achievements related to distributed tracking over peer-to-peer sensor networks. The talk will initially describe the intertwined R&D activities, spanning several decades, between academia and industry in conceiving and implementing, on live surveillance systems, tracking algorithms for targets in civilian as well as defense and security applications. In this respect, we trace a path from the alpha-beta adaptive filter to modern random set filters, passing through the Kalman algorithm (in its many embodiments), Multiple Model filters, Multiple Hypothesis Tracking, Joint Probabilistic Data Association, and Particle filters. Then the presentation will focus on recent research achievements on distributed multitarget tracking over a peer-to-peer network consisting of the radio interconnection of multiple, possibly low-cost, devices with sensing, communication and processing capabilities. In this respect, fundamental issues such as distributed fusion, handling of different fields of view, and distributed sensor registration, together with their solutions, will be thoroughly investigated. Throughout the talk, applications to land, naval and airborne sensors will be mentioned. Further, active as well as passive radar experiences are overviewed.
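For reference, the alpha-beta filter mentioned above can be written in a few lines. This is a generic textbook sketch with assumed gains, not the presenters' implementation:

```python
def alpha_beta_step(x, v, z, dt=1.0, alpha=0.5, beta=0.1):
    """One step of the alpha-beta tracking filter.

    Predict position assuming constant velocity, then correct position
    and velocity with fixed gains on the measurement residual.
    """
    x_pred = x + v * dt          # constant-velocity prediction
    r = z - x_pred               # measurement residual (innovation)
    x_new = x_pred + alpha * r   # position correction
    v_new = v + (beta / dt) * r  # velocity correction
    return x_new, v_new

# Track a target moving at unit velocity from noise-free measurements
x, v = 0.0, 0.0
for k in range(1, 50):
    x, v = alpha_beta_step(x, v, z=float(k))
# The estimates converge to the true position (49.0) and velocity (1.0)
```

The Kalman filter can be seen as the special case in which the gains are computed from the noise covariances rather than fixed a priori.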

The description will take a balanced look at both theoretical and practical implementation issues, including mitigation of real-life system limitations.

Presenters: Giorgio Battistelli, Luigi Chisci and Alfonso Farina


Multi-Source and Multi-Modal Sensor Fusion Strategies and Implementations in the World of Autonomous Driving

This tutorial is going to provide insights on the following sections:

  • Sensor fusion levels and architectures for autonomous vehicles
  • Different environment perception data and representations
  • Object-, grid- and raw-data-oriented sensor fusion problems
  • Nitty-gritty details that play a vital role in real-life sensor fusion applications
  • Infrastructure-based sensor fusion

This tutorial is focused on the stringent requirements, foundations, development and testing of sensor fusion algorithms meant for advanced driver assistance functions, self-driving car applications in automotive vehicle systems, and vehicular-infrastructure-oriented sensor fusion applications. The audience will be provided with the presentation materials used in the tutorial.

The complex sensor world of autonomous vehicles is discussed in detail, and the different aspects of the sensor fusion problem related to this area are taken as one of the core subjects of this tutorial. In addition, a special discussion section is presented on a sensor fusion system designed to work on data obtained from environment perception sensors placed in an infrastructure such as a parking garage.

The audience will see the different representations of the surrounding environment as perceived by heterogeneous environment perception sensors, e.g., different kinds of radar (multi-mode radar, short range radar), stereo camera and lidar. The relevant state estimation algorithms, sensor fusion frameworks and evaluation procedures with reference ground truth are presented in detail. The audience will get a first glimpse of the data set obtained from a sensor configuration that would be used in future Mercedes-Benz autonomous vehicles.

A section on urban automated driving applications with the support of infrastructure sensing, distributed computing and cellular radio is introduced. After a brief overview of the overall system and its individual components, the hybrid fusion design of the overall environmental perception for an automated vehicle, comprising both onboard sensors and distributed environmental models delivered via cellular radio, is presented. Advantages and disadvantages of different fusion architectures for automated driving with support from infrastructure sensing are discussed, and the influence of cellular radio and overall system latency on the different approaches is presented. After a short discussion of possible approaches to incorporating a mixture of geo-referenced and vehicle-fixed sensor data into a fusion system, and of the effect of ego localization errors on estimation uncertainty, this section closes with an introduction to behavior generation for automated vehicles supported by environmental models received via cellular radio, and the limits of behavior generation without the support of distributed environment perception. A particularly interesting part of the tutorial covers the challenging and important real-world implementation problems and practical aspects of fusion and target tracking in the automotive setting, such as fusion with incomplete information, data association, sensor communication latency, real-world testing, and realistic simulation. Challenges in automated driving in highway and urban settings are discussed in detail throughout the tutorial. Research- and application-oriented discussion of centralized, decentralized and hybrid-distributed sensor fusion designs for autonomous driving is presented in depth, using results obtained from several real-world data sets containing various static and dynamic targets. Fusion and management of the different extended-target and static-object representations from heterogeneous information sources with different resolutions is presented with examples.

Presenters: Bharanidhar Duraisamy, Ting Yuan, Tilo Schwarz, Martin Fritzsche and Michael Gabb


Multitarget Tracking and Multisensor Information Fusion

Objectives: To provide participants with the latest state-of-the-art techniques to estimate the states of multiple targets with multisensor information fusion. Tools for algorithm selection, design and evaluation will be presented. These form the basis of automated decision systems for advanced surveillance and targeting. The various information processing configurations for fusion are described, including the recently solved track-to-track fusion from heterogeneous sensors.

Presenter:  Yaakov Bar-Shalom


Noise Covariance Matrices in State Space Models: Overview, Algorithms, and Comparison of Estimation Methods

Knowledge of a system model is a key prerequisite for many state estimation, signal processing, fault detection, and optimal control problems. The model is often designed to be consistent with random behaviour of the system quantities and properties of the measurements. While the deterministic part of the model often arises from mathematical modelling based on physical, chemical, or biological laws governing the behaviour of the system, the statistics of the stochastic part are often difficult to find by modelling and have to be identified using the measured data. An incorrect description of the noise statistics may result in a significant worsening of estimation, signal processing, detection, or control quality, or even in a failure of the underlying algorithms.

The tutorial covers the more than six-decade history, recent advances, and the state of the art of methods for estimating the properties (statistical description) of the stochastic part of the model, with special emphasis on the estimation of state-space model noise covariance matrices. The tutorial covers all major groups of noise statistics estimation methods, including correlation methods, maximum likelihood methods, covariance matching methods, and Bayesian methods. The methods are introduced in a unified framework highlighting their basic ideas, key properties, and assumptions. Algorithms of individual methods will be described and analysed to provide a basic understanding of their nature and similarities. The performance of the methods will also be compared using a numerical illustration.
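To give a flavour of the correlation-method family, consider a scalar random walk observed in white noise; the variance and lag-1 autocovariance of the measurement differences identify both noise variances. This is a generic illustrative sketch with simulated data, not the tutorial's code:

```python
import numpy as np

rng = np.random.default_rng(0)
q_true, r_true, n = 0.5, 2.0, 200_000

# Simulate a scalar random walk observed in white noise:
#   x[k+1] = x[k] + w[k],  w ~ N(0, q)
#   z[k]   = x[k] + v[k],  v ~ N(0, r)
w = rng.normal(0.0, np.sqrt(q_true), n)
v = rng.normal(0.0, np.sqrt(r_true), n)
x = np.cumsum(w)
z = x + v

# Correlation method on measurement differences d[k] = z[k+1] - z[k]:
#   Var(d)            = q + 2r
#   Cov(d[k], d[k+1]) = -r
d = np.diff(z)
var_d = np.mean(d * d)
cov_d1 = np.mean(d[:-1] * d[1:])
r_hat = -cov_d1
q_hat = var_d - 2.0 * r_hat
```

The general correlation methods discussed in the tutorial apply the same idea to the autocovariance sequence of Kalman filter innovations.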

The attendees will be provided with course notes and sample implementations of the selected methods.

Presenters: Ondrej Straka, Jindrich Dunik and Jindrich Havlik


Object Tracking, Sensor Fusion and Situational Awareness for Assisted- and Self-Driving Vehicles: Problems, Solutions and Directions

The automotive industry has been undergoing a major revolution in the last few years. Rapid advances have been made in assisted- and self-driving vehicles. As a result, vehicles have become more efficient and more automated. A number of automotive as well as technology companies are in the process of developing smart cars that can drive themselves. While totally self-driving cars are still in their infancy, some features like self-parking, proximity detection and lane identification have already made it into production in high-end vehicles. In spite of these recent developments, significantly more research is needed in order to perfect these nascent technologies and to make them ready for mass production. This provides the motivation for this tutorial.

In this tutorial, we aim to discuss a number of problems related to assisted- and self-driving vehicles, potential solutions and directions for research & development. The issues discussed in this tutorial will span multitarget tracking, multisensor fusion and situational awareness within the context of smart cars. We will also present some of the algorithms that are available in the open literature as well as those we have developed recently. In addition, we will also discuss related computational issues and sensor technologies. Finally, we will present some results on real data.

Presenter: Professor T Kirubarajan


Overview of High-Level Information Fusion Theory, Models, and Representations

Over the past decade, the ISIF community has put together special sessions, panel discussions, and concept papers to capture the methodologies, directions, needs, and grand challenges of high-level information fusion (HLIF) in practical system designs. This tutorial brings together the contemporary concepts, models, and definitions to give the attendee a summary of the state-of-the-art in HLIF. Analogies from low-level information fusion (LLIF) of object tracking and identification are extended to the HLIF concepts of situation/impact assessment and process/user refinement. HLIF theories (operational, functional, formal, cognitive) are mapped to representations (semantics, ontologies, axiomatics, and agents) with contemporary issues of modelling, testbeds, evaluation, and human-machine interfaces. Discussions with examples of search and rescue, cyber analysis, and battlefield awareness are presented. The attendee will gain an appreciation of HLIF through the topic organization from the perspectives of numerous authors, practitioners, and developers of information fusion systems. The tutorial is organized as per the recent text:

E. P. Blasch, E. Bosse, and D. A. Lambert, High-Level Information Fusion Management and Systems Design, Artech House, April 2012.

Presenter: Erik Blasch


Statistical Methods for Information Fusion System Design and Performance Evaluation

Information fusion systems find application in multiple domains, from defense to self-driving cars and autonomous systems. Irrespective of the application domain, the design of an information fusion system requires the evaluation of a multitude of design variables which have a direct impact on fusion system performance. These variables include various sensor types and attributes, multiple tracking and fusion algorithms (i.e., low-level information fusion (LLIF) considerations), and different situation assessment and resource management approaches (i.e., high-level information fusion (HLIF) considerations). It is imperative for fusion system designers to identify the significant design decisions in this large design space and subsequently quantify their impact on end fusion performance. Traditionally, the information fusion community has taken a partitioned approach of isolated design and evaluation of the various attributes of the fusion system, which assumes a lack of interactions between design decisions. However, in complex systems such as information fusion systems, interactions between system design variables continue to dominate the performance.

In this tutorial, a domain-agnostic framework, based on Design of Experiments, is presented which provides holistic performance evaluation of an information fusion system. This framework leverages systems engineering principles for identifying design variables which are then investigated by statistical methods (e.g., analysis of variance) for establishing statistical significance and quantifying their impact on fusion system performance. This tutorial will discuss theoretical foundations for performing design and analysis of experiments, followed by a hands-on information fusion system application example which can be transferred to domain-specific implementation in the participant’s area of interest. A refresher on Monte-Carlo simulations and hypothesis testing will also be provided. At the conclusion of the tutorial, the participants will be able to formulate an experimental design for the fusion system performance evaluation, employ hypothesis testing for comparing uncertain data, perform analysis of variance (ANOVA) to establish statistical significance of design variables and interactions, and perform multiple comparison range tests to quantify the impact of design variables and obtain sensitivity analysis of interactions for fusion system performance evaluation.
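As a minimal illustration of the statistical machinery involved (a generic sketch with made-up Monte Carlo data, not the tutorial's framework), a one-way ANOVA can test whether a design variable, here a choice among three hypothetical tracker configurations, has a statistically significant effect on a performance metric:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Hypothetical Monte Carlo performance metric (e.g., mean tracking
# error) for three candidate tracker configurations, 30 runs each
tracker_a = rng.normal(1.0, 0.2, 30)
tracker_b = rng.normal(1.0, 0.2, 30)
tracker_c = rng.normal(1.5, 0.2, 30)   # a genuinely worse configuration

# One-way ANOVA: does the configuration choice affect the metric?
f_stat, p_value = f_oneway(tracker_a, tracker_b, tracker_c)
# A small p-value indicates at least one configuration differs significantly
```

A significant ANOVA result would then be followed by multiple comparison range tests, as described above, to identify which configurations actually differ.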

Presenters: Dr Ali Raz and Dr Daniel DeLaurentis