
FUSION 2018

21st International Conference on Information Fusion - 10 - 13 July 2018
 

SS1 - Advanced Nonlinear Filters

Areas such as target tracking, positioning, navigation, sensor fusion, signal processing, and decision-making usually require the application of nonlinear filtering methods. These methods provide an estimate of a system state that is often not directly measurable. Their development started in the 1960s with the appearance of the Kalman filter. The first methods could only cope with linear system models, so nonlinear system models had to be linearized; satisfactory performance was therefore limited to system models with mild nonlinearities.

Advances in computing power made it possible to develop increasingly complex methods able to cope with even strongly nonlinear models. In contrast to the first methods, which were optimization-based, these modern methods follow the Bayesian approach to state estimation, which allows a more informative description of the estimate in the form of a probability distribution. These methods have subsequently been advanced to increase their efficiency, reduce their requirements and assumptions, and allow application in more general settings.
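
As a point of reference (standard textbook notation for the state x_k and measurements y_{1:k}, not taken from any particular session contribution), the Bayesian filtering recursion underlying these methods alternates a prediction step and a measurement-update step:

    \[
    p(x_k \mid y_{1:k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid y_{1:k-1})\, \mathrm{d}x_{k-1},
    \qquad
    p(x_k \mid y_{1:k}) \propto p(y_k \mid x_k)\, p(x_k \mid y_{1:k-1}).
    \]

Sigma-point, Gaussian-mixture, and particle filters can all be viewed as different approximations of this recursion.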

This special session focuses on recent advances in nonlinear filtering for both discrete- and continuous-time system models, covering areas such as sigma-point filtering, Gaussian filters, Gaussian-mixture filters, non-Gaussian filters such as Student’s-t filters, particle filters, homotopy-based estimation methods for continuous and discrete densities, comparisons of existing nonlinear filtering methods, and applications of nonlinear filters.

The proposed special session will bring together leading and active researchers and practitioners in the field of nonlinear filtering to present their recent accomplishments, exchange their latest experience, and explore future directions in this field. We believe that the subject of this special session is timely, important, and of wide interest to the information fusion community, particularly the participants of FUSION 2018.


SS2 - Advances in Distributed Kalman Filtering and Fusion

The rapid advances in sensor and communication technologies are accompanied by an increasing demand for distributed state estimation methods. Centralized implementations of the Kalman filter are often too costly in terms of communication bandwidth or simply inapplicable – for instance when mobile ad-hoc networks are considered. Compared with centralized approaches, distributed or decentralized Kalman filtering is considerably more elaborate. In particular, the treatment of dependent information shared by different systems is a key issue. Distributed state estimation is, in general, a balancing act between estimation quality and flexible network design. Although distributed implementations of the Kalman filter that provide optimal estimates are possible, these algorithms are not robust to packet delays and drops, node failures, and changing network topologies. In practice, these problems deserve careful attention and have to be addressed by future research.
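
To make the dependence problem concrete, one widely cited technique for fusing two estimates whose cross-correlation is unknown is covariance intersection. The Python sketch below is included only as an illustration of this one approach and is not a summary of the session's contributions.

    import numpy as np

    def covariance_intersection(x_a, P_a, x_b, P_b, omega):
        # Fuse two estimates (x_a, P_a) and (x_b, P_b) with unknown cross-correlation.
        # omega in [0, 1] trades off the two information matrices; it is often chosen
        # to minimize, e.g., the trace or determinant of the fused covariance P.
        Ia, Ib = np.linalg.inv(P_a), np.linalg.inv(P_b)
        P = np.linalg.inv(omega * Ia + (1.0 - omega) * Ib)
        x = P @ (omega * Ia @ x_a + (1.0 - omega) * Ib @ x_b)
        return x, P

The result is conservative by construction, which is the usual price paid for not knowing how the shared information is correlated.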


SS3 - Advances in Motion Estimation using Inertial Sensors

Accelerometers and gyroscopes (inertial sensors) measure the movement of the sensor in terms of its acceleration and angular velocity. These sensors are nowadays widely available not only in smartphones and VR/AR headsets but also in dedicated sensor units (inertial measurement units). Due to their small size, they can be placed non-intrusively on people and in devices. Measurements from mobile sensors carried by or placed on people, vehicles, and robots can be used to track or classify their movements. Due to technological advances, both the availability and the accuracy of these sensors have steadily increased over recent years, opening up many exciting applications. Since inertial measurements only give accurate position and orientation information over a short time scale, inertial sensors are typically combined with other sources of information, e.g., additional sensors or motion models. Challenges lie both in obtaining accurate sensor and motion models and in the choice and development of algorithms.
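
To illustrate why inertial-only estimates degrade over time, the toy example below (entirely hypothetical numbers) integrates a gyroscope with a small constant bias; the heading error grows linearly with time, which is the basic reason inertial sensors are fused with additional sensors or motion models.

    import numpy as np

    dt = 0.01                     # sample period [s]
    bias = np.deg2rad(0.5)        # constant gyroscope bias [rad/s] (illustrative value)
    heading = 0.0                 # integrated heading [rad]; the sensor is actually at rest
    for _ in range(6000):         # 60 s of data
        measured_rate = 0.0 + bias
        heading += measured_rate * dt
    print(np.rad2deg(heading))    # roughly 30 degrees of drift after one minute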

This special session, “Advances in Motion Estimation using Inertial Sensors”, features contributions describing recent developments in the use of inertial sensors, with a focus on motion estimation in general, calibration, and medical applications. Medical applications in particular have gained much attention recently and therefore deserve a forum at FUSION 2018.


SS4 - AI enabled FUSION for Federated Environments

In many real-world environments, sensor fusion must happen across systems that span multiple organizations. Examples of such organizations include military coalition networks, industrial consortiums, large enterprises formed through mergers & acquisitions, multi-agency cooperation in governments, and smarter cities. Because of policy differences, infrastructure differences, differences in the manner of collecting sensor data, and restrictions on sharing data, sensor fusion in such organizations requires addressing a unique set of challenges not seen when all information is within a single organization.

This session will look at the ways in which fusion issues in such environments are addressed, ranging from the infrastructure that is required for such fusion to the algorithms that operate within the distributed environments. The subject of analytics and AI in federated organizations is an active area of research; as an example, a long-term research alliance of universities, industry, and government labs in the U.S. and UK is actively working in this area (see the DAIS alliance, http://www.dais-ita.org). Given the amount of activity in this area, a special session will attract many attendees and provide a venue for researchers in the field to get together to review progress in the area.


SS5 - Context-based Information Fusion

The goal of the proposed session is to discuss approaches to context-based information fusion. It will cover the design and development of information fusion solutions that integrate sensor data with contextual knowledge.

The development of IF systems that include contextual factors and information offers an opportunity to improve the quality of the fused output, provide solutions adapted to the application requirements, and enhance tailored responses to user queries. Challenges for context-based strategies include selecting the appropriate representations, exploitation mechanisms, and instantiations. Context can be represented as knowledge bases, ontologies, geographical maps, etc., and forms a powerful tool for improving adaptability and system performance. Example applications include context-aided tracking and classification, situational reasoning, and ontology building and updating.

Therefore, the session covers both representation and exploitation mechanisms, so that contextual knowledge can be efficiently integrated into the fusion process and can enable adaptation mechanisms.


SS6 - Evaluation of Techniques for Uncertainty Reasoning

The ETUR session is intended to report the latest results of ISIF’s ETURWG, which aims to bring together advances and developments in the area of evaluation of uncertainty representation. The ETURWG special sessions started at FUSION 2010 and have been held ever since, with attendance consistently averaging between 40 and 48 participants. While most attendees are ETURWG participants, new researchers and practitioners interested in uncertainty evaluation have also attended the sessions, and some have joined the ETURWG.


SS7 - Extended Object and Group Tracking

Traditional object tracking algorithms assume that the target object can be modeled as a single point without an extent. However, there are many scenarios in which this assumption is not justified. For example, when the resolution of the sensor device is higher than the spatial extent of the object, a varying number of measurements from spatially distributed reflection centers is received. Furthermore, a collectively moving group of point objects can be seen as a single extended object because of the interdependency of the group members.
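
One common modeling assumption in this literature (a sketch of one option in the random-matrix spirit, with our notation, and not a prescription for session papers) is that the number of measurements per scan is Poisson distributed and each measurement is a noisy sample spread over the object extent:

    \[
    m_k \sim \mathrm{Poisson}(\gamma), \qquad
    z_{k,i} \sim \mathcal{N}\!\left(H x_k,\; X_k + R\right), \quad i = 1, \dots, m_k,
    \]

where x_k is the kinematic state, X_k describes the spatial extent of the object or group, and R is the sensor noise covariance.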

This Special Session addresses fundamental techniques, recent developments and future research directions in the field of extended object and group tracking.


SS8 - Forty Years of Multiple-Hypothesis Tracking

Multiple-hypothesis tracking (MHT) is a leading paradigm for multi-target tracking (MTT) that addresses difficult data association problems. It gained prominence with the seminal work of Donald Reid, first published at the IEEE Conference on Decision and Control in 1978. In the past 40 years, significant progress has been achieved, including the track-oriented formulation, relaxation solution approaches, distributed MHT, generalizations for merged and repeated measurement models, and graph-based extensions. In this session, we encourage submissions that provide a unifying view of MHT developments and that discuss new advances. Of particular interest are contributions that establish connections between the MHT paradigm and alternative approaches to the MTT problem.


SS9 - Indoor Positioning

Location-based services have seen a surge in popularity with applications such as virtual and augmented reality, asset tracking, and targeted advertising. Such services rely on accurate knowledge of the user’s position. While global navigation satellite systems (GNSS) such as the Global Positioning System (GPS) are capable of accurately determining the user’s position outdoors, accurate localization inside buildings and other areas without GNSS coverage is challenging. The field of indoor positioning and indoor navigation has therefore emerged as an active research area, and a rich variety of indoor positioning and localization techniques have been developed. Such methods are often based on the fusion of different sensor modalities such as received signal strength measurements (e.g., from Bluetooth beacons or WiFi access points), time-difference measurements (e.g., using ultra-wideband communications), or signatures of local variations of the magnetic field. In combination with ubiquitously available inertial sensors and inertial odometry, along with optical and barometric measurements, these approaches have shown promising results for accurate indoor positioning.
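
As one concrete example of such a modality (a standard model, with symbols defined here rather than taken from the session), received signal strength is often related to the distance d between user and access point through a log-distance path-loss model,

    \[
    \mathrm{RSS}(d) = \mathrm{RSS}_0 - 10\, n \, \log_{10}\!\frac{d}{d_0} + v, \qquad v \sim \mathcal{N}(0, \sigma^2),
    \]

where RSS_0 is the strength at a reference distance d_0 and n is the path-loss exponent; a position estimate is then obtained by fusing several such noisy range-like measurements.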

The aim of this special session is to bring together researchers from academia and industry to present their latest developments in this area. Topics of interest for this special session include infrastructure-based and infrastructure-free indoor positioning methods, simultaneous localization and mapping (SLAM) for indoor environments, trajectory learning, and inertial navigation for indoor positioning applications.


SS10 - Information Fusion in Multi-Biometrics and Forensics

This session will focus on the latest innovations and best practices in the emerging field of multi-biometric fusion. Biometrics builds identity recognition decisions on the physical or behavioral characteristics of individuals. Multi-biometrics aims to outperform conventional biometric solutions by increasing accuracy and robustness to intra-person variations and noisy data. It also reduces the effect of the non-universality of biometric modalities and the vulnerability to spoof attacks. Fusion is performed to build a unified biometric decision based on the information collected from different biometric sources. This unified result must be constructed in a way that guarantees the best possible performance and takes into account the efficiency of the solution.
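
As a minimal illustration of one possible fusion level (score-level fusion; the matchers, score ranges, weights, and threshold below are purely hypothetical), individual matcher scores can be normalized and combined with a weighted sum before a decision is taken:

    import numpy as np

    def fuse_scores(scores, score_ranges, weights):
        # Min-max normalize each matcher's score to [0, 1], then take a weighted sum.
        normalized = [(s - lo) / (hi - lo) for s, (lo, hi) in zip(scores, score_ranges)]
        return float(np.dot(weights, normalized))

    fused = fuse_scores(scores=[72.0, 0.81],            # e.g. face and fingerprint matchers
                        score_ranges=[(0, 100), (0, 1)],
                        weights=[0.4, 0.6])
    accept = fused > 0.7                                # illustrative decision threshold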

The topic of this special session, Information Fusion in Multi-Biometrics and Forensics, requires the development of innovative and diverse solutions. Those solutions must take into account the nature of the biometric information sources as well as the level of fusion suitable for the application at hand. The fused information may also include more general, non-biometric information such as the estimated age of the individual or the background environment.

This special session will be supported by the European Association for Biometrics (EAB) and the Center for Research in Security and Privacy (CRISP). This collaboration will provide technical support by recruiting experts for reviews and will help with the dissemination and exploitation of the event.


SS11 - Intelligent Information Fusion and Data Mining for Tracking

Research on intelligent systems for information fusion and data mining has matured during the past years, and many effective applications of this technology are now deployed, such as wearable computing, intelligent surveillance, smart city/home care, smart grids, web tracking, and network management. The rapid development of modern sensors and their application to distributed networks provide a foundation for new paradigms to combat the challenges that arise in target detection, tracking, trajectory forecasting, and sensor fusion in harsh environments with poor prior information. For example, the advent of large-scale/massive sensor systems provides very informative observations, which facilitates novel perspectives based on data clustering and model learning to deal with false alarms and misdetection, given little knowledge about the objects, sensors, and background. Sensor data fitting and regression analysis provide another powerful means to utilize unstructured context information such as “the trajectory is smooth” for continuous-time trajectory estimation and forecasting. As such, the sensor community is interested in novel information fusion and data mining methods coupled with traditional statistical techniques for substantial performance enhancement, especially for challenging problems where traditional approaches are inappropriate.
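
As a toy illustration of this regression view (hypothetical data, not a method proposed by the session), a smooth continuous-time trajectory can be fitted to noisy position samples by least squares and then evaluated at any time instant, including future ones:

    import numpy as np

    t = np.linspace(0.0, 10.0, 50)                                      # measurement times [s]
    pos = 2.0 + 1.5 * t - 0.1 * t**2 + 0.3 * np.random.randn(t.size)    # noisy position samples
    coeffs = np.polyfit(t, pos, deg=2)                                  # low degree encodes "the trajectory is smooth"
    forecast = np.polyval(coeffs, 12.0)                                 # extrapolated position at t = 12 s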

This special session aims to assemble and disseminate information on recent, novel advances in intelligent systems, information fusion, and sensor data mining techniques and approaches, and to provide a forum for continued discussion of future developments. Both theoretical and practical approaches to the problems in this area are welcome.


SS12 - Multi-layered fusion processes: exploiting multiple models and levels of abstraction for understanding and sense-making

The exploitation of all relevant information originating from a growing mass of heterogeneous sources, both device-based (sensors, video, etc.) and human-generated (text, voice, etc.), is a key factor in producing a timely, comprehensive, and accurate description of a situation or phenomenon in order to make informed decisions. Even when exploiting multiple sources, most fusion systems are developed to combine just one type of data (e.g. positional data) in order to achieve a certain goal (e.g. accurate target tracking), without considering other relevant information (e.g. the current situation status) from other abstraction levels.

The result of single-layer processing is often a set of stove-piped systems, each dedicated to a single fusion task with limited robustness. This is caused by the lack of an integrative approach that processes sensor data (low-level fusion) and semantically rich information (high-level fusion) in a holistic manner, thereby implementing a multi-layered processing architecture and fusion process.

Processes at different levels generally work on data and information of different nature. For example, low level processes could deal with device-generated data (e.g. images, tracks, etc.) while high level processes might exploit human-generated knowledge (e.g. text, ontologies, etc.).

The overall objective is to enhance the sense-making of the information collected from heterogeneous sources and multiple processes for improved situational awareness and intelligence.

The proposed special session will bring together researchers working on fusion techniques and algorithms that are often considered different and disjoint. The objective is thus to foster discussion and to propose viable multi-layered fusion solutions that address challenging problems in relevant applications.


SS13 - Multi-Sensor Data Fusion for Navigation and Localisation

During the past decades, navigation and localisation have been gaining increasing attention in both military and civil applications. The rapid development of advanced sensors has opened up the possibility of acquiring large amounts of data with different attributes for navigation and localisation systems. To fuse multi-sensor data, many challenges have to be addressed, such as sensor models, noise models, nonlinearity of the system dynamics, and advanced fusion algorithms. This special session will address these problems by focusing on fundamental theory and techniques, recent research achievements, and future research trends in the field of navigation and localisation.


SS14 - Novel Information Fusion Methodologies for Space Domain Awareness

The operation of Earth-orbiting spacecraft has become increasingly difficult due to the proliferation of orbital debris and increased commercialization. This has been made evident by several collisions involving operational spacecraft over the past several years. Maintaining the sustainability of key orbit regimes, e.g. low-Earth, sun-synchronous, and geosynchronous orbits, requires improved tracking and prediction of up to hundreds of thousands of objects given measurements that are sparse in both space and time. Target identification and classification allow for better prediction and awareness. Moreover, proper characterization of measurement assignments, as well as the determination of measurement associations for maneuvering targets, plays a pivotal role in successful space situational awareness. Solutions to the problem will be interdisciplinary and require expertise in astrodynamics, computational sciences, information fusion, applied mathematics, and many other fields.

The primary goal of this session is to promote interaction between the astrodynamics and space domain awareness community and those conducting research in information fusion and multi-target tracking. A secondary goal is to gather individuals performing research on the associated topics to present, discuss, and disseminate ideas related to solving the detection, tracking, identification, and classification problems in the context of space domain awareness.


SS15 - Physics-based and Human-derived Information Fusion

In the recent past, situation awareness information has largely been derived from ‘physics-based’ sensors (e.g. radar, electro-optic cameras). Additionally, significant situation awareness has been derived from the exploitation of information from ‘human-based’ sources. Human-based sources are typically less precise, more categorical, and richer in content than physics-based sensors and are therefore typically analyzed separately. Established situation reporting systems are being augmented with new human-based sources of data (for example social media, free text, and webpages), and the proliferation of these sources leads to a large amount of additional information that could potentially be fused with that from physics-based sensors. This motivates the need for automated fusion methods that can process and mitigate the enormous volume of unstructured data, which could otherwise overwhelm human analysts. The fusion of human-based and physics-based sources will enhance situation awareness for various tasks, such as target/object detection and tracking, detection of deception and security threats, and surveillance of patterns. This session will focus on methods to fuse data sourced from physics-based sensors with human-based sources of information. More broadly, papers in this session will cover practical methods of fusing data from disparate sources to enable better situation awareness.


 

SS17 - Remote sensing data fusion

Remote sensing data fusion has gained great importance along with the explosion of data available from various remote sensing sensors and platforms. As a special application field aimed at Earth observation, remote sensing data include multi-temporal and multi-modal images of the land surface. On the one hand, individual algorithms are developed according to the image geometry and the employed sensors; with the development of the latest remote sensing image acquisition techniques, remote sensing data differ in many aspects, such as geometric and spectral resolution and the imaging principle. On the other hand, besides remote sensing images, geospatial data, GIS, social media, and other statistical datasets can be adopted and integrated into a proper workflow to solve land cover object recognition problems.

This session will focus on four topics: 1) co-registration of multi-source data; 2) high-performance big data fusion; 3) pixel-, feature-, and decision-level fusion techniques for remote sensing data; 4) data fusion applications in the fields of monitoring natural resources, natural hazards, and security.


SS18 - Semi‐supervised/unsupervised learning–based state estimation

This session is concerned with semi-supervised and unsupervised learning methods for the efficient solution of dynamic system state estimation problems occurring in information fusion and filtering. As an important component in navigation, robotics, object tracking, and many other current research fields, state estimation is generally based on a state-space model (SSM), which employs a hidden Markov model (HMM) to describe the object’s dynamics and an observation function to link the observed variables to the hidden states. Indeed, the state estimation problem can be understood and classified as supervised learning in the machine learning sense, because it essentially labels the state by the HMM to characterize the state’s dynamic evolution and then performs a linear or nonlinear regression between the state estimate and the measurement. The Markov–Bayes framework (MBF) and its variations, such as the Kalman filter (KF), the Gaussian filter (GF), and the particle filter (PF), are the most popular supervised estimation methods; they rely on Bayes’ rule, or on numerical approximations of it, to achieve this regression (i.e. the Bayesian state update equation). However, these filters can suffer from unexpected and undesirable accuracy deterioration, or even divergence, when the HMM poorly or even wrongly describes the object’s motion, which is to say that the HMM wrongly labels the state dynamics. To cope with this challenging problem, a well-established technique is to base the state estimation on semi-supervised or unsupervised learning instead of supervised learning.

For this session, manuscripts are invited that cover any aspect of semi-supervised or unsupervised methods for state estimation, such as expectation maximization, Bayesian inference, or k-means. This includes both theoretically oriented work and applications of known methods. In particular, the session also strongly invites manuscripts that use machine learning, deep learning, or artificial intelligence to efficiently solve state estimation problems in information fusion and filtering.
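
For concreteness, the sketch below shows the linear-Gaussian special case of this supervised baseline: a single Kalman filter recursion on a state-space model with assumed-known matrices F, H, Q, R (our notation, not the session's).

    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        # Prediction: the (assumed) hidden Markov dynamics of the state-space model.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update: regress the predicted state onto the new measurement z via Bayes' rule.
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

When F or Q poorly describe the true motion, this recursion degrades, which is exactly the model mismatch that the semi-supervised and unsupervised approaches above aim to address.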


SS19 - Lower Bounds for Parameter Estimation and Beyond

The field of estimation performance bounds has a long history. Perhaps the most prominent example is the Cramer-Rao lower bound (CRLB), which nowadays finds widespread use. Even though the CRLB itself is well established, there are many emerging areas in which it has not yet been evaluated. Besides the CRLB, there are other bounds that are often tighter, i.e. they better predict the estimation performance, such as the Barankin bound or the Weiss-Weinstein bound; these are often more difficult to compute but have recently attracted considerable interest in the research community.
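
For reference (standard notation, defined here rather than taken from the session), the CRLB states that the covariance of any unbiased estimator of a parameter is bounded from below by the inverse Fisher information matrix:

    \[
    \mathrm{Cov}(\hat{\theta}) \succeq J^{-1}(\theta), \qquad
    J(\theta) = \mathbb{E}\!\left[ \left( \frac{\partial \ln p(y;\theta)}{\partial \theta} \right)
    \left( \frac{\partial \ln p(y;\theta)}{\partial \theta} \right)^{\!\top} \right].
    \]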

This special session aims at bringing together experts in the field of estimation performance bounds to discuss the newest research results in this area. Of particular interest are developments of novel bounds, such as Bayesian bounds, non-Bayesian bounds, hybrid bounds, and misspecified bounds, as well as new results for the CRLB with application to, for instance, target tracking, sensor networks, aerospace, or localization.


SS20 - Towards a Battlefield IoT: Information Challenges and Solutions

Recent directions in military thinking support the development of a new system architecture for the delivery of battlefield services. Such an architecture might revolutionize the smart battlefields of the future the way the Internet of Things revolutionized smart homes, cities, and civilian services (e.g., transportation, agriculture, and energy management). It is called the Internet of Battlefield Things (IoBT). The development of effective IoBTs, however, poses significant challenges. Unlike civilian IoTs, IoBTs will usually operate in contested adversarial environments. They will need to support diverse missions, tasks, and goals. They will include highly heterogeneous nodes, from humans to high-energy weapons and from cloud services to embedded resource-limited devices. They will carry sensitive battlefield information and execute mission-critical software. Protecting information flow in a robust manner from failures, attacks, and outages becomes a paramount concern, and improving the quality of information delivery in the face of disruptions becomes an important consideration.

This session discusses the challenges that emerge in information flow management in the context of the envisioned IoBTs and outlines emerging research on potential solutions. It explores issues of securing the IoBT information workflow against side-channel attacks [1]. It offers new solutions for robust workflow analytics in adversarial environments where resources can be lost suddenly [2], describes how to autonomously manage the workflow in a command-by-intent fashion [3], offers middleware solutions for effective handling of information semantics [4], and describes example applications such as the reconstruction of unknown environments from mobile sensor data [5]. Finally, it describes ideas for incentivizing (e.g., grey) nodes to deliver high quality information [6]. The session hopes to establish a forum for discussing innovative ideas and techniques that might lead to the next generation of battlefield information technologies, revolutionizing future warfare and maximizing the readiness, agility, and efficacy of military operations while reducing cost and collateral damage.


SS21 - Uncertainty, Trust and Deception in Information Fusion

The collection, fusion, and analysis of information is challenged not only by uncertain or incomplete information but also by deliberately false and deceptive information. Many different approaches can be taken to prevent or mitigate the risk of being influenced, drawing conclusions, and making decisions based on uncertain, false, or deceptive information. Possible approaches include trust discounting and fusion, belief revision, behaviour analysis, reputation analysis, and determining the minimum number of reliable source nodes required to significantly reduce the impact of false information; many other methods and approaches can also be used. In a world of fake news and alternative facts, these topics urgently need the attention of the research community.


SS22 – Big Data Fusion and Analytics

Big data has tremendous potential to transform private sector businesses and defence organizations with valuable strategic information, actionable intelligence, and patterns. But volume, veracity, and velocity, the three major aspects that typically characterize a big data environment, pose significant challenges in searching, processing, and extracting intelligence for strategic situational awareness and decision support.

This special session will focus on fusion and analytics (two sides of the same coin!) to process big centralized data, inherently distributed data, and data residing on the cloud. The fusion and analytics techniques to be covered will handle structured and/or unstructured data. Structured data refers to computerized information which can be easily interpreted and used by a computer program supporting a range of tasks. For example, the information stored in a relational database is structured, whereas text documents, videos, and images are usually unstructured.

Potential authors from both academia and industry are encouraged to submit papers on the following topics - High-level big data situation and threat assessment; Descriptive and predictive big data analytics; Text analytics of unstructured HUMINT, textual blogs, emails, surveys, etc., via deep linguistics processing; Cloud computing for fusion and analytics using relevant platforms and paradigms such as Hadoop, MapReduce, and Accumulo, and scripting languages such as R and Python; Big data search and query in traditional and NoSQL environments; Distributed fusion and analytics for inherently distributed big data; Deep learning of neural and belief network models; Tracking and resource management using big data; Prescriptive analytics and decision support in the presence of big data; Sampling-based approaches to fusion and analytics in big data environments; Distributed model-based computation; Data mining and knowledge discovery in big data residing in clouds and warehouses; Extraction of structured information from textual and other types of unstructured data; Handling of uncertainty in big data environments.

The session aims to be both practical and discussion-oriented. Demos are welcome.


SS23 – Directional Estimation

Many estimation problems of practical relevance include the problem of estimating directional quantities, for example angular values or orientations. However, conventional filters like the Kalman filter assume Gaussian distributions defined on R^n. This assumption neglects the inherent periodicity present in directional quantities. Consequently, more sophisticated approaches are required to accurately describe the circular setting.
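
A small numerical example (ours, purely for illustration) shows why the periodicity matters: the arithmetic mean of the angles 359° and 1° is 180°, whereas the circular mean computed via unit vectors gives the intuitive answer of 0°.

    import numpy as np

    angles = np.deg2rad([359.0, 1.0])
    naive_mean = np.rad2deg(np.mean(angles))                  # 180.0 degrees -- misleading
    circular_mean = np.rad2deg(np.arctan2(np.mean(np.sin(angles)),
                                          np.mean(np.cos(angles))))   # approximately 0 degrees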

This Special Session addresses fundamental techniques, recent developments, and future research directions in the field of estimation involving directional and periodic data. It is our goal to bridge the gap between theoreticians and practitioners. Thus, we welcome both applied and theoretic contributions to this topic.

Topics of interest for the session include - Estimation of circular or directional quantities; Combination of periodic and linear quantities, e.g., for 6 DOF pose estimation; Circular and directional statistics; Statistics on the rotation groups SO(2) and SO(3), the Euclidean groups SE(2) and SE(3), and other manifolds; Recursive and batch filtering in a periodic setting; Applications: tracking, robotics, medicine, biology.


SS24 - Sensor, Resources, and Process Management for Information Fusion Systems

Advancements in communication, information, and sensor technologies are driving a trend toward complex, adaptive, and reconfigurable sensor systems. Such a sensor system can have a large scope for online reconfiguration, which typically exceeds the management capability of a human operator. In addition, the sensor system can face a variety of fundamental resource limitations, such as a limited power supply, a finite total time budget, a narrow field of view, limited on-board processing capability, or constraints on the communication channels between the sensor nodes. Consequently, effective sensor scheduling and resource management is a key performance factor for the emerging generation of adaptive and reconfigurable sensor systems.

In the case of stationary sensors, it is usually desirable to schedule measurements so as to maximize the benefit with respect to the objectives of the sensor system, whilst avoiding redundant measurements. This benefit can be quantified by an appropriate metric, for example a task-specific metric, information gain, or utility. For mobile sensors it is also necessary to consider the sensor platform navigation (including its uncertainties), as the sensor-scenario geometry can significantly affect performance, e.g., for coordinated exploration in disaster areas.
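
One standard way to formalize this benefit (one option among the metrics mentioned above; notation ours) is to pick the sensing action a that maximizes the expected reduction in uncertainty about the state x, i.e., the mutual information between x and the anticipated measurement z_a:

    \[
    a^{\ast} = \arg\max_{a} \; I(x; z_a)
             = \arg\max_{a} \; \Big( H\big(p(x)\big) - \mathbb{E}_{z_a}\!\big[ H\big(p(x \mid z_a)\big) \big] \Big).
    \]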

Current research in this area involves innovative approaches to active sensing, sensor lifetime maximization, reduction of redundant measurements, and optimization of trajectories for area coverage.


SS25 - Situational Understanding Through Equivocal Sources

In contrast to traditional sensing sources, the proliferation of soft information sources, especially multimodal social media, has made them a viable medium for obtaining insights about events and their evolution in the environment. Fusing soft information with traditional sensing sources could improve the situational understanding of decision makers, thus enabling them to make informed decisions in rapidly changing, complex environments. However, the equivocal nature of such sources makes decision-making challenging, especially in critical situations where information reliability plays a key role.

Thus, the aims of the special session are as follows: (a) to discuss how different strains of information can be processed, analysed, and combined to model the equivocality in information; (b) to investigate how such models can be exploited to improve the credibility and reliability of the fused information; and (c) to explore frameworks that combine such information to assist decision makers, be they central or edge users.

The topics of interest include - Modelling equivocality in social, physical, participatory, and pervasive sensing for situational understanding; Modelling credibility and relevancy in information and sources; Distributed and multimodal fusion for insight generation; Machine learning with heterogeneous data sources; Detecting and reasoning about conflicts in information; Knowledge discovery, representation, and reasoning for insights; Computation at the edge to increase local understanding; Fusion using belief models; Data-to-decision frameworks for situational understanding.


SS26 - Autonomous Driving

Autonomous driving poses unique challenges for vehicle sensor fusion systems in complicated driving environments. In 2009, Google first announced an initiative to develop self-driving cars. Since then, their autonomous driving vehicles have already covered more than two million miles, and every day their simulators drive three million more. By October 2016, 19 companies had announced plans for autonomous vehicles to be available in the next three to five years. According to a recent study released by Intel (http://fortune.com/2017/06/03/autonomous-vehicles-market/), there will be a $7 trillion self-driving future by 2050. The rapid progress of self-driving vehicle development demands a workforce that is well prepared for the technology. This brings an exciting opportunity for our international society of information fusion: the vehicle sensor fusion system is now widely considered the most important component of an autonomous driving system and a bottleneck technology for achieving Level 4/5 autonomous driving.

This special session is intended to inspire sensor fusion researchers and engineers to apply the corresponding theory and knowledge to push autonomous driving to a higher level.