Introduction
Sensors can be found in almost every area: healthcare, agriculture, defence, smartphones, industry, weather monitoring, and more. Because real-world phenomena involve a large number of features, data acquired by a single sensor may not support an accurate analysis. Each sensor has its own pros and cons, and by combining the data from several sensors we can reduce errors and obtain more complete information. This is where sensor fusion comes into play. Sensor fusion means combining the data from multiple sensors for interpretation and analysis, so that the result carries less uncertainty than the result obtained from any individual sensor.
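To make the uncertainty reduction concrete, here is a minimal sketch of one classic fusion rule, the inverse-variance weighted average of two independent readings of the same quantity. The fused variance is always lower than either sensor's own variance. The sensor values and variances below are illustrative assumptions, not measurements from a real system.

```python
def fuse(m1, var1, m2, var2):
    """Inverse-variance weighted fusion of two independent measurements.

    The fused variance 1 / (1/var1 + 1/var2) is smaller than either
    input variance, which is the core promise of sensor fusion.
    """
    fused_var = 1.0 / (1.0 / var1 + 1.0 / var2)
    fused_mean = fused_var * (m1 / var1 + m2 / var2)
    return fused_mean, fused_var

# Illustrative numbers: two temperature sensors reading the same room.
mean, var = fuse(21.3, 0.4, 22.1, 0.9)  # sensor A is the more precise one
print(f"fused estimate: {mean:.2f} C, variance: {var:.2f}")
```

The fused estimate lands between the two readings, pulled towards the more precise sensor, with a variance (about 0.28) below both inputs.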
Sensor fusion technology is widely used in areas such as biomedical systems for clinical diagnosis, military navigation, and autonomous driving. For example, estimating the location of an indoor object is quite challenging, as it is difficult to achieve a high level of positioning accuracy. By combining data received from different sources, such as Wi-Fi signals and video cameras, one can achieve a more accurate location estimate.
The concept of fusion is derived from nature itself. For example, humans combine information from all five sensory organs (eyes, ears, nose, skin, tongue) to perceive the environment and act accordingly. Animals likewise analyse their environment by receiving and combining signals from multiple sources. In the past few years, this idea has been applied so widely in technical systems that it now constitutes a discipline of its own: sensor fusion.
Types of Sensor Fusion
In sensor fusion, there are three types of configuration: competitive, complementary, and cooperative.

Competitive
In competitive sensor fusion, each sensor provides an independent measurement of the same property. There are two forms of this configuration: one fuses data from two different sensors, and the other fuses measurements taken by a single sensor at different time intervals. Competitive configurations are used mainly for fault detection, and the redundancy they provide makes the system more robust.
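As an illustrative sketch (not part of the original text), competitive fusion of redundant readings can be done with a simple median vote: the consensus value is robust to a single faulty sensor, and any sensor far from the consensus can be flagged. The tolerance value is an assumption chosen for the example.

```python
from statistics import median

def competitive_fuse(readings, tolerance=1.0):
    """Fuse redundant readings of the same property by median voting.

    Returns the consensus value and the indices of sensors whose
    readings deviate from it by more than `tolerance` (suspected faults).
    """
    consensus = median(readings)
    suspects = [i for i, r in enumerate(readings)
                if abs(r - consensus) > tolerance]
    return consensus, suspects

# Three redundant pressure sensors; the third one has drifted.
value, faults = competitive_fuse([101.2, 101.4, 94.8])
print(value, faults)  # 101.2 [2]
```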
Complementary
In complementary sensor fusion, the sensors do not depend directly on each other; however, the data from each sensor can be merged to obtain complete information about the environment being evaluated. Individually, these sensors may not provide complete information, so this configuration tackles the incompleteness of the individual sensors. One example is using multiple cameras to capture images of the same object, each focusing on distinct features, to build a complete picture. Another is combining data from vibration sensors and rotational-speed sensors to analyse the condition of a motor and its gearbox more accurately. A complementary configuration is relatively easy to design: because the sensors are independent of each other, their data can simply be appended.
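Because the sensors are independent, a complementary fusion step can be as simple as merging their fields into one record. The sketch below is a hypothetical illustration of the vibration-plus-speed example above; the field names and readings are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class MotorSnapshot:
    """One combined observation built from two independent sensors."""
    vibration_rms_mm_s: float  # from the vibration sensor
    speed_rpm: float           # from the rotational-speed sensor

def complementary_fuse(vibration_reading, speed_reading):
    # Complementary fusion: the sensors measure different properties,
    # so merging them is a simple concatenation of their fields.
    return MotorSnapshot(vibration_rms_mm_s=vibration_reading,
                         speed_rpm=speed_reading)

snapshot = complementary_fuse(2.8, 1480.0)
# Downstream analysis can now use both fields together, e.g. high
# vibration at a given speed may point to a gearbox problem.
print(snapshot)
```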
Cooperative
In cooperative sensor fusion, information from two independent sensors is combined to derive a result that cannot be obtained from any single sensor. An example is stereoscopic vision, in which two-dimensional images taken by two cameras at different angles are combined to generate a three-dimensional image. This configuration is the most difficult to design, because the derived result is sensitive to inaccuracies in every individual sensor, which can reduce reliability and accuracy.
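To see why the cooperative case yields something neither sensor measures alone, consider the standard stereo-triangulation relation Z = f * B / d, where f is the focal length, B the baseline between the cameras, and d the disparity between the two images. The sketch below uses illustrative camera parameters, not values from the article.

```python
def stereo_depth(disparity_px, focal_length_px, baseline_m):
    """Depth from a rectified stereo pair: Z = f * B / d.

    Neither camera alone can measure depth; it emerges only from
    combining the two views, which is what makes this cooperative
    fusion. Note how a one-pixel disparity error shifts the depth,
    illustrating the sensitivity to individual-sensor inaccuracy.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_length_px * baseline_m / disparity_px

# Illustrative parameters: 700 px focal length, 12 cm camera baseline.
print(stereo_depth(35.0, 700.0, 0.12))  # 2.4 m
print(stereo_depth(34.0, 700.0, 0.12))  # ~2.47 m after a 1 px error
```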
Applications of Sensor Fusion
Sensor fusion technology can be used in smart healthcare systems, where data from wearable IoT devices is fused for a faster and more accurate diagnosis. Recently, sensor fusion concepts have also been deployed in military tools such as weapon systems and systems for enemy identification and target detection.
Earlier, combat vehicles used optical or imaging sensors to assist soldiers and crew in scanning for targets. This, however, is of little help against hidden enemies. With sensor fusion techniques, multiple sensors operating in different wavebands (visual, acoustic, and thermal) can be combined to provide more accurate information about a threat. In agriculture, estimating the quality parameters of the soil is one of the most important tasks. Visual images alone provide information only about the superficial layer of the soil; combining data from digital cameras and LIDAR to measure soil microtopography leads to a better analysis.
Conclusion
Sensor fusion technology in IoT has recently gained tremendous popularity. With advances in microtechnology, sensors keep getting smaller and can therefore be embedded easily in IoT devices. Deploying IoT devices with sensor fusion techniques helps achieve faster and more reliable solutions.