A review of multimodal data fusion: applications and adverse conditions
Daniel Fernando Quintero Bernal
Universidad de Santiago de Chile (USACH)
Héctor Kaschel
Universidad de Santiago de Chile
John Kern
Universidad de Santiago de Chile
DOI: https://doi.org/10.17981/ingecuc.21.1.2025.08
Keywords: Adverse conditions, heterogeneous data, data fusion, multimodal, perturbations, noise
Abstract
Multimodal data fusion is a research field that combines information from diverse sources, each with its own modalities. One of the most prominent challenges in data fusion is coping with the adverse conditions that arise in real-world deployments. This article presents a review of data fusion, providing a comparative analysis based on a literature search spanning several application domains that address the challenges of data heterogeneity and adverse conditions. The focus of this paper is to establish a solid foundation for future research in the field. The review shows that nearly half of the analyzed papers describe adverse conditions, but only a minority analyze how their techniques handle noise or benefit from considering these conditions. The number of modalities used in these studies is generally low, and most of them employ static 1D data. Finally, to advance the field, it is essential that researchers continue to develop an interdisciplinary vocabulary and consider applications in real-world environments with adverse conditions. It is also important to explore applications with higher-dimensional data and more modalities, which could provide valuable insight for addressing specific challenges in data fusion.
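To make the idea of fusing heterogeneous, noise-corrupted modalities concrete, the minimal Python sketch below (illustrative only, not a method from the reviewed literature) fuses two simulated 1D sensor streams with different noise levels by inverse-variance weighting, a classic data-fusion baseline; the signal, noise levels, and variable names are all hypothetical.

```python
import numpy as np

# Illustrative sketch only: inverse-variance weighted fusion of two noisy
# 1D modalities observing the same quantity (hypothetical signal and noise).
rng = np.random.default_rng(42)
true_signal = np.sin(np.linspace(0, 2 * np.pi, 100))  # assumed ground truth

# Two modalities with different (adverse) noise levels.
sigma_a, sigma_b = 0.1, 0.3
modality_a = true_signal + rng.normal(0.0, sigma_a, true_signal.shape)
modality_b = true_signal + rng.normal(0.0, sigma_b, true_signal.shape)

# Inverse-variance weights: the noisier modality contributes less.
w_a, w_b = 1.0 / sigma_a**2, 1.0 / sigma_b**2
fused = (w_a * modality_a + w_b * modality_b) / (w_a + w_b)

# The fused estimate should have lower RMSE than either modality alone.
for name, est in [("A", modality_a), ("B", modality_b), ("fused", fused)]:
    rmse = np.sqrt(np.mean((est - true_signal) ** 2))
    print(f"RMSE {name}: {rmse:.3f}")
```

Because the weights favor the less noisy stream, the fused estimate typically beats either modality alone, which is the basic payoff of combining modalities under noisy conditions that the abstract describes.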
Author Biographies
Daniel Fernando Quintero Bernal, Universidad de Santiago de Chile (USACH)
From 2012 to 2017 he studied in the Mechatronics Engineering program at the Universidad Tecnológica de Pereira, Colombia, where he received three Distinguished Student awards and two honor scholarships (Matrículas de Honor). From 2017 to 2020, at the same university, he pursued graduate studies as a scholarship holder (Jorge Roa Martínez Scholarship) in the Master's program in Physical Instrumentation, working as a researcher throughout his master's studies. He is currently a scholarship holder (BEE and ANID scholarships) in the Doctoral program in Engineering Sciences, Automation major, at the Universidad de Santiago de Chile. His research interests include instrumentation, automation, and control, more specifically areas that work with data from multiple sensors toward a specific objective (data fusion). He holds one patent and is co-author of a scientific article.
Héctor Kaschel, Universidad de Santiago de Chile
Héctor Kaschel (Senior Member, IEEE) received the degree of Ingeniero Civil en Electricidad (Electrical Engineering) from the Universidad de Santiago de Chile and the doctorate in Electrical Engineering (Dr.-Ing.) from the University of Paderborn, Germany. He is currently a Full Professor in the Department of Electrical Engineering at the Universidad de Santiago de Chile.
He has published more than 150 papers in national and international conferences and journals. His research interests include Industrial Communication Networks, Wireless Sensor Networks (WSN), Wireless Body Area Networks (WBAN), Smart Cities, Smart Grids, Wireless Local Area Networks (WLAN), and Mobile Networks.
John Kern, Universidad de Santiago de Chile
John Kern was born in Santiago, Chile. He received the M.Sc. Eng. and Dr. degrees from the Universidad de Santiago de Chile, Santiago, Chile, in 2010 and 2013, respectively, and completed postdoctoral studies at the same institution in 2014. His research topic is fault-tolerant systems. He is currently a Professor in the Department of Electrical Engineering at the Universidad de Santiago de Chile, in the areas of automatic control and robotics. He has developed laboratories in control theory and systems, signal and communication systems, and electronics. Since 2016 he has served as head of the Process Control Laboratory. He is currently director of the Doctorate in Engineering Sciences, Automation major, at the Universidad de Santiago de Chile.