Membership Inference Attack: A Middleware-Based Approach for Privacy Preservation and Attack Mitigation in Machine Learning Systems
Abstract
This article explores the use of middleware as a robust defense against membership inference attacks (MIA) in machine learning (ML) systems. Such attacks allow an adversary to deduce whether a specific data point was part of a model’s training set, compromising data confidentiality and privacy. The proposed approach centers on a middleware layer that implements data randomization, prediction obfuscation, dynamic regularization, and real-time monitoring to prevent such attacks. The results show that this middleware architecture provides an additional layer of security, minimizing the risk of data exposure while preserving model accuracy. This research offers a novel perspective on the use of middleware to mitigate membership inference attacks and provides valuable insights into machine learning security.
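The abstract names prediction obfuscation and randomization among the middleware's core techniques. The sketch below is a minimal illustration of how such a layer might wrap a model's prediction endpoint, adding noise to confidence scores and truncating the output to the top-k classes before they reach the client; the class name, the parameters `noise_scale` and `top_k`, and the toy model are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a middleware wrapper that obfuscates model
# predictions, one of the mitigation techniques named in the abstract.
# Names and parameters are hypothetical, not taken from the article.
import numpy as np


class PredictionObfuscationMiddleware:
    """Wraps a predict_proba-style callable and perturbs its output so that
    returned confidence scores leak less membership signal."""

    def __init__(self, predict_fn, noise_scale=0.05, top_k=3, seed=None):
        self.predict_fn = predict_fn      # underlying model's probability output
        self.noise_scale = noise_scale    # std. dev. of Gaussian noise added to scores
        self.top_k = top_k                # only the top-k classes are returned
        self.rng = np.random.default_rng(seed)

    def predict(self, x):
        probs = np.asarray(self.predict_fn(x), dtype=float)
        # Randomize: add small Gaussian noise to the raw scores.
        noisy = probs + self.rng.normal(0.0, self.noise_scale, size=probs.shape)
        noisy = np.clip(noisy, 0.0, None)
        # Obfuscate: keep only the top-k scores and zero out the rest.
        if self.top_k is not None and self.top_k < noisy.shape[-1]:
            cutoff = np.sort(noisy, axis=-1)[..., -self.top_k][..., None]
            noisy = np.where(noisy >= cutoff, noisy, 0.0)
        # Renormalize so the response still looks like a probability vector.
        total = noisy.sum(axis=-1, keepdims=True)
        return noisy / np.where(total == 0, 1.0, total)


if __name__ == "__main__":
    # Toy model returning fixed class probabilities, for demonstration only.
    def toy_model(x):
        return [0.70, 0.20, 0.06, 0.03, 0.01]

    middleware = PredictionObfuscationMiddleware(toy_model, noise_scale=0.05,
                                                 top_k=3, seed=0)
    print(middleware.predict(None))
```

Because the perturbation happens in the middleware rather than in the model, the underlying predictions (and hence accuracy on the argmax class) are largely preserved while an attacker observing the served scores gets a noisier, truncated signal.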

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Published articles are the sole responsibility of their authors and do not necessarily reflect the views of the editorial board.
Revista CESTA respects the moral rights of its authors, who assign the economic rights of the published material to the editorial board. The authors also declare that this work is original and has not been published previously.