Please use this identifier to cite or link to this item:
https://scholarhub.balamand.edu.lb/handle/uob/6839
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Tekli, Jimmy | en_US |
dc.contributor.author | Al Bouna, Bechara | en_US |
dc.contributor.author | Tekli, Gilbert | en_US |
dc.contributor.author | Couturier, Raphaël | en_US |
dc.date.accessioned | 2023-06-06T06:18:34Z | - |
dc.date.available | 2023-06-06T06:18:34Z | - |
dc.date.issued | 2023-04-11 | - |
dc.identifier.issn | 13807501 | - |
dc.identifier.uri | https://scholarhub.balamand.edu.lb/handle/uob/6839 | - |
dc.description.abstract | Image obfuscation techniques (e.g., pixelation, blurring and masking) have been developed to protect sensitive information in images (e.g., individuals’ faces). In a previous work, we designed a recommendation framework that evaluates the robustness of image obfuscation techniques and recommends the most resilient obfuscation against deep-learning-assisted attacks. In this paper, we extend the framework for two main reasons. First, to the best of our knowledge there is neither a standardized evaluation methodology nor a defined adversary model for evaluating the robustness of image obfuscation, and more specifically face obfuscation, techniques. Therefore, we adapt a three-component adversary model (goal, knowledge and capabilities) to our application domain (i.e., facial feature obfuscation) and embed it in our framework. Second, considering several attacking scenarios is vital when evaluating the robustness of image obfuscation techniques. Hence, we define three threat levels and explore new aspects of an adversary and its capabilities by extending the background knowledge to include the obfuscation technique along with its hyper-parameters and the identities of the target individuals. We conduct three sets of experiments on a publicly available celebrity faces dataset. In the first experiment, we implement and evaluate the recommendation framework by considering four adversaries attacking obfuscation techniques (pixelation, Gaussian blur, motion blur and masking) via restoration-based attacks. In the second and third experiments, we demonstrate how the adversary’s attacking capabilities (recognition-based and restoration-and-recognition-based attacks) scale with its background knowledge and how this increases the potential risk of breaching the identities of blurred faces. | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Springer Nature | en_US |
dc.subject | Adversary model | en_US |
dc.subject | Background knowledge | en_US |
dc.subject | Deep learning-assisted attacks | en_US |
dc.subject | Face obfuscation | en_US |
dc.subject | Image transformation | en_US |
dc.subject | Privacy-preserving techniques | en_US |
dc.title | A framework for evaluating image obfuscation under deep learning-assisted privacy attacks | en_US |
dc.type | Journal Article | en_US |
dc.identifier.doi | 10.1007/s11042-023-14664-y | - |
dc.identifier.scopus | 2-s2.0-85160270905 | - |
dc.identifier.url | https://api.elsevier.com/content/abstract/scopus_id/85160270905 | - |
dc.contributor.affiliation | Department of Mechatronics Engineering | en_US |
dc.description.volume | 82 | en_US |
dc.description.startpage | 42173 | en_US |
dc.description.endpage | 42205 | en_US |
dc.date.catalogued | 2023-06-06 | - |
dc.description.status | Published | en_US |
dc.identifier.openURL | https://link.springer.com/article/10.1007/s11042-023-14664-y | en_US |
dc.relation.ispartoftext | Multimedia Tools and Applications | en_US |
crisitem.author.parentorg | Issam Fares Faculty of Technology | - |
Appears in Collections: | Department of Mechatronics Engineering |
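The abstract lists pixelation and masking among the face-obfuscation techniques the framework evaluates. As a rough illustration only (this is not the paper's implementation; the function names and the toy 32×32 "face" array are hypothetical), block-average pixelation and masking can be sketched in pure Python:

```python
def pixelate(img, block=8):
    """Pixelate a grayscale image (list of rows of ints) by replacing
    each block x block tile with its average intensity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y0 in range(0, h, block):
        for x0 in range(0, w, block):
            tile = [out[y][x]
                    for y in range(y0, min(y0 + block, h))
                    for x in range(x0, min(x0 + block, w))]
            avg = sum(tile) // len(tile)
            for y in range(y0, min(y0 + block, h)):
                for x in range(x0, min(x0 + block, w)):
                    out[y][x] = avg
    return out

def mask(img):
    """Mask the region entirely by replacing it with black pixels."""
    return [[0] * len(row) for row in img]

# Toy 32x32 "face" crop with a simple intensity gradient.
face = [[(x + y) % 256 for x in range(32)] for y in range(32)]
pix = pixelate(face, block=8)
# After 8x8 pixelation, at most (32/8)^2 = 16 distinct intensities remain.
print(len({v for row in pix for v in row}) <= 16)
```

The larger the block size (a hyper-parameter of the kind the extended adversary model assumes knowledge of), the coarser the result and the harder a restoration-based attack becomes, at the cost of utility.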
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.