Adversarial Robustness for Unsupervised Domain Adaptation
Awais
Fengwei Zhou
Hang Xu
Lanqing Hong
Ping Luo
Sung-Ho Bae
Zhenguo Li

Huawei Noah's Ark Lab
The University of Hong Kong
Kyung Hee University, South Korea


[Paper]
[Slides]
[GitHub]
[ICCV]

Abstract

Extensive Unsupervised Domain Adaptation (UDA) studies have shown great success in practice by learning transferable representations across a labeled source domain and an unlabeled target domain with deep models. However, previous works focus on improving the generalization ability of UDA models on clean examples without considering adversarial robustness, which is crucial in real-world applications. Conventional adversarial training methods are not suitable for improving robustness on the unlabeled target domain of UDA, since they generate adversarial examples with a supervised loss that requires labels. In this work, we leverage intermediate representations learned by multiple robust ImageNet models to improve the robustness of UDA models. Our method aligns the features of the UDA model with the robust features learned by ImageNet pre-trained models during domain adaptation training. It utilizes both the labeled and unlabeled domains and instills robustness without any adversarial intervention or label requirement during domain adaptation training. Experimental results show that our method significantly improves adversarial robustness compared to the baseline while maintaining clean accuracy on various UDA benchmarks.
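The alignment described above can be sketched as an extra loss term added to the usual domain adaptation objective. Below is a minimal illustration, assuming a mean-squared distance between the UDA model's intermediate features and those of a robust ImageNet-pretrained teacher; the function names, the choice of MSE, and the weighting scheme are assumptions for illustration and may differ from the paper's exact formulation.

```python
import numpy as np


def alignment_loss(student_feats, teacher_feats):
    """Average squared distance between matching intermediate layers of the
    UDA model (student) and a robust pre-trained model (teacher).

    Hypothetical helper: each argument is a list of per-layer feature
    arrays of identical shapes.
    """
    return float(np.mean([np.mean((s - t) ** 2)
                          for s, t in zip(student_feats, teacher_feats)]))


def total_loss(uda_loss, student_feats, teacher_feats, lam=1.0):
    """Domain adaptation objective plus the robustness alignment term.

    Note that the alignment term needs neither labels nor adversarial
    examples, so it can be computed on both source and target batches.
    `lam` (assumed name) trades off adaptation against robustness.
    """
    return uda_loss + lam * alignment_loss(student_feats, teacher_feats)
```

In practice both labeled source images and unlabeled target images would be forwarded through the student and the (frozen) robust teacher, and the alignment term would be accumulated alongside the standard UDA loss at each training step.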



Paper and Supplementary Material

Adversarial Robustness for Unsupervised Domain Adaptation.
In ICCV, 2021.
(hosted on arXiv)


[Bibtex]


Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.