TY - JOUR
T1 - Privacy-Preserving Constrained Domain Generalization Via Gradient Alignment
AU - Tian, Chris Xing
AU - Li, Haoliang
AU - Wang, Yufei
AU - Wang, Shiqi
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024/5/1
Y1 - 2024/5/1
N2 - Deep neural networks (DNN) have demonstrated unprecedented success for various applications. However, due to the issue of limited dataset availability and the strict legal and ethical requirements for data privacy protection, the broad applications of DNN (e.g., medical imaging classification) with large-scale training data have been largely hindered, greatly constraining the model generalization capability. In this paper, we aim to tackle this problem by developing the privacy-preserving constrained domain generalization method, aiming to improve the generalization capability under the privacy-preserving condition. In particular, we propose to improve the information aggregation process on the centralized server side with a novel gradient alignment loss, expecting that the trained model can be better generalized to the 'unseen' but related data. The rationale and effectiveness of our proposed method can be explained by connecting our proposed method with the Maximum Mean Discrepancy (MMD) which has been widely adopted as the distribution distance measure. Experimental results on three domain generalization benchmark datasets indicate that our method can achieve better cross-domain generalization capability compared to the state-of-the-art federated learning methods.
AB - Deep neural networks (DNN) have demonstrated unprecedented success for various applications. However, due to the issue of limited dataset availability and the strict legal and ethical requirements for data privacy protection, the broad applications of DNN (e.g., medical imaging classification) with large-scale training data have been largely hindered, greatly constraining the model generalization capability. In this paper, we aim to tackle this problem by developing the privacy-preserving constrained domain generalization method, aiming to improve the generalization capability under the privacy-preserving condition. In particular, we propose to improve the information aggregation process on the centralized server side with a novel gradient alignment loss, expecting that the trained model can be better generalized to the 'unseen' but related data. The rationale and effectiveness of our proposed method can be explained by connecting our proposed method with the Maximum Mean Discrepancy (MMD) which has been widely adopted as the distribution distance measure. Experimental results on three domain generalization benchmark datasets indicate that our method can achieve better cross-domain generalization capability compared to the state-of-the-art federated learning methods.
KW - domain generalization
KW - Federated learning
KW - gradient alignment
UR - https://www.scopus.com/pages/publications/85171566897
U2 - 10.1109/TKDE.2023.3315279
DO - 10.1109/TKDE.2023.3315279
M3 - Article
AN - SCOPUS:85171566897
SN - 1041-4347
VL - 36
SP - 2142
EP - 2150
JO - IEEE Transactions on Knowledge and Data Engineering
JF - IEEE Transactions on Knowledge and Data Engineering
IS - 5
ER -