TY - GEN
T1 - Pretext Tasks in Test Time Adaptation Under Distribution Shifts-A Survey and Future Directions
AU - Liu, Kai
AU - Zhang, Jicong
AU - Wang, Shiqi
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - The performance of well-trained models deteriorates significantly under distribution shifts between training and test datasets. In response, various test time adaptive methods have been proposed to narrow domain gaps by capturing distribution cues from test samples. Notably, pretext task-based test time adaptive models exhibit promising performance: by leveraging well-designed pretext tasks, they require neither target test annotations nor separate training and testing stages, enabling effective adaptation at test time. Moreover, to accommodate diverse scenarios, task-specific pretext tasks have been proposed to improve adaptive performance. Although an existing review provides a comprehensive overview of test time adaptive methods, a detailed survey of the pretext tasks employed in test time adaptation is still lacking. To narrow this gap, this paper presents a survey of pretext tasks employed in test time adaptive models. We begin with an overview of test time adaptive methods, followed by a concise review of pretext tasks used in common scenarios and a comparison to those used in test time adaptation scenarios. Subsequently, we delve into pretext tasks employed in various test time adaptation scenarios, exploring their characteristics, strengths, and limitations. Lastly, we conduct an empirical analysis of various pretext tasks on a digit prediction task and conclude with a discussion of potential directions for future research.
AB - The performance of well-trained models deteriorates significantly under distribution shifts between training and test datasets. In response, various test time adaptive methods have been proposed to narrow domain gaps by capturing distribution cues from test samples. Notably, pretext task-based test time adaptive models exhibit promising performance: by leveraging well-designed pretext tasks, they require neither target test annotations nor separate training and testing stages, enabling effective adaptation at test time. Moreover, to accommodate diverse scenarios, task-specific pretext tasks have been proposed to improve adaptive performance. Although an existing review provides a comprehensive overview of test time adaptive methods, a detailed survey of the pretext tasks employed in test time adaptation is still lacking. To narrow this gap, this paper presents a survey of pretext tasks employed in test time adaptive models. We begin with an overview of test time adaptive methods, followed by a concise review of pretext tasks used in common scenarios and a comparison to those used in test time adaptation scenarios. Subsequently, we delve into pretext tasks employed in various test time adaptation scenarios, exploring their characteristics, strengths, and limitations. Lastly, we conduct an empirical analysis of various pretext tasks on a digit prediction task and conclude with a discussion of potential directions for future research.
KW - Contrastive Learning
KW - Distribution Shift
KW - Domain Generalization
KW - Pretext Tasks
KW - Test Time Adaptation
UR - https://www.scopus.com/pages/publications/105004576488
U2 - 10.1109/ICICML63543.2024.10958016
DO - 10.1109/ICICML63543.2024.10958016
M3 - Conference contribution
AN - SCOPUS:105004576488
T3 - 2024 International Conference on Image Processing, Computer Vision and Machine Learning, ICICML 2024
SP - 429
EP - 438
BT - 2024 International Conference on Image Processing, Computer Vision and Machine Learning, ICICML 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 3rd International Conference on Image Processing, Computer Vision and Machine Learning, ICICML 2024
Y2 - 22 November 2024 through 24 November 2024
ER -