Anomaly detection is a crucial task that learns normal patterns from training samples to identify abnormal samples in test data. However, existing approaches often struggle in real-world scenarios because of data drift caused by external factors such as changes in camera angle, lighting conditions, and noise. In this work, we propose a robust anomaly detection model based on Generalized Normality Learning (GNL) to handle domain shift. The key to our robustness to domain shift is improving the recall of out-of-distribution samples. First, we train a normality-distillation student to fit diverse augmented normal patterns, adopting a hard distillation loss and a structure distillation loss. Second, to improve the accuracy of anomaly localization, we adopt a segmentation sub-network that integrates the outputs of the teacher and student models. Experiments on the MVTec AD test set with random perturbations demonstrate the effectiveness of our method.
Introduction
Background: Anomaly detection is a crucial task that learns normal patterns from training samples to identify abnormal samples in test data. Existing approaches often struggle in real-world scenarios because of data drift caused by external factors such as changes in camera angle, lighting conditions, and noise.
Challenge Description: Adapt & Detect: Robust Anomaly Detection in Real-World Applications
Methodology
Model Design
- Approach: The key to our robustness to domain shift is enhancing the recall of out-of-distribution samples.
- Architecture: Our method builds on the distillation-based frameworks RD4AD [1] and DeSTSeg [2]. The architecture consists of two parts: Generalized Normality Learning and Accurate Anomaly Localization. First, to enhance the recall of out-of-distribution samples, we train a student decoder to fit diverse augmented normal patterns, adopting a structure distillation loss (spatial feature distillation) [3] and a hard distillation loss [4]. Second, to improve the accuracy of anomaly localization, we adopt a segmentation sub-network [2] that integrates the outputs of the teacher and student models.
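To make the two distillation losses above concrete, the following is a minimal PyTorch sketch of how they could be instantiated; function names and the `hard_ratio` hyper-parameter are illustrative assumptions, not the authors' exact implementation. The hard distillation loss averages the teacher-student cosine distance only over the worst-fitting positions, while the structure distillation loss matches pairwise spatial affinities between teacher and student feature maps.

```python
# Hypothetical sketch of the two distillation losses; not the paper's code.
import torch
import torch.nn.functional as F


def cosine_distance_map(t_feat: torch.Tensor, s_feat: torch.Tensor) -> torch.Tensor:
    """Per-position 1 - cosine similarity between teacher and student
    feature maps of shape (B, C, H, W); returns a (B, H, W) distance map."""
    return 1.0 - F.cosine_similarity(t_feat, s_feat, dim=1)


def hard_distillation_loss(t_feat: torch.Tensor, s_feat: torch.Tensor,
                           hard_ratio: float = 0.1) -> torch.Tensor:
    """Average the cosine distance only over the hardest positions
    (largest teacher-student discrepancy), so the student focuses on
    the regions it currently fits worst. hard_ratio is an assumed knob."""
    d = cosine_distance_map(t_feat, s_feat).flatten(1)   # (B, H*W)
    k = max(1, int(hard_ratio * d.shape[1]))
    hard_vals, _ = torch.topk(d, k, dim=1)               # hardest k positions
    return hard_vals.mean()


def structure_distillation_loss(t_feat: torch.Tensor,
                                s_feat: torch.Tensor) -> torch.Tensor:
    """Spatial-structure distillation: the student should reproduce the
    teacher's position-to-position affinity pattern, not just per-pixel
    values."""
    t = F.normalize(t_feat.flatten(2), dim=1)            # (B, C, H*W)
    s = F.normalize(s_feat.flatten(2), dim=1)
    t_aff = torch.bmm(t.transpose(1, 2), t)              # (B, HW, HW) affinities
    s_aff = torch.bmm(s.transpose(1, 2), s)
    return F.mse_loss(s_aff, t_aff)
```

A per-layer sum of these two losses over matched teacher/student feature levels would give the full training objective; at test time, the same cosine distance map can serve as the raw anomaly map that the segmentation sub-network refines.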
Comments