This project proposes an end-to-end framework for semi-supervised Anomaly Detection and Segmentation in images based on Deep Learning.

## Method Overview

The proposed method employs a thresholded pixel-wise difference between the reconstructed image and the input image to localize anomalies. The threshold is determined by first using a subset of anomaly-free training images, i.e. validation images, to determine possible pairs of minimum area and threshold values, followed by using a subset of both anomaly-free and anomalous test images to select the best pair for classification and segmentation of the remaining test images.
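As a rough illustration (not the project's actual code), the residual-thresholding step can be sketched with NumPy. Here `threshold` and `min_area` stand for the pair selected during finetuning, and the connected-component analysis of the real method is simplified to a plain pixel count:

```python
import numpy as np

def segment_anomalies(input_img, recon_img, threshold):
    """Binary anomaly map: pixels where the reconstruction error exceeds the threshold."""
    residual = np.abs(input_img.astype(float) - recon_img.astype(float))
    return residual > threshold

def classify(input_img, recon_img, threshold, min_area):
    """Flag the image as anomalous if the thresholded residual region is large enough.

    Simplified sketch: the actual method measures the area of connected
    anomalous regions; here we only count anomalous pixels.
    """
    anomaly_map = segment_anomalies(input_img, recon_img, threshold)
    return int(anomaly_map.sum()) >= min_area
```

A perfect reconstruction yields an empty anomaly map, so the image is classified as anomaly-free; a localized reconstruction failure larger than `min_area` flags the image as defective.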

It is inspired to a great extent by the papers [MVTec AD — A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection](https://www.mvtec.com/fileadmin/Redaktion/mvtec.com/company/research/mvtec_ad.pdf) and [Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders](https://arxiv.org/abs/1807.02011).

The method is divided into 3 steps: training, finetuning and testing.

![Image of the 3 steps of the method](overview.png)


**NOTE: Why Semi-Supervised and not Unsupervised?**
The method proposed in the [MVTec paper](https://www.mvtec.com/fileadmin/Redaktion/mvtec.com/company/research/mvtec_ad.pdf) is unsupervised, as only a subset containing anomaly-free training images (the validation set) is used during the validation step to determine the threshold for classification and segmentation of test images. However, the validation algorithm relies on a user-input parameter, the minimum defect area, whose definition remains unclear and unexplained in the aforementioned paper. Because the choice of this parameter can greatly influence the classification and segmentation results, and in an effort to automate the process and remove the need for any user input, we developed a finetuning algorithm that uses the validation set to compute the thresholds corresponding to a wide range of discrete minimum defect areas. Subsequently, a small subset of anomalous and anomaly-free images from the test set (the finetuning set) is used to select the best minimum defect area and threshold pair, which is finally used to classify and segment the remaining test images. Since our method relies on test images for finetuning, we describe it as semi-supervised.
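The finetuning loop described above might look roughly like the following sketch (hypothetical helper names, not the project's actual code; region area is again simplified to a pixel count rather than the largest connected component). For each candidate minimum area, the anomaly-free validation residuals fix the smallest threshold that flags none of them, and the labeled finetuning subset then scores each (minimum area, threshold) pair:

```python
import numpy as np

def region_area(residual, threshold):
    """Simplified anomalous-region area: count of pixels above the threshold."""
    return int((residual > threshold).sum())

def calibrate_threshold(val_residuals, min_area, thresholds):
    """Smallest candidate threshold at which no anomaly-free validation
    image produces an anomalous region of at least min_area pixels."""
    for t in sorted(thresholds):
        if all(region_area(r, t) < min_area for r in val_residuals):
            return t
    return max(thresholds)

def finetune(val_residuals, ft_residuals, ft_labels, min_areas, thresholds):
    """Select the (min_area, threshold) pair with the best classification
    accuracy on the small labeled finetuning subset."""
    best_pair, best_acc = None, -1.0
    for area in min_areas:
        t = calibrate_threshold(val_residuals, area, thresholds)
        preds = [region_area(r, t) >= area for r in ft_residuals]
        acc = np.mean([p == y for p, y in zip(preds, ft_labels)])
        if acc > best_acc:
            best_pair, best_acc = (area, t), acc
    return best_pair
```

The selected pair is then applied unchanged to the remaining test images for classification and segmentation.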

## Dataset

The proposed framework has been tested successfully on the [MVTec dataset](https://www.mvtec.com/company/research/datasets/mvtec-ad/).

## Models

A total of 5 models based on the Convolutional Auto-Encoder (CAE) architecture are implemented in this project:
* *mvtecCAE* is the model implemented in the [MVTec Paper](https://www.mvtec.com/fileadmin/Redaktion/mvtec.com/company/research/mvtec_ad.pdf)
* *baselineCAE* is inspired by: https://github.com/natasasdj/anomalyDetection
* *inceptionCAE* is inspired by: https://github.com/natasasdj/anomalyDetection
* *resnetCAE* is inspired by: https://arxiv.org/pdf/1606.08921.pdf
* *skipCAE* is inspired by: https://arxiv.org/pdf/1606.08921.pdf

**NOTE:**
The models *mvtecCAE*, *baselineCAE* and *inceptionCAE* are quite comparable in performance.
The two remaining models, *resnetCAE* and *skipCAE*, are still being tested, as they are prone to overfitting, which in the case of convolutional auto-encoders manifests as copying the input without filtering out the defective regions.

## Prerequisites
