Abstract:
In chemical laboratories, staff often forget to close the fume hood window before leaving, which poses safety hazards and wastes energy. It is therefore necessary to develop a method for the safety management of fume hood windows. To the best of the authors' knowledge, existing research on window status recognition relies mainly on electronic control systems, which are not suitable for fume hood windows. Exploiting the non-contact and easily extensible nature of computer vision, this paper proposes a novel safety management method for fume hood windows. First, the surveillance videos are preprocessed and the fume hood window regions are extracted via motion features and geometric priors, which effectively reduces the influence of irrelevant regions on window status recognition. Because no suitable dataset is available and the number of fume hood windows in a laboratory is limited, this paper constructs a new dataset containing 400 window images and proposes a recognition method for the fume hood window status based on few-shot learning. Compared with traditional few-shot learning datasets, fume hood window images have higher resolution, which makes it difficult to extract effective features. To overcome this challenge, this paper applies dilated convolution to enlarge the receptive field and constructs an inception layer with multi-scale dilation rates in place of the traditional convolution layer. To avoid invalid detection of the window status while staff are in the laboratory, the moving foreground region extracted by a Gaussian mixture model is used as the prior region for YOLOv3 (You Only Look Once, version 3) object detection, which greatly reduces recognition errors. In the experiments, the proposed method is compared with traditional machine learning algorithms and a CNN (convolutional neural network).
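The dilated convolution described above enlarges the receptive field without adding parameters: a k×k kernel with dilation rate r covers an effective window of k + (k−1)(r−1) pixels. The following is a minimal single-channel NumPy sketch for illustration only (the paper's inception layer would stack several such convolutions with different rates inside a deep network; the function name and valid-padding choice here are the author's assumptions, not the paper's implementation):

```python
import numpy as np

def dilated_conv2d(x, kernel, rate=1):
    """Valid-mode 2-D dilated convolution on a single-channel image.

    A k x k kernel with dilation rate r samples the input on a grid
    with gaps, so its effective receptive field is k + (k-1)*(r-1).
    """
    k = kernel.shape[0]
    eff = k + (k - 1) * (rate - 1)  # effective receptive field size
    H, W = x.shape
    out = np.zeros((H - eff + 1, W - eff + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # strided slice picks the dilated sampling grid
            patch = x[i:i + eff:rate, j:j + eff:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

def multiscale_responses(x, kernel, rates=(1, 2, 3)):
    """Inception-style multi-scale branch: one dilated convolution per
    rate; a real layer would pad and concatenate the feature maps."""
    return [dilated_conv2d(x, kernel, r) for r in rates]
```

With a 3×3 kernel, rate 2 already covers a 5×5 input window, which is how the method captures larger context in high-resolution window images at no extra parameter cost.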
LBP (local binary pattern), PCA (principal component analysis), ColorHist (color histogram) and HOG (histogram of oriented gradients) are selected as the features for the machine learning methods, covering texture, dimensionality reduction, color and shape. The experimental results show that the proposed method achieves 99.29% accuracy under normal illumination, 17.20% higher than the best traditional method (HOG combined with random forest) and 10.95% higher than the convolutional neural network. Under illumination change, the accuracy is 95.74%, only slightly lower than that under normal illumination.
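The Gaussian-mixture foreground extraction used as the prior region for YOLOv3 maintains a per-pixel background distribution and flags pixels that deviate from it. As a sketch of the idea only, the following simplifies the mixture to a single running Gaussian per pixel (the paper uses a full Gaussian mixture model, as implemented e.g. by OpenCV's MOG2 background subtractor; the class name and default parameters below are the author's assumptions):

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel background model with one running Gaussian per pixel,
    a simplification of the Gaussian mixture model in the paper."""

    def __init__(self, first_frame, alpha=0.05, k=2.5, init_var=25.0):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full_like(self.mean, init_var)
        self.alpha = alpha  # learning rate for background updates
        self.k = k          # foreground threshold, in standard deviations

    def apply(self, frame):
        """Return a boolean foreground mask and update the background."""
        frame = frame.astype(np.float64)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var  # pixel far from background mean
        bg = ~fg
        # update statistics only where the pixel matched the background
        self.mean[bg] += self.alpha * (frame - self.mean)[bg]
        self.var[bg] += self.alpha * (d2 - self.var)[bg]
        return fg
```

Restricting YOLOv3 to the resulting foreground mask means person detection only runs where motion actually occurred, which is how the method suppresses spurious window status recognition while staff are present.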