Abstract:
In recent years, convolutional neural networks (CNNs) have achieved remarkable results in low-light image enhancement. However, most existing low-light enhancement models based on traditional CNNs are limited by the local receptive field of the convolution kernel, and the use of pooling layers discards a great deal of valuable feature information. To address these problems, we propose a novel end-to-end low-light image enhancement network. First, a smooth dilated convolution layer and a Convolutional Block Attention Module (CBAM) attention mechanism are used respectively to extract image features. Second, multi-layer feature fusion is performed through a concatenation operation. Finally, the multi-channel features are fed into a reconstruction network composed of residual blocks to generate the final enhanced image. In addition, we train the network on a low-light image dataset with a compound loss function. Experimental results show that the proposed method achieves more effective multi-channel feature extraction. Compared with current mainstream image enhancement methods, the proposed network is superior in both subjective visual quality and objective evaluation metrics.
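To illustrate why dilated convolutions address the locality limitation without resorting to pooling, the sketch below computes the receptive field of a stack of stride-1 convolution layers. The layer configurations are illustrative examples, not the exact architecture proposed in this paper:

```python
def receptive_field(layers):
    """Receptive field of a stack of stride-1 convolution layers.

    layers: list of (kernel_size, dilation) pairs.
    For stride-1 stacks, RF = 1 + sum((k - 1) * d) over all layers.
    """
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

# Three standard 3x3 convolutions: receptive field stays local (7x7).
standard = receptive_field([(3, 1), (3, 1), (3, 1)])

# Same depth with dilation rates 1, 2, 4: receptive field grows to 15x15,
# with no pooling and therefore no loss of spatial feature information.
dilated = receptive_field([(3, 1), (3, 2), (3, 4)])

print(standard, dilated)  # 7 15
```

The exponential dilation schedule (1, 2, 4, ...) is a common choice; the "smooth" variant used in the paper additionally mitigates the gridding artifacts that plain dilated convolutions can introduce.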