

# Image Super-Resolution Using Very Deep Residual Channel Attention Networks

This repository is for RCAN, introduced in the following paper:

Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu, "Image Super-Resolution Using Very Deep Residual Channel Attention Networks", ECCV 2018.

The code is built on EDSR (PyTorch) and tested on Ubuntu 14.04/16.04 (Python 3.6, PyTorch 0.4.0, CUDA 8.0, cuDNN 5.1) with Titan X/1080Ti/Xp GPUs. The RCAN model has also been merged into EDSR (PyTorch).

## Contents

1. Introduction
2. Train
3. Test
4. Results

## Introduction

Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form a very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods.

Figure: Residual channel attention block (RCAB) architecture.

Figure: The architecture of our proposed residual channel attention network (RCAN).
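The two components above map naturally onto a few lines of PyTorch. Below is a minimal sketch of channel attention (CA), RCAB, and a residual group; the reduction ratio of 16 and the 3x3 convolutions are assumptions following common RCAN implementations, and this is an illustration rather than the repository's exact code.

```python
# Minimal sketch of channel attention (CA), RCAB, and a residual group.
# Reduction ratio 16 and 3x3 convolutions are assumptions, not the repo's code.
import torch
import torch.nn as nn

class CALayer(nn.Module):
    """Channel attention: rescale each channel by a weight computed
    from its globally pooled statistics (squeeze -> excite -> scale)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # B x C x 1 x 1 channel descriptor
            nn.Conv2d(channels, channels // reduction, 1),  # squeeze
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # excite
            nn.Sigmoid(),                                   # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.body(x)                             # adaptively rescale channels

class RCAB(nn.Module):
    """Residual channel attention block: conv-ReLU-conv + CA, short skip."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            CALayer(channels),
        )

    def forward(self, x):
        return x + self.body(x)                             # short skip connection

class ResidualGroup(nn.Module):
    """Several RCABs plus a tail conv, wrapped in a long skip connection;
    the RIR structure stacks a number of these groups."""
    def __init__(self, channels, n_resblocks=20):
        super().__init__()
        layers = [RCAB(channels) for _ in range(n_resblocks)]
        layers.append(nn.Conv2d(channels, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)                             # long skip connection

x = torch.randn(1, 64, 24, 24)
print(ResidualGroup(64)(x).shape)                           # torch.Size([1, 64, 24, 24])
```

A 10-group, 20-block configuration, as in the '--n_resgroups 10 --n_resblocks 20' flags used in the scripts below, stacks ten such groups inside one further long skip connection.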

## Train

### Prepare training data

1. Download DIV2K training data (800 training + 100 validation images) from the DIV2K dataset or SNU_CVLab.
2. Specify '--dir_data' based on the HR and LR images path.

In option.py, '--ext' is set as 'sep_reset', which first converts the .png training images to .npy files. If all the training images (.png) are already converted to .npy files, set '--ext sep' to skip the conversion (a sketch of what this caching does is shown after the training scripts below). For more information, please refer to EDSR (PyTorch).

### Begin to train

1. (optional) Download models for our paper and place them in '/RCAN_TrainCode/experiment/model'. All the models (BIX2/3/4/8, BDX3) can be downloaded from Dropbox, BaiduYun, or GoogleDrive.
2. Cd to 'RCAN_TrainCode/code' and run the following scripts to train models. You can use the scripts in 'TrainRCAN_scripts' to train the models for our paper.

```bash
# BI degradation model, X2, X3, X4, X8
# RCAN_BIX2_G10R20P48, input=48x48, output=96x96
python main.py --model RCAN --save RCAN_BIX2_G10R20P48 --scale 2 --n_resgroups 10 --n_resblocks 20 --n_feats 64 --reset --chop --save_results --print_model --patch_size 96

# RCAN_BIX3_G10R20P48, input=48x48, output=144x144
# '--pre_train' should point to an already-trained RCAN checkpoint (e.g., the X2 model above)
python main.py --model RCAN --save RCAN_BIX3_G10R20P48 --scale 3 --n_resgroups 10 --n_resblocks 20 --n_feats 64 --reset --chop --save_results --print_model --patch_size 144 --pre_train <pretrained_model.pt>

# RCAN_BIX4_G10R20P48, input=48x48, output=192x192
python main.py --model RCAN --save RCAN_BIX4_G10R20P48 --scale 4 --n_resgroups 10 --n_resblocks 20 --n_feats 64 --reset --chop --save_results --print_model --patch_size 192 --pre_train <pretrained_model.pt>

# RCAN_BIX8_G10R20P48, input=48x48, output=384x384
python main.py --model RCAN --save RCAN_BIX8_G10R20P48 --scale 8 --n_resgroups 10 --n_resblocks 20 --n_feats 64 --reset --chop --save_results --print_model --patch_size 384 --pre_train <pretrained_model.pt>

# RCAN_BDX3_G10R20P48, input=48x48, output=144x144
# specify '--dir_data' to the path of BD training data
python main.py --model RCAN --save RCAN_BDX3_G10R20P48 --scale 3 --n_resgroups 10 --n_resblocks 20 --n_feats 64 --reset --chop --save_results --print_model --patch_size 144
```
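As mentioned in the data-preparation notes above, 'sep_reset' decodes each .png once and caches it as a raw .npy array so later epochs can skip image decoding. Below is a minimal sketch of an equivalent conversion; the function name and paths are illustrative assumptions, not the repository's exact logic.

```python
# Minimal sketch of the .png -> .npy caching behind '--ext sep_reset'.
# Paths and function name are illustrative; adjust to your '--dir_data' layout.
import glob
import os

import imageio
import numpy as np

def cache_png_as_npy(img_dir):
    """Decode every .png in img_dir once and save it as a raw .npy array."""
    for png_path in sorted(glob.glob(os.path.join(img_dir, '*.png'))):
        img = imageio.imread(png_path)                  # H x W x C uint8 array
        np.save(os.path.splitext(png_path)[0] + '.npy', img)

cache_png_as_npy('../../DIV2K/DIV2K_train_HR')          # example: HR training images
```

With '--ext sep', the data loader then reads the cached .npy files directly.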
## Test

1. Download models for our paper and place them in '/RCAN_TestCode/model'.
2. Cd to '/RCAN_TestCode/code' and run the following scripts. You can use the scripts in 'TestRCAN_scripts' to produce the results for our paper.

```bash
# No self-ensemble: RCAN
# BI degradation model, X2, X3, X4, X8
# '--testpath' should point to the directory of your LR test images
# RCAN_BIX2
python main.py --data_test MyImage --scale 2 --model RCAN --n_resgroups 10 --n_resblocks 20 --n_feats 64 --pre_train ../model/RCAN_BIX2.pt --test_only --save_results --chop --save 'RCAN' --testpath <LR_images_path>
# RCAN_BIX3
python main.py --data_test MyImage --scale 3 --model RCAN --n_resgroups 10 --n_resblocks 20 --n_feats 64 --pre_train ../model/RCAN_BIX3.pt --test_only --save_results --chop --save 'RCAN' --testpath <LR_images_path>
# RCAN_BIX4
python main.py --data_test MyImage --scale 4 --model RCAN --n_resgroups 10 --n_resblocks 20 --n_feats 64 --pre_train ../model/RCAN_BIX4.pt --test_only --save_results --chop --save 'RCAN' --testpath <LR_images_path>
# RCAN_BIX8
python main.py --data_test MyImage --scale 8 --model RCAN --n_resgroups 10 --n_resblocks 20 --n_feats 64 --pre_train ../model/RCAN_BIX8.pt --test_only --save_results --chop --save 'RCAN' --testpath <LR_images_path>
```
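The '--chop' flag in these scripts enables EDSR-style memory-efficient forwarding: the LR input is split into four overlapping quadrants, each is super-resolved separately (recursively, if a quadrant is still too large), and the outputs are stitched back together. The following is a simplified sketch of the idea, adapted from EDSR's forward_chop rather than copied from the repository:

```python
# Simplified sketch of the memory-efficient forward behind '--chop';
# adapted from EDSR's forward_chop, not the repository's exact code.
import torch

def forward_chop(model, x, scale, shave=10, min_size=160000):
    """Super-resolve x (B x C x H x W) quadrant by quadrant to bound memory."""
    b, c, h, w = x.size()
    h_half, w_half = h // 2, w // 2
    h_size, w_size = h_half + shave, w_half + shave     # overlap hides seam artifacts
    lr_list = [
        x[:, :, 0:h_size, 0:w_size],                    # top-left
        x[:, :, 0:h_size, w - w_size:w],                # top-right
        x[:, :, h - h_size:h, 0:w_size],                # bottom-left
        x[:, :, h - h_size:h, w - w_size:w],            # bottom-right
    ]
    if h_size * w_size < min_size:
        sr_list = [model(lr) for lr in lr_list]         # small enough to run directly
    else:
        sr_list = [forward_chop(model, lr, scale, shave, min_size) for lr in lr_list]

    # scale all coordinates up and paste the non-overlapping parts back together
    h, w = scale * h, scale * w
    h_half, w_half = scale * h_half, scale * w_half
    h_size, w_size = scale * h_size, scale * w_size
    out = x.new_zeros(b, c, h, w)
    out[:, :, 0:h_half, 0:w_half] = sr_list[0][:, :, 0:h_half, 0:w_half]
    out[:, :, 0:h_half, w_half:w] = sr_list[1][:, :, 0:h_half, w_size - w + w_half:w_size]
    out[:, :, h_half:h, 0:w_half] = sr_list[2][:, :, h_size - h + h_half:h_size, 0:w_half]
    out[:, :, h_half:h, w_half:w] = sr_list[3][:, :, h_size - h + h_half:h_size, w_size - w + w_half:w_size]
    return out
```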

## Results

Visual results reproducing the PSNR/SSIM values in the paper are available at GoogleDrive. For the BI degradation model, scales 2, 3, 4, 8: Results_ECCV2018RCAN_BIX2X3X4X8.
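SR papers conventionally report PSNR on the Y channel with `scale` border pixels shaved off. A minimal sketch of that computation follows; it mirrors common SR evaluation practice and is not necessarily the paper's exact evaluation script.

```python
# Minimal sketch of Y-channel PSNR with border shaving, following common SR
# evaluation practice; not necessarily the paper's exact evaluation script.
import numpy as np

def rgb_to_y(img):
    """ITU-R BT.601 luma for RGB input in [0, 255]."""
    return 16.0 + (65.481 * img[..., 0]
                   + 128.553 * img[..., 1]
                   + 24.966 * img[..., 2]) / 255.0

def psnr_y(sr, hr, scale):
    """PSNR on the Y channel, shaving `scale` pixels from each border."""
    sr_y = rgb_to_y(sr.astype(np.float64))
    hr_y = rgb_to_y(hr.astype(np.float64))
    diff = (sr_y - hr_y)[scale:-scale, scale:-scale]
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

# usage: sr and hr are H x W x 3 uint8 arrays of identical shape
# print(psnr_y(sr, hr, scale=2))
```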
