

An official PyTorch implementation of “Explorable Super Resolution” by Yuval Bahat and Tomer Michaeli (CVPR 2020).

Table of Contents

  1. Overview
  2. Dependencies
  3. Acknowledgement
  4. Running the GUI
  5. Exploring with the GUI
  6. Training an explorable super-resolution network
  7. Using the consistency enforcing module (CEM) for other purposes


The overall explorable super resolution framework is shown in the figure below. It consists of a super-resolution neural network, a consistency enforcing module (CEM) and a graphical user interface (GUI).

This repository includes:

  1. Code for a Graphical User Interface (GUI) allowing a user to perform explorable super resolution and edit a low-resolution image in real time. Pre-trained backend models for the 4x case are available for download, though our method supports any integer super-resolution factor.

  2. Code for training an explorable super resolution model yourself. This model can then be used to replace the available pre-trained models as the GUI backend.
  3. Implementation of the Consistency Enforcing Module (CEM) that can wrap any existing (and even pre-trained) super resolution network, modifying its high-resolution outputs to be consistent with the low-resolution input.

You can run our GUI and use its tools to explore the abundance of different high-resolution images matching an input low-resolution image. The backend of this GUI comprises an explorable super-resolution network. You can either download a pre-trained model, or train a model yourself. Finally, our consistency enforcing module (CEM) can be used as a standalone component, wrapping any super-resolution model, whether before or after its training, to guarantee the consistency of its outputs.

Our CEM assumes the bicubic downsampling kernel by default, but to guarantee the consistency of our framework’s outputs it needs access to the actual downsampling kernel corresponding to the low-resolution image. To this end, GUI users can utilize the incorporated KernelGAN kernel estimation method by Bell-Kligler et al., which may improve consistency in some cases.
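To make the consistency idea concrete, here is a minimal sketch of a CEM-style correction step. It is only an illustration, not the repository’s implementation: it uses a simple box (average-pooling) kernel, for which replicating the low-resolution residual over each patch happens to be the exact pseudo-inverse correction, and the function names are hypothetical. The actual CEM operates with the true (e.g. bicubic or KernelGAN-estimated) kernel.

```python
import numpy as np

def box_downsample(x, s=4):
    # Average-pooling downsample: a toy stand-in for the known kernel.
    h, w = x.shape
    return x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def enforce_consistency(sr, lr, s=4):
    # Correct the SR output so that downsampling it reproduces lr exactly.
    # For a box kernel, replicating each LR residual pixel over its s x s
    # patch is the exact pseudo-inverse correction, so one step suffices.
    residual = lr - box_downsample(sr, s)
    return sr + np.kron(residual, np.ones((s, s)))
```

After this correction, downsampling the output recovers the low-resolution input exactly, while the high-frequency content produced by the SR network is left untouched.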



Code architecture is based on an older version of BasicSR.

Running the explorable SR GUI

  1. Train or download a pre-trained explorable SR model:
    Our GUI enables exploration by utilizing a backend explorable SR network. Therefore, to run it, you first need to either train a model or download a pre-trained one. The corresponding pre-trained discriminator is available here, in case you want to fine-tune the model.
  2. (Optional) Download a pre-trained ESRGAN model:
    Download a pre-trained ESRGAN model to also display the (single) super-resolved output of the state-of-the-art ESRGAN method, for comparison.
  3. Update paths:
    Update the necessary fields in the GUI_SR.json file.
  4. Run the GUI:
    python SR -opt ./options/test/GUI_SR.json  

Exploring using our GUI

We hope to add a full description of all our GUI exploration tools here soon. In the meantime, please refer to the description in appendix D of our paper.

Training the backend exploration network

  1. Download training set:
    Download a dataset of high-resolution training images. We used the training subset of the DIV2K dataset.
  2. Training-set preparation (Requires updating the ‘dataset_root_path’ field in each of the scripts below):
    1. Create image crops of your high resolution (HR) image training set using
    2. Create two new folders containing pairs of corresponding HR and LR image crops, using
    3. Create two corresponding lmdb files using (change the ‘HR_images’ flag for the LR file).
  3. Download initialization model:
    Download a pre-trained ESRGAN model for weights initialization (this model is for 4x super-resolution; other factors require a different model).
  4. Update parameters:
    Update the necessary (and optionally other) fields in the train_explorable_SR.json file.
  5. Train the model:
    python -opt ./options/train/train_explorable_SR.json
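The crop-and-downsample preparation in step 2 can be sketched as follows. This is only an illustration of the pairing logic, under assumptions: it uses average pooling in place of the bicubic kernel used by the repository’s scripts, and `make_lr_hr_pair` is a hypothetical helper, not one of those scripts.

```python
import numpy as np

def make_lr_hr_pair(hr, scale=4, crop=128):
    """Center-crop an HR image to a multiple of the scale factor,
    then produce the matching LR crop by average-pooling downsampling
    (the repo's scripts use a bicubic kernel instead)."""
    h, w = hr.shape[:2]
    top, left = (h - crop) // 2, (w - crop) // 2
    hr_crop = hr[top:top + crop, left:left + crop]
    lr = hr_crop.reshape(crop // scale, scale,
                         crop // scale, scale, -1).mean(axis=(1, 3))
    return hr_crop, lr
```

For a 4x model, each 128x128 HR crop yields a corresponding 32x32 LR crop; the two folders of such pairs are then packed into the LMDB files consumed during training.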