napari-n2v

A self-supervised denoising algorithm.

Example pipelines

The plugins come with sample data that can be loaded into napari via File / Open sample / napari-n2v. Because the images are downloaded from a remote server, the plugin can appear idle for a while before the images eventually load as napari layers.
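
The same sample data can also be opened programmatically. A minimal sketch, assuming a sample key of "SEM" (the key is an assumption; check the File menu for the exact names exposed by your napari-n2v version):

import napari

viewer = napari.Viewer()
# The sample key below is an assumption; napari-n2v may expose different names.
viewer.open_sample(plugin="napari-n2v", sample="SEM")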

In this section, we describe how to reproduce the results from the N2V GitHub repository using the napari plugins.

  1. 2D SEM
  2. 2D SEM with N2V2
  3. 2D BSD68
  4. 2D RGB
  5. 2D structN2V Convallaria
  6. 3D

Important note: if you are using a GPU with little memory (e.g. 4 GB), most of the settings shown here will not work because the batches will likely not fit in GPU memory. Try reducing the batch size while increasing the number of steps, as sketched below; this will obviously increase the training time.
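
As a rough rule of thumb, keep batch size × steps constant so that each epoch still sees a similar number of patches. An illustrative calculation with the SEM settings used below:

# Keep batch_size * steps roughly constant across GPU sizes (illustrative).
batch_size, steps = 128, 27
patches_per_epoch = batch_size * steps  # 3456

smaller_batch = 32  # more likely to fit a 4 GB GPU
new_steps = patches_per_epoch // smaller_batch  # 108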

2D SEM

The example notebook can be found here.

from n2v.models import N2VConfig

# X is the array of training patches generated in the notebook.
config = N2VConfig(X,
                   unet_kern_size=3,
                   train_steps_per_epoch=27,
                   train_epochs=20,
                   train_loss='mse',
                   batch_norm=True,
                   train_batch_size=128,
                   n2v_perc_pix=0.198,
                   n2v_patch_shape=(64, 64),
                   n2v_manipulator='uniform_withCP',
                   n2v_neighborhood_radius=5)
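
For reference, the notebook then trains with this configuration along the following lines; the model name and base directory below are illustrative:

from n2v.models import N2V

# X and X_val are the training and validation patch arrays from the notebook;
# the model name and base directory are illustrative.
model = N2V(config, 'n2v_2D_SEM', basedir='models')
history = model.train(X, X_val)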

To reproduce the result using the plugin, follow these steps:

  1. Confirm that your environment is properly set for GPU training by checking that the GPU indicator (top right) in the plugin displays a greenish GPU label.
  2. Choose the SEM data in the sample examples (File / Open sample / napari-n2v / Download data (SEM)). Don’t be scared by the monster face.
  3. Set the validation layer to val.
  4. In Training parameters, set:
    N epochs = 20
    N steps = 27
    Batch size = 128
    Patch XY = 64
  5. Click on the gear button to open the Expert settings and set:
    U-Net kernel size = 3
  6. You can compare the configuration above to the rest of the Expert settings to confirm that the other default values are properly set.
  7. Train!
  8. Note that for the prediction, you will probably need to use tiling.
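
Tiling splits the image into tiles that are predicted one by one and stitched back together, so that each tile fits into GPU memory. A minimal sketch with the trained model (tile counts are illustrative):

# img is the noisy image as a numpy array; increase n_tiles on smaller GPUs.
prediction = model.predict(img, axes='YX', n_tiles=(2, 2))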

2D SEM with N2V2

N2V2 requires specific parameters, but napari-n2v takes care of them when you select N2V2 in the Expert settings.

config = N2VConfig(X, 
                   unet_kern_size=3, 
                   train_steps_per_epoch=27, 
                   train_epochs=20, 
                   train_loss='mse', 
                   batch_norm=True, 
                   train_batch_size=128, 
                   n2v_perc_pix=0.198, 
                   n2v_patch_shape=(64, 64), 
                   n2v_manipulator='mean', 
                   n2v_neighborhood_radius=5,
                   blurpool=True,
                   skip_skipone=True,
                   unet_residual=False)
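
For clarity, these are the only settings that differ from the plain N2V configuration of the previous section; in the plugin, checking N2V2 in the Expert settings applies them all at once:

# Differences from the plain N2V config above (handled by the N2V2 checkbox):
n2v2_overrides = dict(
    n2v_manipulator='mean',  # N2V2 pixel replacement strategy
    blurpool=True,           # BlurPool downsampling instead of max-pooling
    skip_skipone=True,       # drop the highest-resolution skip connection
    unet_residual=False,     # no residual output
)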

To reproduce the result using the plugin, follow these steps:

  1. Confirm that your environment is properly set for GPU training by checking that the GPU indicator (top right) in the plugin displays a greenish GPU label.
  2. Choose the SEM data in the sample examples (File / Open sample / napari-n2v / Download data (SEM)). Don’t be scared by the monster face.
  3. Set the validation layer to val.
  4. In Training parameters, set:
    N epochs = 20
    N steps = 27
    Batch size = 128
    Patch XY = 64
  5. Click on the gear button to open the Expert settings and set:
    U-Net kernel size = 3
    N2V2 = checked
  6. You can compare the configuration above to the rest of the Expert settings to confirm that the other default values are properly set.
  7. Train!
  8. Note that for the prediction, you will probably need to use tiling.

2D BSD68

This example, with the settings proposed here, takes a long time to train.

The example notebook gathers all the parameters used for training and for reproducing the results in a single N2VConfig call:

config = N2VConfig(X, 
                   unet_kern_size=3, 
                   train_steps_per_epoch=400, 
                   train_epochs=200, 
                   train_loss='mse', 
                   batch_norm=True, 
                   train_batch_size=128, 
                   n2v_perc_pix=0.198, 
                   n2v_patch_shape=(64, 64), 
                   unet_n_first=96,
                   unet_residual=True,
                   n2v_manipulator='uniform_withCP', 
                   n2v_neighborhood_radius=2,
                   single_net_per_channel=False)

The resulting configuration is:

{'means': ['110.72957232412905'], 
 'stds': ['63.656060106500874'],
 'n_dim': 2,
 'axes': 'YXC',
 'n_channel_in': 1,
 'n_channel_out': 1,
 'unet_residual': True, # Expert settings / U-Net residuals
 'unet_n_depth': 2,   # Expert settings / U-Net depth
 'unet_kern_size': 3, # Expert settings / U-Net kernel size
 'unet_n_first': 96,  # Expert settings / U-Net n filters
 'unet_last_activation': 'linear',
 'unet_input_shape': (None, None, 1),
 'train_loss': 'mse', # Expert settings / Train loss
 'train_epochs': 200,  # N epochs
 'train_steps_per_epoch': 400, # N steps
 'train_learning_rate': 0.0004, # Expert settings / Learning rate
 'train_batch_size': 128, # Batch size 
 'train_tensorboard': True,
 'train_checkpoint': 'weights_best.h5',
 'train_reduce_lr': {'factor': 0.5, 'patience': 10},
 'batch_norm': True,
 'n2v_perc_pix': 0.198, # Expert settings / N2V pixel %
 'n2v_patch_shape': (64, 64), # Patch XY (and Patch Z)
 'n2v_manipulator': 'uniform_withCP', # Expert settings / N2V manipulator
 'n2v_neighborhood_radius': 2,  # Expert settings / N2V radius
 'single_net_per_channel': False, # Expert settings / Split channels
 'structN2Vmask': None, # Expert settings / structN2V
 'probabilistic': False}

Here, we have commented some lines with the equivalent parameters in the napari plugin. Parameters that were not explicitly set in the N2VConfig call take their default values and usually do not need to be changed in the napari plugin either.
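
As a sanity check, the dictionary above is essentially what you obtain by inspecting the configuration object in the notebook:

# Print the configuration as a dictionary to compare with the Expert settings.
print(vars(config))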

To reproduce the result using the plugin, follow these steps:

  1. In napari, go to File / Open sample / napari-n2v / Download data (2D). Once the download completes, the BSD68 data set is automatically added to napari.
  2. Confirm that your environment is properly set for GPU training by checking that the GPU indicator (top right) in the plugin displays a greenish GPU label.
  3. Select the validation layer in Val.
  4. In Training parameters, set:
    N epochs = 200
    N steps = 400
    Batch size = 128
    Patch XY = 64
  5. Click on the gear button to open the Expert settings and set:
    U-Net kernel size = 3
    U-Net residuals = True (check)
    Split channels = False (uncheck)
    N2V radius = 2
  6. You can compare the configuration above to the rest of the Expert settings to confirm that the other default values are properly set.
  7. Train!

If your GPU is too small for these training parameters (loading the batches into GPU memory causes out-of-memory errors), decrease the Batch size parameter.

2D RGB example

The RGB notebook example can be found here.

config = N2VConfig(X, 
                   unet_kern_size=3, 
                   unet_n_first=64, 
                   unet_n_depth=3, 
                   train_steps_per_epoch=39, 
                   train_epochs=25, 
                   train_loss='mse', 
                   batch_norm=True, 
                   train_batch_size=128, 
                   n2v_perc_pix=0.198, 
                   n2v_patch_shape=(64, 64), 
                   n2v_manipulator='uniform_withCP', 
                   n2v_neighborhood_radius=5, 
                   single_net_per_channel=False)

To reproduce the result using the plugin, follow these steps:

  1. In napari, go to File / Open sample / napari-n2v / Download data (RGB).
  2. Confirm that your environment is properly set for GPU training by checking that the GPU indicator (top right) in the plugin displays a greenish GPU label.
  3. Make sure to enter YXC in Axes.
  4. In Training parameters, set:
    N epochs = 25
    N steps = 39
    Batch size = 128
    Patch XY = 64
  5. Click on the gear button to open the Expert settings and set:
    U-Net depth = 3
    U-Net kernel size = 3
    U-Net n filters = 64
    Split channels = False (uncheck)
  6. You can compare the configuration above to the rest of the Expert settings to confirm that the other default values are properly set.
  7. Train!
  8. Note that for the prediction, you will probably need to use tiling.
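
For an RGB image with axes YXC, the channel axis must not be tiled. A minimal sketch (tile counts are illustrative):

# rgb_img is the noisy RGB image; the channel axis gets a single tile.
prediction = model.predict(rgb_img, axes='YXC', n_tiles=(2, 2, 1))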

2D structN2V Convallaria

The example notebook can be found here.

config = N2VConfig(X, 
                   unet_kern_size=3, 
                   train_steps_per_epoch=500, 
                   train_epochs=10, 
                   train_loss='mse', 
                   batch_norm=True, 
                   train_batch_size=128, 
                   n2v_perc_pix=0.198, 
                   n2v_patch_shape=(64, 64), 
                   n2v_manipulator='uniform_withCP', 
                   n2v_neighborhood_radius=5, 
                   structN2Vmask=[[0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]])

To reproduce the result using the plugin, follow these steps:

  1. Download the data and load it into napari.
  2. Confirm that your environment is properly set for GPU training by checking that the GPU indicator (top right) in the plugin displays a greenish GPU label.
  3. In Training parameters, set:
    N epochs = 10
    N steps = 500
    Batch size = 128
    Patch XY = 64
  4. Click on the gear button to open the Expert settings and set:
    U-Net kernel size = 3
    structN2Vmask = 0,1,1,1,1,1,1,1,1,1,0
  5. You can compare the configuration above to the rest of the Expert settings to confirm that the other default values are properly set.
  6. Train!
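
The mask above is a single-row (horizontal) mask, suited to horizontally structured noise; for noise structured along the other axis, transpose it. A minimal sketch:

import numpy as np

# Horizontal structN2V mask, as used above.
h_mask = [[0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]]

# For vertically structured noise, transpose the mask (illustrative).
v_mask = np.array(h_mask).T.tolist()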

3D example

The example notebook can be found here.

config = N2VConfig(X, 
                   unet_kern_size=3, 
                   train_steps_per_epoch=4,
                   train_epochs=20, 
                   train_loss='mse', 
                   batch_norm=True, 
                   train_batch_size=4, 
                   n2v_perc_pix=0.198, 
                   n2v_patch_shape=(32, 64, 64), 
                   n2v_manipulator='uniform_withCP', 
                   n2v_neighborhood_radius=5)

To reproduce the result using the plugin, follow these steps:

  1. In napari, go to File / Open sample / napari-n2v / Download data (3D).
  2. Confirm that your environment is properly set for GPU training by checking that the GPU indicator (top right) in the plugin displays a greenish GPU label.
  3. Check Enable 3D.
  4. In Training parameters, set:
    N epochs = 20
    N steps = 4
    Batch size = 4
    Patch XY = 64
    Patch Z = 32
  5. Click on the gear button to open the Expert settings and set:
    U-Net depth = 2
    U-Net kernel size = 3
  6. You can compare the configuration above to the rest of the Expert settings to confirm that the other default values are properly set.
  7. Train!
  8. Note that for the prediction, you will probably need to use tiling.
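
In 3D, tiling is specified per axis. A minimal sketch (tile counts are illustrative):

# vol is the noisy 3D stack; tile along Y and X, and along Z if needed.
prediction = model.predict(vol, axes='ZYX', n_tiles=(1, 2, 2))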