Final Submission Eligibility

1. Validation Environment

HPC Hardware Configuration.

Data Arrangement

2. Build your model and your running scripts

3. Create Dockerfile

4. Build Docker image

5. Locally test Docker container

6. Upload your Docker image

7. Additional Notes


Final Submission Eligibility

Starting August 14 at 23:59 (UTC+8:00), we will be accepting Docker submissions. To be eligible, you need to have submitted your short paper before the workshop paper deadline; if you have not submitted a short paper by that deadline, your Docker submission will be invalid. Your final Docker submission is due September 5 at 23:59 (UTC-12:00). We will not inform participants of their scores before the STACOM workshop unless there is a programming problem; that is, participants have only one chance in the test phase, and the final scores will be announced at the STACOM workshop at MICCAI 2022. The validation system for the validation phase will remain open until September 5 at 23:59 (UTC-12:00).


1. Validation Environment

1.1 HPC Hardware Configuration.

The validation of the unseen test set will be performed on a cloud server with a configuration as follows:

  • CPU: 10 cores @ 2.50GHz;
  • RAM: 32 GB;
  • GPU: NVIDIA TESLA V100 (32 GB VRAM, Volta Architecture, single GPU);
  • GPU DRIVER: NVIDIA-DRIVER Version 510.47.03;
  • Network: without network access;
  • Validation Cases: 25 Cases (Up to 100 images, 1-4 images per case);
  • Time limit: 4 hours per team per task;

This means that each team must submit one Docker image per task (if you participate in only one task, submit only the corresponding Docker image).

For each task, teams must complete the entire inference process, including preprocessing, inference, and postprocessing, within 4 hours on the hardware described above.

Pay particular attention to your use of multiple threads and of GPU memory, both of which are limited by the configuration above.

Finally, to prevent cross-cheating, we will disable the internet connection at run time.

Note: apart from the GPU driver, it is the participant's responsibility to provide the development environment and all other prerequisites inside the Docker image, such as Conda, PyTorch, CUDA, cuDNN, and other tools.

1.2 Data Arrangement

  1. Input files
    • The folder containing 25 test cases (up to 100 images, 1-4 images per case) will be mounted in your /input path;
    • Each image is saved as a gzipped NIfTI file named ID.nii.gz;
    • Participants' scripts need to enumerate the file IDs in /input themselves;
    • The following directory structure is for reference:
/input
├── P022-1-ED.nii.gz
├── P022-1-ES.nii.gz
├── P022-2-ED.nii.gz
├── P022-2-ES.nii.gz
  .
  .
  .
├── P030-3-ES.nii.gz
└── P030-4-ED.nii.gz
  2. Output files for task 1
    • The output files should be written to /output;
    • The output of task 1 is a single CSV file named output.csv;
    • You need to create a CSV file in the same format as the validation phase (a hedged sketch follows this list).
  3. Output files for task 2
    • The output files should be written to /output;
    • The individual segmentations should be saved as gzipped NIfTI files named ID.nii.gz (the file name of each prediction must be identical to its source file name, with .nii.gz as the extension);
    • Please ensure that every prediction file has the same shape as its source image (the output is an integer array with the same shape as the input image);
    • The following directory structure is for reference:
/output
├── P022-1-ED.nii.gz
├── P022-1-ES.nii.gz
├── P022-2-ED.nii.gz
├── P022-2-ES.nii.gz
  .
  .
  .
├── P030-3-ES.nii.gz
└── P030-4-ED.nii.gz
  4. Local environment simulation. For each task, participants should submit a dedicated Docker image; in other words, one Docker image cannot be used to run inference for both tasks at the same time. You can simulate the validation environment locally with the following command. Please ensure that the predictions are saved in the output path after running:
docker run \
--rm \
--network=none \
--runtime="nvidia" \
-v /path/to/input:/input:ro \
-v /path/to/output:/output:rw \
<docker image>
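
For task 1, the exact CSV layout must match your validation-phase submission; we cannot reproduce it here, so the following Python sketch uses hypothetical column names ('Image', 'Label') and a placeholder predict_label function, both of which you should replace with your own:

import csv
import os

def predict_label(image_id):
    return 0  # placeholder: your task 1 model's prediction for this image

input_dir = '/input'
with open('/output/output.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Image', 'Label'])  # hypothetical header; copy your validation-phase one
    for name in sorted(os.listdir(input_dir)):
        if name.endswith('.nii.gz'):
            image_id = name[: -len('.nii.gz')]
            writer.writerow([image_id, predict_label(image_id)])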

2. Build your model and your running scripts

This section describes how to build your model and running scripts, which must take an input and an output directory as parameters.

2.1 Input files

  • All input files will be mounted in a directory called /input at the root of the container's file system.
  • In this input folder, up to 100 files (25 cases, 1-4 files per case) will be available, for example:
/input
├── P022-1-ED.nii.gz
├── P022-1-ES.nii.gz
├── P022-2-ED.nii.gz
├── P022-2-ES.nii.gz
  .
  .
  .
├── P030-3-ES.nii.gz
└── P030-4-ED.nii.gz
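
Since no ID list file is provided, your script should discover the cases itself. A minimal sketch:

import os

input_dir = '/input'
# collect IDs such as 'P022-1-ED' by stripping the '.nii.gz' suffix
ids = sorted(name[: -len('.nii.gz')] for name in os.listdir(input_dir)
             if name.endswith('.nii.gz'))
print(ids)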

2.2 Output files

  • All output files should be written into a directory called /output at the root of the container's file system.
  • Your model should write each segmentation file to the /output folder, named ID.nii.gz, where ID is the case ID.
  • The individual segmentations should be saved as gzipped NIfTI files; the file name of each prediction must be identical to its source file name, with .nii.gz as the extension.
  • Please ensure that every prediction file has the same shape as its source image (the output is an integer array with the same shape as the input image).
  • The following directory structure is for reference:
/output
├── P022-1-ED.nii.gz
├── P022-1-ES.nii.gz
├── P022-2-ED.nii.gz
├── P022-2-ES.nii.gz
  .
  .
  .
├── P030-3-ES.nii.gz
└── P030-4-ED.nii.gz
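
As a minimal sketch of the shape and naming requirements above (assuming nibabel is installed in your image, and prediction is your integer label array), one way to save a prediction that keeps the source geometry and file name:

import os

import nibabel as nib
import numpy as np

def save_prediction(prediction, source_path, output_dir='/output'):
    source = nib.load(source_path)
    # the prediction must have the same shape as the input image
    assert prediction.shape == source.shape, 'shape mismatch'
    out = nib.Nifti1Image(prediction.astype(np.uint8), source.affine, source.header)
    out.set_data_dtype(np.uint8)  # integer labels
    # keep the source file name, e.g. P022-1-ED.nii.gz
    nib.save(out, os.path.join(output_dir, os.path.basename(source_path)))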

2.3 Running Example

Here is an example of what an inference script might look like, utilizing the conventions above:

python inference.py --input /input --output /output

inference.py (the following code is only an example; you can customize it for your own model):

import os
import argparse

def main():
    """
    The main function of your running scripts. 
    """
    # default data folder
    parser = argparse.ArgumentParser()
    parser.add_argument('--input', type=str, nargs='?', default='/input', help='input directory')
    parser.add_argument('--output', type=str, nargs='?', default='/output', help='output directory')
    args = parser.parse_args()

    ## the functions below are placeholders for your own code; a sketch follows this script

    ## Read in your trained model
    trained_model_weights_dir = './model_weights'
    model = load_model_weights(trained_model_weights_dir)

    ## Make your prediction segmentation files
    segmentation_outputs = inference(model, args.input)

    ## Write your prediction to the output folder
    write_outputs(segmentation_outputs, args.output)

if __name__ == "__main__":
    main()
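
The helper functions above are placeholders. As a non-authoritative sketch of what they might look like, assuming a PyTorch model (MyNetwork and the checkpoint name best.pth are placeholders for your own):

import glob
import os

import nibabel as nib
import numpy as np
import torch

def load_model_weights(weights_dir):
    model = MyNetwork()  # placeholder: your own architecture
    state = torch.load(os.path.join(weights_dir, 'best.pth'), map_location='cpu')
    model.load_state_dict(state)
    return model.cuda().eval()

def inference(model, input_dir):
    outputs = {}
    with torch.no_grad():
        for path in sorted(glob.glob(os.path.join(input_dir, '*.nii.gz'))):
            volume = np.asarray(nib.load(path).dataobj, dtype=np.float32)
            # placeholder: your preprocessing, forward pass, and postprocessing
            outputs[path] = np.zeros(volume.shape, dtype=np.uint8)
    return outputs

write_outputs can then loop over the returned dictionary and reuse the save_prediction sketch from Section 2.2.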

3. Create Dockerfile

This section describes how to write your Dockerfile. If you are familiar with Docker or have your own habits, you can skip to Section 4 and build the Docker image yourself.

The Dockerfile describes the dependencies required to execute the Docker image. These dependencies are encapsulated within the Docker image when it is built. As such, the Docker image is a self-contained execution environment that will allow the Challenge organizers to run and reproduce your results. This file must be named Dockerfile.

Here is an example Dockerfile using the inference.py script created above:

Dockerfile

## Start from this Docker image
FROM ubuntu

## Set workdir in Docker Container
# set default workdir in your docker container
# In other words your scripts will run from this directory
RUN mkdir /workdir
WORKDIR /workdir

## Copy your files into Docker Container
COPY ./ /workdir
RUN chmod a+x /workdir/inference.py

## Install python in Docker image
RUN apt-get update && apt-get install -y python3 && apt-get install -y python3-pip

## Install requirements
RUN pip3 install -r requirements.txt

## Make Docker container executable
ENTRYPOINT ["/usr/bin/python3", "inference.py"]

The rest of this Wiki will go through each line in the Dockerfile example and explain its purpose.

3.1 FROM (Pull from a base image)

The FROM command establishes what existing Docker image your image starts with.

  • Whenever possible, use a current official Ubuntu repository image as the basis for your image; we recommend Ubuntu.
  • To avoid the tedious environment installation process, we also recommend building from the NVIDIA container image for PyTorch, release 22.02, or from a lightweight CUDA image (skip 3.1-3.4 and continue with Section 3.5).
  • You can refer to this example to organize your workdir (https://github.com/ConnerWK/CMRxMotion-DockerBuildExample)

## Start from this Docker image
FROM ubuntu

3.2 COPY (Transfer local files into Docker image)

All files your scripts depend on, including the scripts themselves and your trained model weights, should be copied into the Docker image; scripts should also be made executable. In this example, we copy the script and model file from the previous section's example into the Docker image (note that COPY source paths are relative to the build context, i.e., the directory from which you run docker build):

## Copy your files into Docker Container
COPY ./ /workdir

as well as giving the script executable permissions:

RUN chmod a+x /workdir/inference.py

3.3 RUN (Install dependencies)

The most common use case for RUN is applying apt-get to install dependencies. This example installs Python 3 and pip, then installs the pip packages listed in requirements.txt:

## Install python in Docker image
RUN apt-get update && apt-get install -y python3 && apt-get install -y python3-pip

## Install requirements
RUN pip3 install -r requirements.txt

3.4 ENTRYPOINT (Make your Docker container executable)

The ENTRYPOINT command specifies what gets executed when your Docker container is run. In this example, we want to run the Python script we copied into /workdir:

## Make Docker container executable
ENTRYPOINT ["/usr/bin/python3", "/workdir/inference.py"]

For more information, visit Best practices for writing Dockerfiles.

3.5 Build from the NVIDIA container image

  • You can create your own Docker image using a third-party Docker image, such as the NVIDIA container image for PyTorch (PyTorch Release Notes :: NVIDIA Deep Learning Frameworks Documentation), to avoid the time-consuming environment installation process for Python, PyTorch, CUDA, and cuDNN.
  • With this base image, you can skip the installation of the basic packages in 3.1-3.4, and the example from 3.1-3.4 changes as follows:
## Start from this Docker image
## for the version tag xx.xx, we recommend a release no later than 22.02
FROM nvcr.io/nvidia/pytorch:xx.xx-py3

## Set workdir in Docker Container
# set default workdir in your docker container
# In other words your scripts will run from this directory
WORKDIR /workdir

## Copy all your files of the current folder into Docker Container
COPY ./ /workdir
RUN chmod a+x /workdir/inference.py

## Install requirements
RUN pip3 install -r requirements.txt

## Make Docker container executable
ENTRYPOINT ["/opt/conda/bin/python", "inference.py"]

You can alternatively use a lightweight CUDA image provided by NVIDIA and install PyTorch or TensorFlow yourself, since the full PyTorch images take up a lot of disk space. The supported CUDA tags can be found here (https://gitlab.com/nvidia/container-images/cuda/blob/master/doc/supported-tags.md). For illustration, we use the tag 11.3.0-base-ubuntu20.04.

## Start from this Docker image

FROM nvidia/cuda:11.3.0-base-ubuntu20.04

## Set workdir in Docker Container
# set default workdir in your docker container
# In other words your scripts will run from this directory
RUN mkdir /workdir
WORKDIR /workdir

## Copy all your files of the current folder into Docker Container
COPY ./ /workdir
RUN chmod a+x /workdir/inference.py


## Install Python and requirements (the lightweight CUDA image ships without Python)
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip3 install -r requirements.txt

## Make Docker container executable
ENTRYPOINT ["/usr/bin/python3", "inference.py"]

4. Build Docker image

This section describes how to create your Docker image. You will need:

  • ID of your project at our DockerHub
  • Dockerfile

4.1 Set up your working directory

  • Move your Dockerfile and all files you are copying into your Docker image into the same directory.
  • Make the above directory your current working directory.

4.2 Build your Docker image

The basic syntax for creating a Docker image repository within your project is:

docker build -t docker.miccai.cloud/<Your project ID>/<Task name>:<Tag> <Dockerfile path>

where:

  • <Your project ID>: A project ID at our DockerHub
  • <Task name>: The repository name needs to be unique in that namespace; it can be two to 255 characters and may only contain lowercase letters, numbers, - and _.
  • <Tag>: Optional. If no tag is specified, a latest tag is added to your image. Tagging your image is very helpful because it allows you to build different versions of your Docker image. We will use the docker image tagged latest as the final validation docker.
  • <Dockerfile path>: Should be . since the Dockerfile should be in your current working directory.

In our example, we will use task1 as the repository name. The Docker image may be built with a tag or without one, e.g.

# With tagging:
$ docker build -t  docker.miccai.cloud/teamname/task1:latest .

# Without tagging:
$ docker build -t  docker.miccai.cloud/teamname/task1 .

4.3 Build your Docker image and tag it later

For local debugging, you can also choose to build the docker image locally first, and then tag the image later.

You can start by naming the image with a name of your choice (e.g. my_model) when you build it with Dockerfile in your current working directory.

docker build -t my_model .

Then you can use this docker name for section 5.

You can re-tag the docker before you submit.

docker tag my_model docker.miccai.cloud/teamname/task1

5. Locally test Docker container

This section describes how to run your Docker container locally to test your model.

5.1 Run your container for testing

After you build your Docker image in step 4, you can run your container locally to check that your model will correctly run as a Docker container.

CPU:

docker run -it --rm -v "/your/input/folder/":"/input" -v "/your/output/folder/":"/output" your_docker_image_name

GPU:

docker run -it --rm --gpus device=0 -v "/your/input/folder/":"/input" -v "/your/output/folder/":"/output" your_docker_image_name

Check the output folder to make sure your container properly outputs the segmentation files.
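
To automate this check for task 2 (a small sketch, assuming nibabel is installed on your workstation), you can compare the mounted folders:

import os

import nibabel as nib

input_dir, output_dir = '/your/input/folder', '/your/output/folder'
for name in sorted(os.listdir(input_dir)):
    if not name.endswith('.nii.gz'):
        continue
    out_path = os.path.join(output_dir, name)
    assert os.path.exists(out_path), 'missing prediction: ' + name
    in_shape = nib.load(os.path.join(input_dir, name)).shape
    out_shape = nib.load(out_path).shape
    assert in_shape == out_shape, name + ': shape mismatch'
print('all predictions present with matching shapes')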


6. Upload your Docker image

This section describes how to push your built Docker image from your local workstation to our DockerHub at docker.miccai.cloud.

6.1 Login to CMRxMotion Docker Registry

Enter the following command, then answer its prompts (the username and password were sent by email):

docker login docker.miccai.cloud

6.2 Login trouble-shooting

If your CLI reports the following error:

Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/auth": dial unix /var/run/docker.sock: connect: permission denied

just type the following command in your CLI.

sudo chmod 666 /var/run/docker.sock

More solutions about this issue: https://newbedev.com/got-permission-denied-while-trying-to-connect-to-the-docker-daemon-socket-at-unix-var-run-docker-sock-post-http-2fvar-2frun-2fdocker-sock-v1-24-auth-dial-unix-var-run-docker-sock-connect-permission-denied-code-example

6.3 View your built images (optional)

Before pushing your image, you can first check what you have built so far. The output below shows the result of building your model without tagging; notice that the TAG is latest. Several previously built images are listed as well:

$ docker images
REPOSITORY                                       TAG                 IMAGE ID            CREATED             SIZE
docker.miccai.cloud/<Your project ID>/task2     latest              e5993fdf4a41        8 minutes ago       846 MB
docker.miccai.cloud/<Your project ID>/task1     latest              e6800fcac281       18 minutes ago       706 MB
ubuntu                                          latest              14f60031763d        6 days ago          120 MB

6.4 Push your Docker image

You may now push your Docker image into our Docker Hub, using the command:

docker push docker.miccai.cloud/<Your project ID>/<Repo name>:<Tag>

docker push docker.miccai.cloud/<Your project ID>/<taskname>

Notice that a TAG is not included in the command above; recall that latest is the default tag. If you want to push a Docker image with a specific tag, simply include the tag in the push command, e.g. for the project the_most_valuable_team:

docker push docker.miccai.cloud/the_most_valuable_team/task1:<your_tag>

# or

docker push docker.miccai.cloud/the_most_valuable_team/task2:<your_tag>

Notice: As project IDs do not allow capital letters or spaces, we have made some adjustments to the team names; please check the email we sent for your <Your project ID>.

6.5 Verify the Docker image was successfully pushed (optional)

If the Docker image was successfully pushed, it should show up in the Docker tab of your Docker project page. You can navigate there by first going to your project page in CMRxMotion:

https://docker.miccai.cloud

Log in using the username and password we provided. (The initial password was sent to the email address you registered at cmr.miccai.cloud; team members share the same account.)

The Docker image (e.g., docker.miccai.cloud/the_most_valuable_team/task1) should be listed.

We have occasionally experienced problems using older versions of Docker, where the previous push step appears to complete successfully, yet the image does not appear in the project’s Docker tab. If you experience a similar issue, consider updating your version of Docker.


7. Additional notes

  1. The Docker image in your project at docker.miccai.cloud tagged latest will be taken as the final Docker image for test-phase validation.
  2. Your Docker image must contain your running scripts, all required software and dependencies, and your trained model weights.
  3. The peak bandwidth of our server is over 100 Mbps. If uploading is too slow, you can store your Docker image on Google Drive (or elsewhere) and send a shareable link to cmrxmotion@163.com; however, we recommend uploading your Docker image via docker push.