Submission Instructions

Please follow these instructions to create a submission after you have developed your model/algorithm. To get started with development, please read the Getting started guide instead.

To create a submission, you need to fulfill the following requirements:

  1. Be registered on grand-challenge.org.
  2. Have joined the TrackRAD2025 challenge. You can request to do so here.

Once you have joined the challenge, you can create up to 10 submissions during the preliminary testing phase and up to 2 submissions during the final testing phase.

Next, follow the submission instructions below. For technical questions, please consult the relevant grand-challenge.org documentation.

Create an algorithm on Grand Challenge

If you have already created an algorithm for a prior phase or a different model, you do not need to create a new one. Instead, the linked page will guide you to managing the container images associated with your existing algorithm, where you can link a new repository or upload a new container image. If you have already linked a GitHub repository to your algorithm, creating a new tag (i.e., a release) will trigger a new build, which will appear on the manage-containers page.

To create a new algorithm, go to the following page:

https://trackrad2025.grand-challenge.org/evaluation/preliminary-testing/algorithms/create/

For the title, we recommend choosing a meaningful name; you will be able to provide the final official name for your algorithm at a later stage.

For the GPU, please select "NVIDIA A10G Tensor Core GPU". For the memory, please select 32 GB.

Next, you can either upload a Docker image or link a GitHub repository to the algorithm; we strongly recommend linking a GitHub repository. If you followed our getting started guide, you should be able to simply copy your algorithm folder into a new repository and everything should work. Please let us know if you encounter any difficulties.
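
For orientation, the sketch below shows what a container entry point might look like, following the general grand-challenge convention of reading inputs from /input and writing outputs to /output. The file names, glob pattern, and output path here are assumptions for illustration; the authoritative interface is the one defined in the getting started template.

    # Minimal sketch of an algorithm entry point, assuming the general
    # grand-challenge convention of /input and /output directories.
    # File names and glob patterns are hypothetical; use the interface
    # defined by the TrackRAD2025 template.
    from pathlib import Path

    import SimpleITK as sitk  # commonly used for .mha image I/O

    INPUT_DIR = Path("/input")
    OUTPUT_DIR = Path("/output")

    def run() -> None:
        # Locate the input image(s); the actual pattern depends on the
        # challenge's interface definition.
        input_files = sorted(INPUT_DIR.rglob("*.mha"))
        image = sitk.ReadImage(str(input_files[0]))

        # Placeholder output: an empty mask with the same geometry.
        # Replace this with your tracking algorithm.
        mask = sitk.Image(image.GetSize(), sitk.sitkUInt8)
        mask.CopyInformation(image)

        OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
        sitk.WriteImage(mask, str(OUTPUT_DIR / "output.mha"))

    if __name__ == "__main__":
        run()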

Once this step is done, Grand Challenge will start building a Docker image from the repository (or from the uploaded container image). You can follow the progress on the manage-containers page. Please wait until the container is marked Active, which might take a while. If the build process fails, you can view the logs by clicking the [i] button. Note that uploading and building algorithm containers does not count towards your submission limit. If you have trouble packaging your algorithm, please contact the organizers for help.

Create a submission

Once your algorithm is marked Active, go to the submission page and select the phase you want to submit to at the top. Before submitting to the final testing phase, make sure your algorithm works both locally on your device and by submitting it to the preliminary testing phase.

The only required field is the algorithm field, where you select the algorithm you created. Optionally, you can add a comment to your submission.

Check the results

After you click the "Save" button on the submission page, a new evaluation is created on the Grand Challenge platform.

For the preliminary testing phase, logs are enabled: you can see your algorithm's output and scores, as well as any errors. If your code produces errors in the preliminary testing phase, we recommend testing it locally as explained in the getting started guide. You can also contact the organisers.

For the final testing phase, logs are disabled to prevent leakage of the test dataset through the logs. You will thus not receive any feedback other than your scores and whether any errors occurred. If your code produces errors in the final testing phase, we recommend submitting it to the preliminary testing phase again. You can also contact the organisers for any help regarding the grand-challenge platform.

Once the evaluation is complete and scored, you can submit the results to the leaderboard.

Additional requirements

To be eligible for prizes, you must also submit a form with information on your method. Please fill in this form as soon as possible after submitting your algorithm, and in any case by 01/09/2025. The answers will not be used for ranking, but for the challenge report and other evaluation purposes regarding the challenge itself.

Link to the submission reporting form

You must also submit a paper reporting the details of your methods, following the checklist provided below. Other formats containing the same information may also be accepted.

The deadline for the submission of the form and the description is 01/09/2025.

Algorithm description checklist

For the test phase, report the details of your methods in a short paper (6-11 pages) in LNCS format, submitted as a PDF.

This checklist outlines the key elements expected for a comprehensive description of an algorithm submitted to the TrackRAD2025 Grand Challenge. It is adapted from Morgan et al. (2020) and inspired by the abstract style from section 3 of https://arxiv.org/abs/2403.08447.

Organizers reserve the right to exclude submissions lacking any of these reporting elements.

1. Title

  • Clearly identify the submission methodology, specifying the category of approach used (classical deformable image registration or template matching, deep learning, or traditional machine learning) and/or the specific architecture.

2. Abstract (max. 250 words)

The provided abstract will be used directly by the organizers as part of the challenge report. By submitting your method, you allow the organizers to use your description in future publications.

  • Some examples of abstracts are provided in section 3 of https://arxiv.org/abs/2403.08447; please use these as inspiration and adapt them for your case.
  • Briefly outline the methodology, including:
    • Method architecture and configuration
    • Whether the same method was used for all anatomical locations and field strengths
    • Key techniques: highlight any specific techniques employed within the method
    • Loss function(s)
    • Optimizer and learning rate (including scheduling, if applicable)
    • Data preprocessing steps (in both training and testing, if performed) and augmentation
    • Image input size
    • Post-processing
    • Best model strategy: briefly explain how the final model was selected based on validation performance
  • Mention any key results achieved (optional).

3. Introduction

  • Provide scientific and clinical background motivating the chosen methodology design.

4. Methods

  • Elaborate on all details mentioned in the abstract (point 2 of this checklist), following the structure of sections 4.1 to 4.4.

4.1 Data

  • Specify the data subset used for hyperparameter optimization.
  • Consider including an optional flow diagram illustrating data processing steps.
  • If you used any additional public data (outside the TrackRAD2025 dataset) for training, please provide detailed information on this.
  • Please explain how you used the unlabeled and labeled data.

4.2 Model

  • Provide a detailed description of the algorithm/model, including architecture, layers, and connections.
  • Report the total number of parameters (a counting sketch follows this list).
  • List the software libraries, frameworks, and packages used.
  • Explain the initialization of model parameters.
  • Clearly indicate if you employed or fine-tuned a pre-trained model and include a link to the corresponding repository.
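
As one way to obtain the parameter count, a minimal sketch assuming a PyTorch model (adapt for your framework of choice):

    import torch

    def count_parameters(model: torch.nn.Module) -> int:
        # Sum the element counts of all parameter tensors.
        return sum(p.numel() for p in model.parameters())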

4.3 Training

  • Detail the training approach, including specific data augmentation techniques employed.
  • Specify the hyperparameters used and their optimization methods.
  • Describe the criteria for selecting the final model.
  • If applicable, explain any ensembling techniques used (a small sketch follows this list).
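
As an illustration of one simple ensembling strategy, a minimal sketch of per-pixel majority voting over binary masks, assuming numpy arrays of identical shape (the function name is ours, not part of the challenge code):

    import numpy as np

    def majority_vote(masks):
        # masks: list of binary arrays of identical shape, one per model.
        stacked = np.stack(masks).astype(float)
        # A pixel is foreground if at least half of the models agree.
        return (stacked.mean(axis=0) >= 0.5).astype(np.uint8)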

4.4 Evaluation

  • List the metrics used to assess model performance, including runtime. Please try to make use of the evaluation metric code provided with the challenge.
  • Describe the statistical measures employed for significance and uncertainty (e.g., confidence intervals; see the sketch after this list).
  • Explain any methods used for model explainability or interpretability.
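
For the uncertainty estimates, one common choice is a percentile bootstrap over per-case scores. A minimal sketch, assuming numpy and a sequence of per-case metric values (e.g., Dice scores); for the metrics themselves, prefer the evaluation code provided with the challenge:

    import numpy as np

    def bootstrap_ci(scores, n_resamples=10_000, alpha=0.05, seed=0):
        # Percentile bootstrap confidence interval for the mean of
        # per-case scores.
        rng = np.random.default_rng(seed)
        scores = np.asarray(scores, dtype=float)
        means = np.array([
            rng.choice(scores, size=scores.size, replace=True).mean()
            for _ in range(n_resamples)
        ])
        lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
        return scores.mean(), (lo, hi)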

5. Results

Model Performance

  • Report performance metrics for the optimized model(s) on specified dataset partitions (if used) and the validation set.
  • Analyze any cine-MRI cases with poor performance.
  • Include run-time performance on your hardware (see the timing sketch below).
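
As a starting point for runtime reporting, a minimal wall-clock timing sketch; track_frame is a hypothetical callable that processes one cine-MRI frame:

    import time

    def timed_run(track_frame, frames):
        # Wall-clock timing; report your hardware alongside the numbers.
        start = time.perf_counter()
        results = [track_frame(frame) for frame in frames]
        elapsed = time.perf_counter() - start
        print(f"{elapsed / len(frames) * 1000:.1f} ms per frame")
        return results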

6. Discussion

  • Discuss the limitations of the study, including potential bias and generalizability concerns.

7. Author contributions

  • For transparency, we require corresponding authors to provide co-author contributions to the manuscript using the relevant CRediT roles. The CRediT taxonomy includes 14 different roles describing each contributor’s specific contribution to the scholarly output.

8. Other information

  • Acknowledge any sources of funding and collaborators.

Please contact us as soon as possible if you encounter any problems.