How to submit¶
From June 1, 2025: To let you familiarize yourself with the submission system and with packaging your algorithm in a Docker container, we have set up preliminary phases for task 1 - MRI and task 2 - CBCT, where you are allowed to perform 2 submissions/day. A video has been prepared to guide you through the submission, along with code [follows] to help you.
For the validation phases of task 1 and task 2 (3 submissions/week), you will be asked to upload a .zip file containing the predicted sCTs.
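As an illustration, the upload could be assembled as follows. This is a minimal sketch only: the ".mha" extension and flat folder layout are assumptions, so always follow the exact naming and format stated in the submission area of the phase you enter.

```python
# Hypothetical sketch: packaging predicted sCTs for a validation-phase upload.
import zipfile
from pathlib import Path

def package_predictions(pred_dir: str, out_zip: str) -> int:
    """Zip all prediction files in pred_dir; returns the number of files."""
    files = sorted(Path(pred_dir).glob("*.mha"))  # assumed file extension
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in files:
            # Store files flat (no parent directories) in the archive.
            zf.write(f, arcname=f.name)
    return len(files)
```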
For the test phases (1+1 submission only), you may upload your algorithm, packaged as a Docker container, from July 16 for task 1 and task 2. N.B.: only one final submission is allowed in this phase. We wish to receive only your best algorithm, so make sure you used the preliminary phase to double-check that your algorithm works properly!
Once the test phase is complete, submit the following form for task 1 or for task 2, reporting the details of your algorithm. The form will be used to analyze and compare the submitted methods. Please save the form locally for your records.
The post-challenge phases, which are equivalent to the test phases, will be open from 1/09/2025 until 1/03/2030, allowing 2 submissions/60 days: task 1 and task 2.
For each phase, instructions are provided in the related submission area.
Algorithm description checklist¶
For the test phase, submit a short paper (6-11 pages) in LNCS format, as a PDF, reporting the details of your methods.
This checklist outlines the key elements expected for a comprehensive description of an algorithm submitted to the SynthRAD2025 Grand Challenge. It is adapted from Morgan et al. (2020) and inspired by the abstract style from section 3 of https://arxiv.org/abs/2403.08447.
Organizers reserve the right to exclude submissions lacking any of these reporting elements.
1. Title
- Clearly identify the submission as an AI methodology study, specifying the category of approach used (e.g., deep learning or traditional machine learning) and/or the architecture.
2. Abstract (max 250 words for single-task, 400 words for two-task algorithms)
The provided abstract will be used directly by the organizers as part of the challenge report. By submitting your method, you allow the organizers to use your description in future publications.
- Some example abstracts are provided in section 3 of https://arxiv.org/abs/2403.08447; please use these as inspiration and adapt them to your case.
- Briefly describe the task(s) addressed, e.g., task 1, task 2.
- Briefly outline the methodology, including:
- Spatial configuration: 2D, 2.5D, or 3D, according to the definitions provided in Spadea & Maspero et al. 2021
- Model architecture & configuration: U-Net, GAN, paired/unpaired/self-supervised, etc.;
- Whether the same model was used for all the different anatomical locations;
- Key techniques: Highlight any specific techniques employed within the model, e.g., channel and spatial-wise attention, residual dilated Swin transformer;
- Loss function(s)
- Optimizer and learning rate (including scheduling if applicable)
- Data preprocessing steps (both in training and in testing, if performed) and augmentation
- Image input size
- Post-processing
- Best model strategy: Briefly explain how the final model was selected based on validation performance (e.g., best validation MAE)
- Mention any key results achieved (optional).
3. Introduction
- Provide scientific and clinical background motivating the chosen methodology design.
Methods
- Elaborate on all details mentioned in the abstract (point 2 of this checklist), including the following points.
4. Data
- Specify the data subset used for hyperparameter optimization.
- Consider including an optional flow diagram illustrating data processing steps.
- If you used any additional public data (outside the SynthRAD2025 dataset) for training, please provide detailed information on this.
5. Model
- Provide a detailed description of the algorithm/model, including architecture, layers, and connections.
- State the total number of parameters.
- List the software libraries, frameworks, and packages used.
- Explain the initialization of model parameters (e.g., randomization, transfer learning).
- Clearly indicate if you employed or fine-tuned a pre-trained model and include a link to the corresponding repository.
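As a sanity check when reporting the parameter count, the total can be derived from the weight shapes. The helper below is a framework-agnostic, illustrative sketch (the layer names are made up); with PyTorch, for instance, the equivalent is `sum(p.numel() for p in model.parameters())`.

```python
# Illustrative helper: total parameter count from a {layer: shape} mapping.
from math import prod

def count_parameters(shapes: dict[str, tuple[int, ...]]) -> int:
    """Sum the number of elements over all weight tensors."""
    return sum(prod(shape) for shape in shapes.values())

# e.g., a single 3x3 conv with 1 input and 64 output channels, plus bias:
n_params = count_parameters({"conv1.weight": (64, 1, 3, 3), "conv1.bias": (64,)})
```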
6. Training
- Detail the training approach, including specific data augmentation techniques employed.
- Specify the hyperparameters used and their optimization methods.
- Describe the criteria for selecting the final model.
- If applicable, explain any ensembling techniques used.
7. Evaluation
- List the metrics used to assess model performance.
- Describe the statistical measures employed for significance and uncertainty (e.g., confidence intervals).
- Explain any methods used for model explainability or interpretability.
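To illustrate the kind of metric and uncertainty reporting expected, the sketch below computes a patient-level MAE and a percentile-bootstrap confidence interval using only the standard library. This is one common choice, not necessarily the challenge's official evaluation code.

```python
# Hypothetical sketch: MAE per patient plus a bootstrap CI over patients.
import random
from statistics import mean

def mae(sct, ct):
    """Mean absolute error between two equally sized voxel sequences (HU)."""
    return mean(abs(a - b) for a, b in zip(sct, ct))

def bootstrap_ci(scores, n_resamples=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI of the mean patient-level score."""
    rng = random.Random(seed)
    means = sorted(
        mean(rng.choices(scores, k=len(scores))) for _ in range(n_resamples)
    )
    low = means[int(alpha / 2 * n_resamples)]
    high = means[int((1 - alpha / 2) * n_resamples) - 1]
    return low, high
```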
8. Results
Model Performance
- Report performance metrics for the optimized model(s) on specified dataset partitions (if used) and the validation set.
- Analyze any sCT scans with poor performance, including identification of potential hallucinations.
9. Discussion
- Discuss the limitations of the study, including potential bias and generalizability concerns.
10. Author contributions
- For transparency, we require corresponding authors to provide co-author contributions to the manuscript using the relevant CRediT roles. The CRediT taxonomy includes 14 different roles describing each contributor’s specific contribution to the scholarly output.
11. Other information
- Acknowledge any sources of funding and collaborators.