Competition Timeline
- Challenge website open: 1/12/2024
- Start of challenge, release of training cases: 1/03/2025
- Training phase (13+7+4 weeks): 1/03/2025 - 15/08/2025
- Preliminary phase (7+4 weeks, 5 submissions/week): 1/06/2025 - 15/08/2025
- Validation phase (7+4 weeks, 3 submissions/week): 1/06/2025 - 15/08/2025
- Presentation of the challenge at ESTRO25: 05-06/2025
- Test phase (4 weeks, max 2 submissions): 16/07/2025 - 15/08/2025
- Announcements and invitations to present: 10/09/2025
- Presentation of the challenge results: MICCAI 2025 (Daejeon, South Korea, 6-10/10/2025) and ESTRO (April/May 2026)
- Post-challenge phase (4.5 years, 2 submissions/60 days): 1/09/2025 - 1/03/2030
Rules 📰
ENTRY INTO THIS CHALLENGE CONSTITUTES YOUR ACCEPTANCE OF THESE OFFICIAL RULES.

Every participant must sign up for a verified Grand-Challenge account on www.grand-challenge.org and join the challenge to be able to submit.
Methods
Only fully automatic methods are allowed. Methods should be submitted as specified on the submission page.
Inference should run on an AWS g4dn.2xlarge instance with a single GPU (16 GB VRAM), an 8-core CPU, and 32 GB RAM.
The maximum inference time to produce the sCT for a single case (one patient) is 15 minutes.
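To check compliance before submitting, it can help to time a full case and record peak GPU memory locally. Below is a minimal sketch assuming a PyTorch pipeline; `load_model`, `predict_sct`, and the case identifier are hypothetical placeholders for your own code, not part of the challenge API.

```python
import time

import torch

# Hypothetical entry points for your own pipeline (not part of the challenge API).
from my_model import load_model, predict_sct

# Challenge budget on an AWS g4dn.2xlarge: one 16 GB GPU, 15 min per case.
VRAM_BUDGET_GB = 16
TIME_BUDGET_S = 15 * 60

device = torch.device("cuda")
model = load_model().to(device).eval()

torch.cuda.reset_peak_memory_stats(device)
start = time.perf_counter()

with torch.no_grad():
    sct = predict_sct(model, "patient_001")  # hypothetical case identifier

elapsed = time.perf_counter() - start
peak_gb = torch.cuda.max_memory_allocated(device) / 1024**3

print(f"Inference time: {elapsed:.1f} s (budget: {TIME_BUDGET_S} s)")
print(f"Peak GPU memory: {peak_gb:.2f} GB (budget: {VRAM_BUDGET_GB} GB)")
```

Note that local hardware may be faster than the g4dn.2xlarge's NVIDIA T4, so leave headroom on the time budget.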
One account per participant/team
Each participant/team may use only one account to participate in the competition. Participants who use multiple accounts will be disqualified. Each team may consist of at most five participants.
Use of other training data/pre-trained models
The data used to train algorithms is restricted to the data provided by the challenge or to publicly available data, which must be reported in the document describing the submitted method. Pre-trained models may NOT be used in the challenge, to ensure a fair evaluation of the proposed methods.
Code of the submitted algorithm
The top three teams in each task must disclose and openly share their code, weights, or models to allow future re-use of their algorithms. The model may also be shared as a Docker image without sharing the code. All other teams are strongly encouraged to do so, but it is not mandatory. The code or Docker image should be provided within 14 days of the announcement of the winning participants (i.e., by 24-09-2025).
Award eligibility
As a condition for being ranked and considered the challenge winner, or for being eligible for any prize, teams/participants must fulfil the following obligations:
- Present their method in person at the final event of the challenge at MICCAI 2025.
- Submit a paper reporting the details of the methods in a short or long LNCS format, following the checklist provided on the submission page. Organizers reserve the right to exclude submissions lacking any of the elements listed in the checklist.
- Submit the following form reporting the details of the challenge after the test submission has been completed.
- Sign and return all prize acceptance documents as may be required by Competition Sponsor/Organizers.
- Commit to citing the data challenge paper and the data overview paper whenever submitting the developed method for scientific and non-scientific publications.
- The top three teams of each task are obliged to disclose and openly share their code; the other teams are strongly encouraged to do so, but it is not mandatory. The code should be provided within 14 days of the announcement of the winning participants (see above).
Awards
The results and winners will be announced publicly, and the top teams will be invited to present their approach during the final MICCAI event.
Once participants submit via the challenge website, they are considered fully vested in the challenge, and their performance results may become part of presentations, publications, or subsequent analyses derived from the challenge at the discretion of the organizers. In particular, all performance results will be made public.
Depending on the available funding, the organizers reserve the possibility of awarding prizes to the top teams in both tasks.
Prize 🏆
The best three submissions of each task will be awarded cash prizes, for a total of €3.500,- (€1.750,- per task), as follows:
Award Task 1: MRI-to-CT
1. €875,-
2. €550,-
3. €325,-
Award Task 2: CBCT-to-CT
1. €875,-
2. €550,-
3. €325,-
Participation policy for organizers' institutes
Members of the organizers' institutes may participate in the challenge if they are not listed among the organizers, contributors, or data providers and did not co-author any publication (by accepted publication date) with the organizers in the timeframe 2022-09 to 2025-09; otherwise, they are not eligible for prizes. The only exception is made for co-authors of the SynthRAD2023 challenge report who were not organizers: these co-authors (not the former challenge organizers) are allowed to participate in the challenge and may be awarded prizes (see the list at https://doi.org/10.48550/arXiv.2403.08447).
No private sharing outside teams
Privately sharing code or data outside of teams is not permitted.
Data
The dataset is released under a CC-BY(-NC) license in compressed .mha format.
The training input (MRI for Task 1, CBCT for Task 2, plus a mask) and the ground truth (CT for both tasks) will be made available on Zenodo. The validation input will be released when the validation phase opens; the validation ground truth will be made available after the presentation of the challenge results. The test data will be released only once the challenge is closed (expected around 2030, although the date may change depending on funding availability).
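The compressed .mha files can be read with standard medical-imaging libraries. Below is a minimal sketch using SimpleITK; the folder and file names are hypothetical placeholders, as the actual archive layout is documented with the Zenodo release.

```python
import SimpleITK as sitk

# Hypothetical paths; check the Zenodo release for the actual archive layout.
input_img = sitk.ReadImage("case_001/mr.mha")   # MRI for Task 1 (CBCT for Task 2)
mask_img = sitk.ReadImage("case_001/mask.mha")  # accompanying mask
ct_img = sitk.ReadImage("case_001/ct.mha")      # ground-truth CT (training set only)

# Convert to a NumPy array; SimpleITK returns axes in (z, y, x) order.
input_arr = sitk.GetArrayFromImage(input_img)
print(input_arr.shape, input_img.GetSpacing(), input_img.GetOrigin())
```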
Follow-up publication
The SynthRAD2025 organizers will consolidate the results and submit a challenge paper to Medical Image Analysis or a similar journal. The first ten teams of each task will be invited to participate in this publication, provided they submit an algorithm summary using the requested form. The organizers reserve the right to reduce the number of co-authors per team to a minimum of two. The organizers will analyze the sCTs automatically collected by the challenge submission system.
Publishing the submitted method elsewhere
The organizers, contributors, and data providers may independently publish methods based on the challenge data after an embargo of 6 months from the challenge's final event, counted against the submission date of the work. Participants may likewise submit their results elsewhere after a 6-month embargo; however, if they cite the overview paper, no embargo applies.
Other rules
Once a participant or team submits, neither the submission nor the team can be withdrawn from the challenge.
The remaining rules are provided along with the challenge design that can be found at https://doi.org/10.5281/zenodo.14051075.