Rules
The collection of the data, the organization of the competition, and the maintenance of this website require a large effort. We are committed to maintaining this site as a repository of benchmark results on our test data set in the spirit of cooperative scientific progress. In return, everyone who uses this site must respect the rules below.
These rules amount to the following: to use this MS lesion segmentation challenge database, you must send us the results of your algorithm, in the form of binary segmentations and a document describing your method, and allow us to make these results publicly available on this site. We do not claim any ownership of, or rights to, the algorithms or uploaded documents, nor do we create obstacles to publishing methods that use our data. On the contrary, we welcome publications that use the data on this site.
The following rules apply to all visitors of this site:
- This site is copyrighted. That means that users of this site
may not copy or redistribute content from it without explicit
permission from the maintainers of this site, listed at the bottom of
this page. This also applies to all images contained on the site.
- We welcome links to this site. You are free to establish a
hypertext link to any of its pages.
The following rules apply to those who register a team, download data and submit results:
- The original data sets and associated segmentation data
downloaded here, and any data derived from them, must not
be given or redistributed under any circumstances to persons
outside the registered team.
- Each team should submit a document describing its method before the submission deadline.
- All teams should allow us to make the results publicly available on this site and in an international journal paper (with all the participants as co-authors).
- Teams are allowed to use images from this site for illustrations in their papers and scientific presentations.
- Teams must not report results of their segmentation algorithms
on the training data of this website alone; they must always
include results on the test data as well. Those results are
obtained by submitting segmentations to this website.
- Results uploaded to this website, comprising the segmentations and a descriptive document for each submission, will be made publicly available on this site; by submitting results, you grant us permission to do so. Teams, of course, retain full ownership of and rights to their method.
- Teams must notify the maintainers of this site about any publication that is (partly) based on the data on this site, in order for us to maintain a list of publications associated with this dataset.
- The reference scores for the evaluation are as follows: in the order voldiff, avgdist, tanerr, truepos, and falsepos, they are 68.29, 4.85, 75.16, 67.79, and 32.20, respectively. The score for voldiff and avgdist is computed as $100 - 10 \times \frac{measurement}{reference}$, and the score for truepos and falsepos is computed as $90 - 15 \times \frac{err - referr}{refstd}$, where err is the error measurement of the submitted segmentation for the given criterion, and referr and refstd are the mean and standard deviation of that error measurement among the human rater segmentations. The measurement is a value computed from the submitted segmentation, and the reference is the corresponding reference score listed above.
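The two scoring formulas above can be sketched as follows. This is a minimal illustration, not the official evaluation code: the function names are ours, and the reference values and per-rater statistics must be taken from the evaluation data described above.

```python
def score_from_reference(measurement, reference):
    """Score for voldiff and avgdist: 100 - 10 * measurement / reference.

    A measurement equal to the reference value yields a score of 90.
    """
    return 100.0 - 10.0 * measurement / reference


def score_from_raters(err, referr, refstd):
    """Score for truepos and falsepos: 90 - 15 * (err - referr) / refstd.

    referr and refstd are the mean and standard deviation of the error
    among the human rater segmentations; err equal to the rater mean
    yields a score of 90.
    """
    return 90.0 - 15.0 * (err - referr) / refstd
```

Note that both formulas are calibrated so that performance matching the reference (or the average human rater) scores 90, with worse measurements scoring lower.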
The content of this website is copyrighted © Martin Styner, Simon Warfield, Wiro Niessen, Theo van Walsum, Coert Metz, Michiel Schaap, Xiang Deng, Tobias Heimann, and Bram van Ginneken. This work is supported by the UNC Neurodevelopmental Disorders Research Center HD 03110 and the NIH Roadmap for Medical Research, National Alliance for Medical Image Computing, Grant U54 EB005149-01.