Computer Vision News - March 2019
SPIE-AAPM-NCI BreastPathQ: What did we learn?

• We learned that participant engagement was critical. The BreastPathQ challenge included a forum section allowing participants and organizers to ask questions, answer questions, form teams and provide feedback to all participants at once. One example of how the forum was used was when a participant identified a problem with the original challenge performance metric. Once this was raised on the forum, we, as the challenge organizers, were able to assess the problem, determine it was a genuine concern and implement a revised performance metric to address the issue. This is quite a nice example of how participant engagement directly led to an improved challenge.

• We learned that paying attention to the performance criterion for comparing algorithms is crucial. We found that a rank-based performance metric, prediction probability, would work best in our challenge because of the high variability in the absolute cellularity scores human pathologists assign to individual patches. A rank-based performance metric assesses how well an algorithm orders patches from lowest cellularity to highest cellularity. The human pathologist “truthers” showed smaller variability when ranking patches than when determining absolute percent cellularity scores. By taking this into account, we believe we implemented a more relevant approach for comparing the ability of algorithms to estimate patch cellularity as part of our challenge. A minimal sketch of this kind of rank-based comparison appears after the figure caption below.

Figure: patches within the pathology slide with various levels of cellularity scored by a pathologist. Courtesy of Anne Martel, Sunnybrook Research Institute, Toronto (and used as part of the BreastPathQ Challenge logo).
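To make the idea of a rank-based metric concrete, the sketch below computes a prediction-probability-style score from all pairs of patches: pairs whose reference cellularity values differ are counted as concordant, discordant, or tied in the prediction, giving 1.0 for perfect ranking and 0.5 for chance-level ordering. This is only an illustrative sketch, not the official BreastPathQ scoring code; the function name and the toy data are our own, and details such as tie handling and averaging across multiple pathologist truthers follow the challenge's released evaluation tools rather than this example.

```python
import numpy as np


def prediction_probability(reference, predicted):
    """Rank-based agreement between predicted and reference cellularity.

    Over all pairs of patches whose reference values differ, count
    concordant pairs (C), discordant pairs (D), and pairs tied only in
    the prediction (Tx), then return PK = (C + 0.5 * Tx) / (C + D + Tx).
    1.0 means the algorithm ranks patches perfectly; 0.5 means chance.
    """
    reference = np.asarray(reference, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    concordant = discordant = pred_ties = 0
    n = len(reference)
    for i in range(n):
        for j in range(i + 1, n):
            d_ref = reference[i] - reference[j]
            d_pred = predicted[i] - predicted[j]
            if d_ref == 0:        # tie in the reference: pair is not informative
                continue
            if d_pred == 0:       # tie in the prediction only
                pred_ties += 1
            elif np.sign(d_ref) == np.sign(d_pred):
                concordant += 1
            else:
                discordant += 1
    denom = concordant + discordant + pred_ties
    return (concordant + 0.5 * pred_ties) / denom if denom else 0.5


if __name__ == "__main__":
    # Toy example: cellularity scores (0-1) from a pathologist and a model.
    reference = [0.05, 0.20, 0.40, 0.40, 0.75, 0.90]
    predicted = [0.10, 0.15, 0.50, 0.35, 0.70, 0.95]
    print(f"PK = {prediction_probability(reference, predicted):.3f}")
```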