Additionally, these datasets lack diversity in fetal conditions, skewing algorithm development toward typical fetal anatomy and neglecting pathological or abnormal fetal growth and multiple gestations. Variability in US technology, both across vendors and in device setup (probe type and working frequency), also leads to discrepancies in image quality and AI outcomes. Such biases can produce AI models that perform well within represented cohorts but fail in others, potentially leading to misdiagnoses or inadequate medical advice. This underscores the need for careful dataset evaluation and technical strategies to mitigate these issues.

Diversifying data collection to include a broader range of demographics and conditions yields more representative datasets. Incorporating annotations from diverse medical experts addresses inter-observer variability. Federated learning, by training shared models across institutions without centralizing raw data, and continuous learning, by updating models as new data arrive, help standardize and keep diagnostic models current. Domain adaptation techniques ensure model consistency across different US technologies, while generative AI enhances dataset variety by creating synthetic examples of rare conditions. Together, these strategies can foster a more equitable and accurate framework for prenatal diagnostics.

Public fetal US datasets are invaluable resources, but they may carry inherent biases that skew research and clinical applications. By promoting awareness of dataset biases and advocating for ethical AI practices, we can pave the way for accurate, fair, and clinically reliable diagnostic tools in prenatal care.

“As engineers and AI ethicists who work on designing trustworthy deep-learning algorithms for medical image analysis, fostering equitable AI is now a priority for us,” says Mariachiara’s PhD tutor Sara Moccia. “Thanks, Mariachiara, for your work!”

Computer Vision News
AI Fairness Cluster Inaugural Conference
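To make the federated learning idea concrete, here is a minimal sketch of a federated averaging round: each site fits its own model locally and only model weights, never raw patient data, are sent to the server for aggregation. The two "hospital sites", the least-squares models, and all data are invented for illustration; a real fetal-US system would use deep networks and a framework such as Flower or NVIDIA FLARE.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training: plain least-squares gradient steps."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    """One FedAvg round: sites train locally, the server averages the
    returned weights, weighted by each site's sample count."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Two hypothetical sites whose scanners add different amounts of noise
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for noise in (0.1, 0.5):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=noise, size=100)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):          # 20 communication rounds
    w = federated_average(w, clients)
print(w)  # converges near true_w without any site sharing raw data
```

The same loop structure carries over to neural networks: `local_update` becomes a few epochs of SGD at each hospital, and the server averages the resulting parameter tensors.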