Booth Id:
SOFT031
Category:
Systems Software
Year:
2022
Finalist Names:
Chada, Raahi (School: Academies of Loudoun)
Abstract:
Disorders, particularly progressive and degenerative ones, tend to be more severe in under-resourced areas due to limited access to diagnostic resources. A delayed or inaccurate diagnosis can be the difference between recovery and permanent impairment. Ultrasound images of various disorders can be classified by disorder through machine learning (ML) models; however, current ML research does not comprehensively consider the image degradations that are common in underserved areas. This research aims to build an image preprocessing system that aids low-quality ultrasound image classification across a broad range of afflictions, including retinal disorders and breast cancer. Open-source image datasets of retinal disorders and breast cancer were classified by binary image quality (high-quality or low-quality). Common degradations specific to each affliction were compiled into a randomized degradation program that controls the number, magnitude, and position of degradations in a single image. After running the degradation program on the high-quality image dataset, the degraded images were used to train a new convolutional neural network, DegNet. Training the model on synthetically-degraded image data led to 20.4% and 23.5% greater accuracy than training on low-quality image data when classifying by retinal disorder and breast cancer malignancy, respectively. These results support the conclusion that training models on synthetically-degraded images leads to higher classification accuracy on low-quality images. Synthetic degradation has the potential to break the geographic barriers surrounding medical diagnosis.
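The abstract describes a randomized degradation program that controls the number, magnitude, and position of degradations applied to each high-quality image. The project's actual implementation is not public; the sketch below is a minimal illustration of that idea, assuming normalized grayscale images stored as NumPy arrays and three hypothetical degradation types (additive noise, box blur, and a dark occlusion patch). The function name, parameters, and degradation choices are illustrative assumptions, not the author's code.

```python
import numpy as np

def degrade(image, rng, n_degradations=2, magnitude=0.3):
    """Apply a random sequence of degradations to a grayscale image in [0, 1].

    Illustrative sketch: the degradation types, their parameters, and this
    interface are assumptions, not the project's actual implementation.
    """
    img = image.astype(np.float64).copy()
    # Randomly pick which degradations to apply (number is controlled
    # by n_degradations; repeats are allowed).
    ops = rng.choice(["noise", "blur", "occlusion"], size=n_degradations)
    for op in ops:
        if op == "noise":
            # Additive Gaussian noise, mimicking sensor artifacts;
            # magnitude scales the noise strength.
            img = img + rng.normal(0.0, magnitude * 0.1, img.shape)
        elif op == "blur":
            # Simple 3x3 box blur approximating defocus.
            padded = np.pad(img, 1, mode="edge")
            h, w = img.shape
            img = sum(padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)) / 9.0
        elif op == "occlusion":
            # Darken a rectangular patch at a random position
            # (e.g. shadowing); magnitude controls the patch size.
            h, w = img.shape
            ph = max(1, int(h * magnitude))
            pw = max(1, int(w * magnitude))
            y = rng.integers(0, h - ph + 1)
            x = rng.integers(0, w - pw + 1)
            img[y:y + ph, x:x + pw] *= 0.2
    return np.clip(img, 0.0, 1.0)

# Usage: degrade a flat test image with a seeded generator so the
# random degradations are reproducible.
rng = np.random.default_rng(0)
clean = np.full((32, 32), 0.5)
degraded = degrade(clean, rng, n_degradations=2, magnitude=0.3)
```

Passing the random generator in explicitly keeps the pipeline reproducible, which matters when the same degraded dataset must be regenerated for repeated training runs.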
Awards Won:
Third Award of $1,000