Research and Development Department, Med Fanavarn Plus Co., Karaj, Iran



Accurate automated medical image segmentation plays a vital role in the clinic, supporting clinicians in both diagnosis and measurement. Medical image segmentation systems face several challenges, including varying object sizes, noisy data, and diverse medical imaging modalities. Baseline image segmentation methods are therefore insufficient for such complex segmentation tasks across different medical image types. To overcome these issues, a novel One-to-Many U-Net based model is introduced in this paper.
In the proposed U-Net based model, the first block of the encoder path consists of three layers with different levels of feature maps. Each of these layers is extended into three sub-blocks, and each sub-block itself comprises three layers. The encoder blocks are constructed from 2D convolution, Leaky ReLU, and batch-normalization layers. The same extension strategy is used in the decoder path of the proposed model. The decoder path consists of transposed 2D convolution, Leaky ReLU, dropout, and batch-normalization layers in consecutive order. Each layer of the encoder path is concatenated with the corresponding decoder module via skip connections. Finally, the model output is obtained by concatenating the last three layers of the decoder path.
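The encoder and decoder sub-blocks described above can be sketched as follows. This is a minimal illustration in PyTorch, assuming 3x3 convolutions, a stride-2 transposed convolution, and the channel counts, Leaky ReLU slope, and dropout rate shown; none of these hyperparameters are specified in the text.

```python
import torch
import torch.nn as nn


class EncoderBlock(nn.Module):
    """One encoder sub-block: 2D convolution -> Leaky ReLU -> batch norm.
    Kernel size and channel counts are illustrative assumptions."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.LeakyReLU(0.1),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        return self.block(x)


class DecoderBlock(nn.Module):
    """One decoder sub-block: transposed 2D convolution -> Leaky ReLU ->
    dropout -> batch norm. The encoder feature map is concatenated with
    the decoder input along the channel axis (skip connection)."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2),
            nn.LeakyReLU(0.1),
            nn.Dropout(0.2),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x, skip):
        # Skip connection: concatenate encoder features before upsampling.
        return self.block(torch.cat([x, skip], dim=1))
```

In the full model, three such decoder outputs would be concatenated to form the final prediction, mirroring the three-branch extension strategy on the encoder side.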
To evaluate our architecture, we investigated two distinct datasets: the CVC-ClinicDB dataset for polyp segmentation, and the HC18 Grand Challenge ultrasound dataset for fetal head segmentation. The proposed algorithm achieved Dice and Jaccard coefficients of 97.26% and 94.73%, respectively, for fetal head segmentation on the HC18 dataset. Moreover, the proposed model outperformed state-of-the-art U-Net based models on the CVC-ClinicDB dataset, with Dice and Jaccard coefficients of 83.95% and 75.35%, respectively.
The proposed One-to-Many U-Net model has demonstrated promising results on diverse medical image segmentation tasks (polyp and fetal head segmentation) and outperformed state-of-the-art approaches.