Recent advancements in deep learning have revolutionized the way microscopy images of cells are processed. Deep Convolutional Neural Networks have been demonstrated to solve nuclear image segmentation tasks across different imaging modalities, but a systematic comparison on complex immunofluorescence images has not been performed. Deep learning network architectures have a large number of parameters and thus require a massive amount of annotated data to reach high accuracy, yet annotated fluorescence nuclear image datasets are rare and of limited size and complexity. In this work, we evaluate and compare the segmentation effectiveness of multiple deep learning architectures (U-Net, U-Net ResNet, Cellpose, Mask R-CNN, KG instance segmentation) and two conventional algorithms (iterative h-min based watershed, attributed relational graphs) on complex fluorescence nuclear images of various types. We also propose and evaluate a novel strategy for creating artificial images to extend the training set. Results show that the instance-aware segmentation architectures and Cellpose outperform the U-Net architectures and conventional methods on complex images in terms of F1 scores, while the U-Net architectures achieve overall higher mean Dice scores. Training with additional artificially generated images improves recall and F1 scores on complex images, leading to top F1 scores for three out of five sample preparation types. Mask R-CNN trained on artificial images achieves the overall highest F1 score on complex images acquired under conditions similar to the training set, while Cellpose achieves the overall highest F1 score on complex images from new imaging conditions. Finally, we provide quantitative results demonstrating that images annotated by undergraduates are sufficient for training instance-aware segmentation architectures to efficiently segment complex fluorescence nuclear images.
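The F1 scores above are instance-level: a predicted nucleus counts as a true positive only if it can be matched one-to-one with a ground-truth nucleus. As an illustrative sketch only (the exact matching protocol of these papers is not given here; the greedy matching order and the IoU ≥ 0.5 criterion are assumptions), instance-level precision, recall, and F1 can be computed from binary masks like this:

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def instance_f1(pred_masks, gt_masks, iou_thresh=0.5):
    """Greedy one-to-one matching of predicted to ground-truth instances.

    A prediction is a true positive if its best unmatched ground-truth
    partner reaches IoU >= iou_thresh (threshold assumed here).
    """
    matched_gt = set()
    tp = 0
    for p in pred_masks:
        best_iou, best_j = 0.0, None
        for j, g in enumerate(gt_masks):
            if j in matched_gt:
                continue
            iou = mask_iou(p, g)
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_j is not None and best_iou >= iou_thresh:
            matched_gt.add(best_j)
            tp += 1
    fp = len(pred_masks) - tp       # unmatched predictions
    fn = len(gt_masks) - tp         # missed ground-truth instances
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

This per-instance view is what separates the F1 comparison from the mean Dice comparison: Dice rewards pixel overlap even when touching nuclei are merged, whereas instance F1 penalizes every merge or split as a missed or spurious object.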
Cellular profiling with multiplexed immunofluorescence (MxIF) images can contribute to a more accurate patient stratification for immunotherapy. Accurate cell segmentation of the MxIF images is an essential step, and separating and labeling each nuclear instance (instance-aware segmentation) is the key challenge in nuclear image segmentation. We propose a deep learning pipeline to train a Mask R-CNN model (deep network) for cell segmentation using nuclear (DAPI) and membrane (Na+K+ATPase) stained images. We used two-stage domain adaptation, first pre-training on a weakly labeled dataset and then fine-tuning with a manually annotated dataset. We validated our method against manual annotations on three different datasets. Our method yields comparable results to the multi-observer agreement on an ovarian cancer dataset and improves on state-of-the-art performance on a publicly available dataset of mouse pancreatic tissues. Our proposed method, using a weakly labeled dataset for pre-training, showed superior performance in all of our experiments, and with smaller training sample sizes for fine-tuning it provided performance comparable to that obtained with much larger training sample sizes. These results demonstrate that two-stage domain adaptation with a weakly labeled dataset can effectively boost system performance, especially when the training sample size is small. We deployed the model as a plug-in to CellProfiler, a widely used software platform for cellular image analysis.
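The two-stage schedule described above — pre-train on a large weakly labeled set, then fine-tune on a small manually annotated set — can be illustrated with a deliberately tiny stand-in model. Everything below is an assumption for illustration only: the logistic-regression "network", the 20% label-flip model of weak labels, and the dataset sizes are placeholders, not the paper's Mask R-CNN pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_W = np.array([1.0, -2.0, 0.5])   # hypothetical ground-truth decision rule

def train(w, X, y, lr=0.1, epochs=200):
    """Plain logistic-regression gradient descent; stands in for the
    (much larger) segmentation network's training loop."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Stage 1: pre-train on a large, weakly (noisily) labeled dataset.
X_weak = rng.normal(size=(500, 3))
y_true = (X_weak @ TRUE_W > 0).astype(float)
flip = rng.random(500) < 0.2                 # 20% label noise = "weak" labels
y_weak = np.where(flip, 1 - y_true, y_true)
w = train(np.zeros(3), X_weak, y_weak)

# Stage 2: fine-tune on a small, manually annotated (clean) dataset.
X_clean = rng.normal(size=(50, 3))
y_clean = (X_clean @ TRUE_W > 0).astype(float)
w = train(w, X_clean, y_clean, lr=0.05, epochs=100)

# Evaluate on a held-out clean test set.
X_test = rng.normal(size=(200, 3))
y_test = (X_test @ TRUE_W > 0).astype(float)
acc = np.mean((1.0 / (1.0 + np.exp(-X_test @ w)) > 0.5) == y_test)
```

The design point the abstract makes is visible in the structure: stage 1 buys a good initialization cheaply from noisy labels, so stage 2 needs far fewer expensive manual annotations to reach good performance.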