Fully and weakly supervised graph networks for histopathology image segmentation

Shen Yiting1, Chen Zhao1, Zhang Qinghua1, Chen Jinhao1, Wang Qingguo2 (1. School of Computer Science and Technology, Donghua University, Shanghai 201620, China; 2. Shanghai General Hospital, Shanghai 200080, China)

Abstract
Objective Computer-assisted techniques and microscopic pathology image processing have greatly facilitated pathological diagnosis. Histopathology image segmentation is a commonly used technique for separating lesions from background tissue. Developing high-accuracy segmentation algorithms requires large numbers of precisely annotated digital histopathology images, but annotation is time-consuming and labor-intensive, so precisely annotated histopathology images are scarce. Moreover, histopathology images are highly complex, placing very high demands on the robustness and generalization of tissue segmentation algorithms. Therefore, this paper proposes a graph-network-based framework for histopathology image segmentation. Method The framework has two modes, a fully supervised graph network (FSGNet) and a weakly supervised graph network (WSGNet), to suit datasets with different amounts of annotation and the accuracy requirements of various application scenarios. By learning the irregular morphology of histopathological tissue with graph networks, FSGNet achieves high segmentation accuracy; WSGNet performs superpixel-level inference and can segment histopathological tissue with only sparse point annotations. Result Experiments were conducted on two public datasets, GlaS (Gland Segmentation Challenge Dataset, whose test set is divided into Parts A and B) and CRAG (colorectal adenocarcinoma gland), and one private dataset, LUSC (lung squamous cell carcinoma). Both modes of the proposed framework outperformed the comparison algorithms in overall accuracy (OA) and Dice index (DI) on all three datasets. FSGNet showed its largest gains on GlaS Part B, improving OA and DI by 1.61% and 2.26%, respectively; WSGNet showed its largest gains over state-of-the-art algorithms on CRAG, with improvements of 2.63% and 2.54%, respectively. Conclusion Both modes of the proposed framework outperform several state-of-the-art algorithms and exhibit good generalization and robustness.
Keywords
Fully and weakly supervised graph networks for histopathology image segmentation

Shen Yiting1, Chen Zhao1, Zhang Qinghua1, Chen Jinhao1, Wang Qingguo2(1.School of Computer Science and Technology, Donghua University, Shanghai 201620, China;2.Shanghai General Hospital, Shanghai 200080, China)

Abstract
Objective Computer-assisted techniques and histopathology image processing technologies have significantly facilitated pathological diagnoses. Among them, histopathology image segmentation is an integral component of histopathology image processing. It generally refers to separating target regions (e.g., tumor cells, glands, and cancer nests) from the background, and its results are further used for downstream tasks (e.g., cancer grading and survival prediction). In recent years, the rapid development of deep learning has resulted in significant breakthroughs in histopathology image segmentation. Segmentation networks, such as FCN and U-Net, have demonstrated strong capabilities in accurately delineating edges. However, most existing deep learning methods rely on a fully supervised learning mode, which depends on numerous accurately annotated digital histopathology images. Manual annotation, conducted by medical professionals with expertise in histopathology, is time-consuming and also carries a high likelihood of missed diagnoses and false detections. Consequently, histopathology images with precise annotations are scarce. Moreover, histopathology images are highly complex: targets are extremely challenging to distinguish from the background, leading to inter-class homogeneity, and within the same dataset of tissue samples there are significant variations among pathological objects, exhibiting intra-class heterogeneity. Differences between patients and nonlinear relationships between image features impose high requirements on the robustness and generalization of histopathological tissue segmentation algorithms. Therefore, this study proposes a graph-based framework for histopathology image segmentation.
Method The framework consists of two modes, namely, a fully supervised graph network (FSGNet) and a weakly supervised graph network (WSGNet), aiming to adapt to datasets with different levels of annotation and to the precision requirements of various application scenarios. FSGNet is used when working with samples that have pixel-level labels and require high accuracy; it is trained in a fully supervised manner. Meanwhile, WSGNet is used when dealing with samples that have only sparse point labels; it employs weakly supervised learning to extract histopathology image information and train the segmentation network. Furthermore, the proposed framework uses graph convolutional networks (GCN) to represent the irregular morphology of histopathological tissues. GCN is capable of handling data with arbitrary structures and learns the nonlinear structure of images by constructing a topological graph over the histopathology image. This approach contributes to improving the accuracy of histopathology image segmentation. The current study introduces graph Laplacian regularization to encourage neighboring nodes to learn similar features, effectively aggregating similar nodes and enhancing the proposed model's generalization capability. FSGNet consists of a backbone network and a GCN. The backbone network follows an encoder-decoder structure to extract deep features from histopathology images. The GCN is used to learn the nonlinear structure of histopathological tissues, enhancing the network's expressive power and generalization ability and ultimately segmenting target regions from the background. WSGNet utilizes simple linear iterative clustering (SLIC) for superpixel segmentation of the original image, which transforms the weakly supervised semantic segmentation problem into a binary classification problem over superpixels. WSGNet leverages local spatial similarity to reduce the computational complexity of subsequent processing.
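The graph-convolution and graph-Laplacian-regularization ideas above can be sketched in a few lines. The following is an illustrative NumPy sketch of a generic GCN propagation step and of the regularizer tr(FᵀLF), not the paper's actual architecture; all function names are hypothetical.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalize an adjacency matrix with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}, the standard GCN propagation matrix."""
    A = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))  # D^{-1/2} as a vector
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_hat, X, W):
    """One graph-convolution step: ReLU(A_hat @ X @ W)."""
    return np.maximum(A_hat @ X @ W, 0.0)

def laplacian_regularizer(A, F):
    """tr(F^T L F) with the unnormalized Laplacian L = D - A.
    For an undirected graph this equals the sum over edges of
    ||f_i - f_j||^2, so it is small when neighboring nodes
    carry similar features."""
    L = np.diag(A.sum(axis=1)) - A
    return np.trace(F.T @ L @ F)
```

Adding the regularizer to the segmentation loss penalizes feature maps that differ sharply across graph edges, which is one way to realize the "neighboring nodes learn similar features" behavior described above.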
In the preprocessing stage, the semantic information of point labels is propagated to the entire superpixel region, thereby generating superpixel labels. WSGNet is thus capable of segmenting histopathology images even with a limited number of point annotations. Result This study conducted tests on two public datasets, namely, the Gland Segmentation Challenge Dataset (GlaS) and the Colorectal Adenocarcinoma Gland (CRAG) dataset, as well as one private dataset, Lung Squamous Cell Carcinoma (LUSC). GlaS consists of 165 images, with a training-to-testing ratio of 85:80. It is stratified by histological grade and field of view, and the testing set is further divided into Parts A and B (60 and 20 images, respectively). CRAG comprises 213 images of colorectal adenocarcinoma, with a training-to-testing ratio of 173:40. LUSC contains 110 histopathological images, with a training-to-testing ratio of 70:40. The performance of FSGNet was compared with FCN-8, U-Net, and UNeXt. WSGNet was compared with recently proposed weakly supervised models, such as WESUP, CDWS, and SizeLoss. The two modes of the proposed framework outperformed the comparison algorithms in terms of overall accuracy (OA) and Dice index (DI) on the three datasets. FSGNet achieved an OA of 88.15% and a DI of 89.64% on GlaS Part A, an OA of 91.58% and a DI of 91.23% on GlaS Part B, an OA of 93.74% and a DI of 92.58% on CRAG, and an OA of 92.84% and a DI of 93.20% on LUSC. WSGNet achieved an OA of 84.27% and a DI of 86.15% on GlaS Part A, an OA of 84.91% and a DI of 83.60% on GlaS Part B, an OA of 85.50% and a DI of 80.17% on CRAG, and an OA of 88.45% and a DI of 87.89% on LUSC. These results indicate that the proposed framework is robust and generalizes across different datasets, because its performance does not vary significantly. Conclusion The two modes of the proposed framework demonstrate excellent performance in histopathology image segmentation.
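The point-label-to-superpixel propagation described in the preprocessing stage can be sketched as follows. This is a minimal NumPy illustration that assumes the superpixel map has already been computed (e.g., with scikit-image's SLIC); the function names are hypothetical, not taken from the paper.

```python
import numpy as np

def propagate_point_labels(superpixels, points):
    """Assign each superpixel the label of an annotated point inside it.

    superpixels : (H, W) int array of superpixel ids (e.g., from SLIC)
    points      : iterable of (row, col, label) sparse point annotations
    Returns {superpixel_id: label}; superpixels with no point stay unlabeled.
    """
    sp_labels = {}
    for r, c, label in points:
        sp_labels[superpixels[r, c]] = label
    return sp_labels

def superpixel_label_map(superpixels, sp_labels, unlabeled=-1):
    """Expand superpixel-level labels back into a dense (H, W) label map."""
    out = np.full(superpixels.shape, unlabeled, dtype=int)
    for sp_id, label in sp_labels.items():
        out[superpixels == sp_id] = label
    return out
```

The dense map produced this way serves as weak supervision: every pixel in an annotated superpixel inherits the point's label, while unlabeled superpixels can be masked out of the loss.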
Subjective segmentation results indicate that the framework achieves more complete segmentation of instances and predicts the central regions of target samples more accurately. It exhibits fewer missed and false detections, thereby showcasing strong generalization and robustness.
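The OA and DI figures reported above follow the standard pixel-level definitions; a minimal NumPy sketch for binary masks (helper names are ours, not the paper's):

```python
import numpy as np

def overall_accuracy(pred, gt):
    """OA: fraction of pixels whose predicted class matches the ground truth."""
    return float(np.mean(pred == gt))

def dice_index(pred, gt):
    """DI = 2 * |P ∩ G| / (|P| + |G|) for binary masks P (pred) and G (gt)."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0
```

OA counts background pixels as well, so on images dominated by background it can stay high even when targets are missed; DI focuses on overlap with the target region, which is why both are reported together.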
Keywords
