Objective MRI is gradually replacing CT for examining bones and joints, and accurate automatic segmentation of bone structures in shoulder-joint MRI is essential for the measurement and diagnosis of bone injury and disease. Existing bone segmentation algorithms cannot segment automatically without prior knowledge, and their generality and accuracy are relatively low. This paper proposes an automatic segmentation algorithm that combines patch-based and fully convolutional neural networks (PCNN and FCN). Method First, four segmentation models are built: three U-Net based bone segmentation models (a humeral-head model, a glenoid model, and a model that segments humeral head and glenoid as a whole) and one patch-based AlexNet segmentation model. The four models are then used to obtain candidate bone regions, and the locations of the humeral head and glenoid are accurately detected by voting. Finally, the AlexNet model is applied within the detected bone regions to segment bone edges with pixel-level accuracy. Result The experimental data come from eight groups of patients at the Department of Orthopedics, Harvard Medical School/Massachusetts General Hospital in the United States; each scan sequence contains about 100 images, all segmented and annotated. Five groups of patients were used for training with five-fold cross-validation, and three groups were used to test the actual segmentation results; the average Dice coefficient, positive predicted value (PPV), and sensitivity reached 0.92±0.02, 0.96±0.03, and 0.94±0.02, respectively. Conclusion For a small patient dataset, using deep learning only on two-dimensional medical images, the proposed method yields highly accurate shoulder-joint segmentation results. The algorithm has been integrated into 3DQI, the medical image measurement and analysis platform we developed, which can display 3D segmentation of the shoulder bones and provide clinical diagnostic guidance to orthopedists. The proposed framework is also reasonably general and is applicable to accurate segmentation of specific organs and tissues in CT and MRI under small-sample conditions.
Automatic segmentation of shoulder joint in MRI using patch-wise and full-image fully convolutional networks
Liu Yunpeng, Cai Wenli, Hong Guobin, Wang Renfang, Jin Ran (Radiology 3D Imaging Laboratory, Harvard Medical School, Boston; Medical Imaging Department, The Fifth Affiliated Hospital of Sun Yat-sen University; Faculty of Electronics and Computer, Zhejiang Wanli University, Ningbo)
Objective MRI is based on the principle of nuclear magnetic resonance; it is safer to use and offers higher soft-tissue resolution, so it is gradually replacing CT for examining bones and joints. Automated detection and segmentation of shoulder-joint structures in MRI are very important for the measurement and diagnosis of bone injury and disease. In MRI, the internal bone region shows gray and black intensities similar to air, fat, and some soft tissues; combined with a low signal-to-noise ratio and the partial volume effect, this makes automatic and accurate segmentation of the clinically valuable glenoid and humeral head in the shoulder joint particularly difficult. Common conventional bone segmentation algorithms, such as region growing or level sets, cannot run without prior knowledge, and their generality and accuracy are relatively low. Although various deep learning algorithms have been applied to the segmentation of medical images such as MRI and CT, a successful segmentation is almost never achieved by a single deep network without post-processing. As far as we know, only a few papers use deep learning to segment bones in MRI, and none of them addresses shoulder segmentation. We therefore employ two deep learning networks, a patch-based network and a fully convolutional network (PCNN and FCN), for automated detection and segmentation of shoulder-joint structures in MRI. Method First, four segmentation models are built: three U-Net based models (a glenoid model, a humeral-head model, and a model that segments glenoid and humeral head as a whole) and one patch-based AlexNet model. The U-Net has a depth of three, with sixteen features in the first layer. Since the resolution of the input image is reduced after passing through U-Net, edge expansion by mirroring is performed to keep the resolution of the output segmentation map consistent with the original image.
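The mirror-based edge expansion described above can be sketched with NumPy's reflect padding; the 20-pixel border used here is an illustrative value, not a parameter reported in the paper:

```python
import numpy as np

def mirror_pad(img, pad):
    """Expand the slice borders by reflection so that U-Net's valid
    convolutions return a segmentation map the size of the original
    slice. `pad` is the context lost per side (hypothetical value)."""
    return np.pad(img, pad_width=pad, mode="reflect")

slice_ = np.random.rand(256, 256).astype(np.float32)
padded = mirror_pad(slice_, 20)   # 20-pixel reflected border on each side
print(padded.shape)               # (296, 296)
```

The reflected border avoids the hard black edges that zero padding would introduce, which matters because bone often touches the slice boundary.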
The traditional AlexNet takes a three-channel RGB input, but MRI is grayscale, so the three channel values would be identical. Three channels can be constructed from one image by using three window-level mappings or three different resolutions; however, in our tests this brought no performance improvement over a single channel, so we adjust the input to one channel. The four segmentation models are then used to obtain candidate bone regions, from which the correct locations and regions of the glenoid and humeral head are determined by voting. False bone regions still remain: the signal intensity of bone is very close to that of fat and some soft tissues, which are easily misjudged as glenoid or humeral head at certain shapes and positions, and the complex shape and wide variation of the humeral head make it even easier to misinterpret noise or fat as humeral head. These false regions are filtered out by location information, and missing bone objects are recovered by inter-frame prediction, exploiting the continuity of the MRI scan across adjacent slices. Finally, the AlexNet model is applied within the detected bone regions to segment bone edges with pixel-level accuracy. Result The experimental data come from eight groups of patients at Harvard Medical School/Massachusetts General Hospital in the United States; each scan sequence includes about 100 images with annotated bone-edge labels. Five groups of patients are used for training with five-fold cross-validation, and three groups are used to test the actual segmentation results. The average Dice coefficient, positive predicted value (PPV), and sensitivity reach 0.92±0.02, 0.96±0.03, and 0.94±0.02, respectively. The segmentation accuracy is thus very high and essentially consistent with manual segmentation; on a considerable fraction of the images it even exceeds the average manual annotation.
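The paper votes over candidate bone regions from the four models; a minimal pixel-wise sketch of such majority voting, together with the three reported evaluation metrics, might look as follows (the 2×2 masks are toy data, not the paper's regions):

```python
import numpy as np

def majority_vote(masks):
    """Combine binary candidate masks: a pixel is kept as bone
    when more than half of the models agree."""
    stack = np.stack(masks).astype(np.int32)
    return (stack.sum(axis=0) * 2 > len(masks)).astype(np.uint8)

def dice(pred, gt):
    """Dice coefficient: 2|P∩G| / (|P| + |G|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def ppv(pred, gt):
    """Positive predicted value (precision): |P∩G| / |P|."""
    return np.logical_and(pred, gt).sum() / pred.sum()

def sensitivity(pred, gt):
    """Sensitivity (recall): |P∩G| / |G|."""
    return np.logical_and(pred, gt).sum() / gt.sum()

# Toy 2x2 candidate masks from four hypothetical models
masks = [np.array([[1, 0], [1, 1]]),
         np.array([[1, 0], [0, 1]]),
         np.array([[1, 1], [0, 1]]),
         np.array([[0, 0], [0, 1]])]
vote = majority_vote(masks)   # keeps pixels where >= 3 of 4 models agree
```

The strict-majority threshold (`sum * 2 > len`) is one plausible rule; the paper could equally require full agreement or weight the models differently.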
In practical segmentation applications, training and segmentation are generally performed on the GPU server side, and the segmentation result is displayed at the client; for medical institutions, this usually operates within a local area network. For one slice of a patient's MRI sequence, the overall time from a segmentation request at the client to the returned result from the server is about 1.2 seconds, which meets the real-time requirement of the application. In fact, in many cases all scanned images of a group of patients are processed offline on the server side and the segmentation results are saved, to be loaded when the client opens the patient data; in this mode there is no real-time requirement, and it is also a very common application mode. Our sample set is very limited, with a total of eight patient cases; this shows that very good predictive performance can be achieved as long as patch extraction and data augmentation are done properly. Conclusion The ensemble of the four segmentation models, combined by voting, accurately locates the glenoid and humeral head in the shoulder joint, and the spatial consistency of the image sequence is used to recover mistakenly removed regions. PCNN segmentation is then performed using local perception and features within the located bone regions of interest. Although the patient dataset is quite small, highly accurate shoulder-joint segmentation results are obtained. The proposed algorithm has been integrated into 3DQI, a medical image measurement and analysis platform developed by us, which can display the 3D segmentation of shoulder bones and provide clinical diagnostic guidance to orthopedists.
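The patch extraction and data augmentation that make the small sample set sufficient could be sketched as below; the patch size, stride, and flip/rotation choices are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def extract_patches(img, size=32, stride=16):
    """Slide a window over the slice and collect patches for the
    patch-based (AlexNet-style) classifier. `size` and `stride`
    are illustrative values."""
    patches = []
    h, w = img.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(img[y:y + size, x:x + size])
    return np.stack(patches)

def augment(patch):
    """Simple augmentation by flips and 90-degree rotations, one
    common way to stretch a small training set."""
    out = [patch, np.fliplr(patch), np.flipud(patch)]
    out += [np.rot90(patch, k) for k in (1, 2, 3)]
    return out

img = np.random.rand(64, 64).astype(np.float32)
patches = extract_patches(img)   # 3x3 window positions -> 9 patches
```

Overlapping strides multiply the number of training samples per slice, and the geometric augmentations are label-preserving for bone/non-bone patch classification.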
As our cooperation with hospitals deepens and the number of MRI samples increases, in future work we will test and analyze three-dimensional segmentation based on deep learning and compare its results with the two-dimensional approach.