In medical imaging, multi-modality data provide information at multiple levels because the underlying imaging mechanisms differ. A central problem in multi-modal image segmentation is how to fuse the information across modalities. This post records papers I have read recently; criticism, corrections, and additions are welcome.
1. A review: Deep learning for medical image segmentation using multi-modality fusion (Array 2019)***


A survey that groups fusion strategies into three categories according to where the fusion takes place in the network: input-level, layer-level, and decision-level.


2. Co-Learning Feature Fusion Maps from PET-CT Images of Lung Cancer (TMI 2019)****
Abstract : Layer-level fusion, U-Net, PET-CT, lung cancer
Method : One encoder for CT and one for PET; at each layer the features of the two branches are stacked and passed through a convolution to produce fusion weights, which are multiplied element-wise with the concatenated features to obtain weighted feature maps (see the sketch after this entry).


Experiment : Lung cancer data; compared against other layer-level fusion baselines: MB (multi-branch), MC (multi-channel), and FS (fused), with good results.
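
A minimal PyTorch sketch of the co-learning fusion step described above, assuming 3D feature maps; the module name `CoLearnFusion`, the 1×1×1 convolution, and the sigmoid are my illustrative choices, not the paper's released code.

```python
import torch
import torch.nn as nn

class CoLearnFusion(nn.Module):
    """Re-weight concatenated CT/PET features with a learned fusion map."""
    def __init__(self, channels):
        super().__init__()
        # Convolution over the stacked features yields per-voxel fusion weights
        self.weight_conv = nn.Conv3d(2 * channels, 2 * channels, kernel_size=1)

    def forward(self, feat_ct, feat_pet):
        # Stack the two modality branches along the channel axis
        stacked = torch.cat([feat_ct, feat_pet], dim=1)
        # Convolution + sigmoid produces a weight for every channel and voxel
        weights = torch.sigmoid(self.weight_conv(stacked))
        # Element-wise product gives the weighted feature map
        return stacked * weights

# Example: fuse two 32-channel feature maps from one encoder level
fusion = CoLearnFusion(channels=32)
ct_feat = torch.randn(1, 32, 16, 64, 64)
pet_feat = torch.randn(1, 32, 16, 64, 64)
fused = fusion(ct_feat, pet_feat)   # shape: (1, 64, 16, 64, 64)
```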


3. 3D FULLY CONVOLUTIONAL NETWORKS FOR CO-SEGMENTATION OF TUMORS ON PET-CT IMAGES (ISBI 2018)**
Abstract : Decision-level, U-Net, graph cut, PET-CT, lung cancer
Method : Independent networks for CT and PET; each outputs a probability map, and the two maps are then combined with a graph cut (a simplified decision-level sketch follows this entry).


Experiment : Lung cancer data; compared against graph cut.
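
The paper fuses the two probability maps with a graph cut; as a simpler stand-in, the sketch below shows decision-level fusion by averaging the per-modality probabilities and thresholding. It captures only the fusion idea, not the graph cut's spatial smoothness term, and the function name is illustrative.

```python
import numpy as np

def decision_level_fusion(prob_ct, prob_pet, threshold=0.5):
    """Fuse per-modality probability maps at the decision level.

    Simplified stand-in for the paper's graph cut: average the two
    predictions and threshold, ignoring any spatial smoothness term.
    """
    fused_prob = 0.5 * (prob_ct + prob_pet)       # average the two predictions
    return (fused_prob >= threshold).astype(np.uint8)

# Example with random probability maps of shape (depth, height, width)
prob_ct = np.random.rand(16, 64, 64)
prob_pet = np.random.rand(16, 64, 64)
mask = decision_level_fusion(prob_ct, prob_pet)
```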


4. HyperDense-Net: A hyper-densely connected CNN for multi-modal image segmentation (TMI 2019) ****
Abstract : Layer-level, DenseNet, MRI, brain
Method : One network per modality, with the intermediate layers of the two streams densely connected to each other (a rough sketch follows this entry).




Experiment : Brain data (iSEG-2017, MRBrainS); compared against layer-level fusion baselines: single dense path, dual dense path, and disentangled modalities with early fusion. Letting each modality pass through its own convolutions before concatenation clearly outperforms a plain two-channel input.
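
A rough PyTorch sketch of the hyper-dense idea, assuming two 3D streams in which every layer receives the concatenation of all earlier features from both streams; the layer widths and the class name are illustrative, not the released HyperDense-Net code.

```python
import torch
import torch.nn as nn

class HyperDenseBlock(nn.Module):
    """Two modality streams whose layers see all past features of both streams."""
    def __init__(self, in_ch=1, growth=8, num_layers=3):
        super().__init__()
        self.stream_a = nn.ModuleList()
        self.stream_b = nn.ModuleList()
        ch = in_ch
        for _ in range(num_layers):
            # Each layer's input is the concatenation of both streams so far
            self.stream_a.append(nn.Conv3d(2 * ch, growth, kernel_size=3, padding=1))
            self.stream_b.append(nn.Conv3d(2 * ch, growth, kernel_size=3, padding=1))
            ch += growth

    def forward(self, x_a, x_b):
        feats_a, feats_b = [x_a], [x_b]
        for layer_a, layer_b in zip(self.stream_a, self.stream_b):
            # Dense cross-modality input: everything produced so far by both streams
            dense_in = torch.cat(feats_a + feats_b, dim=1)
            feats_a.append(torch.relu(layer_a(dense_in)))
            feats_b.append(torch.relu(layer_b(dense_in)))
        return torch.cat(feats_a + feats_b, dim=1)

block = HyperDenseBlock()
t1 = torch.randn(1, 1, 8, 32, 32)   # e.g. a T1 MRI patch
t2 = torch.randn(1, 1, 8, 32, 32)   # e.g. a T2 MRI patch
out = block(t1, t2)                  # shape: (1, 50, 8, 32, 32)
```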


5. Deep learning for automatic tumour segmentation in PET/CT images of patients with head and neck cancers (MIDL 2019) *
Abstract : Input-level, U-Net, PET-CT, head and neck
Method : The two modalities are stacked as a two-channel input (see the snippet after this entry).
Experiment : Head-and-neck tumours; compared against a single-modality U-Net. This dataset is the source of the HECKTOR challenge data.
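
Input-level fusion is just channel concatenation before the first layer; a minimal sketch assuming registered 3D volumes, with the network call left as a placeholder comment rather than the paper's actual model.

```python
import torch

# Input-level fusion: stack PET and CT as channels of a single input tensor
ct  = torch.randn(1, 1, 32, 128, 128)   # CT volume  (batch, channel, D, H, W)
pet = torch.randn(1, 1, 32, 128, 128)   # PET volume, already registered to the CT
x = torch.cat([ct, pet], dim=1)          # shape: (1, 2, 32, 128, 128)

# Any segmentation network built with in_channels=2 can then consume x, e.g.
# logits = unet3d(x)   # placeholder for a U-Net taking two input channels
```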


6. Tumor co-segmentation in PET/CT using multi-modality fully convolutional neural network (Physics in Medicine & Biology 2019) **
Abstract : Layer-level, V-Net, PET-CT, lung cancer
Method : Two V-Nets first extract PET and CT features separately; the features are summed and passed through four convolutional layers to produce the result. A weighted cross-entropy loss is proposed to balance the influence of the two modalities (a sketch of such a loss follows this entry).
Experiment : Lung cancer data; compared against several other fusion strategies, traditional methods, and single-modality V-Nets.
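
A minimal sketch of a weighted cross-entropy loss in the spirit described above, under the assumption that each modality branch also produces an auxiliary prediction and the balancing is done with scalar weights; the paper's exact formulation may differ, and the weight names are illustrative.

```python
import torch
import torch.nn.functional as F

def weighted_modality_ce(logits_fused, logits_ct, logits_pet, target,
                         w_fused=1.0, w_ct=0.5, w_pet=0.5):
    """Cross-entropy on the fused output plus weighted per-modality terms.

    Assumption: each branch yields an auxiliary prediction and scalar
    weights balance the modalities; the paper's scheme may instead weight
    classes or voxels.
    """
    loss_fused = F.cross_entropy(logits_fused, target)
    loss_ct = F.cross_entropy(logits_ct, target)
    loss_pet = F.cross_entropy(logits_pet, target)
    return w_fused * loss_fused + w_ct * loss_ct + w_pet * loss_pet

# Example with random 3D logits (batch, classes, D, H, W) and integer labels
target = torch.randint(0, 2, (1, 8, 32, 32))
make_logits = lambda: torch.randn(1, 2, 8, 32, 32)
loss = weighted_modality_ce(make_logits(), make_logits(), make_logits(), target)
```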






Comments and suggestions of related papers are welcome~