
    Survival Prediction by Integrating Multimodal Pathological Images and Genomics

    Abstract: Integrating multimodal pathological images and genomics improves the accuracy of survival prediction for cancer patients, providing a more reliable basis for personalized medicine and precision treatment. To improve predictive accuracy, we propose an intermediate-fusion survival prediction method for these two modalities that mines their latent cross-modal relationships at three levels, "global-local-global". The method adopts multiple instance learning: instance-level features of whole slide images (WSIs) are extracted with a ResNet50 network, and genomic features are extracted with a self-normalizing network (SNN). A similarity measure learns globally similar semantic information across modalities, a bidirectional cross-attention module mines dense local connections between them, and an optimal transport method captures global structural consistency. An aggregator built from a Transformer encoder and a gated attention pooling (GAP) layer then aggregates the features into bag-level representations, and an estimated hazard function predicts each patient's survival risk. Experimental results on three public WSI datasets, bladder urothelial carcinoma (BLCA), lung adenocarcinoma (LUAD), and uterine corpus endometrial carcinoma (UCEC), show that the proposed method outperforms the compared methods, effectively fusing pathological images and genomic data and significantly improving survival prediction accuracy.
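
A rough PyTorch sketch of the fusion core described above may help make the pipeline concrete: a bidirectional cross-attention module for the dense local cross-modal connections, followed by a Transformer encoder and a gated attention pooling (GAP) layer that aggregates instance-level tokens into a bag-level feature. Dimensions, class names, and layer counts are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the intermediate-fusion aggregator: bidirectional cross-attention
# between WSI instance features and genomic features, then a Transformer
# encoder + gated attention pooling. All sizes are assumed for illustration.
import torch
import torch.nn as nn

class BidirectionalCrossAttention(nn.Module):
    """Each modality attends to the other (dense local interactions)."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.path_to_gene = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gene_to_path = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, path_tokens, gene_tokens):
        # Pathology instances query genomic tokens, and vice versa.
        p, _ = self.path_to_gene(path_tokens, gene_tokens, gene_tokens)
        g, _ = self.gene_to_path(gene_tokens, path_tokens, path_tokens)
        return path_tokens + p, gene_tokens + g      # residual connections

class GatedAttentionPooling(nn.Module):
    """Gated attention MIL pooling (Ilse et al.-style, assumed variant)."""
    def __init__(self, dim=256, hidden=128):
        super().__init__()
        self.V = nn.Linear(dim, hidden)
        self.U = nn.Linear(dim, hidden)
        self.w = nn.Linear(hidden, 1)

    def forward(self, tokens):                       # tokens: (B, N, dim)
        a = self.w(torch.tanh(self.V(tokens)) * torch.sigmoid(self.U(tokens)))
        a = torch.softmax(a, dim=1)                  # weights over instances
        return (a * tokens).sum(dim=1)               # bag-level feature (B, dim)

class FusionAggregator(nn.Module):
    def __init__(self, dim=256, heads=4, layers=2):
        super().__init__()
        self.cross = BidirectionalCrossAttention(dim, heads)
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.pool = GatedAttentionPooling(dim)

    def forward(self, path_tokens, gene_tokens):
        p, g = self.cross(path_tokens, gene_tokens)
        fused = self.encoder(torch.cat([p, g], dim=1))  # joint token sequence
        return self.pool(fused)                         # (B, dim) bag feature
```

Here path_tokens would be ResNet50 patch embeddings projected to the shared dimension and gene_tokens the SNN outputs, e.g. shapes (1, N, 256) and (1, M, 256) for one patient.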

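The global structural consistency step uses optimal transport. One standard realization, assumed here rather than taken from the paper, is entropic OT solved with Sinkhorn iterations over a cost matrix between cross-modal token embeddings, with the transport cost usable as an alignment loss:

```python
# Hypothetical entropic optimal transport between the two token sets of one
# sample; the Sinkhorn-based plan and cost are one common way to enforce
# global structural consistency across modalities.
import torch
import torch.nn.functional as F

def sinkhorn_alignment(path_tokens, gene_tokens, eps=0.1, iters=50):
    """path_tokens: (N, d), gene_tokens: (M, d) -> (N, M) plan and OT cost."""
    p = F.normalize(path_tokens, dim=-1)
    g = F.normalize(gene_tokens, dim=-1)
    cost = torch.cdist(p, g) ** 2                  # pairwise squared distances
    K = torch.exp(-cost / eps)                     # Gibbs kernel
    a = torch.full((p.size(0),), 1.0 / p.size(0))  # uniform source marginal
    b = torch.full((g.size(0),), 1.0 / g.size(0))  # uniform target marginal
    u = torch.ones_like(a)
    for _ in range(iters):                         # Sinkhorn fixed-point updates
        v = b / (K.t() @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]             # transport plan
    return plan, (plan * cost).sum()               # plan and alignment loss
```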
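
Finally, for estimating the hazard function, discrete-time survival heads are common in WSI survival work: follow-up time is split into K intervals, the network predicts a hazard per interval, and the risk score is the negative sum of interval survival probabilities. The sketch below follows that convention as an assumption, not a detail confirmed by the abstract:

```python
# Assumed discrete-time hazard head: per-interval hazards h_k, cumulative
# survival S_k = prod_{j<=k} (1 - h_j), and risk = -sum_k S_k (higher = riskier).
import torch
import torch.nn as nn

class DiscreteHazardHead(nn.Module):
    def __init__(self, dim=256, num_intervals=4):
        super().__init__()
        self.fc = nn.Linear(dim, num_intervals)

    def forward(self, bag_feature):                       # bag_feature: (B, dim)
        hazards = torch.sigmoid(self.fc(bag_feature))     # (B, K), each in (0, 1)
        survival = torch.cumprod(1.0 - hazards, dim=1)    # S_k per interval
        risk = -survival.sum(dim=1)                       # scalar risk per patient
        return hazards, survival, risk
```

Training such a head typically uses a negative log-likelihood over the intervals that accounts for censoring, with the concordance index over the risk scores as the usual evaluation metric.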

