Few-shot bridge pavement crack segmentation based on a dual-branch feature fusion enhancement network
Affiliation:

1. College of Architectural Engineering, Shanxi Vocational University of Engineering Science and Technology; 2. School of Civil Engineering, Lanzhou University of Technology


Fund Project:

Natural Science Foundation of Gansu Province (Grant No. 22JR5RA286).



Abstract:

To address the problems of inaccurate localization and poor segmentation of subtle bridge pavement cracks in existing methods, a few-shot bridge pavement crack segmentation method based on a dual-branch feature fusion enhancement network is proposed. The method builds a baseline model on a dual-branch network structure consisting of a support branch and a query branch, and uses annotated support images to guide the segmentation of cracks in same-class query images. First, pre-trained Swin Transformer and ResNet-50 networks extract multi-scale features from the bridge pavement crack images in the support branch. Then, a multi-scale feature enhancement attention module promotes interaction between the features of the two backbone networks, and a prototype set that guides the segmentation of crack regions in the query images is generated from the interacted features. Finally, similarity values between the query features and the prototype set are computed position by position, and the crack regions in the query images are segmented pixel by pixel according to the maximum similarity value. Extensive experiments on a self-built dataset show that the proposed method achieves an mIoU of 72.04% and an FB-IoU of 91.32%, along with 95.23% Precision, 95.08% Recall, and a 95.02% F1 score, outperforming current mainstream segmentation models in overall performance.
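The abstract does not spell out the internals of the multi-scale feature enhancement attention module, so the following is only a minimal sketch of one plausible design: a cross-attention block in which ResNet-50 features query Swin Transformer features. The class name CrossBackboneFusion, the projection width dim=256, and the head count are illustrative assumptions, not the paper's actual module.

```python
import torch
import torch.nn as nn

class CrossBackboneFusion(nn.Module):
    """Hypothetical cross-attention between ResNet-50 and Swin features.

    A stand-in for the paper's multi-scale feature enhancement attention
    module; the real module's design is not described in the abstract.
    """
    def __init__(self, dim_res, dim_swin, dim=256, heads=8):
        super().__init__()
        self.proj_res = nn.Conv2d(dim_res, dim, kernel_size=1)
        self.proj_swin = nn.Conv2d(dim_swin, dim, kernel_size=1)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_res, feat_swin):
        B, _, H, W = feat_res.shape
        q = self.proj_res(feat_res).flatten(2).transpose(1, 2)     # (B, HW, dim)
        kv = self.proj_swin(feat_swin).flatten(2).transpose(1, 2)  # (B, H'W', dim)
        out, _ = self.attn(q, kv, kv)   # ResNet positions attend to Swin features
        out = self.norm(out + q)        # residual connection + LayerNorm
        return out.transpose(1, 2).reshape(B, -1, H, W)

# Toy shapes: a ResNet-50 stage-4 map and a Swin stage map at the same stride.
fused = CrossBackboneFusion(2048, 1024)(torch.randn(1, 2048, 16, 16),
                                        torch.randn(1, 1024, 16, 16))
```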
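Generating a prototype set from annotated support images is most commonly implemented as masked average pooling over the support feature map. The sketch below shows that standard operation under the assumption that the paper follows it; masked_average_pooling is a hypothetical helper name, and the two-prototype (background/crack) set is one common choice.

```python
import torch
import torch.nn.functional as F

def masked_average_pooling(feat, mask):
    """Average support features over a (resized) binary mask.

    feat: (B, C, H, W) support feature map
    mask: (B, 1, h, w) annotated crack mask of the support image
    returns: (B, C) prototype vectors
    """
    # Bring the annotation down to the feature resolution.
    mask = F.interpolate(mask.float(), size=feat.shape[-2:],
                         mode="bilinear", align_corners=False)
    # Mean of features inside the masked region (eps avoids division by zero).
    return (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)

# A two-element prototype set: background and crack foreground.
feat = torch.randn(1, 256, 64, 64)            # fused support features (toy)
mask = torch.randint(0, 2, (1, 1, 256, 256))  # toy support annotation
prototypes = torch.cat([masked_average_pooling(feat, 1 - mask),
                        masked_average_pooling(feat, mask)])  # (2, 256)
```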
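The final step, position-by-position similarity between query features and the prototype set followed by a maximum-similarity assignment, can be written directly with cosine similarity. Again a sketch under the same assumptions, reusing the toy prototype set above; match_prototypes is an illustrative name.

```python
import torch
import torch.nn.functional as F

def match_prototypes(query_feat, prototypes):
    """Label each query position with its most similar prototype.

    query_feat: (B, C, H, W) query feature map
    prototypes: (K, C) prototype set (index 0 = background, 1 = crack)
    returns:    (B, H, W) predicted label map
    """
    q = F.normalize(query_feat, dim=1)   # unit-norm along channels
    p = F.normalize(prototypes, dim=1)
    # Cosine similarity of every position to every prototype: (B, K, H, W)
    sim = torch.einsum("bchw,kc->bkhw", q, p)
    # The maximum-similarity prototype gives the pixel-wise segmentation.
    return sim.argmax(dim=1)

pred = match_prototypes(torch.randn(1, 256, 64, 64), torch.randn(2, 256))
```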

History
  • Received: 2024-10-07
  • Revised: 2024-12-23
  • Accepted: 2025-01-07