
Chinese Journal of Injury Repair and Wound Healing (Electronic Edition) ›› 2025, Vol. 20 ›› Issue (03): 192-198. doi: 10.3877/cma.j.issn.1673-9450.2025.03.002

Original Article

  • Fund program: Natural Science Foundation of the Shanghai Science and Technology Innovation Action Plan (21ZR1440600)

Establishment and test results of HFNet model for burn and scald wound depth assessment

Kecheng Zhang1, Rui Wang1, Lei Yi2, Zengding Zhou2,()   

  1. School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
    2. Department of Burn and Plastic Surgery, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200025, China
  • Received: 2025-03-03 Published: 2025-06-01
  • Corresponding author: Zengding Zhou
Cite this article:

Kecheng Zhang, Rui Wang, Lei Yi, Zengding Zhou. Establishment and test results of HFNet model for burn and scald wound depth assessment[J/OL]. Chinese Journal of Injury Repair and Wound Healing(Electronic Edition), 2025, 20(03): 192-198.


Objective

To establish a global-local feature hierarchical fusion image classification network model and thereby improve the reliability and accuracy of burn and scald wound depth assessment.

Methods

A total of 619 wound images were collected from burn and scald patients treated at the outpatient and emergency services of the Department of Burn and Plastic Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine. Two burn physicians, each with more than 3 years of clinical experience, independently annotated the images using the annotation tool LabelMe, and the annotations were cross-validated with other physicians in the department. The image set was divided into training, validation, and test sets at a ratio of 7:2:1, and the training set was augmented to 2 598 images. A global-local feature hierarchical fusion network, HFNet, was designed and pre-trained to learn basic image features, then transferred to the collected burn image classification dataset, where classification accuracy was further improved through parameter optimization. The precision, recall, F1 score, and inference time of the HFNet model were compared with those of the ConvNeXt, Swin-Transformer, UniFormer, and BiFormer models to evaluate its performance.
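The 7:2:1 split and training-set augmentation described above can be sketched as follows; the file names and shuffle seed are illustrative, not from the paper. Note that int(619 × 0.7) = 433 training originals, and 433 × 6 = 2 598, so the reported augmented-set size is consistent with six variants (e.g., the original plus five transforms) per training image.

```python
import random

def split_dataset(paths, seed=42):
    """Split image paths into train/val/test at a 7:2:1 ratio."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    paths = sorted(paths)
    rng.shuffle(paths)
    n = len(paths)
    n_train = int(n * 0.7)
    n_val = int(n * 0.2)
    train = paths[:n_train]
    val = paths[n_train:n_train + n_val]
    test = paths[n_train + n_val:]
    return train, val, test

# 619 images, as in the study; names are hypothetical placeholders.
paths = [f"img_{i:04d}.jpg" for i in range(619)]
train, val, test = split_dataset(paths)
# 433 training originals; 433 * 6 = 2598 matches the reported
# size of the augmented training set.
```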

Results

In testing, the precision of the HFNet model for classifying first-degree, second-degree, and third-degree burn wounds was 93.53%, 94.08%, and 86.52%, respectively, with a mean of 91.63%; the recall was 91.99%, 89.89%, and 92.71%, with a mean of 91.69%; and the F1 score was 93.56%, 90.96%, and 90.46%, with a mean of 91.66%. The average precision was 92.75%, 91.94%, and 89.51%, respectively, for a mean average precision of 91.40%. The confusion matrix showed per-class accuracies of 90%, 92%, and 93% for first-, second-, and third-degree wounds. Compared with BiFormer and the other baseline models, HFNet achieved higher precision at a moderate inference speed, striking a good overall balance between accuracy and computational efficiency.
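The per-class figures above can be recomputed from a confusion matrix; the per-class accuracy read off a confusion matrix is the class recall. The matrix below is hypothetical (chosen only so its recalls match the reported 90%/92%/93%), not the study's data.

```python
import numpy as np

def per_class_metrics(cm):
    """Precision, recall and F1 per class from a confusion matrix.

    cm[i, j] = number of samples with true class i predicted as class j.
    """
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # column sums = predicted counts
    recall = tp / cm.sum(axis=1)      # row sums = true counts
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative 3-class matrix (rows: true I/II/III degree).
cm = [[90,  7,  3],
      [ 5, 92,  3],
      [ 2,  5, 93]]
p, r, f1 = per_class_metrics(cm)
```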

Conclusion

The HFNet model improves the accuracy and efficiency of burn and scald wound depth assessment, providing burn specialists with precise classification information for rapidly judging burn severity. It also enables the accumulation of large volumes of high-quality burn wound classification data, supporting further model optimization.

Figure 1 Sample images from the burn and scald wound dataset. A: first-degree burn; B: second-degree burn; C: third-degree burn
Figure 2 Network architecture of the HFNet model
Figure 3 Structure of the adaptive hierarchical feature fusion module
Figure 4 Confusion matrix of the HFNet model for first-, second-, and third-degree burn wounds
Table 1 Performance comparison of different models for burn wound detection
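The abstract does not specify the equations of the adaptive hierarchical feature fusion module shown in Figure 3; the sketch below illustrates only the general idea behind such modules — softmax-gated convex mixing of global context with stage-local features — using dummy shapes and a fixed stand-in for the learned gate.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_global_local(global_feat, local_feat):
    """Convex mixing of a global feature vector with a local one.

    A learned gate would normally produce the mixing weights; a crude
    fixed statistic stands in for it here (illustrative only).
    """
    stacked = np.stack([global_feat, local_feat])  # (2, C)
    scores = stacked.mean(axis=1)                  # (2,) stand-in gate logits
    w = softmax(scores)                            # convex weights, sum to 1
    return w[0] * global_feat + w[1] * local_feat

# Hierarchical use: each stage's local features refine the running
# fused representation (stage features are dummy values).
stages = [np.ones(8) * s for s in (1.0, 2.0, 3.0)]
fused = np.zeros(8)  # initial global context
for local in stages:
    fused = fuse_global_local(fused, local)
```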
[1]
Atiyeh B, Gunn S, Hayek S. Military and civilian burns during armed conflicts[J]. Ann Burns Fire Disasters, 2007, 20(4): 203-215.
[2]
Chang CW, Ho CY, Lai F, et al. Application of multiple deep learning models for automatic burn wound assessment[J]. Burns, 2023, 49(5): 1039-1051.
[3]
Zhang J, Wang R, Zhang K, et al. Research progress on deep learning-based image segmentation, classification, and detection of skin burn and scald wounds[J]. Chinese Journal of Injury Repair and Wound Healing (Electronic Edition), 2024, 19(2): 172-175.
[4]
Dosovitskiy A, Beyer L, Kolesnikov A, et al. An image is worth 16×16 words: transformers for image recognition at scale[C]//Proceedings of the 9th International Conference on Learning Representations (ICLR). 2021: arXiv:2010.11929.
[5]
Cirrincione G, Cannata S, Cicceri G, et al. Transformer-based approach to melanoma detection[J]. Sensors, 2023, 23(12): 5677.
[6]
Yuan F, Peng Y, Huang Q, et al. A bi-directionally fused boundary aware network for skin lesion segmentation[J]. IEEE Trans Image Process, 2024, 33: 6340-6353.
[7]
Liu Z, Mao H, Wu CY, et al. A ConvNet for the 2020s[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 11976-11986.
[8]
Liu Z, Lin Y, Cao Y, et al. Swin Transformer: hierarchical vision transformer using shifted windows[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 10012-10022.
[9]
Li K, Wang Y, Zhang J, et al. UniFormer: unifying convolution and self-attention for visual recognition[J]. IEEE Trans Pattern Anal Mach Intell, 2023, 45(10): 12581-12600.
[10]
Zhu L, Wang X, Ke Z, et al. BiFormer: vision transformer with bi-level routing attention[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023: 10323-10333.
[11]
Feizkhah A, Mobayen M, Habibiroudkenar P, et al. The importance of considering biomechanical properties in skin graft: are we missing something?[J]. Burns, 2022, 48(7): 1768-1769.
[12]
Pabitha C, Vanathi B. DenseMask RCNN: a hybrid model for skin burn image classification and severity grading[J]. Neural Processing Letters, 2021, 53(1): 319-337.
[13]
Wearn C, Lee KC, Hardwicke J, et al. Prospective comparative evaluation study of laser Doppler imaging and thermal imaging in the assessment of burn depth[J]. Burns, 2018, 44(1): 124-133.
[14]
Dang J, Lin M, Tan C, et al. Use of infrared thermography for assessment of burn depth and healing potential: a systematic review[J]. J Burn Care Res, 2021, 42(6): 1120-1127.
[15]
Korotkova O, Gbur G. Applications of optical coherence theory[J]. Progress in Optics, 2020, 65: 43-104.
[16]
He X, Tan EL, Bi H, et al. Fully transformer network for skin lesion analysis[J]. Med Image Anal, 2022, 77: 102357.
[17]
Pacal I. A novel Swin transformer approach utilizing residual multi-layer perceptron for diagnosing brain tumors in MRI images[J]. International Journal of Machine Learning and Cybernetics, 2024, 15(9): 3579-3597.
[18]
Hassan E, Hossain MS, Saber A, et al. A quantum convolutional network and ResNet(50)-based classification architecture for the MNIST medical dataset[J]. Biomedical Signal Processing and Control, 2024, 87: 105560.
[19]
Suo Y, He Z, Liu Y. Deep learning CS-ResNet-101 model for diabetic retinopathy classification[J]. Biomedical Signal Processing and Control, 2024, 97: 106661.
[20]
Suha SA, Sanam TF. A deep convolutional neural network-based approach for detecting burn severity from skin burn images[J]. Machine Learning with Applications, 2022, 9: 100371.
[21]
Yıldız M, Sarpdağı Y, Okuyar M, et al. Segmentation and classification of skin burn images with artificial intelligence: development of a mobile application[J]. Burns, 2024, 50(4): 966-979.
[22]
Zhang B, Zhou J. Multi-feature representation for burn depth classification via burn images[J]. Artif Intell Med, 2021, 118: 102128.
[23]
Rangel-Olvera B, Rosas-Romero R. Detection and classification of burnt skin via sparse representation of signals by over-redundant dictionaries[J]. Comput Biol Med, 2021, 132: 104310.
[24]
Zhang J, Wang R, Zhang K, et al. Establishment and test results of P-YOLO, an intelligent detection model for burn and scald wound depth[J]. Chinese Journal of Injury Repair and Wound Healing (Electronic Edition), 2024, 19(5): 379-385.