
Chinese Journal of Injury Repair and Wound Healing (Electronic Edition) ›› 2024, Vol. 19 ›› Issue (05): 379-385. doi: 10.3877/cma.j.issn.1673-9450.2024.05.002

Original Article

  • Fund program:
    Natural Science Foundation of the Shanghai Science and Technology Innovation Action Plan (21ZR1440600)

Establishment and testing effectiveness of P-YOLO model for intelligent detection of the depth of burn wounds

Jiawei Zhang1, Rui Wang1, Kecheng Zhang1, Lei Yi2, Zengding Zhou2

  1. School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
    2. Department of Burn and Plastic Surgery, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200025, China
  • Received: 2024-03-21 Published: 2024-10-01
  • Corresponding author: Zengding Zhou
Cite this article:

Jiawei Zhang, Rui Wang, Kecheng Zhang, Lei Yi, Zengding Zhou. Establishment and testing effectiveness of P-YOLO model for intelligent detection of the depth of burn wounds[J]. Chinese Journal of Injury Repair and Wound Healing (Electronic Edition), 2024, 19(05): 379-385.


Objective

To develop an intelligent detection model for burn wound depth based on deep learning technology and computer vision, and to test its effectiveness and accuracy in detecting burn wound images.

Methods

From January 2022 to February 2024, 492 photographs of burn wounds, taken within 48 h after injury from patients who met the inclusion criteria and were treated at the Department of Burn and Plastic Surgery, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, were collected. The photographs were reordered and numbered. Two associate chief physicians, each with more than three years of practice, used the LabelMe image annotation tool to mark the target wounds and grade their severity as first-degree, second-degree, or third-degree. The dataset was expanded to 2 952 images using image processing techniques and split into training, validation, and test sets at a 7:2:1 ratio. An intelligent burn wound depth detection model based on deep learning, P-YOLO, was proposed and built under Python 3.10.0, with network parameters adjusted and optimized over multiple training batches. Testing yielded performance metrics including precision, recall, and mean average precision (mAP) at different intersection over union (IoU) thresholds; F1 score curves and confusion matrices were plotted from these experimental results.
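The 7:2:1 split described above can be sketched as follows; this is a minimal illustration, and the image identifiers, seed, and helper name are assumptions rather than details from the paper. The counts reflect the augmented dataset of 2 952 images.

```python
# Sketch of a shuffled 7:2:1 train/validation/test split (illustrative;
# the paper does not specify its splitting code).
import random

def split_dataset(image_ids, seed=42):
    """Shuffle image ids and split them into train/val/test at 7:2:1."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    n = len(ids)
    n_train = int(n * 0.7)
    n_val = int(n * 0.2)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(range(2952))
print(len(train), len(val), len(test))  # 2066 590 296
```

Rounding the fractional boundaries down and assigning the remainder to the test set keeps the three subsets disjoint and exhaustive.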

Results

(1) Testing results showed that the precision rates of the designed P-YOLO intelligent detection model for identifying first-degree, second-degree, and third-degree burns were 0.962, 0.931, and 0.886, respectively. The recall rates were 0.849, 0.828, and 0.857, respectively, and the F1 scores were 0.902, 0.876, and 0.871, respectively. (2) The confusion matrix indicated that the P-YOLO model detected first-degree, second-degree, and third-degree burns with accuracies of 0.86, 0.87, and 0.91, respectively. (3) At an IoU threshold of 0.5, the mAP values of the P-YOLO model for detecting first-degree, second-degree, and third-degree burns were 0.893, 0.885, and 0.838, respectively. Across all categories, the model achieved an overall mAP of 0.872. (4) Compared with the Faster R-CNN, YOLOv5, and YOLOv7 detection models, the P-YOLO model had the highest mAP and exhibited the best detection performance.
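As a numerical check, the reported per-class F1 scores and the overall mAP follow directly from the precision, recall, and per-class average precision values given above (F1 is the harmonic mean of precision and recall; the overall mAP is the mean of the per-class average precisions):

```python
# Recompute the abstract's F1 scores and overall mAP from its reported values.

def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

precision = {"I": 0.962, "II": 0.931, "III": 0.886}
recall = {"I": 0.849, "II": 0.828, "III": 0.857}

for degree in precision:
    print(degree, round(f1_score(precision[degree], recall[degree]), 3))
# I 0.902
# II 0.876
# III 0.871

# Per-class average precision at IoU threshold 0.5, as reported
ap_at_iou_050 = {"I": 0.893, "II": 0.885, "III": 0.838}
mean_ap = sum(ap_at_iou_050.values()) / len(ap_at_iou_050)
print(round(mean_ap, 3))  # 0.872
```

All three F1 scores and the overall mAP of 0.872 match the reported results.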

Conclusion

The deep learning-based intelligent detection model P-YOLO shows high overall detection accuracy and reliability, and can improve the accuracy and efficiency of burn wound depth diagnosis by burn surgeons.

Figure 1 Sample images from the burn wound dataset. A: first-degree burn wound; B: second-degree burn wound; C: third-degree burn wound
Figure 2 Network architecture of the P-YOLO detection model
Figure 3 Precision curves of the P-YOLO model for detecting the three burn wound depths
Figure 4 Recall curves of the P-YOLO model for detecting the three burn wound depths
Figure 5 Confusion matrix of the P-YOLO model for detecting the three burn wound depths