Dong Ke, Yan Yunyang, Geng Jiawen, et al. Multi-Scale Flame Detection Based on Enhanced Receptive Field Feature[J]. Journal of Nanjing Normal University (Natural Science Edition), 2025, 48(04): 87-95, 105. [doi:10.3969/j.issn.1001-4616.2025.04.009]

Multi-Scale Flame Detection Based on Enhanced Receptive Field Feature

Journal of Nanjing Normal University (Natural Science Edition) [ISSN: 1001-4616 / CN: 32-1239/N]

Volume:
48
Issue:
2025, No. 4
Pages:
87-95, 105
Section:
Computer Science and Technology
Publication date:
2025-08-20

Article Info

Title:
Multi-Scale Flame Detection Based on Enhanced Receptive Field Feature
Article ID:
1001-4616(2025)04-0087-09
Author(s):
Dong Ke, Yan Yunyang, Geng Jiawen, Yu Yongtao, Wang Panlong, Ye Xiang
(Faculty of Computer & Software Engineering, Huaiyin Institute of Technology, Huaian 223003, China)
Keywords:
flame detection; receptive field feature; attention mechanism; loss function; YOLOv8n
CLC number:
TP391.4
DOI:
10.3969/j.issn.1001-4616.2025.04.009
Document code:
A
Abstract:
Aiming at the problems of poor fire-detection performance and weak anti-interference ability, a multi-scale fire detection method with enhanced receptive-field features is proposed. First, receptive-field attention convolution (RFAConv) is introduced to strengthen the extraction of receptive-field spatial features. Second, the C2fiC module is designed by combining the inverted residual mobile block (iRMB) with channel prior convolutional attention (CPCA), improving the model's ability to express and fuse features at different scales. Then, a shared-parameter structure with lightweight convolutions is adopted to reconstruct the detection head, reducing the model's parameter count and computational complexity. Finally, the Focaler-GIoU loss function is introduced to balance easy and hard samples. Experimental results show that the improved model has fewer parameters and lower computational cost while achieving higher detection accuracy, meeting the detection requirements of fire scenes.
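Of the components summarized in the abstract, the Focaler-GIoU loss is the most self-contained. A minimal sketch follows, assuming the usual formulations from refs. [12] and [13]: the GIoU penalty is the fraction of the smallest enclosing box not covered by the union of the two boxes, and the Focaler mapping linearly remaps IoU over an interval [d, u] to focus regression on easy or hard samples. The function names, the (x1, y1, x2, y2) box format, and the illustrative thresholds d = 0.0, u = 0.95 are this sketch's assumptions, not values taken from the paper.

```python
def box_iou_giou(a, b):
    """IoU and GIoU penalty for two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # Intersection rectangle (empty intersection clamps to zero area).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # Smallest enclosing box C; penalty = |C \ (A ∪ B)| / |C|.
    area_c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    penalty = (area_c - union) / area_c
    return iou, penalty

def focaler_giou_loss(a, b, d=0.0, u=0.95):
    """Focaler-GIoU: GIoU penalty plus a linearly remapped (focused) IoU term.

    d and u are the Focaler interval thresholds (hyperparameters; the
    defaults here are illustrative, not the paper's tuned values).
    """
    iou, penalty = box_iou_giou(a, b)
    # Focaler mapping: clamp IoU outside [d, u], interpolate linearly inside.
    if iou < d:
        iou_f = 0.0
    elif iou > u:
        iou_f = 1.0
    else:
        iou_f = (iou - d) / (u - d)
    return 1.0 - iou_f + penalty
```

Raising d makes all low-IoU (hard) boxes contribute the same maximal IoU term, shifting the regression gradient toward easier samples; lowering u has the opposite effect, which is how the interval balances easy and hard samples.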

References:

[1] LI Q, YUE L. Research on design improvement of aspirating smoke fire detectors[J]. Fire Science and Technology, 2021, 40(11): 1644-1647. (in Chinese)
[2] PI J, LIU Y, LI J. Research on a lightweight forest fire detection algorithm based on YOLOv5s[J]. Journal of Graphics, 2023, 44(1): 26-32. (in Chinese)
[3] ZHANG L, LU C, XU H, et al. MMFNet: forest fire smoke detection using multiscale convergence coordinated pyramid network with mixed attention and fast-robust NMS[J]. IEEE Internet of Things Journal, 2023, 10(20): 18168-18180.
[4] GONG C, YAN Y, BIAN S, et al. Flame detection method based on Fast-CAANet[J]. Journal of Nanjing Normal University (Natural Science Edition), 2024, 47(2): 109-116. (in Chinese)
[5] WANG T, WANG J, WANG C, et al. Improving YOLOX network for multi-scale fire detection[J]. The Visual Computer, 2023: 1-13.
[6] ZHENG H, WANG G, XIAO D, et al. FTA-DETR: an efficient and precise fire detection framework based on an end-to-end architecture applicable to embedded platforms[J]. Expert Systems with Applications, 2024, 248: 123394.
[7] ZHANG X, LIU C, YANG D, et al. RFAConv: innovating spatial attention and standard convolutional operation[EB/OL]. (2024-03-28)[2024-05-18]. https://arxiv.org/abs/2304.03198.
[8] ZHANG J, LI X, LI J, et al. Rethinking mobile block for efficient attention-based models[C]//2023 IEEE/CVF International Conference on Computer Vision (ICCV). Paris, France: IEEE Computer Society, 2023: 1389-1400.
[9] HUANG H, CHEN Z, ZOU Y, et al. Channel prior convolutional attention for medical image segmentation[J]. Computers in Biology and Medicine, 2024, 178: 108784.
[10] LI H, LI J, WEI H, et al. Slim-neck by GSConv: a lightweight-design for real-time detector architectures[J]. Journal of Real-Time Image Processing, 2024, 21(3): 62.
[11] ZHENG Z, WANG P, LIU W, et al. Distance-IoU loss: faster and better learning for bounding box regression[C]//Proceedings of the AAAI Conference on Artificial Intelligence. New York, NY, USA: AAAI, 2020, 34(7): 12993-13000.
[12] REZATOFIGHI H, TSOI N, GWAK J Y, et al. Generalized intersection over union: a metric and a loss for bounding box regression[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019: 658-666.
[13] ZHANG H, ZHANG S. Focaler-IoU: more focused intersection over union loss[EB/OL]. (2024-01-19)[2024-05-18]. https://arxiv.org/abs/2401.10525.
[14] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 39(6): 1137-1149.
[15] ZHAO Y, LV W, XU S, et al. DETRs beat YOLOs on real-time object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA: IEEE, 2024: 16965-16974.
[16] TAN M, PANG R, LE Q V. EfficientDet: scalable and efficient object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA: IEEE, 2020: 10781-10790.
[17] WANG C Y, BOCHKOVSKIY A, LIAO H Y M. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver, BC, Canada: IEEE, 2023: 7464-7475.
[18] WANG A, CHEN H, LIU L, et al. YOLOv10: real-time end-to-end object detection[EB/OL]. (2024-5-23)[2024-5-18]. https://arxiv.org/abs/2405.14458.

Memo:
Received: 2024-09-24.
Foundation items: National Natural Science Foundation of China (62076107); Jiangsu Province "Six Talent Peaks" Program (2013DZXX-023).
Corresponding author: Yan Yunyang, PhD, professor; research interests: digital image processing and pattern recognition. E-mail: yunyang@hyit.edu.cn
Last Update: 2025-08-20