## 1. Project Overview

This project extends the Ultralytics framework for semantic segmentation and YOLO-series model improvement experiments. Its core feature is that switching the YAML configuration file is enough to train, compare, and validate different network structures quickly, without writing a separate training script for each model.

Main model families currently supported:

- Semantic segmentation models: UNet, DeepLabV3, DPT, FPN, PSPNet, MAnet, PAN, Linknet, UPerNet, Segformer
- YOLO-series models: YOLOv8, YOLOv10, YOLO11, YOLO12, YOLO26

## 2. Advantages of This Project ✨

Simply replacing the model YAML under `ultralytics/cfg/models/...` lets you run comparison experiments across different structures on the same dataset, with the same training entry point and the same evaluation pipeline. The defining feature of this framework is that switching a YAML is all it takes to compare different structures.

## 3. Module Information Card

| Item | Content |
| --- | --- |
| YAML file | `yolo11/yolo11-ContrastDrivenFeatureAggregation-2.yaml` |
| Module name | ContrastDrivenFeatureAggregation |
| Model series | YOLO11 |
| Variant | Scheme 2 |
| Original code location | `ultralytics/nn/extra_modules/attention/CDFA.py` |
| Integration approach | Insert the contrast-driven feature aggregation module after the high-level P5 backbone features, then feed the result into the subsequent multi-scale fusion |

## 4. Paper Introduction

### 4.1 Paper Links

- Paper that better matches the current implementation: http://arxiv.org/abs/2412.08345
- Another link also kept in the source code: https://arxiv.org/pdf/2407.19768

### 4.2 Paper Summary

The paper that better matches the current implementation is *ConDSeg: A General Medical Image Segmentation Framework via Contrast-Driven Feature Enhancement*. This work mainly targets the difficulty of separating foreground from background in low-contrast segmentation scenes. The authors argue that when the target region differs from the background only slightly in intensity, texture, or local structure, plain convolution stacks or simple attention often fail to capture the truly discriminative regions reliably, and the model is easily misled by co-occurring background patterns. The paper therefore emphasizes modeling the "contrast relationship": it looks not only at the foreground features themselves but also explicitly models the difference between foreground and background and their uncertain regions, and then uses contrast-driven feature enhancement and aggregation so that the network highlights valid target responses more stably. This idea is particularly valuable for segmentation tasks because it directly addresses blurred boundaries and low contrast.

⚠️ Note: the source code currently contains two paper links. Judging from the module name and implementation details, the local ContrastDrivenFeatureAggregation is closer to the ideas of ConDSeg than to the task background of the other link. This document therefore describes the module according to the better-matching paper and explicitly keeps this inconsistency note.

### 4.3 Core Idea of the Module ✨

The module uses contrast-style enhancement to improve the separability of foreground and background. Placed after the high-level P5 semantic features, it is well suited to performing one round of discriminative filtering on high-level semantics before they propagate into the subsequent fusion paths. Within this project, this design is a natural candidate for insertion-position comparison experiments against Scheme 1 and Scheme 3.

## 5. Improvement Steps

### Step 1: Locate and add the original module code

This step first confirms the location of the original ContrastDrivenFeatureAggregation implementation, then adds the corresponding code to the current project as the basis for the later module import, `tasks.py` registration, and YAML usage.

- Original code location: `ultralytics/nn/extra_modules/attention/CDFA.py`
- Module category: attention
- Purpose of this step: add the contrast-driven feature aggregation module to the attention module directory so that the YOLO11 semantic segmentation structure can call it directly

```python
import os, sys
sys.path.append(os.path.dirname(os.path.abspath(__file__)) + '/../../../..')
import warnings
warnings.filterwarnings('ignore')
from calflops import calculate_flops
import torch, math
import torch.nn as nn
import torch.nn.functional as F
from ultralytics.nn.modules.conv import Conv


class HaarWaveletConv(nn.Module):
    """Fixed Haar wavelet decomposition implemented as a grouped convolution."""

    def __init__(self, in_channels, grad=False):
        super(HaarWaveletConv, self).__init__()
        self.in_channels = in_channels

        self.haar_weights = torch.ones(4, 1, 2, 2)
        # filters 1-3 become the column, row, and diagonal difference (detail) filters
        self.haar_weights[1, 0, 0, 1] = -1
        self.haar_weights[1, 0, 1, 1] = -1
        self.haar_weights[2, 0, 1, 0] = -1
        self.haar_weights[2, 0, 1, 1] = -1
        self.haar_weights[3, 0, 1, 0] = -1
        self.haar_weights[3, 0, 0, 1] = -1

        self.haar_weights = torch.cat([self.haar_weights] * self.in_channels, 0)
        self.haar_weights = nn.Parameter(self.haar_weights)
        self.haar_weights.requires_grad = grad

    def forward(self, x):
        B, _, H, W = x.size()
        x = F.pad(x, [0, 1, 0, 1], value=0)
        out = F.conv2d(x, self.haar_weights, bias=None, stride=1, groups=self.in_channels) / 4.0
        out = out.reshape([B, self.in_channels, 4, H, W])
        out = torch.transpose(out, 1, 2)
        out = out.reshape([B, self.in_channels * 4, H, W])
        a, h, v, d = out.chunk(4, 1)
        # low-frequency approximation, and the sum of the high-frequency details
        return a, h + v + d


class ContrastDrivenFeatureAggregation(nn.Module):
    def __init__(self, dim, num_heads=8, kernel_size=3, padding=1, stride=1,
                 attn_drop=0., proj_drop=0.):
        super().__init__()
        self.dim = dim
        self.num_heads = num_heads
        self.kernel_size = kernel_size
        self.padding = padding
        self.stride = stride
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5

        self.wavelet = HaarWaveletConv(dim)

        self.v = nn.Linear(dim, dim)
        self.attn_fg = nn.Linear(dim, kernel_size ** 4 * num_heads)
        self.attn_bg = nn.Linear(dim, kernel_size ** 4 * num_heads)

        self.attn_drop = nn.Dropout(attn_drop)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop)

        self.unfold = nn.Unfold(kernel_size=kernel_size, padding=padding, stride=stride)
        self.pool = nn.AvgPool2d(kernel_size=stride, stride=stride, ceil_mode=True)

        self.input_cbr = nn.Sequential(
            Conv(dim, dim, 3),
            Conv(dim, dim, 3),
        )
        self.output_cbr = nn.Sequential(
            Conv(dim, dim, 3),
            Conv(dim, dim, 3),
        )

    def forward(self, x):
        x = self.input_cbr(x)
        # Haar decomposition: low-frequency part as background cue, high-frequency part as foreground cue
        bg, fg = self.wavelet(x)

        x = x.permute(0, 2, 3, 1)
        fg = fg.permute(0, 2, 3, 1)
        bg = bg.permute(0, 2, 3, 1)

        B, H, W, C = x.shape

        v = self.v(x).permute(0, 3, 1, 2)
        v_unfolded = self.unfold(v).reshape(B, self.num_heads, self.head_dim,
                                            self.kernel_size * self.kernel_size,
                                            -1).permute(0, 1, 4, 3, 2)

        # foreground-driven attention first, then background-driven attention on the re-weighted features
        attn_fg = self.compute_attention(fg, B, H, W, C, 'fg')
        x_weighted_fg = self.apply_attention(attn_fg, v_unfolded, B, H, W, C)

        v_unfolded_bg = self.unfold(x_weighted_fg.permute(0, 3, 1, 2)).reshape(
            B, self.num_heads, self.head_dim,
            self.kernel_size * self.kernel_size, -1).permute(0, 1, 4, 3, 2)
        attn_bg = self.compute_attention(bg, B, H, W, C, 'bg')
        x_weighted_bg = self.apply_attention(attn_bg, v_unfolded_bg, B, H, W, C)

        x_weighted_bg = x_weighted_bg.permute(0, 3, 1, 2)
        out = self.output_cbr(x_weighted_bg)
        return out

    def compute_attention(self, feature_map, B, H, W, C, feature_type):
        attn_layer = self.attn_fg if feature_type == 'fg' else self.attn_bg
        h, w = math.ceil(H / self.stride), math.ceil(W / self.stride)

        feature_map_pooled = self.pool(feature_map.permute(0, 3, 1, 2)).permute(0, 2, 3, 1)

        attn = attn_layer(feature_map_pooled).reshape(
            B, h * w, self.num_heads,
            self.kernel_size * self.kernel_size,
            self.kernel_size * self.kernel_size).permute(0, 2, 1, 3, 4)
        attn = attn * self.scale
        attn = F.softmax(attn, dim=-1)
        attn = self.attn_drop(attn)
        return attn

    def apply_attention(self, attn, v, B, H, W, C):
        x_weighted = (attn @ v).permute(0, 1, 4, 3, 2).reshape(
            B, self.dim * self.kernel_size * self.kernel_size, -1)
        x_weighted = F.fold(x_weighted, output_size=(H, W), kernel_size=self.kernel_size,
                            padding=self.padding, stride=self.stride)
        x_weighted = self.proj(x_weighted.permute(0, 2, 3, 1))
        x_weighted = self.proj_drop(x_weighted)
        return x_weighted
```
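Before wiring the module into a YAML, it can help to confirm that a standalone forward pass runs and that the output shape matches the input. The snippet below is a minimal sketch, assuming the code above has been saved at `ultralytics/nn/extra_modules/attention/CDFA.py`; the channel count 256 is only an illustrative value matching a YOLO11n-sized P5 feature map and is not taken from the original article.

```python
# Quick shape check (a minimal sketch): dim must be divisible by num_heads (default 8),
# and with kernel_size=3, padding=1, stride=1 the module preserves spatial size, so it can
# sit behind P5 without changing downstream channel/stride assumptions.
import torch
from ultralytics.nn.extra_modules.attention.CDFA import ContrastDrivenFeatureAggregation

m = ContrastDrivenFeatureAggregation(dim=256)   # example: P5 channels of a YOLO11n-sized model
x = torch.randn(1, 256, 20, 20)                 # B, C, H, W for a 640x640 input at stride 32
print(m(x).shape)                               # expected: torch.Size([1, 256, 20, 20])
```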
### Step 2: Import the module in the aggregating export file

Add the import to `ultralytics/nn/extra_modules/__init__.py` so that both module registration and YAML parsing can correctly resolve ContrastDrivenFeatureAggregation.

```python
from .attention.CDFA import ContrastDrivenFeatureAggregation
```

### Step 3: Register the module in `ultralytics/nn/tasks.py` ⚙️

Add the module to the attention module registration set so that `parse_model()` can correctly instantiate ContrastDrivenFeatureAggregation when parsing the YAML.

```python
attention_modules = frozenset(
    {
        extra_modules.ACA,
        extra_modules.ACAB,
        extra_modules.CoordAtt,
        extra_modules.CASAB,
        extra_modules.ContrastDrivenFeatureAggregation,
        extra_modules.DeformableLKA,
        extra_modules.DHPF,
        extra_modules.EMA,
        extra_modules.FSA,
        extra_modules.KSFA,
        extra_modules.LSKBlock,
        extra_modules.MCA,
        extra_modules.MLCA,
        extra_modules.MultiSEAM,
        extra_modules.SimAM,
    }
)
```

### Step 4: Add or modify the YAML configuration file

The YAML for this scheme lives under `ultralytics/cfg/models/improve/attention/yolo11/`. The characteristic of Scheme 2 is that ContrastDrivenFeatureAggregation is inserted after the last high-level backbone feature (P5), and the enhanced features are then fed into the subsequent upsampling and fusion paths, which makes it well suited to verifying the module's effect on high-level semantic discriminability.

```yaml
# Ultralytics AGPL-3.0 License - https://ultralytics.com/license

# Ultralytics YOLO11 object detection model with P3/8 - P5/32 outputs
# Model docs: https://docs.ultralytics.com/models/yolo11
# Task docs: https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 181 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 181 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 231 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 357 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 357 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]] # 2-P2/4
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]] # 4-P3/8
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]] # 6-P4/16
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]] # 8-P5/32
  - [-1, 1, SPPF, [1024, 5]] # 9-P5/32
  - [-1, 2, C2PSA, [1024]] # 10-P5/32
  - [-1, 1, ContrastDrivenFeatureAggregation, []] # 11-P5/32

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]] # 12-P4/16
  - [[-1, 6], 1, Concat, [1]] # 13-P4/16
  - [-1, 2, C3k2, [512, False]] # 14-P4/16

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]] # 15-P3/8
  - [[-1, 4], 1, Concat, [1]] # 16-P3/8
  - [-1, 2, C3k2, [256, False]] # 17-P3/8

  - [-1, 1, Conv, [256, 3, 2]] # 18-P4/16
  - [[-1, 14], 1, Concat, [1]] # 19-P4/16
  - [-1, 2, C3k2, [512, False]] # 20-P4/16

  - [-1, 1, Conv, [512, 3, 2]] # 21-P5/32
  - [[-1, 11], 1, Concat, [1]] # 22-P5/32
  - [-1, 2, C3k2, [1024, True]] # 23-P5/32

  - [[17, 20, 23], 1, SemanticSegmentHead, [nc]] # SemanticSegmentHead(P3, P4, P5)
```
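After steps 2 to 4, a quick way to confirm that the registration and YAML wiring are consistent is to build the model from the new configuration and inspect the layer table. The snippet below is a minimal sketch: the relative config path assumes you run it from the repository root, and it relies on this project's custom SemanticSegmentHead being resolvable, just as in the training script of step 5.

```python
# Build the model from the scheme-2 YAML and print the per-layer table; the
# ContrastDrivenFeatureAggregation block should appear at index 11, right after C2PSA (layer 10).
from ultralytics import YOLO

cfg = 'ultralytics/cfg/models/improve/attention/yolo11/yolo11-ContrastDrivenFeatureAggregation-2.yaml'
model = YOLO(cfg)
model.info(detailed=True)
```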
### Step 5: Start training

```python
# -*- coding: utf-8 -*-
# @Auth: AICurator
# @File: train.py
import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO(model=r'G:\improve\segment\ultralytics-main\ultralytics\cfg\models\improve\attention\yolo11\yolo11-ContrastDrivenFeatureAggregation-2.yaml')
    # model.load()
    model.train(data=r'dataset\data.yaml',
                imgsz=640,
                epochs=50,
                batch=4,
                workers=0,
                device=0,
                optimizer='SGD',
                close_mosaic=10,
                resume=False,
                project='runs/train',
                name='exp',
                single_cls=False,
                cache=False,
                )
```

## 6. Summary

This document corresponds to the `yolo11-ContrastDrivenFeatureAggregation-2` configuration, whose key point is placing the contrast-driven feature aggregation module after the high-level P5 semantic features before they take part in the subsequent pyramid fusion. You can switch YAML files directly and run it against Scheme 1 and Scheme 3 under the same training entry point for structural comparison experiments (a sketch of such a comparison loop follows below).

Subscribe to the column and add the author on WeChat to obtain the complete code.
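As a closing example of the Scheme 1/2/3 comparison mentioned in the summary, the loop below is a minimal sketch of driving all variants from one entry point with identical training arguments. The Scheme 1 and Scheme 3 file names are assumptions that simply follow this document's naming pattern and are not confirmed by the original article.

```python
# Compare the insertion-position variants with the same data and hyper-parameters;
# only the model YAML changes between runs.
import os
from ultralytics import YOLO

cfgs = [
    # scheme-1 and scheme-3 names below are hypothetical, following the "-N" naming pattern
    'ultralytics/cfg/models/improve/attention/yolo11/yolo11-ContrastDrivenFeatureAggregation-1.yaml',
    'ultralytics/cfg/models/improve/attention/yolo11/yolo11-ContrastDrivenFeatureAggregation-2.yaml',
    'ultralytics/cfg/models/improve/attention/yolo11/yolo11-ContrastDrivenFeatureAggregation-3.yaml',
]

for cfg in cfgs:
    run_name = os.path.splitext(os.path.basename(cfg))[0]
    YOLO(cfg).train(data=r'dataset\data.yaml', imgsz=640, epochs=50, batch=4,
                    workers=0, device=0, optimizer='SGD', close_mosaic=10,
                    project='runs/compare', name=run_name)
```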