Computes the float32 filter gradient of a 3-D convolution (conv3d backprop filter), given the Data tensor in 6HD format and the out_backprop tensor in 6HD format.
The Data tensor has the 6HD shape (N, D, C1, H, W, C0); the out_backprop tensor also has the 6HD shape (N, D, C1, H, W, C0).
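For reference, the 6HD layout splits the channel dimension C into C1 blocks of C0 channels each (C1 = ceil(C / C0), with C0 = 16 here). A minimal NumPy sketch of this packing, assuming an ordinary (N, D, C, H, W) tensor as the starting point; the helper name and zero-padding choice are illustrative and not part of this interface:

import numpy as np

def ndchw_to_6hd(t, c0=16):
    # Pad C up to a multiple of c0, then split it into (C1, C0) blocks:
    # (N, D, C, H, W) -> (N, D, C1, H, W, C0).
    n, d, c, h, w = t.shape
    c1 = (c + c0 - 1) // c0
    padded = np.zeros((n, d, c1 * c0, h, w), dtype=t.dtype)
    padded[:, :, :c, :, :] = t
    return padded.reshape(n, d, c1, c0, h, w).transpose(0, 1, 2, 4, 5, 3)

fmap_ndchw = np.zeros((16, 8, 16, 72, 72), dtype=np.float16)
print(ndchw_to_6hd(fmap_ndchw).shape)  # (16, 8, 1, 72, 72, 16)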
conv3d_backprop_filter(x, out_backprop, filter_size, para_dict)
Parameter 1: fmap_c, the size of the channel dimension of x (the feature map).
Parameter 2: cout, the size of the channel dimension of out_backprop.
Parameter 3: groups, the group convolution parameter (number of groups).
Parameter 4: cout0, which is tbe_platform.C0_SIZE; the default value is 16.
Parameter 5: cin0, which is tbe_platform.C0_SIZE; the default value is 16.
The calculation formulas are as follows:
lcm(param1, param2) denotes the least common multiple of the two arguments.
mag_factor0 = lcm(fmap_c // groups, cin0) // (fmap_c // groups)
mag_factor1 = lcm(cout // groups, cout0) // (cout // groups)
mag_factor = min(lcm(mag_factor0, mag_factor1), groups)
cin1_g = (mag_factor * fmap_c // groups + cin0 - 1) // cin0
cout_g = (mag_factor * cout // groups + cout0 - 1) // cout0 * cout0
group_dict = {"real_g": (groups + mag_factor - 1) // mag_factor,
"mag_factor": mag_factor,
"cin1_g": cin1_g,
"cout_g": cout_g,
"cin_ori": fmap_c,
"cout_ori": cout}
res_tensor: the result tensor of the computation, that is, the output of the backprop-filter computation.
This interface does not yet support being mixed with other TBE DSL compute interfaces.
Atlas 200/300/500 inference products
Atlas training series products
from tbe import tvm
from tbe import dsl

# 6HD shapes: (N, D, C1, H, W, C0)
shape_dedy = (16, 4, 1, 36, 36, 16)
out_backprop_dtype = "float16"
filter_sizes = [16, 2, 16, 3, 3]
shape_fmap = (16, 8, 1, 72, 72, 16)
x_dtype = "float16"
dedy = tvm.placeholder(shape_dedy, name="dedy", dtype=out_backprop_dtype)
fmap = tvm.placeholder(shape_fmap, name="fmap", dtype=x_dtype)
strides = [2, 2, 2]
pads = [0, 0, 0, 1, 0, 1]
dilations = [1, 1, 1, 1, 1]
res_dtype = "float32"
kernel_name = "conv3d_backprop_filter_dx_16_8_72_72_16_dy_16_4_36_36_16_dw_2_3_3_16_16_s_1_2_2_2_1_p_SAME"
group_dict = {'real_g': 1, 'mag_factor': 1, 'cin1_g': 1, 'cout_g': 16,
              'cin_ori': 16, 'cout_ori': 16}
para_dict = {
    "strides": strides,
    "pads": pads,
    "dilations": dilations,
    "res_dtype": res_dtype,
    "kernel_name": kernel_name,
    "group_dict": group_dict
}
dedw = dsl.conv3d_backprop_filter(x=fmap,
                                  out_backprop=dedy,
                                  filter_size=filter_sizes,
                                  para_dict=para_dict)
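To turn the result tensor into a kernel, the example would normally continue with auto scheduling and the build step. A hedged sketch of that tail, assuming the usual tbe.dsl auto_schedule/build flow and that the tensor list is (fmap, dedy, dedw); verify both against your CANN version:

with tvm.target.cce():
    # Derive a schedule for the filter-gradient tensor computed above.
    sch = dsl.auto_schedule(dedw)

config = {"name": kernel_name,
          "tensor_list": [fmap, dedy, dedw]}  # assumed ordering
dsl.build(sch, config)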