(beta) torch_npu.contrib.function.fuse_add_softmax_dropout

API Prototype

torch_npu.contrib.function.fuse_add_softmax_dropout(training, dropout, attn_mask, attn_scores, attn_head_size, p=0.5, dim=-1)

Function Description

Uses an NPU custom operator in place of the native mask-add + softmax + dropout sequence on attention scores to improve performance.
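For orientation, the unfused eager-mode pattern that this operator replaces roughly corresponds to the sketch below. This is a minimal illustration, not the library's implementation; the helper name native_add_softmax_dropout is hypothetical, and the scaling role of attn_head_size is omitted.

>>> import torch
>>> def native_add_softmax_dropout(dropout, attn_mask, attn_scores, dim=-1):
...     # Add the attention mask to the raw scores, normalize with softmax,
...     # then apply the given dropout layer.
...     scores = attn_scores + attn_mask
...     probs = torch.softmax(scores, dim=dim)
...     return dropout(probs)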

Parameter Description

- training (bool): Whether the module is in training mode.
- dropout (nn.Module): The dropout layer to apply.
- attn_mask (Tensor): The attention mask added to the scores.
- attn_scores (Tensor): The raw attention scores.
- attn_head_size (float): The attention head size used for scaling.
- p (float, default 0.5): Dropout probability.
- dim (int, default -1): The dimension along which softmax is computed.

Output Description

torch.Tensor: The result of the mask operation.

Supported Models

Invocation Example

>>> import torch
>>> import torch_npu
>>> from torch_npu.contrib.function import fuse_add_softmax_dropout
>>> training = True
>>> dropout = torch.nn.DropoutWithByteMask(0.1)             # dropout layer (DropoutWithByteMask is provided by torch_npu)
>>> npu_input1 = torch.rand(96, 12, 30, 30).half().npu()    # attn_mask
>>> npu_input2 = torch.rand(96, 12, 30, 30).half().npu()    # attn_scores
>>> alpha = 0.125                                            # attn_head_size
>>> axis = -1                                                # softmax dimension
>>> output = fuse_add_softmax_dropout(training, dropout, npu_input1, npu_input2, alpha, dim=axis)