torch_npu.fast_gelu(self) -> Tensor
Fast Gaussian Error Linear Units activation function. Computes FastGelu element-wise on the input. FakeTensor mode is supported.
self (Tensor) - Supported data types: float16, float32.
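For intuition, FastGelu is a cheaper approximation of GELU. The sketch below uses the well-known sigmoid-based approximation GELU(x) ≈ x · sigmoid(1.702x) as a plain-Python reference; this is illustrative only, and the exact kernel that `torch_npu.fast_gelu` runs on the NPU may differ in its formula and numerics.

```python
import math

def gelu_sigmoid_approx(x: float) -> float:
    # Sigmoid-based GELU approximation: x * sigmoid(1.702 * x).
    # Illustrative reference only; not the exact NPU FastGelu kernel.
    return x / (1.0 + math.exp(-1.702 * x))

print(round(gelu_sigmoid_approx(1.0), 4))  # close to GELU(1.0) ≈ 0.8413
```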
Example 1:
>>> x = torch.rand(2).npu()
>>> x
tensor([0.5991, 0.4094], device='npu:0')
>>> torch_npu.fast_gelu(x)
tensor([0.4403, 0.2733], device='npu:0')
Example 2:
# FakeTensor mode
>>> from torch._subclasses.fake_tensor import FakeTensorMode
>>> with FakeTensorMode():
...     x = torch.rand(2).npu()
...     torch_npu.fast_gelu(x)
...
FakeTensor(..., device='npu:0', size=(2,))