torch_npu
Overview
(beta) torch_npu._npu_dropout
(beta) torch_npu.copy_memory_
(beta) torch_npu.empty_with_format
(beta) torch_npu.fast_gelu
(beta) torch_npu.npu_alloc_float_status
(beta) torch_npu.npu_anchor_response_flags
(beta) torch_npu.npu_apply_adam
(beta) torch_npu.npu_batch_nms
(beta) torch_npu.npu_bert_apply_adam
(beta) torch_npu.npu_bmmV2
(beta) torch_npu.npu_bounding_box_decode
(beta) torch_npu.npu_bounding_box_encode
(beta) torch_npu.npu_broadcast
(beta) torch_npu.npu_ciou
(beta) torch_npu.npu_clear_float_status
(beta) torch_npu.npu_confusion_transpose
(beta) torch_npu.npu_conv_transpose2d
(beta) torch_npu.npu_conv2d
(beta) torch_npu.npu_conv3d
(beta) torch_npu.npu_convolution
(beta) torch_npu.npu_convolution_transpose
(beta) torch_npu.npu_deformable_conv2d
(beta) torch_npu.npu_diou
(beta) torch_npu.npu_dtype_cast
(beta) torch_npu.npu_format_cast
(beta) torch_npu.npu_format_cast_
(beta) torch_npu.npu_get_float_status
(beta) torch_npu.npu_giou
(beta) torch_npu.npu_grid_assign_positive
(beta) torch_npu.npu_gru
(beta) torch_npu.npu_indexing
(beta) torch_npu.npu_iou
(beta) torch_npu.npu_layer_norm_eval
(beta) torch_npu.npu_linear
(beta) torch_npu.npu_lstm
(beta) torch_npu.npu_max
(beta) torch_npu.npu_min
(beta) torch_npu.npu_mish
(beta) torch_npu.npu_nms_rotated
(beta) torch_npu.npu_nms_v4
(beta) torch_npu.npu_nms_with_mask
(beta) torch_npu.npu_one_hot
(beta) torch_npu.npu_pad
(beta) torch_npu.npu_ps_roi_pooling
(beta) torch_npu.npu_ptiou
(beta) torch_npu.npu_random_choice_with_mask
(beta) torch_npu.npu_reshape
(beta) torch_npu.npu_roi_align
(beta) torch_npu.npu_rotated_iou
(beta) torch_npu.npu_rotated_overlaps
(beta) torch_npu.npu_sign_bits_pack
(beta) torch_npu.npu_sign_bits_unpack
(beta) torch_npu.npu_silu
(beta) torch_npu.npu_slice
(beta) torch_npu.npu_softmax_cross_entropy_with_logits
(beta) torch_npu.npu_sort_v2
(beta) torch_npu.npu_transpose
(beta) torch_npu.npu_yolo_boxes_encode
(beta) torch_npu.npu_fused_attention_score
(beta) torch_npu.npu_multi_head_attention
(beta) torch_npu.npu_rms_norm
(beta) torch_npu.npu_dropout_with_add_softmax
torch_npu.npu_rotary_mul
torch_npu.npu_scaled_masked_softmax
(beta) torch_npu.npu_swiglu
(beta) torch_npu.one_
torch_npu.npu_fused_infer_attention_score
torch_npu.npu_group_norm_silu
torch_npu.npu_incre_flash_attention
torch_npu.npu_prompt_flash_attention
torch_npu.npu_quant_scatter
torch_npu.npu_quant_scatter_
torch_npu.npu_quantize
torch_npu.npu_group_quant
torch_npu.npu_all_gather_base_mm
torch_npu.npu_convert_weight_to_int4pack
torch_npu.npu_dynamic_quant
torch_npu.npu_dynamic_quant_asymmetric
torch_npu.npu_ffn
torch_npu.npu_fusion_attention
torch_npu.npu_grouped_matmul
torch_npu.npu_mm_all_reduce_base
torch_npu.npu_mm_reduce_scatter_base
torch_npu.npu_moe_compute_expert_tokens
torch_npu.npu_moe_finalize_routing
torch_npu.npu_moe_gating_top_k_softmax
torch_npu.npu_moe_init_routing
torch_npu.npu_quant_matmul
torch_npu.npu_scatter_nd_update
torch_npu.npu_scatter_nd_update_
torch_npu.npu_trans_quant_param
torch_npu.npu_weight_quant_batchmatmul
torch_npu.scatter_update_
torch_npu.scatter_update
torch_npu.npu_anti_quant
torch_npu.npu_fast_gelu
torch_npu.npu_gelu
torch_npu.npu_prefetch
torch_npu.npu.disable_deterministic_with_backward
torch_npu.npu.enable_deterministic_with_backward
Parent topic:
Ascend Extension for PyTorch Custom APIs