torch.distributed

If an API's support status is not indicated, that API's support status is pending verification.

| API name | Supported | Restrictions and notes |
| --- | --- | --- |
| torch.distributed.is_available |  |  |
| torch.distributed.init_process_group |  |  |
| torch.distributed.is_initialized |  |  |
| torch.distributed.is_mpi_available |  |  |
| torch.distributed.is_nccl_available |  |  |
| torch.distributed.is_gloo_available |  |  |
| torch.distributed.is_torchelastic_launched |  |  |
| torch.distributed.Backend |  |  |
| torch.distributed.Backend.register_backend |  |  |
| torch.distributed.get_backend |  |  |
| torch.distributed.get_rank |  |  |
| torch.distributed.get_world_size |  |  |
| torch.distributed.Store |  |  |
| torch.distributed.TCPStore |  |  |
| torch.distributed.HashStore |  |  |
| torch.distributed.FileStore |  |  |
| torch.distributed.PrefixStore |  |  |
| torch.distributed.Store.set |  |  |
| torch.distributed.Store.get |  |  |
| torch.distributed.Store.add |  |  |
| torch.distributed.Store.compare_set |  |  |
| torch.distributed.Store.wait |  |  |
| torch.distributed.Store.num_keys |  |  |
| torch.distributed.Store.delete_key |  |  |
| torch.distributed.Store.set_timeout |  |  |
| torch.distributed.new_group |  |  |
| torch.distributed.get_group_rank |  |  |
| torch.distributed.get_global_rank |  |  |
| torch.distributed.get_process_group_ranks |  |  |
| torch.distributed.send |  |  |
| torch.distributed.recv |  |  |
| torch.distributed.isend |  |  |
| torch.distributed.irecv |  |  |
| torch.distributed.batch_isend_irecv |  |  |
| torch.distributed.P2POp |  |  |
| torch.distributed.broadcast |  |  |
| torch.distributed.broadcast_object_list |  |  |
| torch.distributed.all_reduce |  | Unsigned input types are not supported |
| torch.distributed.reduce |  |  |
| torch.distributed.all_gather |  |  |
| torch.distributed.all_gather_into_tensor |  |  |
| torch.distributed.all_gather_object |  |  |
| torch.distributed.gather |  |  |
| torch.distributed.gather_object |  |  |
| torch.distributed.scatter |  |  |
| torch.distributed.scatter_object_list |  |  |
| torch.distributed.reduce_scatter |  |  |
| torch.distributed.reduce_scatter_tensor |  |  |
| torch.distributed.all_to_all_single |  |  |
| torch.distributed.all_to_all |  |  |
| torch.distributed.barrier |  |  |
| torch.distributed.monitored_barrier |  |  |
| torch.distributed.ReduceOp |  |  |
| torch.distributed.reduce_op |  |  |
| torch.distributed.broadcast_multigpu |  |  |
| torch.distributed.all_reduce_multigpu |  |  |
| torch.distributed.reduce_multigpu |  |  |
| torch.distributed.all_gather_multigpu |  |  |
| torch.distributed.reduce_scatter_multigpu |  |  |
| torch.distributed.DistBackendError |  |  |
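As a usage reminder for the collectives listed above, the following is a minimal single-process sketch of `torch.distributed.init_process_group` plus `torch.distributed.all_reduce`. The `gloo` backend and the local TCP address/port are illustrative assumptions, not values from the table; note the table's restriction that `all_reduce` does not accept unsigned input types, so a float tensor is used here.

```python
import torch
import torch.distributed as dist

# Single-process group: rank 0 of world_size 1, gloo backend (assumed),
# arbitrary local rendezvous address.
dist.init_process_group(
    backend="gloo",
    init_method="tcp://127.0.0.1:29500",
    rank=0,
    world_size=1,
)

# Use a float tensor, since unsigned input types are not supported.
t = torch.tensor([1.0, 2.0, 3.0])
dist.all_reduce(t, op=dist.ReduceOp.SUM)  # with world_size=1, SUM leaves t unchanged

dist.destroy_process_group()
```

With more than one process, every rank would call the same `all_reduce` and each rank's `t` would end up holding the element-wise sum across ranks.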