| API Name | Supported | Restrictions and Notes |
| --- | --- | --- |
| torch.distributed.fsdp.FullyShardedDataParallel | Yes | When using FSDP on Ascend NPU, it is recommended to pass the device parameter `device_id=torch.device("npu:0")`. |
| torch.distributed.fsdp.FullyShardedDataParallel.apply | Yes | |
| torch.distributed.fsdp.FullyShardedDataParallel.clip_grad_norm_ | Yes | |
| torch.distributed.fsdp.FullyShardedDataParallel.forward | Yes | |
| torch.distributed.fsdp.FullyShardedDataParallel.named_buffers | Yes | |
| torch.distributed.fsdp.FullyShardedDataParallel.named_parameters | Yes | |
| torch.distributed.fsdp.FullyShardedDataParallel.no_sync | Yes | |
| torch.distributed.fsdp.FullyShardedDataParallel.register_comm_hook | Yes | |
| torch.distributed.fsdp.BackwardPrefetch | Yes | |
| torch.distributed.fsdp.ShardingStrategy | Yes | |
| torch.distributed.fsdp.MixedPrecision | Yes | |
| torch.distributed.fsdp.CPUOffload | Yes | |
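A minimal sketch of how the APIs in the table fit together on Ascend NPU, assuming `torch_npu` is installed, an NPU device `npu:0` is available, and the job uses the HCCL backend; the master address/port and model shapes below are illustrative, not prescribed by this table. It cannot run without Ascend hardware and is intended only to show where `device_id` and the configuration classes (`ShardingStrategy`, `BackwardPrefetch`, `MixedPrecision`, `CPUOffload`) are passed.

```python
# Sketch: wrapping a model with FSDP on Ascend NPU.
# Assumptions (not from the table): torch_npu is installed, device "npu:0"
# exists, and the process group uses the HCCL backend.
import os
import torch
import torch.distributed as dist
import torch_npu  # Ascend NPU plugin for PyTorch
from torch.distributed.fsdp import (
    FullyShardedDataParallel as FSDP,
    ShardingStrategy,
    BackwardPrefetch,
    MixedPrecision,
    CPUOffload,
)

def main():
    # Single-process illustration; real jobs launch one process per NPU
    # (e.g. via torchrun) and derive rank/world_size from the launcher.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("hccl", rank=0, world_size=1)

    model = torch.nn.Linear(1024, 1024)
    fsdp_model = FSDP(
        model,
        # Recommended on Ascend NPU: pass the device explicitly.
        device_id=torch.device("npu:0"),
        sharding_strategy=ShardingStrategy.FULL_SHARD,
        backward_prefetch=BackwardPrefetch.BACKWARD_PRE,
        mixed_precision=MixedPrecision(param_dtype=torch.float16),
        cpu_offload=CPUOffload(offload_params=False),
    )

    # Forward/backward, then FSDP-aware gradient clipping from the table.
    out = fsdp_model(torch.randn(8, 1024, device="npu:0"))
    out.sum().backward()
    fsdp_model.clip_grad_norm_(max_norm=1.0)

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Note that `clip_grad_norm_` must be called on the FSDP wrapper (not `torch.nn.utils.clip_grad_norm_`) so that the norm is computed correctly across shards.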