If an API's support status is not indicated, its support status is pending verification.
| API Name | Supported | Limitations and Notes |
| --- | --- | --- |
| torch.distributed.fsdp.FullyShardedDataParallel | Yes | |
| torch.distributed.fsdp.FullyShardedDataParallel.apply | Yes | |
| torch.distributed.fsdp.FullyShardedDataParallel.clip_grad_norm_ | Yes | |
| torch.distributed.fsdp.FullyShardedDataParallel.flatten_sharded_optim_state_dict | | |
| torch.distributed.fsdp.FullyShardedDataParallel.forward | Yes | |
| torch.distributed.fsdp.FullyShardedDataParallel.fsdp_modules | | |
| torch.distributed.fsdp.FullyShardedDataParallel.full_optim_state_dict | | |
| torch.distributed.fsdp.FullyShardedDataParallel.get_state_dict_type | | |
| torch.distributed.fsdp.FullyShardedDataParallel.module | | |
| torch.distributed.fsdp.FullyShardedDataParallel.named_buffers | Yes | |
| torch.distributed.fsdp.FullyShardedDataParallel.named_parameters | Yes | |
| torch.distributed.fsdp.FullyShardedDataParallel.no_sync | Yes | |
| torch.distributed.fsdp.FullyShardedDataParallel.optim_state_dict | | |
| torch.distributed.fsdp.FullyShardedDataParallel.optim_state_dict_to_load | | |
| torch.distributed.fsdp.FullyShardedDataParallel.register_comm_hook | Yes | |
| torch.distributed.fsdp.FullyShardedDataParallel.rekey_optim_state_dict | | |
| torch.distributed.fsdp.FullyShardedDataParallel.scatter_full_optim_state_dict | | |
| torch.distributed.fsdp.FullyShardedDataParallel.set_state_dict_type | | |
| torch.distributed.fsdp.FullyShardedDataParallel.shard_full_optim_state_dict | | |
| torch.distributed.fsdp.FullyShardedDataParallel.sharded_optim_state_dict | | |
| torch.distributed.fsdp.FullyShardedDataParallel.state_dict_type | | |
| torch.distributed.fsdp.FullyShardedDataParallel.summon_full_params | | |
| torch.distributed.fsdp.BackwardPrefetch | Yes | |
| torch.distributed.fsdp.ShardingStrategy | Yes | |
| torch.distributed.fsdp.MixedPrecision | Yes | |
| torch.distributed.fsdp.CPUOffload | Yes | |
| torch.distributed.fsdp.StateDictConfig | | |
| torch.distributed.fsdp.FullStateDictConfig | | |
| torch.distributed.fsdp.ShardedStateDictConfig | | |
| torch.distributed.fsdp.LocalStateDictConfig | | |
| torch.distributed.fsdp.OptimStateDictConfig | | |
| torch.distributed.fsdp.FullOptimStateDictConfig | | |
| torch.distributed.fsdp.ShardedOptimStateDictConfig | | |
| torch.distributed.fsdp.LocalOptimStateDictConfig | | |
| torch.distributed.fsdp.StateDictSettings | | |
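
For orientation, the sketch below exercises several of the APIs marked "Yes" above: the FullyShardedDataParallel constructor, ShardingStrategy, MixedPrecision, CPUOffload, BackwardPrefetch, forward, and clip_grad_norm_. It is a minimal example, not a definitive recipe: it assumes the default process group is already initialized (e.g. under a torchrun launch) and uses CUDA device calls, which would need to be adapted for other backends; the model and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn
from torch.distributed.fsdp import (
    BackwardPrefetch,
    CPUOffload,
    FullyShardedDataParallel as FSDP,
    MixedPrecision,
    ShardingStrategy,
)

# Assumption: torch.distributed.init_process_group(...) has already run,
# e.g. this script was launched with torchrun.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))

fsdp_model = FSDP(
    model,
    sharding_strategy=ShardingStrategy.FULL_SHARD,   # shard params, grads, optimizer state
    mixed_precision=MixedPrecision(param_dtype=torch.float16),
    cpu_offload=CPUOffload(offload_params=False),
    backward_prefetch=BackwardPrefetch.BACKWARD_PRE,
    device_id=torch.cuda.current_device(),           # assumption: CUDA; adapt for other backends
)

optimizer = torch.optim.SGD(fsdp_model.parameters(), lr=1e-3)

for _ in range(3):
    inputs = torch.randn(8, 1024, device=torch.cuda.current_device())
    loss = fsdp_model(inputs).sum()                  # forward is supported
    loss.backward()
    fsdp_model.clip_grad_norm_(max_norm=1.0)         # FSDP-aware gradient clipping
    optimizer.step()
    optimizer.zero_grad()
```

The checkpoint-related helpers (full_optim_state_dict, set_state_dict_type, the *StateDictConfig classes, and so on) are deliberately omitted here, since their support status in the table above is still pending verification.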