PyTorch Native APIs
torch
torch.nn
torch.nn.functional
torch.Tensor
Tensor Attributes
torch.amp
torch.autograd
torch.library
torch.cuda
torch.mps
torch.backends
torch.distributed
torch.distributed.algorithms.join
torch.distributed.elastic
torch.distributed.fsdp
torch.distributed.optim
torch.distributed.tensor.parallel
torch.distributed.checkpoint
torch.distributions
torch.fft
torch.func
torch.futures
torch.fx
torch.hub
torch.jit
torch.linalg
torch.monitor
torch.signal
torch.special
torch.overrides
torch.package
torch.profiler
torch.nn.init
torch.onnx
torch.optim
torch.cpu
torch._logging
torch.compiler
torch.export
DDP Communication Hooks
Pipeline Parallelism
Quantization
Distributed RPC Framework
torch.random
torch.nested
torch.sparse
torch.Storage
torch.testing
torch.utils
torch.utils.benchmark
torch.utils.checkpoint
torch.utils.cpp_extension
torch.utils.data
torch.utils.dlpack
torch.utils.mobile_optimizer
torch.utils.model_zoo
torch.utils.tensorboard
Type Info
Named Tensors
torch.__config__
Understanding CUDA Memory Usage
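As a quick orientation to a few of the modules listed above, here is a minimal sketch (assuming a standard PyTorch >= 2.1.0 install; the tensor values are illustrative only) that touches `torch`, `torch.nn`, and `torch.nn.functional`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# torch.nn: a module with learnable parameters
layer = nn.Linear(4, 2)

# torch: tensor creation
x = torch.randn(3, 4)

# torch.nn.functional: a stateless op applied to the module's output
y = F.relu(layer(x))

print(torch.__version__)  # version string of the installed build
print(y.shape)            # torch.Size([3, 2])
```

Each entry in the index above follows this pattern: a namespace importable from the top-level `torch` package, documented separately in the official API reference.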
Parent topic: PyTorch 2.1.0