Hyperparameter Configuration Tuning

The following table lists the hyperparameter configurations of well-known large language models in the industry, for reference:
Table 1: Hyperparameter configurations of large language models

| Model | Batch Size (#tokens) | Learning Rate | Warmup | Decay Method | Optimizer | Precision Type | Weight Decay | Grad Clip | Dropout |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-3 (175B) | 32K→3.2M | 6 × 10⁻⁵ | yes | cosine decay to 10% | Adam | FP16 | 0.1 | 1.0 | - |
| PanGu-α (200B) | - | 2 × 10⁻⁵ | - | - | Adam | - | 0.1 | - | - |
| OPT (175B) | 2M | 1.2 × 10⁻⁴ | yes | manual decay | AdamW | FP16 | 0.1 | - | 0.1 |
| PaLM (540B) | 1M→4M | 1 × 10⁻² | no | inverse square root | Adafactor | BF16 | lr² | 1.0 | 0.1 |
| BLOOM (176B) | 4M | 6 × 10⁻⁵ | yes | cosine decay to 10% | Adam | BF16 | 0.1 | 1.0 | 0 |
| MT-NLG (530B) | 64K→3.75M | 5 × 10⁻⁵ | yes | cosine decay to 10% | Adam | BF16 | 0.1 | 1.0 | - |
| Gopher (280B) | 3M→6M | 4 × 10⁻⁵ | yes | cosine decay to 10% | Adam | BF16 | - | 1.0 | - |
| Chinchilla (70B) | 1.5M→3M | 1 × 10⁻⁴ | yes | cosine decay to 10% | AdamW | BF16 | - | - | - |
| Galactica (120B) | 2M | 7 × 10⁻⁶ | yes | linear decay to 10% | AdamW | - | 0.1 | 1.0 | 0.1 |
| LaMDA (137B) | 256K | - | - | - | - | BF16 | - | - | - |
| Jurassic-1 (178B) | 32K→3.2M | 6 × 10⁻⁵ | yes | - | - | - | - | - | - |
| LLaMA (65B) | 4M | 1.5 × 10⁻⁴ | yes | cosine decay to 10% | AdamW | - | 0.1 | 1.0 | - |
| LLaMA 2 (70B) | 4M | 1.5 × 10⁻⁴ | yes | cosine decay to 10% | AdamW | - | 0.1 | 1.0 | - |
| Falcon (40B) | 2M | 1.85 × 10⁻⁴ | yes | cosine decay to 10% | AdamW | BF16 | 0.1 | - | - |
| GLM (130B) | 0.4M→8.25M | 8 × 10⁻⁵ | yes | cosine decay to 10% | AdamW | FP16 | 0.1 | 1.0 | 0.1 |
| T5 (11B) | 64K | 1 × 10⁻² | no | inverse square root | Adafactor | - | - | - | 0.1 |
| ERNIE 3.0 Titan (260B) | - | 1 × 10⁻⁴ | - | - | Adam | FP16 | 0.1 | 1.0 | - |
| PanGu-Σ (1.085T) | 0.5M | 2 × 10⁻⁵ | yes | - | Adam | FP16 | - | - | - |
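
As the table shows, the dominant recipe among these models is Adam/AdamW with linear warmup, cosine decay of the learning rate to 10% of its peak, weight decay of 0.1, and gradient clipping at 1.0. Below is a minimal PyTorch sketch of that recipe; the model, `warmup_steps`, `total_steps`, and peak learning rate are illustrative placeholders rather than values from any specific model's released code.

```python
import math

import torch
from torch.optim.lr_scheduler import LambdaLR


def warmup_cosine_to_10pct(warmup_steps: int, total_steps: int):
    """LambdaLR multiplier: linear warmup, then cosine decay from 1.0
    down to 0.1 (i.e. 10% of the peak learning rate)."""
    def lr_lambda(step: int) -> float:
        if step < warmup_steps:
            return step / max(1, warmup_steps)                # linear warmup
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        cosine = 0.5 * (1.0 + math.cos(math.pi * min(progress, 1.0)))
        return 0.1 + 0.9 * cosine                             # floor at 10% of peak
    return lr_lambda


model = torch.nn.Linear(1024, 1024)  # stand-in for a transformer
optimizer = torch.optim.AdamW(model.parameters(), lr=1.5e-4,
                              weight_decay=0.1)               # weight decay = 0.1
scheduler = LambdaLR(optimizer, warmup_cosine_to_10pct(warmup_steps=2_000,
                                                       total_steps=100_000))

for step in range(5):                                         # short demo loop
    loss = model(torch.randn(8, 1024)).pow(2).mean()          # dummy loss
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # grad clip = 1.0
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```

Note that at the end of training the learning rate bottoms out at one tenth of its peak rather than at zero, which is what the "cosine decay to 10%" entries in the table mean.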
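Entries such as "32K→3.2M" denote batch-size warmup: the global batch size, measured in tokens, is increased gradually early in training. A simple linear ramp is sketched below; the ramp length, rounding multiple, and exact token counts are assumptions for illustration, since the papers report the endpoints but not always the schedule itself.

```python
def tokens_per_batch(step: int,
                     start_tokens: int = 32_768,    # "32K" (assumed exact value)
                     end_tokens: int = 3_276_800,   # "3.2M" (assumed exact value)
                     ramp_steps: int = 10_000,      # assumed ramp length
                     multiple: int = 32_768) -> int:
    """Linearly ramp the per-batch token budget from start_tokens to
    end_tokens over ramp_steps optimizer steps, rounding down to a
    hardware-friendly multiple (all defaults here are illustrative)."""
    frac = min(step / ramp_steps, 1.0)
    tokens = start_tokens + frac * (end_tokens - start_tokens)
    return max(start_tokens, int(tokens) // multiple * multiple)


# Example: the token budget at a few points along the ramp.
for s in (0, 2_500, 5_000, 10_000):
    print(s, tokens_per_batch(s))
```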