Dataset Usage

The following are two common datasets used for performance testing. Two scripts are provided here that automate loading the model and converting the datasets into token IDs. Note that the OA dataset has a relatively long average sequence length (SequenceLen) and contains more than 3,000 entries; when the model is large (65B or above) and the service is configured with a small MaxBatchSize, running through the whole dataset takes a long time and may require several hours.

OA Dataset

  1. Click the link to obtain the OA dataset.
  2. Convert the dataset to token IDs.

    Encode each text entry with tokenizer_model.encode.

    A Python script example is shown below:

    import csv
    import glob
    from pathlib import Path

    import pyarrow.parquet as pq
    from transformers import AutoTokenizer

    def read_oa(dataset_path, tokenizer_model):
        """Read every parquet file, keep Chinese entries, and encode them to token IDs."""
        out_list = []
        for file_path in glob.glob((Path(dataset_path) / "*.parquet").as_posix()):
            data_dict = pq.read_table(file_path).to_pandas()
            # Keep only the Chinese-language entries.
            data_dict = data_dict[data_dict['lang'] == 'zh']
            ques_list = data_dict['text'].to_list()
            for ques in ques_list:
                tokens = tokenizer_model.encode(ques)
                # Truncate sequences longer than 2048 tokens.
                out_list.append(tokens if len(tokens) <= 2048 else tokens[0:2048])
        return out_list

    def save_csv(file_path, out_tokens_list):
        """Write one token-ID sequence per CSV row."""
        with open(file_path, 'w', newline='') as csvfile:
            csv_writer = csv.writer(csvfile)
            for row in out_tokens_list:
                csv_writer.writerow(row)

    if __name__ == '__main__':
        model_path = "/data/models/baichuan2-7b"
        oa_dir = "/home/xxx/oasst1"
        save_path = "oa_tokens.csv"
        tokenizer_model = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, use_fast=True, local_files_only=True)
        tokens_lists = read_oa(oa_dir, tokenizer_model)
        save_csv(save_path, tokens_lists)
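
The generated CSV stores one request per row, and the rows have different lengths, so it cannot be loaded back with numpy directly. As a quick sanity check (a minimal sketch, not part of the scripts above; the file name oa_tokens.csv matches the example), you can read it with the csv module and print basic length statistics:

import csv

def inspect_tokens_csv(file_path):
    """Print request count and length statistics for a token-ID CSV."""
    lengths = []
    with open(file_path, newline='') as csvfile:
        for row in csv.reader(csvfile):
            # Each row is one request: a variable-length list of token IDs.
            lengths.append(len(row))
    print(f"requests: {len(lengths)}")
    print(f"average length: {sum(lengths) / len(lengths):.1f}")
    print(f"max length: {max(lengths)}")

if __name__ == '__main__':
    inspect_tokens_csv("oa_tokens.csv")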

Obtaining the Datasets

For how to obtain the datasets, see /usr/local/Ascend/llm_model/tests/modeltest/README_NEW.md.

The supported datasets are as follows:

  • BoolQ
  • CEval
  • CMMLU
  • HumanEval
  • HumanEval_X
  • GSM8K
  • LongBench
  • MMLU
  • NeedleBench
  • TruthfulQA

Converting the GSM8K Dataset to Token IDs

Load the data with pandas read_json, convert each question directly with the tokenizer, and then save the result to a CSV file with numpy.

A Python script example is shown below:

import numpy as np
import pandas as pd
from transformers import AutoTokenizer

MODEL_PATH = "/home/weight/llama2-70b"
OUT_FILE = "token_gsm8k.csv"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True, use_fast=True, local_files_only=True)

def gen_requests_from_trace(trace_file):
    total_len = 0
    with open(OUT_FILE, "w") as f:
        df = pd.read_json(trace_file, lines=True)
        for _, row in df.iterrows():
            ques = row["question"]
            # Tokenize one question and append its token IDs as a CSV row.
            token = tokenizer([ques], return_tensors="np")
            input_ids: np.ndarray = token["input_ids"].astype(np.int64)
            np.savetxt(f, input_ids, fmt="%d", delimiter=",")
            total_len += input_ids.shape[-1]
    # The GSM8K test split contains 1319 questions; print the average input length in tokens.
    print(total_len / 1319)

if __name__ == '__main__':
    gen_requests_from_trace("./GSM8K.jsonl")
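
As an optional round-trip check (a minimal sketch, not part of the original script; it reuses the MODEL_PATH and output file name from the example above), you can decode the first row of the generated CSV back into text with the same tokenizer and compare it with the first GSM8K question:

import csv

from transformers import AutoTokenizer

MODEL_PATH = "/home/weight/llama2-70b"

def decode_first_row(csv_file):
    """Decode the first token-ID row back into text as a quick round-trip check."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True, use_fast=True, local_files_only=True)
    with open(csv_file, newline='') as f:
        first_row = next(csv.reader(f))
    token_ids = [int(t) for t in first_row]
    # skip_special_tokens drops markers such as BOS that the tokenizer may have prepended.
    print(tokenizer.decode(token_ids, skip_special_tokens=True))

if __name__ == '__main__':
    decode_first_row("token_gsm8k.csv")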