aclnnMatmulCompressDequant

Supported Products

  • Atlas inference series products

Interface Prototype

Each operator uses a two-phase API: first call "aclnnMatmulCompressDequantGetWorkspaceSize" to obtain the workspace size required for the computation together with an executor that contains the operator's computation flow, then call "aclnnMatmulCompressDequant" to execute the computation.

  • aclnnStatus aclnnMatmulCompressDequantGetWorkspaceSize(const aclTensor* x1, const aclTensor* x2, const aclTensor* compressIndex, const aclTensor* bias, const aclTensor* deqScale, const aclTensor* offsetW, int offsetX, const aclIntArray* compressInfo, aclTensor* out, uint64_t* workspaceSize, aclOpExecutor** executor)
  • aclnnStatus aclnnMatmulCompressDequant(void* workspace, uint64_t workspaceSize, aclOpExecutor* executor, aclrtStream stream)

Function Description

  • Operator function: when computing the matrix product l@r, the r matrix can first be losslessly compressed with the msModelSlim tool to reduce its memory footprint; this interface then performs the lossless decompression, the matrix multiplication, and the dequantization.
  • Calculation formula:

$$x2\_unzip = unzip(x2,\ compressIndex,\ compressInfo)$$
$$result = (x1\ @\ x2\_unzip + bias) * deqScale$$

Here x2 is the one-dimensional data obtained by compressing the r matrix with the msModelSlim tool, compressIndex and compressInfo carry the compression-algorithm metadata, and x2_unzip is the data produced by the lossless decompression inside this interface (identical to the original r matrix data). For a complete example of the compression and of calling this interface, see the Invocation Example.
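As a reference for the formula above, here is a minimal NumPy sketch of the computation's semantics. It assumes the decompression has already produced the original (n, k) weight (the unzip step itself happens inside the operator), and the shapes are illustrative only:

import numpy as np

m, k, n = 4, 8, 5
x1 = np.random.randint(-5, 5, size=(m, k)).astype(np.int8)        # left input, INT8
x2_unzip = np.random.randint(-5, 5, size=(n, k)).astype(np.int8)  # decompressed weight, INT8
bias = np.random.randint(-10, 10, size=(n,)).astype(np.int32)     # INT32 bias
deq_scale = np.random.rand(n).astype(np.float32)                  # per-channel dequantization scale

# The reduce dimension k of x2_unzip matches x1's reduce dimension,
# so the product is taken against the transpose of the (n, k) weight.
acc = x1.astype(np.int32) @ x2_unzip.astype(np.int32).T + bias
result = (acc * deq_scale).astype(np.float16)  # dequantize to FLOAT16
print(result.shape)  # (4, 5), i.e. (m, n)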

aclnnMatmulCompressDequantGetWorkspaceSize

  • Parameter description:

    • x1 (aclTensor*, input): the left input of the matrix multiply; a device-side 2-D ND-format aclTensor. Only the INT8 data type is supported. Non-contiguous tensors are not supported.

    • x2 (aclTensor*, input): the compressed right input of the matrix multiply; a device-side 1-D ND-format aclTensor produced by the weight_compression module of the msModelSlim tool. Only the INT8 data type is supported. Non-contiguous tensors are not supported, and the reduce dimension of x2_unzip in the formula must equal the reduce dimension of x1.

    • compressIndex (aclTensor*, input): the compression index table for the right input of the matrix multiply; a device-side 1-D ND-format aclTensor of data type INT8, obtained through the msModelSlim tool shown in the example.

    • bias (aclTensor*, input): a device-side ND-format aclTensor (at most 2-D). Only the INT32 data type is supported; a null pointer may be passed. The shape must be (1, n) or (n), where n is the n of the output shape (m, n).

    • deqScale (aclTensor*, input): the dequantization parameter; a device-side 2-D ND-format aclTensor. The shape must be (1, n) or (1, 1), where n is the n of the output shape (m, n). The tensor holds float values converted to UINT64 data as shown in the example below; only the UINT64 data type is supported.

    • offsetW (aclTensor*, input): the offset of the right input of the matrix multiply; a device-side aclTensor of data type INT8 with the same dimensions as x2_unzip. Currently only a null pointer may be passed.

    • offsetX (int32, input): a host-side scalar holding the offset of the left input of the matrix multiply. Only the INT32 data type is supported, and currently the value must be 0.

    • compressInfo (aclIntArray*, input): a host-side integer list of data type INT64. It contains the compression-block information tilingN and tilingK (obtained from the weight_compression module of the msModelSlim tool; they are the sizes of one basic compression block along the n and k directions of the pre-compression shape (n, k)), the original pre-compression shape of the x2 matrix (2-D, denoted (n, k)), and a flag indicating the traversal order of the compression blocks.

    • out (aclTensor*, output): a device-side 2-D aclTensor of data type FLOAT16. The ND data format is supported; non-contiguous tensors are not supported.

    • workspaceSize (uint64_t*, output): returns the workspace size to be allocated on the device side.

    • executor (aclOpExecutor**, output): returns the op executor, which contains the operator's computation flow.

  • Return value:

    aclnnStatus: the status code. See aclnn return codes for details.

    161001 (ACLNN_ERR_PARAM_NULLPTR): 1. x1, x2, or out is a null pointer.
    161002 (ACLNN_ERR_PARAM_INVALID): 1. The data type or data format of x1 or x2 is outside the supported range.
                                      2. Data-type promotion cannot be performed between x1 and x2.
                                      3. The promoted data type cannot be cast to the data type of the specified output out.

aclnnMatmulCompressDequant

  • Parameter description:

    • workspace (void*, input): the address of the workspace memory allocated on the device side.

    • workspaceSize (uint64_t, input): the size of the device-side workspace, obtained from the first-phase interface aclnnMatmulCompressDequantGetWorkspaceSize.

    • executor (aclOpExecutor*, input): the op executor, which contains the operator's computation flow.

    • stream (aclrtStream, input): the AscendCL stream on which the task executes.

  • Return value:

    aclnnStatus: the status code. See aclnn return codes for details.

Constraints and Limitations

None.

Invocation Example

Generate the pre-compression input data

The example uses input data generated by the following gen_data.py script:

import numpy as np
import os
import sys
from numpy import random

def write2file(data, path):
  with open(path, 'wb') as f:
      data.tofile(f)

if not os.path.exists("./data"):
    os.mkdir("./data")

m = int(sys.argv[1])
k = int(sys.argv[2])
n = int(sys.argv[3])

# Generate mat1
mat1 = random.randn(m, k).astype(np.int8)
write2file(mat1, "./data/mat1.bin")

# Generate mat2 (the original weight, shape (n, k))
mat2 = random.randint(0, 100, size=(n, k)).astype(np.int8)
np.save("./data/weight.npy", {'weight': mat2})
os.chmod("./data/weight.npy", 0o0640)

# Generate an output placeholder
output = np.random.randn(m, n).astype(np.float16)
write2file(output, "./data/output.bin")

# Generate bias (the aclnn interface expects INT32 bias data)
bias = random.randint(-100, 100, size=(n,)).astype(np.int32)
write2file(bias, "./data/bias.bin")

# Generate deq_scale
deq_scale = random.randn(n).astype(np.float32)
write2file(deq_scale, "./data/deqScale_ori.bin")
deq_scale_int64 = np.fromfile("./data/deqScale_ori.bin", dtype=np.int32).astype(np.int64)
deq_scale_int64.tofile("./data/deqScale.bin")
# Randomly generate mat1 with shape (m, k) = (512, 1024) and mat2 with shape (n, k) = (1024, 1024); the arguments below are m=512, k=1024, n=1024
python3 gen_data.py 512 1024 1024

Preprocess the data

Run the original weight through the compression tool to generate the compressed x2, compressIndex, and compressInfo:

Perform the compression with the following interface; the compressed weight and index are the file inputs of the aclnn invocation example below. Before using this interface, build the compression tool shipped with msModelSlim in the CANN package; for details, see the README.md under the msmodelslim/pytorch/weight_compression directory of the http://gitee.com/ascend/msit/tree/master/ repository.

from msmodelslim.pytorch.weight_compression import CompressConfig, Compressor

compress_config = CompressConfig(do_pseudo_sparse=False, sparse_ratio=1)
weight_path = './data/weight.npy'  # the weight file produced by gen_data.py
compressor = Compressor(compress_config, weight_path=weight_path)

compress_weight, compress_index, compress_info = compressor.run()
# Compressed weight, corresponding to the x2 argument of aclnnMatmulCompressDequantGetWorkspaceSize
compressor.export(compress_weight, './data/weight')
# Index of the compressed weight, corresponding to the compressIndex argument of aclnnMatmulCompressDequantGetWorkspaceSize
compressor.export(compress_index, './data/index')
# Metadata of the compressed data, corresponding to the compressInfo argument of aclnnMatmulCompressDequantGetWorkspaceSize
compressor.export(compress_info, './data/compress_info')
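The exports above produce ./data/weight/weight.dat and ./data/index/weight.dat, which the sample program below reads; the byte sizes of these two files are passed to the sample as its wCompressedSize and indexSize arguments.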

Convert the original float-type dequantization parameter deqScale to obtain the uint64 data required by the aclnn interface:

deqScale is originally of float type; read it as int32 and convert it to int64:

import numpy as np
data = np.fromfile('./data/deqScale_ori.bin', dtype=np.int32).astype(np.int64)
data.tofile('./data/deqScale.bin')
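For illustration only, the same bit-level reinterpretation can be done in memory with NumPy's view(), which is equivalent to the file round-trip above:

import numpy as np

deq_scale = np.random.randn(8).astype(np.float32)
# view() reinterprets the float32 bit patterns as int32 without numeric conversion;
# astype(np.int64) then widens them, exactly like fromfile(dtype=np.int32).astype(np.int64).
deq_scale_int64 = deq_scale.view(np.int32).astype(np.int64)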

Call the aclnn interface to perform the computation

The sample code below is provided for reference only; for the build and execution procedure, see Compiling and Running the Sample.

#include <iostream>
#include <vector>
#include <acl/acl.h>
#include <aclnnop/aclnn_matmul_compress_dequant.h>
#include <fstream>
#include <unistd.h>
#include <sys/stat.h>
#include <stdio.h>
#include <cstdlib>
#include <string>

#define CHECK_RET(cond, return_expr) \
  do {                               \
    if (!(cond)) {                   \
      return_expr;                   \
    }                                \
  } while (0)

#define LOG_PRINT(message, ...)     \
  do {                              \
    printf(message, ##__VA_ARGS__); \
  } while (0)

int64_t GetShapeSize(const std::vector<int64_t>& shape) {
  int64_t shapeSize = 1;
  for (auto i : shape) {
    shapeSize *= i;
  }
  return shapeSize;
}

int Init(int32_t deviceId, aclrtStream* stream) {
  // Fixed pattern: AscendCL initialization
  auto ret = aclInit(nullptr);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("aclInit failed. ERROR: %d\n", ret); return ret);
  ret = aclrtSetDevice(deviceId);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("aclrtSetDevice failed. ERROR: %d\n", ret); return ret);
  ret = aclrtCreateStream(stream);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("aclrtCreateStream failed. ERROR: %d\n", ret); return ret);
  return 0;
}

int ReadBinFileNNop(std::string filePath, void* buffer, size_t bufferSize)
{
    struct stat sBuf;
    int fileStatus = stat(filePath.data(), &sBuf);
    CHECK_RET(fileStatus == 0, LOG_PRINT("Failed to get file %s\n", filePath.c_str()); return -1);

    std::ifstream file;
    file.open(filePath, std::ios::binary);
    CHECK_RET(file.is_open(), LOG_PRINT("Open file failed.\n"); return -1);

    file.seekg(0, file.end);
    uint64_t binFileBufferLen = file.tellg();
    CHECK_RET(binFileBufferLen > 0,
        std::cout << "File size is 0.\n";
        file.close();
        return -1);
    // Guard against reading past the caller-provided buffer
    CHECK_RET(binFileBufferLen <= bufferSize,
        std::cout << "File size exceeds buffer size.\n";
        file.close();
        return -1);

    file.seekg(0, file.beg);
    file.read(static_cast<char *>(buffer), binFileBufferLen);
    file.close();
    return ACL_SUCCESS;
}

int CreateAclTensor(std::string filePath, const std::vector<int64_t>& shape, int typeSize,
                    void** deviceAddr, aclDataType dataType, aclTensor** tensor) {
  auto size = GetShapeSize(shape) * typeSize;
  // Allocate device-side memory with aclrtMalloc
  auto ret = aclrtMalloc(deviceAddr, size, ACL_MEM_MALLOC_HUGE_FIRST);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("aclrtMalloc failed. ERROR: %d\n", ret); return ret);

  // Allocate host-side memory with aclrtMallocHost
  void* binBufferHost = nullptr;
  ret = aclrtMallocHost(&binBufferHost, size);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("aclrtMallocHost failed. ERROR: %d\n", ret); return ret);

  // Read the input file
  ret = ReadBinFileNNop(filePath, binBufferHost, size);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("ReadBinFileNNop failed. ERROR: %d\n", ret); return ret);

  // Copy the host-side data to device memory with aclrtMemcpy
  ret = aclrtMemcpy(*deviceAddr, size, binBufferHost, size, ACL_MEMCPY_HOST_TO_DEVICE);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("aclrtMemcpy failed. ERROR: %d\n", ret); return ret);
  // Release the host-side staging buffer once the data is on the device
  ret = aclrtFreeHost(binBufferHost);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("aclrtFreeHost failed. ERROR: %d\n", ret); return ret);

  // Compute the strides of a contiguous tensor
  std::vector<int64_t> strides(shape.size(), 1);
  for (int64_t i = shape.size() - 2; i >= 0; i--) {
    strides[i] = shape[i + 1] * strides[i + 1];
  }

  // Create the aclTensor with aclCreateTensor
  *tensor = aclCreateTensor(shape.data(), shape.size(), dataType, strides.data(), 0, aclFormat::ACL_FORMAT_ND,
                            shape.data(), shape.size(), *deviceAddr);
  return 0;
}

int main(int argc, char* argv[]) {
  // 1. (Fixed pattern) device/stream initialization; see the AscendCL API reference
  // Set deviceId according to the actual device in use
  int32_t deviceId = 0;
  aclrtStream stream;
  auto ret = Init(deviceId, &stream);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("Init acl failed. ERROR: %d\n", ret); return ret);

  // 2. Construct the inputs and output; customize per the API's definition
  CHECK_RET(argc == 6, LOG_PRINT("Usage: m k n wCompressedSize indexSize\n"); return -1);
  int m = atoi(argv[1]);
  int k = atoi(argv[2]);
  int n = atoi(argv[3]);
  // wCompressedSize is the byte size of the compressed right-matrix data
  int wCompressedSize = atoi(argv[4]);
  // indexSize is the byte size of the compression index data
  int indexSize = atoi(argv[5]);

  std::vector<int64_t> mat1Shape = {m, k};
  std::vector<int64_t> mat2CompressedShape = {wCompressedSize};
  std::vector<int64_t> indexShape = {indexSize};
  std::vector<int64_t> biasShape = {n};
  std::vector<int64_t> deqScaleShape = {n};
  std::vector<int64_t> outputShape = {m, n};

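  // The five entries correspond to tilingN, tilingK, the original x2 shape, and the traversal-order flag
  // (see compressInfo above); in a real run, take these values from the exported compress_info.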
  std::vector<int64_t> compressInfoHostData = {8, 8, k, n, 1};

  void* mat1DeviceAddr = nullptr;
  void* mat2CompressedDeviceAddr = nullptr;
  void* indexDeviceAddr = nullptr;
  void* biasDeviceAddr = nullptr;
  void* deqScaleDeviceAddr = nullptr;
  void* outputDeviceAddr = nullptr;

  aclTensor* mat1 = nullptr;
  aclTensor* mat2Compressed = nullptr;
  aclTensor* index = nullptr;
  aclTensor* bias = nullptr;
  aclTensor* deqScale = nullptr;
  aclTensor* output = nullptr;
  aclIntArray* compressInfo = nullptr;

  std::string rootPath = "./data/";

  // Create the mat1 aclTensor
  std::string mat1FilePath = rootPath + "mat1.bin";
  ret = CreateAclTensor(mat1FilePath, mat1Shape, sizeof(int8_t), &mat1DeviceAddr, aclDataType::ACL_INT8, &mat1);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("Create mat1 tensor failed. ERROR: %d\n", ret); return ret);
  // Create the mat2Compressed aclTensor
  std::string mat2FilePath = rootPath + "weight/weight.dat";
  ret = CreateAclTensor(mat2FilePath, mat2CompressedShape, sizeof(int8_t), &mat2CompressedDeviceAddr,
                        aclDataType::ACL_INT8, &mat2Compressed);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("Create mat2 tensor failed. ERROR: %d\n", ret); return ret);
  // Create the index aclTensor
  std::string indexFilePath = rootPath + "index/weight.dat";
  ret = CreateAclTensor(indexFilePath, indexShape, sizeof(int8_t), &indexDeviceAddr, aclDataType::ACL_INT8, &index);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("Create index tensor failed. ERROR: %d\n", ret); return ret);
  // Create the bias aclTensor
  std::string biasFilePath = rootPath + "bias.bin";
  ret = CreateAclTensor(biasFilePath, biasShape, sizeof(int32_t), &biasDeviceAddr, aclDataType::ACL_INT32, &bias);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("Create bias tensor failed. ERROR: %d\n", ret); return ret);
  // Create the deqScale aclTensor (deqScale.bin holds the converted 8-byte UINT64 values)
  std::string deqScaleFilePath = rootPath + "deqScale.bin";
  ret = CreateAclTensor(deqScaleFilePath, deqScaleShape, sizeof(uint64_t), &deqScaleDeviceAddr, aclDataType::ACL_UINT64,
                        &deqScale);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("Create deqScale tensor failed. ERROR: %d\n", ret); return ret);
  // Create the compressInfo aclIntArray
  compressInfo = aclCreateIntArray(compressInfoHostData.data(), compressInfoHostData.size());
  CHECK_RET(compressInfo != nullptr, LOG_PRINT("aclCreateIntArray failed.\n"); return -1);
  // Create the out aclTensor
  std::string outputFilePath = rootPath + "output.bin";
  ret = CreateAclTensor(outputFilePath, outputShape, 2, &outputDeviceAddr, aclDataType::ACL_FLOAT16, &output);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("Create output tensor failed. ERROR: %d\n", ret); return ret);

  int32_t offsetX = 0;

  // 3. Call the CANN operator library API; replace with the specific API name
  uint64_t workspaceSize = 0;
  aclOpExecutor* executor;
  // Call the first-phase interface of aclnnMatmulCompressDequant
  ret = aclnnMatmulCompressDequantGetWorkspaceSize(mat1, mat2Compressed, index, bias, deqScale, nullptr, offsetX, compressInfo, output, &workspaceSize, &executor);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("aclnnMatmulCompressDequantGetWorkspaceSize failed. ERROR: %d\n", ret); return ret);
  // Allocate device memory based on the workspaceSize computed by the first-phase interface
  void* workspaceAddr = nullptr;
  if (workspaceSize > 0) {
    ret = aclrtMalloc(&workspaceAddr, workspaceSize, ACL_MEM_MALLOC_HUGE_FIRST);
    CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("allocate workspace failed. ERROR: %d\n", ret); return ret);
  }
  // Call the second-phase interface of aclnnMatmulCompressDequant
  ret = aclnnMatmulCompressDequant(workspaceAddr, workspaceSize, executor, stream);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("aclnnMatmulCompressDequant failed. ERROR: %d\n", ret); return ret);

  // 4. (Fixed pattern) Synchronize and wait for the task to finish
  ret = aclrtSynchronizeStream(stream);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("aclrtSynchronizeStream failed. ERROR: %d\n", ret); return ret);

  // 5. Retrieve the result: copy it from device memory back to the host; adjust per the specific API definition
  auto size = GetShapeSize(outputShape);
  // The output is FLOAT16, so copy it into 2-byte elements to match the device buffer size
  std::vector<uint16_t> resultData(size, 0);
  ret = aclrtMemcpy(resultData.data(), resultData.size() * sizeof(resultData[0]), outputDeviceAddr,
                    size * sizeof(resultData[0]), ACL_MEMCPY_DEVICE_TO_HOST);
  CHECK_RET(ret == ACL_SUCCESS, LOG_PRINT("copy result from device to host failed. ERROR: %d\n", ret); return ret);
  for (int64_t i = 0; i < size; i++) {
    LOG_PRINT("result[%ld] is: 0x%04x (raw FLOAT16 bits)\n", i, resultData[i]);
  }

  // 6. Destroy the aclTensor and aclIntArray objects; adjust per the specific API definition
  aclDestroyTensor(mat1);
  aclDestroyTensor(mat2Compressed);
  aclDestroyTensor(index);
  aclDestroyTensor(bias);
  aclDestroyTensor(deqScale);
  aclDestroyTensor(output);
  aclDestroyIntArray(compressInfo);

  // 7. Release device resources; adjust per the specific API definition
  aclrtFree(mat1DeviceAddr);
  aclrtFree(mat2CompressedDeviceAddr);
  aclrtFree(indexDeviceAddr);
  aclrtFree(biasDeviceAddr);
  aclrtFree(deqScaleDeviceAddr);
  aclrtFree(outputDeviceAddr);
  if (workspaceSize > 0) {
    aclrtFree(workspaceAddr);
  }
  aclrtDestroyStream(stream);
  aclrtResetDevice(deviceId);
  aclFinalize();
  return 0;
}