
Roofline Model with CUDA Manual vs. Nsight Compute

Asked by Cherry Toska on 7/13/2023 · Updated 7/13/2023 · Viewed 110 times

Q:

I have a very simple vector addition kernel written in CUDA. I want to compute the arithmetic intensity and the GFLOP/s of this kernel. The values I compute differ noticeably from those shown in the roofline analysis section of Nsight Compute.

Since I have a very simple vector addition kernel of the form C = A + B, with all three vectors of the size N I expect, I anticipate N arithmetic operations and (assuming sizeof(float) == 4) 3 x N x 4 bytes accessed, which yields an arithmetic intensity of roughly 0.083.

Furthermore, I expected the GFLOP/s value I compute as N x 1e-9 / kernel_time_in_seconds to differ noticeably from Nsight Compute's (I know that Nsight Compute lowers the clock speed), but I expected the arithmetic intensity value (operations per byte) to be the same, or at least roughly the same, since it profiles the code.

My CUDA kernel looks like this:

#include <iostream>
#include <cuda_runtime.h>

#define N 200000

__global__ void vectorAdd(float* a, float* b, float* c)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid < N)
    {
        c[tid] = a[tid] + b[tid];
    }
}

int main()
{
    // Declare and initialize host vectors
    float* host_a = new float[N];
    float* host_b = new float[N];
    float* host_c = new float[N];
    for (int i = 0; i < N; ++i)
    {
        host_a[i] = i;
        host_b[i] = 2 * i;
    }

    // Declare and allocate device vectors
    float* dev_a, * dev_b, * dev_c;
    cudaMalloc((void**)&dev_a, N * sizeof(float));
    cudaMalloc((void**)&dev_b, N * sizeof(float));
    cudaMalloc((void**)&dev_c, N * sizeof(float));

    // Copy host vectors to device
    cudaMemcpy(dev_a, host_a, N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dev_b, host_b, N * sizeof(float), cudaMemcpyHostToDevice);

    // Define kernel launch configuration
    int blockSize, gridSize;
    cudaOccupancyMaxPotentialBlockSize(&gridSize, &blockSize, vectorAdd, 0, N);

    // Start timer
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);

    // Launch kernel
    vectorAdd<<<gridSize, blockSize>>>(dev_a, dev_b, dev_c);

    // Stop timer and calculate execution duration
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float milliseconds = 0;
    cudaEventElapsedTime(&milliseconds, start, stop);

    // Copy result from device to host
    cudaMemcpy(host_c, dev_c, N * sizeof(float), cudaMemcpyDeviceToHost);
    cudaDeviceSynchronize();

    // Print execution duration
    std::cout << "Kernel execution duration: " << milliseconds << " ms" << std::endl;

    int numFloatingPointOps = N;
    int numBytesAccessed = 3 * N * sizeof(float);
    float opsPerByte = static_cast<float>(numFloatingPointOps) / static_cast<float>(numBytesAccessed);

    std::cout << "Floating-point operations per byte: " << opsPerByte << std::endl;

    float executionTimeSeconds = milliseconds / 1e3;
    float numGFLOPs = static_cast<float>(numFloatingPointOps) / 1e9;
    float GFLOPs = numGFLOPs / executionTimeSeconds;

    std::cout << "GFLOP/s: " << GFLOPs << std::endl;

    // Cleanup
    cudaFree(dev_a);
    cudaFree(dev_b);
    cudaFree(dev_c);
    delete[] host_a;
    delete[] host_b;
    delete[] host_c;

    return 0;
}

Sample output on my machine:

Kernel execution duration: 0.014144 ms
Floating-point operations per byte: 0.0833333
GFLOP/s: 14.1403

I simply compile and run/profile with:

nvcc vectorAdd.cu
sudo env "PATH=$PATH" ncu -f -o vectorAdd_rep --set full ./a.out

Nsight Compute says the arithmetic intensity is 0.12; here is a screenshot: Roofline Graph from Nsight Compute

Looking at the instruction statistics, the operations related to global loads (LDG) and stores (STG) are 3 times the FADD (elementwise float addition) operations; weighing that against the 4-byte access size, I would again expect 0.083, but that is not what Nsight Compute reports. What causes the difference between the two arithmetic intensities? Am I doing something wrong? The other instructions I checked do not seem relevant to the arithmetic intensity calculation.

I am adding a screenshot of the instruction statistics: Instruction Statistics

cuda nvprof nsight-compute roofline

Comments:

1 · Jérôme Richard · 7/13/2023
A few points: 1) You should check the host result data, because CUDA functions/kernels can fail (especially if you plan to modify the code iteratively, which is common). 2) You should check the error codes of the CUDA functions so you do not measure something that may be wrong and possibly infer garbage from it. 3) N = 200000 is very small. Launching a kernel takes some time (e.g. due to communication with the device, including system calls), not to mention that GPUs are not really designed for low latency. Some of the measurement issues may be due to such overheads. Consider choosing a much larger N.
1 · Cherry Toska · 7/13/2023
Following your suggestions: 1) I added a check to verify the final result, 2) I added code to check for errors after every CUDA-related call, and 3) I increased N to roughly 1 GB of data. The arithmetic intensity I now get from Nsight Compute is 0.08, which, compared to the earlier 0.12, is much closer to my calculated number (0.083). As far as I understand, Nsight Compute uses hardware counters to derive the arithmetic intensity and the peak-performance lines of the roofline model, which can differ from the theoretical limits. It also lowers the clock speed, so the smaller GFLOP/s makes sense.
0 · Cherry Toska · 7/13/2023
I would also add that my results now match what I get from the BabelStream benchmarks: ~169 GB per second and 96.45% of the peak theoretical GFLOP/s at the given arithmetic intensity in the roofline model. I would say that is sufficient validation. I especially appreciate your third suggestion; it helped me a lot.

A:

2 · Cherry Toska · 7/13/2023 · #1

After updating the code following Jérôme Richard's advice, I was able to identify the problem. First, the results obtained with Nsight Compute are not accurate for small grid sizes. With a sufficiently large number of elements, Nsight Compute's results are very close to mine.

Another important note is that the profiled code runs at a lower clock speed, so the theoretical bounds it determines (both for memory transfer and for achievable peak FLOP/s) are smaller than the values one can obtain by querying the CUDA API. I could further verify that the achieved percentage of peak performance (relative to the arithmetic intensity) is very similar in my code and in Nsight Compute. Here is the updated code:

#include <iostream>
#include <string>
#include <stdexcept>
#include <algorithm>
#include <cstdio>
#include <cuda_runtime.h>

constexpr size_t N = static_cast<size_t>(1e9 / static_cast<float>(sizeof(float)));

#define CHECK_ERR checkErr(__FILE__,__LINE__)

std::string PrevFile = "";
int PrevLine = 0;

void checkErr(const std::string &File, int Line) {
#ifndef NDEBUG
    cudaError_t Error = cudaGetLastError();
    if (Error != cudaSuccess) {
        std::cout << std::endl << File
                << ", line " << Line
                << ": " << cudaGetErrorString(Error)
                << " (" << Error << ")"
                << std::endl;

        if (PrevLine > 0)
            std::cout << "Previous CUDA call:" << std::endl
                      << PrevFile << ", line " << PrevLine << std::endl;
        throw;
    }
    PrevFile = File;
    PrevLine = Line;
#endif
}

__global__ void vectorAdd(float* a, float* b, float* c)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid < N)
    {
        c[tid] = a[tid] + b[tid];
    }
}

int main()
{
    // Declare and initialize host vectors
    float* host_a = new float[N];
    float* host_b = new float[N];
    float* host_c = new float[N];
    for (int i = 0; i < N; ++i)
    {
        host_a[i] = i;
        host_b[i] = 2 * i;
    }

    // Declare and allocate device vectors
    float* dev_a, * dev_b, * dev_c;
    cudaMalloc((void**)&dev_a, N * sizeof(float)); CHECK_ERR;
    cudaMalloc((void**)&dev_b, N * sizeof(float)); CHECK_ERR;
    cudaMalloc((void**)&dev_c, N * sizeof(float)); CHECK_ERR;

    // Copy host vectors to device
    cudaMemcpy(dev_a, host_a, N * sizeof(float), cudaMemcpyHostToDevice); CHECK_ERR;
    cudaMemcpy(dev_b, host_b, N * sizeof(float), cudaMemcpyHostToDevice); CHECK_ERR;

    // Define kernel launch configuration
    // int blockSize, gridSize;
    // cudaOccupancyMaxPotentialBlockSize(&gridSize, &blockSize, vectorAdd, 0, N); CHECK_ERR;
    int blockSize = 256;
    int gridSize = (N + blockSize - 1) / blockSize;

    // Fire first kernel and discard
    vectorAdd<<<gridSize, blockSize>>>(dev_a, dev_b, dev_c); CHECK_ERR;
    cudaDeviceSynchronize();

    // Start timer
    cudaEvent_t start, stop;
    cudaEventCreate(&start); CHECK_ERR;
    cudaEventCreate(&stop); CHECK_ERR;
    cudaEventRecord(start); CHECK_ERR;

    // Launch kernel
    vectorAdd<<<gridSize, blockSize>>>(dev_a, dev_b, dev_c); CHECK_ERR;

    // Stop timer and calculate execution duration
    cudaEventRecord(stop); CHECK_ERR;
    cudaEventSynchronize(stop); CHECK_ERR;
    float milliseconds = 0;
    cudaEventElapsedTime(&milliseconds, start, stop); CHECK_ERR;

    // Copy result from device to host
    cudaMemcpy(host_c, dev_c, N * sizeof(float), cudaMemcpyDeviceToHost); CHECK_ERR;
    cudaDeviceSynchronize(); CHECK_ERR;

    for (int i = 0; i < N; ++i)
    {
        if (host_c[i] > 1.001f * (3.0f * static_cast<float>(i)) ||
            host_c[i] < 0.999f * (3.0f * static_cast<float>(i))){
            throw std::runtime_error("Results different from expected " + std::to_string(host_c[i]) + " != " + std::to_string(3.0f * static_cast<float>(i)));
        }
    }

    // Print execution duration
    std::cout << "Kernel execution duration: " << milliseconds << " ms" << std::endl;

    size_t numFloatingPointOps = N;
    size_t numBytesAccessed = 3 * N * sizeof(float);
    float opsPerByte = static_cast<float>(numFloatingPointOps) / static_cast<float>(numBytesAccessed);

    std::cout << "Floating-point operations per byte: " << opsPerByte << std::endl;

    float executionTimeSeconds = milliseconds / 1e3;
    float numGFLOPs = static_cast<float>(numFloatingPointOps) / 1e9;
    float GFLOPs = numGFLOPs / executionTimeSeconds;

    std::cout << "GFLOP/s: " << GFLOPs << std::endl;

    float peakMemoryBandwidthTheo = 176.032; // GB /s
    float peakGFLOPTheo  = 4329.47; // GFlop /s
    float peakGFLOPforIntensity = std::min(peakMemoryBandwidthTheo * opsPerByte, peakGFLOPTheo);

    float achievedPeak = (static_cast<float>(GFLOPs) / peakGFLOPforIntensity) * 100.0f;
    std::string strAchievedPeak(6, '\0');
    std::sprintf(&strAchievedPeak[0], "%.2f", achievedPeak);
    std::cout << "Percentage of Peak Performance: " << strAchievedPeak << "%" << std::endl;

    float GBPerSecond = (static_cast<float>(numBytesAccessed) * 1e-9) / executionTimeSeconds;
    std::cout << "GB per Second: " << GBPerSecond << std::endl;

    // Cleanup
    cudaFree(dev_a); CHECK_ERR;
    cudaFree(dev_b); CHECK_ERR;
    cudaFree(dev_c); CHECK_ERR;
    delete[] host_a;
    delete[] host_b;
    delete[] host_c;

    return 0;
}

Example output on my RTX 3050:

Kernel execution duration: 17.6701 ms
Floating-point operations per byte: 0.0833333
GFLOP/s: 14.1482
Percentage of Peak Performance: 96.45%
GB per Second: 169.778