diff --git a/README.md b/README.md index 0e38ddb..db5ab3d 100644 --- a/README.md +++ b/README.md @@ -3,12 +3,94 @@ CUDA Stream Compaction **University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2** -* (TODO) YOUR NAME HERE - * (TODO) [LinkedIn](), [personal website](), [twitter](), etc. -* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab) +* Helena Zhang +* Tested on: Windows 11, i7-10750 @ 2.6GHz 16GB, GeForce RTX 2060 6GB -### (TODO: Your README) +### Performance analysis -Include analysis, etc. (Remember, this is public, so don't put -anything here that you don't want to share with the world.) +First, I compared the runtimes of the different GPU implementations (naive scan, work-efficient scan, and stream compaction) on an array size of 2^20, averaging three runs each, to find the optimal block size on my GPU: +![](img/blocksize.jpg) + +Although the runtimes past 64 threads per block were similar, 64 threads per block narrowly beat out the larger block sizes. + +Next, using the optimal block size of 64, I compared the runtimes of the baseline CPU exclusive scan and the three GPU scan implementations: naive, work-efficient, and thrust. I ran each implementation on each array size 5 times and took the average to account for any potential outliers. + +![](img/runtime.jpg) + +The CPU runtime scaled linearly with array size, so its trendline appears straight on a chart of log-scaled runtime against log-scaled array size. The GPU implementations all ran slower than the CPU implementation at small array sizes, and only the naive implementation showed a large spike in runtime at the largest sizes. The work-efficient implementation's runtime grew slowly at the largest sizes, and the thrust implementation remained the fastest there. + +Regarding the optimized work-efficient GPU implementation, I minimized the number of blocks launched when only a small number of threads have work to do. 
During the up-sweep / down-sweep summation, only a few additions are performed at the top levels of the tree. Instead of having only every 2^(d+1)-th thread do work in the kernel, I scaled each thread index by a factor of 2^(d+1), so that only as many threads as there are additions need to be launched. In addition, instead of launching approximately N / blocksize blocks at each level of summation, I launched only **1 << (ilogceil(N) - d - log(blocksize) - 1)** blocks, which is significantly fewer. For any d, only one in every 2^(d+1) numbers gets incremented, meaning at most **1 << (ilogceil(N) - d - 1)** threads are needed. Since the maximum number of threads in each block is **blocksize**, we divide the thread count by the block size, leaving each level launching only **1 << (ilogceil(N) - d - log(blocksize) - 1)** blocks, and every thread in those blocks is used. + +For the fastest implementation, **thrust::exclusive_scan**, I observed its execution in Nsight: + +![](img/nsight.jpg) + +Assuming the **cudaEventRecord** calls are the GPU timer operations, there are 5 operations within the exclusive scan: **cudaMalloc**, **DeviceScanInitKernel**, **DeviceScanKernel**, **cudaStreamSynchronize**, and **cudaFree**. Since **idata** was copied into device memory before the GPU timer started, the **cudaMalloc** likely allocates temporary storage used internally by the scan, since that memory is freed at the end of the function. The two scan kernels likely perform the additions, and they took the least time, suggesting the **cudaMalloc** allocated scratch device memory that lets these kernels execute quickly. + +Based on how the runtimes of all the implementations grew, a large bottleneck appears to be excessive yet unproductive blocks, which is why reducing and packing the blocks improved the runtime. + +Finally, an overview of all the runtimes: +``` + +**************** +** SCAN TESTS ** +**************** + [ 1 6 1 30 17 6 13 30 37 3 40 40 21 ... 
29 12 ] +==== cpu scan, power-of-two ==== + elapsed time: 14.2981ms (std::chrono Measured) + [ 0 1 7 8 38 55 61 74 104 141 144 184 224 ... 410838503 410838532 ] +==== cpu scan, non-power-of-two ==== + elapsed time: 13.4163ms (std::chrono Measured) + [ 0 1 7 8 38 55 61 74 104 141 144 184 224 ... 410838422 410838469 ] + passed +==== naive scan, power-of-two ==== + elapsed time: 16.1973ms (CUDA Measured) + [ 0 1 7 8 38 55 61 74 104 141 144 184 224 ... 410838503 410838532 ] + passed +==== naive scan, non-power-of-two ==== + elapsed time: 16.1932ms (CUDA Measured) + [ 0 1 7 8 38 55 61 74 104 141 144 184 224 ... 0 0 ] + passed +==== work-efficient scan, power-of-two ==== + elapsed time: 7.42416ms (CUDA Measured) + [ 0 1 7 8 38 55 61 74 104 141 144 184 224 ... 410838503 410838532 ] + passed +==== work-efficient scan, non-power-of-two ==== + elapsed time: 7.27318ms (CUDA Measured) + [ 0 1 7 8 38 55 61 74 104 141 144 184 224 ... 410838422 410838469 ] + passed +==== thrust scan, power-of-two ==== + elapsed time: 1.18771ms (CUDA Measured) + [ 0 1 7 8 38 55 61 74 104 141 144 184 224 ... 410838503 410838532 ] + passed +==== thrust scan, non-power-of-two ==== + elapsed time: 1.35674ms (CUDA Measured) + [ 0 1 7 8 38 55 61 74 104 141 144 184 224 ... 410838422 410838469 ] + passed + +***************************** +** STREAM COMPACTION TESTS ** +***************************** + [ 0 0 3 1 2 3 0 1 2 1 0 0 0 ... 2 2 ] +==== cpu compact without scan, power-of-two ==== + elapsed time: 36.814ms (std::chrono Measured) + [ 3 1 2 3 1 2 1 2 2 3 2 2 3 ... 2 2 ] + passed +==== cpu compact without scan, non-power-of-two ==== + elapsed time: 37.0357ms (std::chrono Measured) + [ 3 1 2 3 1 2 1 2 2 3 2 2 3 ... 1 2 ] + passed +==== cpu compact with scan ==== + elapsed time: 77.7415ms (std::chrono Measured) + [ 3 1 2 3 1 2 1 2 2 3 2 2 3 ... 2 2 ] + passed +==== work-efficient compact, power-of-two ==== + elapsed time: 14.5762ms (CUDA Measured) + [ 3 1 2 3 1 2 1 2 2 3 2 2 3 ... 
2 2 ] + passed +==== work-efficient compact, non-power-of-two ==== + elapsed time: 14.472ms (CUDA Measured) + [ 3 1 2 3 1 2 1 2 2 3 2 2 3 ... 1 2 ] + passed +``` diff --git a/img/blocksize.jpg b/img/blocksize.jpg new file mode 100644 index 0000000..4ac9970 Binary files /dev/null and b/img/blocksize.jpg differ diff --git a/img/nsight.jpg b/img/nsight.jpg new file mode 100644 index 0000000..483883c Binary files /dev/null and b/img/nsight.jpg differ diff --git a/img/runtime.jpg b/img/runtime.jpg new file mode 100644 index 0000000..1b832f5 Binary files /dev/null and b/img/runtime.jpg differ diff --git a/src/main.cpp b/src/main.cpp index 896ac2b..5e70abe 100644 --- a/src/main.cpp +++ b/src/main.cpp @@ -13,7 +13,7 @@ #include #include "testing_helpers.hpp" -const int SIZE = 1 << 8; // feel free to change the size of array +const int SIZE = 1 << 24; // feel free to change the size of array const int NPOT = SIZE - 3; // Non-Power-Of-Two int *a = new int[SIZE]; int *b = new int[SIZE]; @@ -27,8 +27,8 @@ int main(int argc, char* argv[]) { printf("** SCAN TESTS **\n"); printf("****************\n"); - genArray(SIZE - 1, a, 50); // Leave a 0 at the end to test that edge case - a[SIZE - 1] = 0; + genArray(SIZE, a, 50); // Leave a 0 at the end to test that edge case + // a[SIZE - 1] = 0; printArray(SIZE, a, true); // initialize b using StreamCompaction::CPU::scan you implement @@ -51,7 +51,7 @@ int main(int argc, char* argv[]) { printDesc("naive scan, power-of-two"); StreamCompaction::Naive::scan(SIZE, c, a); printElapsedTime(StreamCompaction::Naive::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)"); - //printArray(SIZE, c, true); + printArray(SIZE, c, true); printCmpResult(SIZE, b, c); /* For bug-finding only: Array of 1s to help find bugs in stream compaction or scan @@ -64,35 +64,35 @@ int main(int argc, char* argv[]) { printDesc("naive scan, non-power-of-two"); StreamCompaction::Naive::scan(NPOT, c, a); 
printElapsedTime(StreamCompaction::Naive::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)"); - //printArray(SIZE, c, true); + printArray(SIZE, c, true); printCmpResult(NPOT, b, c); zeroArray(SIZE, c); printDesc("work-efficient scan, power-of-two"); StreamCompaction::Efficient::scan(SIZE, c, a); printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)"); - //printArray(SIZE, c, true); + printArray(SIZE, c, true); printCmpResult(SIZE, b, c); zeroArray(SIZE, c); printDesc("work-efficient scan, non-power-of-two"); StreamCompaction::Efficient::scan(NPOT, c, a); printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)"); - //printArray(NPOT, c, true); + printArray(NPOT, c, true); printCmpResult(NPOT, b, c); zeroArray(SIZE, c); printDesc("thrust scan, power-of-two"); StreamCompaction::Thrust::scan(SIZE, c, a); printElapsedTime(StreamCompaction::Thrust::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)"); - //printArray(SIZE, c, true); + printArray(SIZE, c, true); printCmpResult(SIZE, b, c); zeroArray(SIZE, c); printDesc("thrust scan, non-power-of-two"); StreamCompaction::Thrust::scan(NPOT, c, a); printElapsedTime(StreamCompaction::Thrust::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)"); - //printArray(NPOT, c, true); + printArray(NPOT, c, true); printCmpResult(NPOT, b, c); printf("\n"); @@ -102,8 +102,8 @@ int main(int argc, char* argv[]) { // Compaction tests - genArray(SIZE - 1, a, 4); // Leave a 0 at the end to test that edge case - a[SIZE - 1] = 0; + genArray(SIZE, a, 4); // Leave a 0 at the end to test that edge case + // a[SIZE - 1] = 0; printArray(SIZE, a, true); int count, expectedCount, expectedNPOT; @@ -137,14 +137,14 @@ int main(int argc, char* argv[]) { printDesc("work-efficient compact, power-of-two"); count = StreamCompaction::Efficient::compact(SIZE, c, a); 
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)"); - //printArray(count, c, true); + printArray(count, c, true); printCmpLenResult(count, expectedCount, b, c); zeroArray(SIZE, c); printDesc("work-efficient compact, non-power-of-two"); count = StreamCompaction::Efficient::compact(NPOT, c, a); printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)"); - //printArray(count, c, true); + printArray(count, c, true); printCmpLenResult(count, expectedNPOT, b, c); system("pause"); // stop Win32 console from closing on exit diff --git a/stream_compaction/common.cu b/stream_compaction/common.cu index 2ed6d63..7bef1f3 100644 --- a/stream_compaction/common.cu +++ b/stream_compaction/common.cu @@ -23,7 +23,10 @@ namespace StreamCompaction { * which map to 0 will be removed, and elements which map to 1 will be kept. */ __global__ void kernMapToBoolean(int n, int *bools, const int *idata) { - // TODO + int i = blockIdx.x * blockDim.x + threadIdx.x; + if (i < n) { + idata[i] > 0 ? 
bools[i] = 1 : bools[i] = 0; + } } /** @@ -32,7 +35,12 @@ namespace StreamCompaction { */ __global__ void kernScatter(int n, int *odata, const int *idata, const int *bools, const int *indices) { - // TODO + int i = blockIdx.x * blockDim.x + threadIdx.x; + if (i < n) { + if (bools[i]) { + odata[indices[i]] = idata[i]; + } + } } } diff --git a/stream_compaction/cpu.cu b/stream_compaction/cpu.cu index 719fa11..da389bc 100644 --- a/stream_compaction/cpu.cu +++ b/stream_compaction/cpu.cu @@ -19,7 +19,12 @@ namespace StreamCompaction { */ void scan(int n, int *odata, const int *idata) { timer().startCpuTimer(); - // TODO + int sum = 0; + for (int i = 0; i < n; i++) { + odata[i] = sum; + sum += idata[i]; + + } timer().endCpuTimer(); } @@ -30,9 +35,15 @@ namespace StreamCompaction { */ int compactWithoutScan(int n, int *odata, const int *idata) { timer().startCpuTimer(); - // TODO + int length = 0; + for (int i = 0; i < n; i++) { + if (idata[i] != 0) { + odata[length] = idata[i]; + length++; + } + } timer().endCpuTimer(); - return -1; + return length; } /** @@ -42,9 +53,27 @@ namespace StreamCompaction { */ int compactWithScan(int n, int *odata, const int *idata) { timer().startCpuTimer(); - // TODO + int* temp = new int[n]; + int* tempScanned = new int[n]; + for (int i = 0; i < n; i++) { + idata[i] > 0 ? 
temp[i] = 1 : temp[i] = 0; + } + + int sum = 0; + for (int i = 0; i < n; i++) { + tempScanned[i] = sum; + sum += temp[i]; + } + + int count = 0; + for (int i = 0; i < n; i++) { + if (temp[i] == 1) { + odata[tempScanned[i]] = idata[i]; + count++; + } + } timer().endCpuTimer(); - return -1; + return count; } } } diff --git a/stream_compaction/efficient.cu b/stream_compaction/efficient.cu index 2db346e..c9a3934 100644 --- a/stream_compaction/efficient.cu +++ b/stream_compaction/efficient.cu @@ -3,6 +3,8 @@ #include "common.h" #include "efficient.h" +#define logBlockSize 6 + namespace StreamCompaction { namespace Efficient { using StreamCompaction::Common::PerformanceTimer; @@ -12,13 +14,57 @@ namespace StreamCompaction { return timer; } + __global__ void kernUp(int d, int n, int* odata) { + const int p = 1 << d; + const int k = (blockIdx.x * blockDim.x + threadIdx.x) * 2 * p; + if (k < n) { + odata[2 * p + k - 1] += odata[p + k - 1]; + } + odata[n - 1] = 0; + } + + __global__ void kernDown(int d, int n, int* odata) { + const int p = 1 << d; + const int k = (blockIdx.x * blockDim.x + threadIdx.x) * 2 * p; + if (k < n) { + int t = odata[p + k - 1]; + odata[p + k - 1] = odata[2 * p + k - 1]; + odata[2 * p + k - 1] += t; + } + } + /** * Performs prefix-sum (aka scan) on idata, storing the result into odata. 
*/ void scan(int n, int *odata, const int *idata) { + const int blockSize = 1 << logBlockSize; + int* dev_data; + + // allocate enough memory to the next power of two + const int log = ilog2ceil(n); + const int expN = 1 << log; + const int extra = expN - n; + + cudaMalloc((void**)&dev_data, expN * sizeof(int)); + + cudaMemcpy(dev_data + extra, idata, n * sizeof(int), cudaMemcpyHostToDevice); + timer().startGpuTimer(); - // TODO + + // up + for (int d = 0; d < log; d++) { + kernUp << <std::max(1, (expN >> (d + logBlockSize + 1))), blockSize >> > (d, expN, dev_data); + } + + // down + for (int d = log - 1; d >= 0; d--) { + kernDown << <std::max(1, (expN >> (d + logBlockSize + 1))), blockSize >> > (d, expN, dev_data); + } + timer().endGpuTimer(); + + cudaMemcpy(odata, dev_data + extra, n * sizeof(int), cudaMemcpyDeviceToHost); + cudaFree(dev_data); } /** @@ -31,10 +77,68 @@ namespace StreamCompaction { * @returns The number of elements remaining after compaction. */ int compact(int n, int *odata, const int *idata) { + const int blockSize = 1 << logBlockSize; + + int* dev_idata; + int* dev_hasValue; + int* dev_scanned; + int* dev_odata; + + int total = 0; + + const int log = ilog2ceil(n); + const int expN = 1 << log; + const int extra = expN - n; + + cudaMalloc((void**)&dev_idata, n * sizeof(int)); + cudaMalloc((void**)&dev_hasValue, n * sizeof(int)); + + cudaMemcpy(dev_idata, idata, n * sizeof(int), cudaMemcpyHostToDevice); + cudaMemcpy(dev_hasValue, idata, n * sizeof(int), cudaMemcpyHostToDevice); + + const dim3 fullBlocksPerGrid((expN + blockSize - 1) / blockSize); + timer().startGpuTimer(); - // TODO + + // get 0 and non 0 elements + StreamCompaction::Common::kernMapToBoolean <<<fullBlocksPerGrid, blockSize>>>(n, dev_hasValue, dev_idata); + + // scan + + cudaMalloc((void**)&dev_scanned, expN * sizeof(int)); + + cudaMemcpy(dev_scanned + extra, dev_hasValue, n * sizeof(int), cudaMemcpyDeviceToDevice); + + for (int d = 0; d < log; d++) { + kernUp << <std::max(1, (expN >> (d + logBlockSize + 1))), blockSize >> > (d, expN, dev_scanned); + } + + for (int d = 
log - 1; d >= 0; d--) { + kernDown << <std::max(1, (expN >> (d + logBlockSize + 1))), blockSize >> > (d, expN, dev_scanned); + } + + cudaMemcpy(&total, dev_scanned + expN - 1, sizeof(int), cudaMemcpyDeviceToHost); + if (idata[n - 1] > 0) { + total++; + } + + // scatter + cudaMalloc((void**)&dev_odata, expN * sizeof(int)); + + StreamCompaction::Common::kernScatter << <fullBlocksPerGrid, blockSize >> > (n, dev_odata, dev_idata, dev_hasValue, dev_scanned + extra); + timer().endGpuTimer(); - return -1; + + // cudaMemcpy(odata, dev_scanned + extra, n * sizeof(int), cudaMemcpyDeviceToHost); + cudaMemcpy(odata, dev_odata, expN * sizeof(int), cudaMemcpyDeviceToHost); + + cudaFree(dev_idata); + cudaFree(dev_hasValue); + cudaFree(dev_scanned); + cudaFree(dev_odata); + + + return total; } } } diff --git a/stream_compaction/naive.cu b/stream_compaction/naive.cu index 4308876..f9adbc1 100644 --- a/stream_compaction/naive.cu +++ b/stream_compaction/naive.cu @@ -3,6 +3,8 @@ #include "common.h" #include "naive.h" +#define logBlockSize 6 + namespace StreamCompaction { namespace Naive { using StreamCompaction::Common::PerformanceTimer; @@ -11,15 +13,48 @@ static PerformanceTimer timer; return timer; } - // TODO: __global__ + + __global__ void kernAdd(int d, int* odata, const int* idata) { + int k = blockIdx.x * blockDim.x + threadIdx.x; + int space = 1 << (d - 1); + if (k >= space) { + odata[k] = idata[k] + idata[k - space]; + } + else { + odata[k] = idata[k]; + } + } /** * Performs prefix-sum (aka scan) on idata, storing the result into odata. 
*/ - void scan(int n, int *odata, const int *idata) { + void scan(int n, int* odata, const int* idata) { + const int blockSize = 1 << logBlockSize; + + int* dev_odata; + int* dev_idata; + + cudaMalloc((void**)&dev_odata, n * sizeof(int)); + cudaMalloc((void**)&dev_idata, n * sizeof(int)); + + cudaMemcpy(dev_odata, idata, n * sizeof(int), cudaMemcpyHostToDevice); + cudaMemcpy(dev_idata, idata, n * sizeof(int), cudaMemcpyHostToDevice); + timer().startGpuTimer(); - // TODO + + dim3 fullBlocksPerGrid((n + blockSize - 1) / blockSize); + for (int i = 1; i <= ilog2ceil(n); i++) { + kernAdd << <fullBlocksPerGrid, blockSize >> > (i, dev_odata, dev_idata); + std::swap(dev_odata, dev_idata); + } + timer().endGpuTimer(); + + cudaMemcpy(odata + 1, dev_idata, (n - 1) * sizeof(int), cudaMemcpyDeviceToHost); + + cudaFree(dev_odata); + cudaFree(dev_idata); + } } } diff --git a/stream_compaction/thrust.cu b/stream_compaction/thrust.cu index 1def45e..90b24da 100644 --- a/stream_compaction/thrust.cu +++ b/stream_compaction/thrust.cu @@ -18,11 +18,27 @@ * Performs prefix-sum (aka scan) on idata, storing the result into odata. */ void scan(int n, int *odata, const int *idata) { + + int* dev_odata; + int* dev_idata; + + cudaMalloc((void**)&dev_odata, n * sizeof(int)); + cudaMalloc((void**)&dev_idata, n * sizeof(int)); + + cudaMemcpy(dev_odata, odata, sizeof(int) * n, cudaMemcpyHostToDevice); + cudaMemcpy(dev_idata, idata, sizeof(int) * n, cudaMemcpyHostToDevice); + + thrust::device_ptr<int> dev_thrust_odata(dev_odata); + thrust::device_ptr<int> dev_thrust_idata(dev_idata); + timer().startGpuTimer(); - // TODO use `thrust::exclusive_scan` - // example: for device_vectors dv_in and dv_out: - // thrust::exclusive_scan(dv_in.begin(), dv_in.end(), dv_out.begin()); + + thrust::exclusive_scan(dev_thrust_idata, dev_thrust_idata + n, dev_thrust_odata); + timer().endGpuTimer(); + + cudaMemcpy(odata, dev_odata, sizeof(int) * n, cudaMemcpyDeviceToHost); + + } } }