So far, every kernel has been launched from the host, and all GPU work has run entirely under CPU control. CUDA Dynamic Parallelism allows a GPU kernel to create and launch new kernels on the device. Dynamic Parallelism makes recursion easier to implement and to reason about, because the launch configuration can be decided at runtime by threads on the device, and it also reduces the data transfer and execution-control traffic between host and device. In the rest of this post we will look at how Dynamic Parallelism is used.
The syntax for launching a kernel on the device is exactly the same as on the host. Kernel execution is divided into two kinds: parent and child. A parent thread, parent block, or parent grid can launch a new grid, the child grid. A child grid must complete before its parent is considered finished; in other words, the parent must wait for all of its children to complete.
When a parent launches a child grid, the child is not guaranteed to begin executing until the parent explicitly synchronizes on it. Parent and child share the same global and constant memory, but have separate shared and local memory. There are only two points at which the child and the parent are guaranteed to see an identical view of global memory: when the child starts and when the child completes. All global-memory operations performed by the parent before the launch are visible to the child, while the child's global-memory operations become visible to the parent only after the parent synchronizes.
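To make these visibility rules concrete, here is a minimal sketch (the kernel names and the g_flag variable are illustrative assumptions, not part of the example program below; assume parentKernel is launched with a single thread):

#include <cstdio>

__device__ int g_flag;                      // global memory, shared by parent and child

__global__ void childKernel()
{
    // the parent wrote g_flag before launching, so the child is guaranteed to see 1
    printf("child sees g_flag = %d\n", g_flag);
    g_flag = 2;
}

__global__ void parentKernel()
{
    g_flag = 1;                             // visible to the child at launch time
    childKernel<<<1, 1>>>();
    cudaDeviceSynchronize();                // only after this is the child's write visible
    printf("parent sees g_flag = %d\n", g_flag);   // guaranteed to print 2
}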
To illustrate Dynamic Parallelism more clearly, we adapt the very first Hello World program. The figure below shows the execution with Dynamic Parallelism: the host launches a parent grid with eight threads per block. Thread 0 launches a child grid with four threads per block; thread 0 of that child grid in turn launches a child grid with two threads per block, and so on.
The code is shown below. Each thread first prints Hello World; then each thread checks whether the recursion should stop.
__global__ void nestedHelloWorld(int const iSize, int iDepth)
{
    int tid = threadIdx.x;
    printf("Recursion=%d: Hello World from thread %d block %d\n", iDepth, tid, blockIdx.x);

    // condition to stop recursive execution
    if (iSize == 1) return;

    // reduce block size to half
    int nthreads = iSize >> 1;

    // thread 0 launches child grid recursively
    if(tid == 0 && nthreads > 0)
    {
        nestedHelloWorld<<<1, nthreads>>>(nthreads, ++iDepth);
        printf("-------> nested execution depth: %d\n", iDepth);
    }
}
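For completeness, here is a minimal host-side sketch that produces the launch configuration above; the argument handling and cleanup are assumptions based on the output format, not the book's exact listing:

#include <cstdio>
#include <cstdlib>

int main(int argc, char **argv)
{
    int igrid = 1;                            // number of parent blocks, e.g. "./nestedHelloWorld 2"
    if (argc > 1) igrid = atoi(argv[1]);

    dim3 block(8, 1);                         // eight threads per parent block
    dim3 grid(igrid, 1);
    printf("%s Execution Configuration: grid %d block %d\n", argv[0], grid.x, block.x);

    nestedHelloWorld<<<grid, block>>>(block.x, 0);

    cudaDeviceSynchronize();                  // wait for the entire launch tree to finish
    cudaDeviceReset();
    return 0;
}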
Compile with:
$ nvcc -arch=sm_35 -rdc=true nestedHelloWorld.cu -o nestedHelloWorld -lcudadevrt
-lcudadevrt links against the CUDA device runtime library, just like linking any other library with gcc. -rdc=true generates relocatable device code, which is required for Dynamic Parallelism because device-side launches must be linked against the device runtime; the full story of separate compilation is a bigger topic that we will leave for later.
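Equivalently, compilation and linking can be split into two explicit steps, which makes the role of relocatable device code visible (a sketch of the standard nvcc workflow, not taken from the book):

# compile to relocatable device code (equivalent to -rdc=true -c)
$ nvcc -arch=sm_35 -dc nestedHelloWorld.cu -o nestedHelloWorld.o
# device-link and host-link, pulling in the device runtime library
$ nvcc -arch=sm_35 nestedHelloWorld.o -o nestedHelloWorld -lcudadevrt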
The output is:
./nestedHelloWorld Execution Configuration: grid 1 block 8
Recursion=0: Hello World from thread 0 block 0
Recursion=0: Hello World from thread 1 block 0
Recursion=0: Hello World from thread 2 block 0
Recursion=0: Hello World from thread 3 block 0
Recursion=0: Hello World from thread 4 block 0
Recursion=0: Hello World from thread 5 block 0
Recursion=0: Hello World from thread 6 block 0
Recursion=0: Hello World from thread 7 block 0
-------> nested execution depth: 1
Recursion=1: Hello World from thread 0 block 0
Recursion=1: Hello World from thread 1 block 0
Recursion=1: Hello World from thread 2 block 0
Recursion=1: Hello World from thread 3 block 0
-------> nested execution depth: 2
Recursion=2: Hello World from thread 0 block 0
Recursion=2: Hello World from thread 1 block 0
-------> nested execution depth: 3
Recursion=3: Hello World from thread 0 block 0
The 0, 1, 2, 3... output order here looks odd, almost too regular; for now let us just chalk it up to how CUDA handles device-side printf. Also, judged against experience with CPU recursion the ordering looks even stranger; it is certainly not a compiler error or a CUDA bug. If you add a cudaDeviceSynchronize after the child kernel launch, you will see the "normal" order, and the reason becomes clear.
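One way to do that is a single extra line inside the parent kernel, right after the nested launch (a minimal sketch of the modification, assuming a device-side synchronize is meant):

if(tid == 0 && nthreads > 0)
{
    nestedHelloWorld<<<1, nthreads>>>(nthreads, ++iDepth);
    cudaDeviceSynchronize();    // the parent thread now waits for its child grid here
    printf("-------> nested execution depth: %d\n", iDepth);
}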
nvvp can be used to inspect the execution; the blank gaps show where a parent is waiting for its child to finish:
$ nvvp ./nestedHelloWorld
Next, let us try two blocks instead of one:
$ ./nestedHelloWorld 2
The output is:
./nestedHelloWorld 2
Execution Configuration: grid 2 block 8
Recursion=0: Hello World from thread 0 block 1
Recursion=0: Hello World from thread 1 block 1
Recursion=0: Hello World from thread 2 block 1
Recursion=0: Hello World from thread 3 block 1
Recursion=0: Hello World from thread 4 block 1
Recursion=0: Hello World from thread 5 block 1
Recursion=0: Hello World from thread 6 block 1
Recursion=0: Hello World from thread 7 block 1
Recursion=0: Hello World from thread 0 block 0
Recursion=0: Hello World from thread 1 block 0
Recursion=0: Hello World from thread 2 block 0
Recursion=0: Hello World from thread 3 block 0
Recursion=0: Hello World from thread 4 block 0
Recursion=0: Hello World from thread 5 block 0
Recursion=0: Hello World from thread 6 block 0
Recursion=0: Hello World from thread 7 block 0
-------> nested execution depth: 1
-------> nested execution depth: 1
Recursion=1: Hello World from thread 0 block 0
Recursion=1: Hello World from thread 1 block 0
Recursion=1: Hello World from thread 2 block 0
Recursion=1: Hello World from thread 3 block 0
Recursion=1: Hello World from thread 0 block 0
Recursion=1: Hello World from thread 1 block 0
Recursion=1: Hello World from thread 2 block 0
Recursion=1: Hello World from thread 3 block 0
-------> nested execution depth: 2
-------> nested execution depth: 2
Recursion=2: Hello World from thread 0 block 0
Recursion=2: Hello World from thread 1 block 0
Recursion=2: Hello World from thread 0 block 0
Recursion=2: Hello World from thread 1 block 0
-------> nested execution depth: 3
-------> nested execution depth: 3
Recursion=3: Hello World from thread 0 block 0
Recursion=3: Hello World from thread 0 block 0
From the output above, the first thing to notice is that every child's block id is 0. The figure below shows the invocation tree: the parent now has two blocks, but every child grid still has only one block, because each child is launched with a single block:
nestedHelloWorld<<<1, nthreads>>>(nthreads, ++iDepth);
Note: Dynamic Parallelism is only supported on devices of compute capability 3.5 and above. Kernels launched through Dynamic Parallelism cannot be launched on a different (physically separate) device. The maximum nesting depth is 24, but in practice kernels are limited by memory resources, including the extra memory the device runtime reserves for synchronizing parent and child grids.
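The relevant device-runtime limits can be adjusted from the host before the first launch (a hedged sketch; the values here are only illustrative, and raising the synchronization depth costs extra device memory):

// reserve enough memory for device-side cudaDeviceSynchronize up to 8 nesting levels
cudaDeviceSetLimit(cudaLimitDevRuntimeSyncDepth, 8);
// allow more outstanding device-side launches than the default, if needed
cudaDeviceSetLimit(cudaLimitDevRuntimePendingLaunchCount, 16384);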
Anyone who has studied an algorithms book such as Introduction to Algorithms knows that recursion is relatively expensive, so it is usually better to unroll it when possible. Here we do exactly the opposite and implement reduction recursively; the main point of this part is to demonstrate once more the benefit of Dynamic Parallelism: with it, we can write recursive code on the GPU just as we would in C.
The code below is one such implementation. As before, each child grid has a single block and the first thread of the block launches the kernel; the difference is that the parent grid has many blocks. The first step is still to convert the global memory address g_idata into a block-local pointer. Then an if statement decides whether to stop; if so, the result is copied back to global memory. Otherwise the block performs a local reduction: half of the threads do an in-place reduction, and then the block synchronizes to guarantee that all partial sums are complete. Thread 0 then launches a child grid with a single block and half the current number of threads.
__global__ void gpuRecursiveReduce (int *g_idata, int *g_odata, unsigned int isize)
{
    // set thread ID
    unsigned int tid = threadIdx.x;

    // convert global data pointer to the local pointer of this block
    int *idata = g_idata + blockIdx.x * blockDim.x;
    int *odata = &g_odata[blockIdx.x];

    // stop condition
    if (isize == 2 && tid == 0)
    {
        g_odata[blockIdx.x] = idata[0] + idata[1];
        return;
    }

    // nested invocation
    int istride = isize >> 1;
    if(istride > 1 && tid < istride)
    {
        // in place reduction
        idata[tid] += idata[tid + istride];
    }

    // sync at block level
    __syncthreads();

    // nested invocation to generate child grids
    if(tid == 0)
    {
        gpuRecursiveReduce<<<1, istride>>>(idata, odata, istride);
        // sync all child grids launched in this block
        cudaDeviceSynchronize();
    }

    // sync at block level again
    __syncthreads();
}
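A minimal host-side sketch of how this kernel is launched with the configuration used in the run below (the allocation and data-initialization details are assumptions, not the book's full listing):

int size = 1 << 20;                              // 1048576 elements
dim3 block(512, 1);
dim3 grid((size + block.x - 1) / block.x, 1);    // 2048 blocks

int *d_idata, *d_odata;
cudaMalloc((void **)&d_idata, size * sizeof(int));
cudaMalloc((void **)&d_odata, grid.x * sizeof(int));
// ... copy the input array into d_idata ...

gpuRecursiveReduce<<<grid, block>>>(d_idata, d_odata, block.x);
cudaDeviceSynchronize();
// the final result is the host-side sum of the grid.x partial sums in d_odata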
Compile and run; the results below were obtained on a Kepler K40:
$ nvcc -arch=sm_35 -rdc=true nestedReduce.cu -o nestedReduce -lcudadevrt
./nestedReduce starting reduction at device 0: Tesla K40c
array 1048576 grid 2048 block 512
cpu reduce      elapsed 0.000689 sec cpu_sum: 1048576
gpu Neighbored  elapsed 0.000532 sec gpu_sum: 1048576<<<grid 2048 block 512>>>
gpu nested      elapsed 0.172036 sec gpu_sum: 1048576<<<grid 2048 block 512>>>
Compared with the neighbored implementation, the nested result is very poor.
From the numbers above, 2048 parent blocks were launched, and each block performs 8 levels of recursion (the block size of 512 is halved at every level, launching a child at sizes 512, 256, ..., 4 until the isize == 2 stop condition is reached), so 2048 × 8 = 16384 child blocks are created and __syncthreads is likewise called 16384 times. All of this is what drags the efficiency down.
When a child grid is invoked, it sees exactly the same memory as its parent. Since a child only needs a portion of the parent's data, the block-level synchronization before each child grid launch is unnecessary. After removing it:
__global__ void gpuRecursiveReduceNosync (int *g_idata, int *g_odata, unsigned int isize)
{
    // set thread ID
    unsigned int tid = threadIdx.x;

    // convert global data pointer to the local pointer of this block
    int *idata = g_idata + blockIdx.x * blockDim.x;
    int *odata = &g_odata[blockIdx.x];

    // stop condition
    if (isize == 2 && tid == 0)
    {
        g_odata[blockIdx.x] = idata[0] + idata[1];
        return;
    }

    // nested invoke
    int istride = isize >> 1;
    if(istride > 1 && tid < istride)
    {
        idata[tid] += idata[tid + istride];
        if(tid == 0)
        {
            gpuRecursiveReduceNosync<<<1, istride>>>(idata, odata, istride);
        }
    }
}
Running it, the time drops to about a third of the original:
./nestedReduceNoSync starting reduction at device 0: Tesla K40c
array 1048576 grid 2048 block 512
cpu reduce       elapsed 0.000689 sec cpu_sum: 1048576
gpu Neighbored   elapsed 0.000532 sec gpu_sum: 1048576<<<grid 2048 block 512>>>
gpu nested       elapsed 0.172036 sec gpu_sum: 1048576<<<grid 2048 block 512>>>
gpu nestedNosyn  elapsed 0.059125 sec gpu_sum: 1048576<<<grid 2048 block 512>>>
However, it is still slower than the neighbored version. The next change follows the idea in the figure below: the kernel gains an extra parameter iDim. Because every recursive call halves the child block size, the parent's blockDim must be passed down to the child grid so that every thread can compute the correct global memory offset. Note that all idle threads are removed: in the previous implementations half of the threads sat idle at each step, so removing them frees half of the compute resources.
__global__ void gpuRecursiveReduce2(int *g_idata, int *g_odata, int iStride, int const iDim)
{
    // convert global data pointer to the local pointer of this block
    int *idata = g_idata + blockIdx.x * iDim;

    // stop condition
    if (iStride == 1 && threadIdx.x == 0)
    {
        g_odata[blockIdx.x] = idata[0] + idata[1];
        return;
    }

    // in place reduction
    idata[threadIdx.x] += idata[threadIdx.x + iStride];

    // nested invocation to generate child grids
    if(threadIdx.x == 0 && blockIdx.x == 0)
    {
        gpuRecursiveReduce2 <<<gridDim.x, iStride/2>>>(g_idata, g_odata, iStride/2, iDim);
    }
}
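The launch for this version passes the initial stride and the parent block dimension explicitly; under the same configuration as before (reusing the names from the earlier host-side sketch) it would look roughly like this. The exact host code is an assumption, not the book's listing:

// only half as many threads per block: each thread adds one pair,
// and iDim carries the full logical block width for the offset computation
gpuRecursiveReduce2<<<grid, block.x / 2>>>(d_idata, d_odata, block.x / 2, block.x);
cudaDeviceSynchronize();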
Compile and run:
./nestedReduce2 starting reduction at device 0: Tesla K40c
array 1048576 grid 2048 block 512
cpu reduce       elapsed 0.000689 sec cpu_sum: 1048576
gpu Neighbored   elapsed 0.000532 sec gpu_sum: 1048576<<<grid 2048 block 512>>>
gpu nested       elapsed 0.172036 sec gpu_sum: 1048576<<<grid 2048 block 512>>>
gpu nestedNosyn  elapsed 0.059125 sec gpu_sum: 1048576<<<grid 2048 block 512>>>
gpu nested2      elapsed 0.000797 sec gpu_sum: 1048576<<<grid 2048 block 512>>>
This time the numbers look much better. We can guess that it is mainly because far fewer child grids are launched, which nvprof lets us verify:
$ nvprof ./nestedReduce2
Part of the output is shown below. The second column shows the number of device-side kernel launches: the first and the second kernel each created 16384 child grids, while gpuRecursiveReduce2, across its 8 levels of nested parallelism, created only 8 children.
Calls (host)  Calls (device)  Avg       Min       Max       Name
1             16384           441.48us  2.3360us  171.34ms  gpuRecursiveReduce
1             16384           51.140us  2.2080us  57.906ms  gpuRecursiveReduceNosync
1             8               56.195us  22.048us  100.74us  gpuRecursiveReduce2
1             0               352.67us  352.67us  352.67us  reduceNeighbored
For a given algorithm there are many possible implementations, and avoiding large numbers of nested launches can improve performance a great deal. Synchronization is critical for correctness but is also an expensive operation; the block-level synchronization inside the kernel could be removed here. And because running nested programs on the device requires extra resources, nested launches are limited.