CUDA C++: Expected an expression in kernel.cu file
I found the problem: the newest version of VS2017 is not supported by the newest version of CUDA, so the solution was to follow the instructions here. Now everything works.
Comments
-
Artyomska almost 2 years
I just started learning a bit of CUDA, and I encountered this error in the following code, at a <<< >>> expression:
#include "kernels.h"
#include "helpers.h"
#include <iostream>
#include <cmath>
#include <cuda_runtime.h>
#include <device_launch_parameters.h>

__global__ void blur(unsigned char* input_image, unsigned char* output_image, int width, int height)
{
    const unsigned int offset = blockIdx.x*blockDim.x + threadIdx.x;
    int x = offset % width;
    int y = (offset - x) / width;
    int fsize = 5; // Filter size
    if (offset < width*height) {
        float output_red = 0;
        float output_green = 0;
        float output_blue = 0;
        int hits = 0;
        for (int ox = -fsize; ox < fsize + 1; ++ox) {
            for (int oy = -fsize; oy < fsize + 1; ++oy) {
                if ((x + ox) > -1 && (x + ox) < width && (y + oy) > -1 && (y + oy) < height) {
                    const int currentoffset = (offset + ox + oy * width) * 3;
                    output_red += input_image[currentoffset];
                    output_green += input_image[currentoffset + 1];
                    output_blue += input_image[currentoffset + 2];
                    hits++;
                }
            }
        }
        output_image[offset * 3] = output_red / hits;
        output_image[offset * 3 + 1] = output_green / hits;
        output_image[offset * 3 + 2] = output_blue / hits;
    }
}

void filter(unsigned char* input_image, unsigned char* output_image, int width, int height)
{
    unsigned char* dev_input;
    unsigned char* dev_output;
    getError(cudaMalloc((void**)&dev_input, width*height * 3 * sizeof(unsigned char)));
    getError(cudaMemcpy(dev_input, input_image, width*height * 3 * sizeof(unsigned char), cudaMemcpyHostToDevice));
    getError(cudaMalloc((void**)&dev_output, width*height * 3 * sizeof(unsigned char)));

    dim3 blockDims(512, 1, 1);
    dim3 gridDims((unsigned int)ceil((double)(width*height * 3 / blockDims.x)), 1, 1);

    blur <<< gridDims, blockDims >>>(dev_input, dev_output, width, height);

    getError(cudaMemcpy(output_image, dev_output, width*height * 3 * sizeof(unsigned char), cudaMemcpyDeviceToHost));
    getError(cudaFree(dev_input));
    getError(cudaFree(dev_output));
}
In the
blur <<< gridDims, blockDims >>>(dev_input, dev_output, width, height);
line, at the third <, I get the error from the title and because of it I cannot compile the code. (Others said it is an IntelliSense error, but their programs compiled, while mine does not.)
I also receive this error when I try to compile:
Severity Code Description Project File Line Suppression State Error MSB3721 The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.1\bin\nvcc.exe" -gencode=arch=compute_30,code=\"sm_30,compute_30\" --use-local-env --cl-version 2017 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Tools\MSVC\14.12.25827\bin\HostX86\x64" -x cu -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.1\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.1\include" -G --keep-dir x64\Debug -maxrregcount=0 --machine 64 --compile -cudart static -g -DWIN32 -DWIN64 -D_DEBUG -D_CONSOLE -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od /FS /Zi /RTC1 /MDd " -o x64\Debug\kernel.cu.obj "C:\Users\Artyomska\Documents\Visual Studio 2017\Projects\ScreenFilter\ScreenFilter\kernel.cu"" exited with code 1. ScreenFilter C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\VC\VCTargets\BuildCustomizations\CUDA 9.1.targets 707
I am running the program on Windows 10 with Visual Studio 2017 (the latest version, with the toolset for 15.4 support installed, so I do not get the incompatible-version error). I have tried reinstalling CUDA 9.1.85, reinstalling VS2017, and creating a new project. I added the paths to the NVIDIA Toolkit in the dependencies and library settings, and the code above is in a .cu file.
The problem is that even if I create a new project without changing anything, leaving kernel.cu with its default contents, I still get the expression error at a <<< >>> line.
What should I do to resolve it? Thank you.
-
talonmies over 6 years: So it was an IntelliSense error. If you had set the build output verbosity level higher, you would have seen the actual error from nvcc.