Test tensorflow-gpu failed with Status: CUDA driver version is insufficient for CUDA runtime version (which is not true)


I've already found the solution myself. My mistake was that I created the Anaconda environment with a Python version greater than 3.5. Under these circumstances, tensorflow-gpu=1.13 is installed when you run the following command:

conda install -c anaconda tensorflow-gpu

However, if you create the environment with python=3.5, tensorflow-gpu=1.10 is installed instead, which works with this CUDA version.
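To make the mismatch concrete: the tensorflow-gpu=1.13 conda package is built against the CUDA 10.0 runtime, which on Linux requires driver 410.48 or newer, while driver 396.37 only covers runtimes up to CUDA 9.2 (per NVIDIA's release notes linked in the question). The `driver_sufficient` helper below is a purely illustrative sketch, not a real API, with minimum-driver values copied from those release notes:

```python
# Minimum Linux driver version required by each CUDA runtime,
# per NVIDIA's CUDA Toolkit release notes (illustrative subset).
MIN_DRIVER = {
    "9.0": (384, 81),
    "9.2": (396, 26),
    "10.0": (410, 48),
}

def driver_sufficient(driver, runtime):
    """Return True if the installed driver can serve the given CUDA runtime."""
    major, minor = (int(part) for part in driver.split(".")[:2])
    return (major, minor) >= MIN_DRIVER[runtime]

# Driver 396.37 handles the CUDA 9.2 runtime (tensorflow-gpu=1.10)...
print(driver_sufficient("396.37", "9.2"))   # True
# ...but not the CUDA 10.0 runtime that tensorflow-gpu=1.13 targets,
# which is exactly the "driver version is insufficient" error above.
print(driver_sufficient("396.37", "10.0"))  # False
```

So pinning python=3.5 works because it forces conda to resolve the older tensorflow-gpu=1.10 build, whose CUDA runtime the 396.37 driver can actually serve.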


Author: Stefan Renard

Updated on September 18, 2022

Comments

  • Stefan Renard (over 1 year ago)

    I have the following configuration:

    • SUSE Linux Enterprise Server 12 SP3 (x86_64)
    • CUDA Toolkit: CUDA 9.2 (9.2.148 Update 1)
    • CUDA Driver Version: 396.37

    According to NVIDIA, this combination is correct (https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#major-components).

    I set up a new environment with Anaconda and installed tensorflow-gpu in it:

    conda create -n keras python=3.6.8 anaconda
    conda install -c anaconda tensorflow-gpu
    

    But when I then check the installation in the Python console:

    import tensorflow as tf
    sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
    

    I get the following error:

    2019-04-17 15:23:45.753926: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA

    2019-04-17 15:23:45.793109: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2600180000 Hz

    2019-04-17 15:23:45.798218: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x561f42601240 executing computations on platform Host. Devices:

    2019-04-17 15:23:45.798258: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,

    2019-04-17 15:23:45.981727: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x561f426ad9b0 executing computations on platform CUDA. Devices:

    2019-04-17 15:23:45.981777: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): Tesla K40c, Compute Capability 3.5

    2019-04-17 15:23:45.982175: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:

    name: Tesla K40c major: 3 minor: 5 memoryClockRate(GHz): 0.745 pciBusID: 0000:06:00.0 totalMemory: 11.17GiB freeMemory: 11.09GiB

    2019-04-17 15:23:45.982206: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0

    Traceback (most recent call last):
      File "", line 1, in 
      File "/home/fuchs/.conda/envs/keras/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1551, in __init__
        super(Session, self).__init__(target, graph, config=config)
      File "/home/fuchs/.conda/envs/keras/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 676, in __init__
        self._session = tf_session.TF_NewSessionRef(self._graph._c_graph, opts)
    tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version

    I've been looking at solutions from others with this problem, but for most of them the cause was a mismatch between the CUDA Toolkit and driver versions, which is not the case for me.

    I'd really appreciate the help.