java.io.IOException: Cannot run program "python" using Spark in Pycharm (Windows)


Solution 1

After struggling with this for two days, I figured out what the problem was. I added the following to the PATH variable as a Windows environment variable:

C:/Spark/spark-1.4.1-bin-hadoop2.6/python/pyspark
C:\Python27

Remember, you need to change the directories to wherever Spark and Python are installed on your machine. I should also mention that I am using a prebuilt version of Spark which has Hadoop included.
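
If you would rather not edit the system-wide PATH, the same idea can be applied from inside the script before the SparkContext is created. A minimal sketch, using the example directories from this answer (adjust them to your own install):

import os

# Prepend the Spark and Python directories to PATH for this process only,
# so that the worker processes Spark launches can find python.exe.
os.environ["PATH"] = (
    r"C:\Spark\spark-1.4.1-bin-hadoop2.6\python\pyspark;"
    r"C:\Python27;"
    + os.environ.get("PATH", "")
)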

Best of luck to you all.

Solution 2

I had the same problem as you, and I fixed it with the following changes: set PYSPARK_PYTHON as an environment variable pointing to python.exe in PyCharm's Edit Configurations dialog. Here is my example:

PYSPARK_PYTHON = D:\Anaconda3\python.exe

SPARK_HOME = D:\spark-1.6.3-bin-hadoop2.6

PYTHONUNBUFFERED = 1
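
These same variables can also be set from inside the script, before the SparkContext is created, if you prefer not to depend on PyCharm's run configuration. A minimal sketch, assuming the example paths from this answer:

import os

# Example paths from this answer; adjust them to your machine.
os.environ["PYSPARK_PYTHON"] = r"D:\Anaconda3\python.exe"
os.environ["SPARK_HOME"] = r"D:\spark-1.6.3-bin-hadoop2.6"
os.environ["PYTHONUNBUFFERED"] = "1"

from pyspark import SparkContext
sc = SparkContext()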

Solution 3

I have faced this problem; it was caused by Python version conflicts across the different nodes of the cluster, so it can be solved by

export PYSPARK_PYTHON=/usr/bin/python

so that the same Python version is used on every node, and then start:

pyspark
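
As a quick sanity check (not part of the original answer), you can ask every executor which interpreter version it sees; more than one distinct value means the nodes still disagree:

import platform
from pyspark import SparkContext

sc = SparkContext()
versions = (sc.parallelize(range(100), 10)
              .map(lambda _: platform.python_version())
              .distinct()
              .collect())
print(versions)  # e.g. ['2.7.12'] when every node runs the same Python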

Comments

  • ahajib almost 2 years ago

    I am trying to write a very simple piece of code using Spark in PyCharm, and my OS is Windows 8. I have been dealing with several problems, which I somehow managed to fix, except for one. When I run the code using pyspark.cmd everything works smoothly, but I have had no luck with the same code in PyCharm. There was a problem with the SPARK_HOME variable, which I fixed using the following code:

    # Point the script at the local Spark install and make pyspark importable
    import sys
    import os
    os.environ['SPARK_HOME'] = "C:/Spark/spark-1.4.1-bin-hadoop2.6"
    sys.path.append("C:/Spark/spark-1.4.1-bin-hadoop2.6/python")
    sys.path.append('C:/Spark/spark-1.4.1-bin-hadoop2.6/python/pyspark')
    

    So now I can import pyspark and everything is fine:

    from pyspark import SparkContext
    

    The problem arises when I want to run the rest of my code:

    logFile = "C:/Spark/spark-1.4.1-bin-hadoop2.6/README.md"
    sc = SparkContext()
    logData = sc.textFile(logFile).cache()
    logData.count()
    

    I receive the following error:

    15/08/27 12:04:15 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
    java.io.IOException: Cannot run program "python": CreateProcess error=2, The system cannot find the file specified
    

    I have added the Python path as an environment variable, and it works properly from the command line, but I could not figure out what my problem is with this code. Any help or comment is much appreciated.

    Thanks

  • Adithya over 8 years ago
    I tried adding the above line to my PATH variable but still no success. I am still getting the error while executing from Eclipse. My PATH variable looks like: %{EXISTING_PATH}%;%PY_HOME%;%PY_HOME%\Scripts;%SPARK_HOME%\bin;%SPARK_HOME%\python\pyspark. I also tried putting the Spark variables first and Python second, as in %{EXISTING_PATH}%;%SPARK_HOME%\bin;%SPARK_HOME%\python\pyspark;%{EXISTING_PATH}%;%PY_HOME%;%PY_HOME%\Scripts;%SPARK_HOME%\bin;%SPARK_HOME%\python\pyspark. %PY_HOME% = C:\Python2.7.11, %SPARK_HOME% = C:\Spark1.6Hadoop2.6
  • Adithya over 8 years ago
    I have done everything from the following link: enahwe.blogspot.in/p/…