Unable to use an existing Hive permanent UDF from Spark SQL

The issue is that Spark 2.0 cannot execute functions whose JARs are located on HDFS.

Related: Spark SQL: Thriftserver unable to run a registered Hive UDTF

One workaround is to define the function as a temporary function in the Spark job, with the JAR path pointing to a path on the local edge node, and then call the function within the same Spark job.

CREATE TEMPORARY FUNCTION functionName AS 'com.test.HiveUDF' USING JAR '/user/home/dir1/functions.jar'

Author: Rob Cowie

Software developer working with Python, Apache Spark, PostgreSQL, AWS, Google Cloud.

Updated on June 04, 2022

Comments

  • Rob Cowie (almost 2 years ago)

    I have previously registered a UDF with Hive. It is permanent, not TEMPORARY. It works in beeline.

    CREATE FUNCTION normaliseURL AS 'com.example.hive.udfs.NormaliseURL' USING JAR 'hdfs://udfs/hive-udfs.jar';
    

    I have Spark configured to use the Hive metastore. The config is working, as I can query Hive tables. I can see the UDF:

    In [9]: spark.sql('describe function normaliseURL').show(truncate=False)
    +-------------------------------------------+
    |function_desc                              |
    +-------------------------------------------+
    |Function: default.normaliseURL             |
    |Class: com.example.hive.udfs.NormaliseURL  |
    |Usage: N/A.                                |
    +-------------------------------------------+
    

    However, I cannot use the UDF in a SQL statement:

    spark.sql('SELECT normaliseURL("value")')
    AnalysisException: "Undefined function: 'default.normaliseURL'. This function is neither a registered temporary function nor a permanent function registered in the database 'default'.; line 1 pos 7"
    

    If I attempt to register the UDF with Spark (bypassing the metastore), it fails, reporting that the function already exists.

    In [12]: spark.sql("create function normaliseURL as 'com.example.hive.udfs.NormaliseURL'")
    AnalysisException: "Function 'default.normaliseURL' already exists in database 'default';"
    

    I'm using Spark 2.0 and Hive metastore 1.1.0. The UDF is written in Scala; my Spark driver code is Python.

    I'm stumped.

    • Am I correct in my assumption that Spark can utilise metastore-defined permanent UDFs?
    • Am I creating the function correctly in hive?
    • Thiago Baldim (over 7 years ago)
      In your SparkSession.builder, did you define enableHiveSupport()?
    • Rob Cowie (over 7 years ago)
      Yeah, I did. I can see and query Hive-defined tables from Spark, so I assume Hive support is enabled appropriately.
    • Thiago Baldim (over 7 years ago)
      Hmm... Did you define your UDF JAR in the spark-submit or spark-shell call? Like: ./bin/spark-shell --jars <path-to-your-hive-udf>.jar
    • Shankar (over 7 years ago)
      @RobCowie: are you sure the DataType of the value column is the same in the UDF class (in Scala) and in what the query returns?
    • mrsrinivas (over 7 years ago)
      Can you paste the com.example.hive.udfs.NormaliseURL code?
    • mrsrinivas (over 7 years ago)
      Try spark.sql('SELECT normaliseURL("value") from values(1)')
    • eugenio calabrese (almost 7 years ago)
      Hi, I suppose that the JAR you are using for the UDF is not available to Spark, and that is why you get that error. Try to check this answer, as it seems to me to be your problem: stackoverflow.com/questions/43272446/…