How to get the value of the location for a Hive table using a Spark object?


Solution 1

First approach

You can use the input_file_name function with a DataFrame.

It gives you the absolute file path of the part file backing each row.

import org.apache.spark.sql.functions.input_file_name

spark.read.table("zen.intent_master").select(input_file_name()).take(1)

And then extract the table path from it.
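The extraction step itself is plain string work, so it can be sketched without a SparkSession. The helper below is illustrative only (not a Spark API), and it assumes the usual Hive layout of table-path, optional partition directories, then the part file:

```python
def table_path_from_file(file_path):
    """Illustrative helper (not a Spark API): recover the table directory
    from a part-file path returned by input_file_name(). Assumes the layout
    <table-path>[/<partition>=<value>...]/part-xxxxx."""
    segments = file_path.split("/")[:-1]  # drop the part-file name
    table_segments = []
    for seg in segments:
        if "=" in seg:  # partition directories look like dt=2022-01-01
            break
        table_segments.append(seg)
    return "/".join(table_segments)
```

For example, table_path_from_file("hdfs://nn:8020/wh/zen.db/intent_master/dt=2022-01-01/part-00000") gives back "hdfs://nn:8020/wh/zen.db/intent_master".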

Second approach

This one is more of a hack, you could say.

package org.apache.spark.sql.hive

import java.net.URI

import org.apache.spark.sql.catalyst.catalog.{InMemoryCatalog, SessionCatalog}
import org.apache.spark.sql.catalyst.parser.ParserInterface
import org.apache.spark.sql.internal.{SessionState, SharedState}
import org.apache.spark.sql.SparkSession

class TableDetail {
  def getTableLocation(table: String, spark: SparkSession): URI = {
    val sessionState: SessionState = spark.sessionState
    val sharedState: SharedState = spark.sharedState
    val catalog: SessionCatalog = sessionState.catalog
    val sqlParser: ParserInterface = sessionState.sqlParser
    val client = sharedState.externalCatalog match {
      case catalog: HiveExternalCatalog => catalog.client
      case _: InMemoryCatalog => throw new IllegalArgumentException("In Memory catalog doesn't " +
        "support hive client API")
    }

    val idtfr = sqlParser.parseTableIdentifier(table)

    require(catalog.tableExists(idtfr), s"Table $idtfr does not exist")
    val rawTable = client.getTable(idtfr.database.getOrElse("default"), idtfr.table)
    rawTable.location
  }
}

Solution 2

You can also call .toDF on the output of desc formatted <table> and then filter the resulting DataFrame.

DataFrame API:

scala> :paste
spark.sql("desc formatted data_db.part_table")
.toDF // the resulting DataFrame has 3 columns: col_name, data_type, comment
.filter('col_name === "Location") // keep only the Location row
.collect()(0)(1)
.toString

Result:

String = hdfs://nn:8020/location/part_table

(or)

RDD API:

scala> :paste
spark.sql("desc formatted data_db.part_table")
.collect()
.filter(r => r(0).equals("Location")) // filter on the r(0) value
.map(r => r(1)) // keep only the location
.mkString // convert to a string
.split("8020")(1) // adjust the split for your namenode port, etc.

Result:

String = /location/part_table
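Splitting on the literal port is brittle; a more portable way to strip the scheme and authority is to parse the value as a URI. A minimal sketch in Python (the helper name is my own):

```python
from urllib.parse import urlparse

def strip_scheme(location):
    """Illustrative helper: drop the scheme and authority (e.g. hdfs://nn:8020)
    from a Location value, keeping just the filesystem path."""
    return urlparse(location).path
```

strip_scheme("hdfs://nn:8020/location/part_table") returns "/location/part_table" regardless of the namenode host or port, and it also handles file:/ paths.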

Solution 3

Here is the correct answer:

import org.apache.spark.sql.catalyst.TableIdentifier

lazy val tblMetadata = spark.sessionState.catalog.getTableMetadata(TableIdentifier(tableName, Some(schema)))

The table location is then available as tblMetadata.location (a java.net.URI).

Solution 4

Here is how to do it in PySpark:

 (spark.sql("desc formatted mydb.myschema")
       .filter("col_name=='Location'")
       .collect()[0].data_type)   
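The row-scanning part can be factored into a plain function over the already-collected rows, so it is testable without a SparkSession. The function name is my own, and it assumes desc formatted output keeps a row whose col_name is "Location" (true on current Spark releases, but not guaranteed across versions):

```python
from collections import namedtuple

# Stand-in for pyspark.sql.Row in this sketch; real collected rows expose the
# same attribute names (col_name, data_type, comment).
DescRow = namedtuple("DescRow", ["col_name", "data_type", "comment"])

def location_from_desc_rows(rows):
    """Pull the Location value out of collected `desc formatted` rows,
    raising instead of silently returning the wrong row."""
    for row in rows:
        if row.col_name.strip() == "Location":
            return row.data_type
    raise ValueError("no Location row in desc formatted output")
```

With a live session you would call it as location_from_desc_rows(spark.sql("desc formatted mydb.myschema").collect()).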

Solution 5

Use this as a reusable function in your Scala project:

import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

def getHiveTablePath(tableName: String, spark: SparkSession): String = {
  val sql = String.format("desc formatted %s", tableName)
  val result: DataFrame = spark.sql(sql).filter(col("col_name") === "Location")
  result.show(false) // just for debugging
  val info: String = result.collect().mkString(",")
  val path: String = info.split(',')(1) // the row renders as [Location,<path>,], so index 1 is the path
  path
}

The caller would be:

    println(getHiveTablePath("src", spark)) // prefix the schema if you have one

Result (I executed this locally, so the path starts with file:/ below; on HDFS it would start with hdfs://):

+--------+------------------------------------+-------+
|col_name|data_type                           |comment|
+--------+------------------------------------+-------+
|Location|file:/Users/hive/spark-warehouse/src|       |
+--------+------------------------------------+-------+

file:/Users/hive/spark-warehouse/src

Author by

code

Updated on July 18, 2022

Comments

  • code
    code almost 2 years

    I am interested in being able to retrieve the location value of a Hive table given a Spark object (SparkSession). One way to obtain this value is by parsing the output of the location via the following SQL query:

    describe formatted <table name>
    

    I was wondering if there is another way to obtain the location value without having to parse the output. An API would be great in case the output of the above command changes between Hive versions. If an external dependency is needed, which would it be? Is there some sample spark code that can obtain the location value?

  • code
    code over 5 years
    What if the hive table doesn't have any files? How can I get the location value for it?
  • Kaushal
    Kaushal over 5 years
    @codeshark I have updated answer with second approach, hope this will work in your case.
  • Guillaume
    Guillaume almost 5 years
    What is "input_file_name"?
  • Kaushal
    Kaushal almost 5 years
    It is a Spark function. You can use it with import org.apache.spark.sql.functions._
  • Kaushal
    Kaushal almost 5 years
    You can look at the documentation for more detail. spark.apache.org/docs/2.0.0/api/scala/…
  • pltc
    pltc over 4 years
    @GuilhermedeLazari here it is spark._jsparkSession.sessionState().catalog().getTableMetadata(spark.sparkContext._jvm.org.apache.spark.sql.catalyst.TableIdentifier('table', spark.sparkContext._jvm.scala.Some('database'))).storage().locationUri().get()
  • Pavan_Obj
    Pavan_Obj about 4 years
    Thank you for the paste mode :)
  • Vikas Saxena
    Vikas Saxena about 3 years
    @Kaushal is it possible to fetch sizes for each of these files fetched using input_file_name?
  • Ravindra
    Ravindra almost 2 years
    I prefer this over submitting a new spark job (describe table).