SparkSQL vs Hive on Spark - Difference and pros and cons?


Solution 1

  1. When SparkSQL uses Hive

    SparkSQL can use the Hive Metastore to get the metadata of the data stored in HDFS. This metadata enables SparkSQL to better optimize the queries it executes. Here Spark is the query processor.

  2. When Hive uses Spark (see the JIRA entry: HIVE-7292)

    Here the data is accessed via Spark, and Hive is the query processor, so we have all the design features of Spark Core to take advantage of. This is a major improvement for Hive, but it was still "in progress" as of Feb 2, 2016.

  3. There is a third option to process data with SparkSQL

    Use SparkSQL without using Hive. Here SparkSQL does not have access to the metadata in the Hive Metastore, and the queries run slower. I have done some performance tests comparing options 1 and 3; the results are here. A sketch contrasting options 1 and 3 follows this list.
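
To make options 1 and 3 concrete, here is a minimal Scala sketch against the Spark 1.6-era API discussed in these answers (option 2 is configured on the Hive side via hive.execution.engine=spark rather than in Spark code). The table and file names are hypothetical:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setAppName("sparksql-options"))

// Option 1: SparkSQL backed by the Hive Metastore. Spark plans and runs the
// query; only the table metadata (schema, partitions, location) comes from Hive.
val hiveContext = new HiveContext(sc)
hiveContext.sql("SELECT COUNT(*) FROM some_hive_table").show()

// Option 3: SparkSQL without Hive. The schema comes from a case class
// registered with Spark, not from any metastore.
case class LogLine(ip: String, status: Int)
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
val logs = sc.textFile("hdfs:///logs/access.log")
  .map(_.split("\t"))
  .map(a => LogLine(a(0), a(1).toInt))
  .toDF()
logs.registerTempTable("logs")
sqlContext.sql("SELECT status, COUNT(*) FROM logs GROUP BY status").show()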

Solution 2

For SparkSQL vs the Spark API, you can simply imagine you are in the RDBMS world:

SparkSQL is pure SQL, and the Spark API is the language for writing stored procedures.
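
As an illustration, here is the same aggregation written both ways, as a sketch against the Spark 1.6-era API; the "logs" table is hypothetical:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setAppName("sql-vs-api"))
val hiveContext = new HiveContext(sc)

// Pure SQL: what you would type into an RDBMS console.
val bySql = hiveContext.sql("SELECT status, COUNT(*) AS n FROM logs GROUP BY status")

// Spark API: the same logic expressed as "stored procedure"-style code.
val byApi = hiveContext.table("logs").groupBy("status").count()

bySql.show()
byApi.show()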

Hive on Spark is similar to SparkSQL: it is a pure SQL interface that uses Spark as its execution engine. SparkSQL uses Hive's syntax, so as a language, I would say they are almost the same.

But Hive on Spark has much better support for Hive features, especially HiveServer2 and the security features. Hive features in SparkSQL are really buggy: there is a HiveServer2 implementation in SparkSQL, but in the latest release version (1.6.x), HiveServer2 in SparkSQL no longer works with the hivevar and hiveconf arguments, and the username for login via JDBC doesn't work either...

see https://issues.apache.org/jira/browse/SPARK-13983

I believe Hive support in the Spark project is really very low-priority stuff...

Sadly, Hive-on-Spark integration is not that easy; there are a lot of dependency conflicts... such as https://issues.apache.org/jira/browse/HIVE-13301

And when I'm trying the Hive-with-Spark integration, for debugging purposes I always start the Hive CLI like this:

export HADOOP_USER_CLASSPATH_FIRST=true
bin/hive --hiveconf hive.root.logger=DEBUG,console

Our requirement is to use Spark with HiveServer2 in a secure way (with authentication and authorization); currently SparkSQL alone cannot provide this, so we are using Ranger/Sentry + Hive on Spark.
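
For reference, clients reach Hive on Spark through the standard Hive JDBC driver, so Ranger/Sentry policies apply to every query. A minimal Scala sketch, assuming a Kerberized HiveServer2 (the host, principal, and table below are hypothetical):

import java.sql.DriverManager

// Standard Hive JDBC driver; org.apache.hive:hive-jdbc must be on the classpath.
Class.forName("org.apache.hive.jdbc.HiveDriver")

// Kerberos-secured HiveServer2 URL; a valid ticket (kinit) is assumed.
val url = "jdbc:hive2://hs2-host.example.com:10000/default;principal=hive/_HOST@EXAMPLE.COM"
val conn = DriverManager.getConnection(url)
val stmt = conn.createStatement()

// Hive is the query processor here; with hive.execution.engine=spark,
// the actual work runs on Spark.
val rs = stmt.executeQuery("SELECT COUNT(*) FROM some_table")
while (rs.next()) println(rs.getLong(1))
conn.close()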

Hope this helps you get a better idea of which direction you should go.

Solution 3

Here is a related answer I found on the official Hive site:
1.3 Comparison with Shark and Spark SQL

There are two related projects in the Spark ecosystem that provide Hive QL support on Spark: Shark and Spark SQL.

  • The Shark project translates query plans generated by Hive into its own representation and executes them over Spark.

  • Spark SQL is a feature in Spark. It uses Hive’s parser as the frontend to provide Hive QL support. Spark application developers can easily express their data processing logic in SQL, as well as the other Spark operators, in their code. Spark SQL supports a different use case than Hive.

Compared with Shark and Spark SQL, our approach by design supports all existing Hive features, including Hive QL (and any future extension), and Hive’s integration with authorization, monitoring, auditing, and other operational tools. 

3. Hive-Level Design

As noted in the introduction, this project takes a different approach from that of Shark or Spark SQL in the sense that we are not going to implement SQL semantics using Spark's primitives. On the contrary, we will implement it using MapReduce primitives. The only new thing here is that these MapReduce primitives will be executed in Spark. In fact, only a few of Spark's primitives will be used in this design.

The approach of executing Hive’s MapReduce primitives on Spark, which is different from what Shark or Spark SQL does, has the following direct advantages:

  1. Spark users will automatically get the whole set of Hive’s rich features, including any new features that Hive might introduce in the future.

  2. This approach avoids or reduces the necessity of any customization work in Hive’s Spark execution engine.

  3. It will also limit the scope of the project and reduce long-term maintenance by keeping Hive-on-Spark congruent to Hive MapReduce and Tez.
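
To make "MapReduce primitives executed in Spark" concrete, here is a conceptual Scala sketch; this is an illustration of the idea only, not Hive's actual code. The job stays MapReduce-shaped (map stage, shuffle on the key, reduce stage), and Spark merely executes each stage as an RDD transformation:

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("mr-primitives-on-spark"))
val input = sc.textFile("hdfs:///tmp/input.txt") // hypothetical path

// Map stage: emit (key, value) pairs, as a MapReduce mapper would.
val mapped = input.flatMap(_.split("\\s+")).map(word => (word, 1L))

// Shuffle: group all values for a key together, as MapReduce's sort/shuffle does.
val shuffled = mapped.groupByKey()

// Reduce stage: fold each key's values, as a MapReduce reducer would.
val reduced = shuffled.mapValues(_.sum)

reduced.take(10).foreach(println)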


Comments

  • Gaurav Khare
    Gaurav Khare over 3 years

    SparkSQL CLI internally uses HiveQL, and in the case of Hive on Spark (HIVE-7292), Hive uses Spark as the backend engine. Can somebody shed some more light on how exactly these two scenarios differ, and on the pros and cons of both approaches?

  • Gaurav Khare
    Gaurav Khare about 8 years
    In the 3rd option, if SparkSQL does not use the Hive metastore (which by default is Derby, or can be MySQL), do we need to set up a metastore for SparkSQL as well, just like we used to do for Hive?
  • prajod
    prajod about 8 years
    No need to set up a metastore for SparkSQL. SparkSQL can be used in many ways, but in this option a metastore setup is not required. Even without setting up the Hive Metastore connection, we can use the HiveContext class from the sparksql-cli. Another way to look at this concept: SparkSQL can use other data sources, like CSV files, where the Metastore concept is not relevant.
  • Gaurav Khare
    Gaurav Khare about 8 years
    If a metastore is not required, then how does Spark SQL keep track of table metadata? It has to store that information somewhere. If it takes the help of HiveContext, that means it still relies on the Hive metastore.
  • prajod
    prajod about 8 years
    Please see the example code for Option 3. There you register a Java/Scala class with Spark; this class declares the data types of your file-based data, similar to what you would do with Hibernate in a JEE program. Search for registerTempTable in this link.
  • shriyog
    shriyog about 7 years
    I'm using Spark SQL to run a join query. I assume Spark SQL uses Spark as its execution engine, so why am I getting this line in my logs: INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
  • lxyscls
    lxyscls about 5 years
    @shriyog I think you are using Hive as the data warehouse, and SparkSQL does not limit it to using Tez as the data engine. SparkSQL queries data from Hive, and Hive uses Spark as the data engine. It is a headache of a mess.