Hadoop: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected


Solution 1

Hadoop went through a huge code refactoring from Hadoop 1.0 to Hadoop 2.0. One side effect is that code compiled against Hadoop 1.0 is not binary-compatible with Hadoop 2.0 and vice versa. The source code, however, is mostly compatible, so you usually just need to recompile your code against the target Hadoop distribution.

The exception "Found interface X, but class was expected" is very common when you're running code that is compiled for Hadoop 1.0 on Hadoop 2.0 or vice-versa.

Find the exact Hadoop version used in the cluster, specify that version in your pom.xml, build your project against it, and deploy the resulting jar.
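
For example, running "hadoop version" on a cluster node prints the exact build. Below is a minimal sketch of how the pom.xml could then pin every Hadoop artifact to that version through a single property; the 2.2.0 value is only a placeholder for whatever your cluster reports:

    <properties>
      <!-- Set to the version reported by the cluster's "hadoop version" command -->
      <hadoop.version>2.2.0</hadoop.version>
    </properties>

    <dependencies>
      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
      </dependency>
      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
        <version>${hadoop.version}</version>
      </dependency>
    </dependencies>

With a single property, a later cluster upgrade means changing one line and rebuilding, rather than hunting down hard-coded version numbers.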

Solution 2

You need to recompile "hcatalog-core" to support Hadoop 2.0.0; currently "hcatalog-core" only supports Hadoop 1.0.
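
As a sketch, assuming you rebuild "hcatalog-core" against Hadoop 2 and install the result into your local or internal Maven repository under a custom version label (the 0.12.0-hadoop2 label below is purely hypothetical), the dependency would then look like:

    <dependency>
      <groupId>org.apache.hive.hcatalog</groupId>
      <artifactId>hcatalog-core</artifactId>
      <!-- Hypothetical label for a locally rebuilt, Hadoop-2-compatible jar -->
      <version>0.12.0-hadoop2</version>
    </dependency>

Running "mvn dependency:tree" afterwards is a quick way to confirm that no Hadoop 1.x artifact is still being pulled in transitively.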

Author: dokondr

Updated on July 09, 2022

Comments

  • dokondr, almost 2 years ago

    My MapReduce job runs fine when assembled in Eclipse with all possible Hadoop and Hive jars included in the Eclipse project as dependencies. (These are the jars that come with a single-node, local Hadoop installation.)

    Yet when trying to run the same program assembled using the Maven project below, I get:

     Exception in thread "main" java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
    

    This exception happens when the program is assembled using the following Maven project:

    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <modelVersion>4.0.0</modelVersion>
    
      <groupId>com.bigdata.hadoop</groupId>
      <artifactId>FieldCounts</artifactId>
      <version>0.0.1-SNAPSHOT</version>
      <packaging>jar</packaging>
    
      <name>FieldCounts</name>
      <url>http://maven.apache.org</url>
    
      <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
      </properties>
    
      <dependencies>
        <dependency>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
          <version>3.8.1</version>
          <scope>test</scope>
        </dependency>
         <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.2.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.2.0</version>
        </dependency>
        <dependency>
          <groupId>org.apache.hadoop</groupId>
          <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
          <version>2.2.0</version>
        </dependency>
        <dependency>
          <groupId>org.apache.hive.hcatalog</groupId>
          <artifactId>hcatalog-core</artifactId>
          <version>0.12.0</version>
        </dependency>
        <dependency>
          <groupId>com.google.guava</groupId>
          <artifactId>guava</artifactId>
          <version>16.0.1</version>
        </dependency>
      </dependencies>
      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>2.3.2</version>
            <configuration>
              <source>${jdk.version}</source>
              <target>${jdk.version}</target>
            </configuration>
          </plugin>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-assembly-plugin</artifactId>
            <executions>
              <execution>
                <goals>
                  <goal>attached</goal>
                </goals>
                <phase>package</phase>
                <configuration>
                  <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                  </descriptorRefs>
                  <archive>
                    <manifest>
                      <mainClass>com.bigdata.hadoop.FieldCounts</mainClass>
                    </manifest>
                  </archive>
                </configuration>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </project>
    

    Please advise where and how to find compatible Hadoop jars?

    [update_1] I am running Hadoop 2.2.0.2.0.6.0-101

    As I have found here: https://github.com/kevinweil/elephant-bird/issues/247

    Hadoop 1.0.3: JobContext is a class

    Hadoop 2.0.0: JobContext is an interface

    In my pom.xml I have three Hadoop jars, all with version 2.2.0, plus hcatalog-core:

    hadoop-hdfs 2.2.0
    hadoop-common 2.2.0
    hadoop-mapreduce-client-jobclient 2.2.0
    hcatalog-core 0.12.0
    

    The only exception is hcatalog-core, whose version is 0.12.0; I could not find any more recent version of this jar, and I need it!

    How can I find which of these 4 jars produces java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected?

    Please give me an idea of how to solve this. (The only solution I see is to compile everything from source!)

    [/update_1]

    Full text of my MapReduce job:

    package com.bigdata.hadoop;
    
    import java.io.IOException;
    import java.util.*;
    
    import org.apache.hadoop.conf.*;
    import org.apache.hadoop.io.*;
    import org.apache.hadoop.mapreduce.*;
    import org.apache.hadoop.util.*;
    import org.apache.hcatalog.mapreduce.*;
    import org.apache.hcatalog.data.*;
    import org.apache.hcatalog.data.schema.*;
    import org.apache.log4j.Logger;
    
    public class FieldCounts extends Configured implements Tool {
    
        public static class Map extends Mapper<WritableComparable, HCatRecord, TableFieldValueKey, IntWritable> {
    
            static Logger logger = Logger.getLogger("com.foo.Bar");
    
            static boolean firstMapRun = true;
            static List<String> fieldNameList = new LinkedList<String>();
            /**
             * Return a list of field names not containing `id` field name
             * @param schema
             * @return
             */
            static List<String> getFieldNames(HCatSchema schema) {
                // Filter out `id` name just once
                if (firstMapRun) {
                    firstMapRun = false;
                    List<String> fieldNames = schema.getFieldNames();
                    for (String fieldName : fieldNames) {
                        if (!fieldName.equals("id")) {
                            fieldNameList.add(fieldName);
                        }
                    }
                } // if (firstMapRun)
                return fieldNameList;
            }
    
            @Override
          protected void map( WritableComparable key,
                              HCatRecord hcatRecord,
                              //org.apache.hadoop.mapreduce.Mapper
                              //<WritableComparable, HCatRecord, Text, IntWritable>.Context context)
                              Context context)
                throws IOException, InterruptedException {
    
                HCatSchema schema = HCatBaseInputFormat.getTableSchema(context.getConfiguration());
    
               //String schemaTypeStr = schema.getSchemaAsTypeString();
               //logger.info("******** schemaTypeStr ********** : "+schemaTypeStr);
    
               //List<String> fieldNames = schema.getFieldNames();
                List<String> fieldNames = getFieldNames(schema);
                for (String fieldName : fieldNames) {
                    Object value = hcatRecord.get(fieldName, schema);
                    String fieldValue = null;
                    if (null == value) {
                        fieldValue = "<NULL>";
                    } else {
                        fieldValue = value.toString();
                    }
                    //String fieldNameValue = fieldName+"."+fieldValue;
                    //context.write(new Text(fieldNameValue), new IntWritable(1));
                    TableFieldValueKey fieldKey = new TableFieldValueKey();
                    fieldKey.fieldName = fieldName;
                    fieldKey.fieldValue = fieldValue;
                    context.write(fieldKey, new IntWritable(1));
                }
    
            }       
        }
    
        public static class Reduce extends Reducer<TableFieldValueKey, IntWritable,
                                           WritableComparable, HCatRecord> {
    
            protected void reduce( TableFieldValueKey key,
                                   java.lang.Iterable<IntWritable> values,
                                   Context context)
                                   //org.apache.hadoop.mapreduce.Reducer<Text, IntWritable,
                                   //WritableComparable, HCatRecord>.Context context)
                throws IOException, InterruptedException {
                Iterator<IntWritable> iter = values.iterator();
                int sum = 0;
                // Sum up occurrences of the given key 
                while (iter.hasNext()) {
                    IntWritable iw = iter.next();
                    sum = sum + iw.get();
                }
    
                HCatRecord record = new DefaultHCatRecord(3);
                record.set(0, key.fieldName);
                record.set(1, key.fieldValue);
                record.set(2, sum);
    
                context.write(null, record);
            }
        }
    
        public int run(String[] args) throws Exception {
            Configuration conf = getConf();
            args = new GenericOptionsParser(conf, args).getRemainingArgs();
    
            // To fix Hadoop "META-INFO" (http://stackoverflow.com/questions/17265002/hadoop-no-filesystem-for-scheme-file)
            conf.set("fs.hdfs.impl",
                    org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
            conf.set("fs.file.impl",
                    org.apache.hadoop.fs.LocalFileSystem.class.getName());
    
            // Get the input and output table names as arguments
            String inputTableName = args[0];
            String outputTableName = args[1];
            // Assume the default database
            String dbName = null;
    
            Job job = new Job(conf, "FieldCounts");
    
            HCatInputFormat.setInput(job,
                    InputJobInfo.create(dbName, inputTableName, null));
            job.setJarByClass(FieldCounts.class);
            job.setMapperClass(Map.class);
            job.setReducerClass(Reduce.class);
    
            // An HCatalog record as input
            job.setInputFormatClass(HCatInputFormat.class);
    
            // Mapper emits TableFieldValueKey as key and an integer as value
            job.setMapOutputKeyClass(TableFieldValueKey.class);
            job.setMapOutputValueClass(IntWritable.class);
    
            // Ignore the key for the reducer output; emitting an HCatalog record as
            // value
            job.setOutputKeyClass(WritableComparable.class);
            job.setOutputValueClass(DefaultHCatRecord.class);
            job.setOutputFormatClass(HCatOutputFormat.class);
    
            HCatOutputFormat.setOutput(job,
                    OutputJobInfo.create(dbName, outputTableName, null));
            HCatSchema s = HCatOutputFormat.getTableSchema(job);
            System.err.println("INFO: output schema explicitly set for writing:"
                    + s);
            HCatOutputFormat.setSchema(job, s);
            return (job.waitForCompletion(true) ? 0 : 1);
        }
    
        public static void main(String[] args) throws Exception {
            String classpath = System.getProperty("java.class.path");
            //System.out.println("*** CLASSPATH: "+classpath);       
            int exitCode = ToolRunner.run(new FieldCounts(), args);
            System.exit(exitCode);
        }
    }
    

    And the class for the complex key:

    package com.bigdata.hadoop;
    
    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    
    import org.apache.hadoop.io.WritableComparable;
    
    import com.google.common.collect.ComparisonChain;
    
    public class TableFieldValueKey  implements WritableComparable<TableFieldValueKey> {
    
          public String fieldName;
          public String fieldValue;
    
          public TableFieldValueKey() {} //must have a default constructor
          //
    
          public void readFields(DataInput in) throws IOException {
            fieldName = in.readUTF();
            fieldValue = in.readUTF();
          }
    
          public void write(DataOutput out) throws IOException {
            out.writeUTF(fieldName);
            out.writeUTF(fieldValue);
          }
    
          public int compareTo(TableFieldValueKey o) {
            return ComparisonChain.start().compare(fieldName, o.fieldName)
                .compare(fieldValue, o.fieldValue).result();
          }
    
        }