Confluent Kafka Hdfs Sink With Hive Integration

Exception:

[2017-11-10 08:32:32,183] ERROR Task hdfs-sink-prqt-stndln-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSinkTask:455)
java.lang.RuntimeException: java.util.concurrent.ExecutionException: io.confluent.connect.hdfs.errors.HiveMetaStoreException: Invalid partition for default.srp-oracle-jdbc-stdln-raw-KFK_SRP_HDFS_SINK_TEST: partition=0
    at io.confluent.connect.hdfs.DataWriter.write(DataWriter.java:226)
    at io.confluent.connect.hdfs.HdfsSinkTask.put(HdfsSinkTask.java:103)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:435)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: io.confluent.connect.hdfs.errors.HiveMetaStoreException: Invalid partition for default.srp-oracle-jdbc-stdln-raw-KFK_SRP_HDFS_SINK_TEST: partition=0
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at io.confluent.connect.hdfs.DataWriter.write(DataWriter.java:220)
    ... 12 more
Caused by: io.confluent.connect.hdfs.errors.HiveMetaStoreException: Invalid partition for default.srp-oracle-jdbc-stdln-raw-KFK_SRP_HDFS_SINK_TEST: partition=0
    at io.confluent.connect.hdfs.hive.HiveMetaStore.addPartition(HiveMetaStore.java:107)
    at io.confluent.connect.hdfs.TopicPartitionWriter$3.call(TopicPartitionWriter.java:662)
    at io.confluent.connect.hdfs.TopicPartitionWriter$3.call(TopicPartitionWriter.java:659)
    ... 4 more
Caused by: InvalidObjectException(message:default.srp-oracle-jdbc-stdln-raw-KFK_SRP_HDFS_SINK_TEST table not found)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$append_partition_by_name_with_environment_context_result$append_partition_by_name_with_environment_context_resultStandardScheme.read(ThriftHiveMetastore.java:51619)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$append_partition_by_name_with_environment_context_result$append_partition_by_name_with_environment_context_resultStandardScheme.read(ThriftHiveMetastore.java:51596)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$append_partition_by_name_with_environment_context_result.read(ThriftHiveMetastore.java:51519)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_append_partition_by_name_with_environment_context(ThriftHiveMetastore.java:1667)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.append_partition_by_name_with_environment_context(ThriftHiveMetastore.java:1651)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.appendPartition(HiveMetaStoreClient.java:606)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.appendPartition(HiveMetaStoreClient.java:600)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)
    at com.sun.proxy.$Proxy49.appendPartition(Unknown Source)
    at io.confluent.connect.hdfs.hive.HiveMetaStore$1.call(HiveMetaStore.java:97)
    at io.confluent.connect.hdfs.hive.HiveMetaStore$1.call(HiveMetaStore.java:91)
    at io.confluent.connect.hdfs.hive.HiveMetaStore.doAction(HiveMetaStore.java:87)
    at io.confluent.connect.hdfs.hive.HiveMetaStore.addPartition(HiveMetaStore.java:103)
    ... 6 more

 

Solution:

Hive identifiers don't allow any special character other than "_" (underscore), but the HDFS connector derives the Hive table name from the Kafka topic name, so the hyphens in the topic make it an invalid table name and the metastore reports "table not found". You should be good to go once you rename the topic at the Kafka source to use underscores instead, e.g. srp_oracle_jdbc_stdln_raw_KFK_SRP_HDFS_SINK_TEST.
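To catch this before the connector fails, you can check topic names against the underscores-letters-digits rule up front. Below is a minimal sketch (the helper names `HIVE_IDENTIFIER` and `hive_safe_topic` are my own, not part of the connector) that flags an incompatible topic and shows one way to derive a Hive-safe name from it:

```python
import re

# Hive table identifiers may contain only letters, digits, and
# underscores; anything else (like the hyphens in this topic) is
# rejected by the metastore.
HIVE_IDENTIFIER = re.compile(r"^[A-Za-z0-9_]+$")

def hive_safe_topic(topic: str) -> str:
    """Return a Hive-compatible name by replacing every character
    Hive rejects with an underscore."""
    return re.sub(r"[^A-Za-z0-9_]", "_", topic)

topic = "srp-oracle-jdbc-stdln-raw-KFK_SRP_HDFS_SINK_TEST"
print(HIVE_IDENTIFIER.match(topic) is not None)  # False: hyphens are invalid
print(hive_safe_topic(topic))
# srp_oracle_jdbc_stdln_raw_KFK_SRP_HDFS_SINK_TEST
```

Note this only validates the name; the actual fix is still to recreate the topic with the underscore form, since the connector maps topic names to table names directly.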
