Solved: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try

Error :
Caused by: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[172.27.10.191:50010,DS-51d68378-35de-4c70-b27e-7a98a53919cc,DISK], DatanodeInfoWithStorage[172.27.10.223:50010,DS-f172c682-713d-4a8f-b8af-69198ddc6756,DISK]], original=[DatanodeInfoWithStorage[172.27.10.191:50010,DS-51d68378-35de-4c70-b27e-7a98a53919cc,DISK], DatanodeInfoWithStorage[172.27.10.223:50010,DS-f172c682-713d-4a8f-b8af-69198ddc6756,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:925)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:988)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1156)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:454)…
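Solution : The message itself names the relevant client setting. On small clusters (roughly three or fewer datanodes) a common workaround is to relax the replacement policy on the client side. A minimal sketch of the hdfs-site.xml entries, assuming Hadoop 2.x property names:

<!-- hdfs-site.xml (client side): keep writing on the surviving datanodes
     instead of failing when no replacement datanode can be found -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>

Note that NEVER trades durability for availability, since the pipeline continues with fewer replicas; it suits dev and small test clusters rather than production.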

Unable to connect to HBase shell

Problem :
smartechies.mac $ hbase shell
ArgumentError: wrong number of arguments (0 for 1)
method_added at file:/usr/local/Cellar/hbase/1.2.6/libexec/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:10
method_added at file:/usr/local/Cellar/hbase/1.2.6/libexec/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:129
Pattern at file:/usr/local/Cellar/hbase/1.2.6/libexec/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:2
(root) at file:/usr/local/Cellar/hbase/1.2.6/libexec/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:1
require at org/jruby/RubyKernel.java:1062
(root) at file:/usr/local/Cellar/hbase/1.2.6/libexec/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:42
(root) at /usr/local/Cellar/hbase/1.2.6/libexec/bin/../bin/hirb.rb:38

Solution : Point the JAVA_HOME variable to the correct value in conf/hbase-env.sh.

Example :
smartechies.mac $ cat ~/.profile | grep JAVA_HOME
export…
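This JRuby 1.6.8 failure typically appears when HBase 1.2.x is launched with a newer JDK (Java 9+); pointing it at Java 8 resolves it. A sketch of the hbase-env.sh line on macOS, assuming the Homebrew layout from the trace and that a Java 8 JDK is installed:

# conf/hbase-env.sh: resolve the installed Java 8 JDK via the macOS helper
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)

After editing, restart HBase (bin/stop-hbase.sh, then bin/start-hbase.sh) and retry hbase shell.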

Create Superuser in HUE

[ec2-user@ip-123-45-67-890 ~]$ ls -ltr /usr/lib/hue/build/env/bin/hue
-rwxr-xr-x 1 root root 523 Sep 22 22:09 /usr/lib/hue/build/env/bin/hue
[ec2-user@ip-123-45-67-890 ~]$ sudo /usr/lib/hue/build/env/bin/hue createsuperuser
Username (leave blank to use 'root'): sudhir
Email address: mail2sudhir.online@gmail.com
Password:
Password (again):
Superuser created successfully.
[ec2-user@ip-123-45-67-890 ~]$
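The hue binary is a Django management entrypoint, so the standard account commands should be available as well. For example, to reset the password of an existing user (a sketch; the username is illustrative):

[ec2-user@ip-123-45-67-890 ~]$ sudo /usr/lib/hue/build/env/bin/hue changepassword sudhir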

Solved: permission denied, open '/home/ubuntu/.config/configstore/bower-github.json' on Ubuntu

Error :
Error: EACCES: permission denied, open '/home/ubuntu/.config/configstore/bower-github.json'
You don't have access to this file.
at Error (native)
at Object.fs.openSync (fs.js:549:18)
at Object.fs.readFileSync (fs.js:397:15)
at Object.create.all.get (/usr/local/lib/node_modules/bower/lib/node_modules/configstore/index.js:35:26)
at Object.Configstore (/usr/local/lib/node_modules/bower/lib/node_modules/configstore/index.js:28:44)
at readCachedConfig (/usr/local/lib/node_modules/bower/lib/config.js:19:23)
at defaultConfig (/usr/local/lib/node_modules/bower/lib/config.js:11:12)
at Object.<anonymous> (/usr/local/lib/node_modules/bower/lib/index.js:16:32)
at Module._compile (module.js:410:26)
at Object.Module._extensions..js (module.js:417:10)

Solution :
sudo chown -R $USER:$GROUP ~/.npm
sudo chown…
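The truncated command presumably continues the same pattern; the directory that actually matters here is the one named in the error. A sketch of the full ownership fix, assuming the login user's primary group should own the files:

# Reclaim the per-user config store that bower/configstore reads (path from the error above)
sudo chown -R $USER:$(id -gn $USER) ~/.config
# And the npm cache, as in the original solution
sudo chown -R $USER:$(id -gn $USER) ~/.npm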

Solved: Could not determine Java version when running Gradle

Error :
21:04:36.791 [INFO] [org.gradle.internal.nativeintegration.services.NativeServices] Initialized native services in: /home/ubuntu/.gradle/native
21:04:36.817 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter]
21:04:36.819 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] FAILURE: Build failed with an exception.
21:04:36.819 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter]
21:04:36.819 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] * What went wrong:
21:04:36.822 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] Could not determine java version from '9.0.4'.
21:04:36.822 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter]
21:04:36.822 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] * Exception is:
21:04:36.823 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] java.lang.IllegalArgumentException:…
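Gradle versions that predate Java 9 support cannot parse the '9.0.4' version string, so the fix is either to upgrade Gradle or to run the build on a Java 8 JDK. A sketch of the latter, assuming Ubuntu's openjdk-8 package path:

# Point the build at a Java 8 JDK (path assumed for Ubuntu's openjdk-8 package)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
./gradlew build

Alternatively, pin the JDK per project by setting org.gradle.java.home in gradle.properties, or move to a Gradle release that understands Java 9 version strings.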

Error when writing a file to an S3 bucket from an EMRFS-enabled Spark cluster

ERROR :
18/03/02 01:42:17 INFO RetryInvocationHandler: Exception while invoking ConsistencyCheckerS3FileSystem.mkdirs over null. Retrying after sleeping for 10000ms.
com.amazon.ws.emr.hadoop.fs.consistency.exception.ConsistencyException: Directory 'bucket/folder/_temporary' present in the metadata but not s3
at com.amazon.ws.emr.hadoop.fs.consistency.ConsistencyCheckerS3FileSystem.getFileStatus(ConsistencyCheckerS3FileSystem.java:506)

Root cause : This consistency error usually appears when files or directories have been deleted manually from the S3 console, leaving stale entries in the EMRFS metadata; the retry logic in Spark and Hadoop…
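When the EMRFS metadata and S3 disagree like this, the usual remedy is to reconcile them with the EMRFS CLI on the EMR master node. A sketch, using the bucket/folder placeholders from the log above:

# Drop the stale EMRFS metadata entries for the affected prefix...
emrfs delete s3://bucket/folder
# ...then rebuild the metadata from what actually exists in S3
emrfs sync s3://bucket/folder

Re-running the Spark write after the sync should no longer trip the consistency checker, provided nothing is deleted from the S3 console mid-job again.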