Working with PostgreSQL on macOS

Install / upgrade PostgreSQL

smartechie-macos :~ $  brew <install | upgrade> postgresql

Get installation details

smartechie-macos :~ $ brew info postgresql

Result:

postgresql: stable 10.4 (bottled), HEAD

Object-relational database system

https://www.postgresql.org/

Conflicts with:

  postgres-xc (because postgresql and postgres-xc install the same binaries.)

/usr/local/Cellar/postgresql/9.6.3 (3,260 files, 36.6MB)

  Poured from bottle on 2017-06-05 at 20:47:39

/usr/local/Cellar/postgresql/10.4 (3,389 files, 39.2MB) *

  Poured from bottle on 2018-07-09 at 14:15:06

From: https://github.com/Homebrew/homebrew-core/blob/master/Formula/postgresql.rb

==> Dependencies

Required: openssl, readline

Optional: python, python@2

==> Options

--with-dtrace

Build with DTrace support

--with-python

Enable PL/Python3 (incompatible with --with-python@2)

--with-python@2

Enable PL/Python2

--without-perl

Build without Perl support

--without-tcl

Build without Tcl support

--HEAD

Install HEAD version

==> Caveats

To migrate existing data from a previous major version of PostgreSQL run:

  brew postgresql-upgrade-database

To have launchd start postgresql now and restart at login:

  brew services start postgresql

Or, if you don’t want/need a background service you can just run:

  pg_ctl -D /usr/local/var/postgres start

Start the PostgreSQL service

As you can see from the info command output, it indicates the data directory for the database along with the full start command. Just run that command to start PostgreSQL.

To have launchd start postgresql now and restart at login:

smartechie-macos :~ $ brew services start postgresql

If you don’t want or need a background service, you can just run:

smartechie-macos :~ $  pg_ctl -D /usr/local/var/postgres start
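Once the server is up, you can verify that it accepts connections, for example:

smartechie-macos :~ $ psql postgres -c "SELECT version();"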

Sample commands and examples

DDL commands (a sample session follows the list):

  • CREATE DATABASE
  • CREATE TABLE
  • DROP TABLE
  • ALTER TABLE

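A minimal psql session exercising each of these; the database, table, and column names below are made up for illustration:

smartechie-macos :~ $ psql postgres
postgres=# CREATE DATABASE demo_db;
postgres=# \c demo_db
demo_db=# CREATE TABLE employee (id SERIAL PRIMARY KEY, name TEXT NOT NULL);
demo_db=# ALTER TABLE employee ADD COLUMN salary NUMERIC(10,2);
demo_db=# DROP TABLE employee;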
 

Troubleshooting

  1. Error
    1. smartechie-macos ~ $ pg_ctl -D /usr/local/var/postgres start && brew services start postgresql
      waiting for server to start....2018-07-16 11:31:59.391 PDT [57964] LOG: could not bind IPv4 address "127.0.0.1": Address already in use
      2018-07-16 11:31:59.391 PDT [57964] HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
      2018-07-16 11:31:59.392 PDT [57964] LOG: could not bind IPv6 address "::1": Address already in use
      2018-07-16 11:31:59.392 PDT [57964] HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
      2018-07-16 11:31:59.392 PDT [57964] WARNING: could not create listen socket for "localhost"
      2018-07-16 11:31:59.392 PDT [57964] FATAL: could not create any TCP/IP sockets
      2018-07-16 11:31:59.392 PDT [57964] LOG: database system is shut down
      stopped waiting
      pg_ctl: could not start server
      Examine the log output.
      spradhan-macos-2:/ sudhir.pradhan$ psql postgres
      psql: FATAL: role "sudhir.pradhan" does not exist

    2. Solution
      1. smartechie-macos ~ $ sudo -u postgres createuser -s $(whoami); createdb $(whoami)
        Password:

      2. Result
        1. smartechie-macos ~$ psql postgres
          psql (10.4, server 9.6.2)
          Type "help" for help.

          postgres=#
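If the "Address already in use" messages in the log above persist, another postmaster is usually still running. A quick way to confirm and stop it (assuming the default port 5432 and the Homebrew data directory):

smartechie-macos :~ $ lsof -nP -iTCP:5432 | grep LISTEN
smartechie-macos :~ $ pg_ctl -D /usr/local/var/postgres stop

Or run brew services stop postgresql if it was started as a background service.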

 


Solved: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try

Caused by: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[172.27.10.191:50010,DS-51d68378-35de-4c70-b27e-7a98a53919cc,DISK], DatanodeInfoWithStorage[172.27.10.223:50010,DS-f172c682-713d-4a8f-b8af-69198ddc6756,DISK]], original=[DatanodeInfoWithStorage[172.27.10.191:50010,DS-51d68378-35de-4c70-b27e-7a98a53919cc,DISK], DatanodeInfoWithStorage[172.27.10.223:50010,DS-f172c682-713d-4a8f-b8af-69198ddc6756,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:925)

at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:988)

at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1156)

at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:454)

[2017-11-10 07:08:03,097] ERROR Task hdfs-sink-prqt-stndln-rpro-fct-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSinkTask:455)

java.lang.RuntimeException: java.util.concurrent.ExecutionException: io.confluent.connect.hdfs.errors.HiveMetaStoreException: Invalid partition for default.revpro-perf-oracle-jdbc-stdln-raw-KFK_RPRO_RI_WF_SUMM_GV: partition=0

at io.confluent.connect.hdfs.DataWriter.write(DataWriter.java:226)

at io.confluent.connect.hdfs.HdfsSinkTask.put(HdfsSinkTask.java:103)

at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:435)

at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:251)

at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:180)

at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:148)

at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:146)

at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:190)

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

at java.util.concurrent.FutureTask.run(FutureTask.java:266)

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:748)

Caused by: java.util.concurrent.ExecutionException: io.confluent.connect.hdfs.errors.HiveMetaStoreException: Invalid partition for default.revpro-perf-oracle-jdbc-stdln-raw-KFK_RPRO_RI_WF_SUMM_GV: partition=0

at java.util.concurrent.FutureTask.report(FutureTask.java:122)

at java.util.concurrent.FutureTask.get(FutureTask.java:192)

at io.confluent.connect.hdfs.DataWriter.write(DataWriter.java:220)

… 12 more

Caused by: io.confluent.connect.hdfs.errors.HiveMetaStoreException: Invalid partition for default.revpro-perf-oracle-jdbc-stdln-raw-KFK_RPRO_RI_WF_SUMM_GV: partition=0

at io.confluent.connect.hdfs.hive.HiveMetaStore.addPartition(HiveMetaStore.java:107)

at io.confluent.connect.hdfs.TopicPartitionWriter$3.call(TopicPartitionWriter.java:662)

at io.confluent.connect.hdfs.TopicPartitionWriter$3.call(TopicPartitionWriter.java:659)

… 4 more

Caused by: InvalidObjectException(message:default.revpro-perf-oracle-jdbc-stdln-raw-KFK_RPRO_RI_WF_SUMM_GV table not found)

at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$append_partition_by_name_with_environment_context_result$append_partition_by_name_with_environment_context_resultStandardScheme.read(ThriftHiveMetastore.java:51619)

at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$append_partition_by_name_with_environment_context_result$append_partition_by_name_with_environment_context_resultStandardScheme.read(ThriftHiveMetastore.java:51596)

at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$append_partition_by_name_with_environment_context_result.read(ThriftHiveMetastore.java:51519)

at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)

at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_append_partition_by_name_with_environment_context(ThriftHiveMetastore.java:1667)

at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.append_partition_by_name_with_environment_context(ThriftHiveMetastore.java:1651)

at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.appendPartition(HiveMetaStoreClient.java:606)

at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.appendPartition(HiveMetaStoreClient.java:600)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:498)

at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:152)

at com.sun.proxy.$Proxy51.appendPartition(Unknown Source)

at io.confluent.connect.hdfs.hive.HiveMetaStore$1.call(HiveMetaStore.java:97)

at io.confluent.connect.hdfs.hive.HiveMetaStore$1.call(HiveMetaStore.java:91)

at io.confluent.connect.hdfs.hive.HiveMetaStore.doAction(HiveMetaStore.java:87)

at io.confluent.connect.hdfs.hive.HiveMetaStore.addPartition(HiveMetaStore.java:103)

… 6 more

Root cause:

This error typically occurs on clusters with only 3 or fewer DataNodes, which may experience append failures under heavy load. The configuration parameters below fix it.

Solution:

Add the following configuration properties to the client’s hdfs-site.xml file (not the NameNode’s or DataNode’s):

<property>
<name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
<value>true</value>
<description>
If there is a datanode/network failure in the write pipeline,
DFSClient will try to remove the failed datanode from the pipeline
and then continue writing with the remaining datanodes. As a result,
the number of datanodes in the pipeline is decreased. The feature is
to add new datanodes to the pipeline.

This is a site-wide property to enable/disable the feature.

When the cluster size is extremely small, e.g. 3 nodes or less, cluster
administrators may want to set the policy to NEVER in the default
configuration file or disable this feature. Otherwise, users may
experience an unusually high rate of pipeline failures since it is
impossible to find new datanodes for replacement.

See also dfs.client.block.write.replace-datanode-on-failure.policy
</description>
</property>

<property>
<name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
<value>DEFAULT</value>
<description>
This property is used only if the value of
dfs.client.block.write.replace-datanode-on-failure.enable is true.

ALWAYS: always add a new datanode when an existing datanode is removed.

NEVER: never add a new datanode.

DEFAULT:
Let r be the replication number.
Let n be the number of existing datanodes.
Add a new datanode only if r is greater than or equal to 3 and either
(1) floor(r/2) is greater than or equal to n; or
(2) r is greater than n and the block is hflushed/appended.
</description>
</property>

<property>
<name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
<value>false</value>
<description>
This property is used only if the value of
dfs.client.block.write.replace-datanode-on-failure.enable is true.

Best effort means that the client will try to replace a failed datanode
in write pipeline (provided that the policy is satisfied), however, it
continues the write operation in case that the datanode replacement also
fails.

Suppose the datanode replacement fails.
false: An exception should be thrown so that the write will fail.
true : The write should be resumed with the remaining datanodes.

Note that setting this property to true allows writing to a pipeline
with a smaller number of datanodes. As a result, it increases the
probability of data loss.
</description>
</property>
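After updating the client’s hdfs-site.xml, you can confirm which values the client actually resolves (assuming the hdfs CLI on that machine shares the same configuration directory):

$ hdfs getconf -confKey dfs.client.block.write.replace-datanode-on-failure.enable
true
$ hdfs getconf -confKey dfs.client.block.write.replace-datanode-on-failure.policy
DEFAULT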

The property descriptions above explain the behavior in more detail.

Unable to connect to HBase shell

Problem:

smartechies.mac $ hbase shell

ArgumentError: wrong number of arguments (0 for 1)
method_added at file:/usr/local/Cellar/hbase/1.2.6/libexec/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:10
method_added at file:/usr/local/Cellar/hbase/1.2.6/libexec/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:129
Pattern at file:/usr/local/Cellar/hbase/1.2.6/libexec/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:2
(root) at file:/usr/local/Cellar/hbase/1.2.6/libexec/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:1
require at org/jruby/RubyKernel.java:1062
(root) at file:/usr/local/Cellar/hbase/1.2.6/libexec/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:42
(root) at /usr/local/Cellar/hbase/1.2.6/libexec/bin/../bin/hirb.rb:38

Solution:

Point the JAVA_HOME variable to the correct value in conf/hbase-env.sh.

Example:

smartechies.mac $ cat ~/.profile | grep JAVA_HOME
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home
#export JAVA_HOME=/usr/libexec/java_home

smartechies.mac $ cat /usr/local/Cellar/hbase/1.2.6/libexec/conf/hbase-env.sh | grep JAVA_HOME
#export JAVA_HOME="$(/usr/libexec/java_home)"
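The fix is to set JAVA_HOME explicitly in conf/hbase-env.sh, for example pointing it at the same JDK as in ~/.profile above (adjust the path to your own installation):

smartechies.mac $ grep JAVA_HOME /usr/local/Cellar/hbase/1.2.6/libexec/conf/hbase-env.sh
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home

After this change the shell starts cleanly: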

 

smartechies.mac$ hbase shell
2018-04-11 12:14:48,806 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/Cellar/hbase/1.2.6/libexec/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/Cellar/hadoop/2.8.0/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.6, rUnknown, Mon May 29 02:25:32 CDT 2017

hbase(main):001:0>

Create Superuser in HUE

[ec2-user@ip-123-45-67-890 ~]$ ls -ltr /usr/lib/hue/build/env/bin/hue

-rwxr-xr-x 1 root root 523 Sep 22 22:09 /usr/lib/hue/build/env/bin/hue

[ec2-user@ip-123-45-67-890 ~]$ sudo /usr/lib/hue/build/env/bin/hue createsuperuser

Username (leave blank to use 'root'): sudhir

Email address: mail2sudhir.online@gmail.com

Password:

Password (again):

Superuser created successfully.

[ec2-user@ip-123-45-67-890 ~]$

Solved: error "/usr/bin/env: node: No such file or directory"

Error:

/usr/bin/env: node: No such file or directory

Solution:

ln -s /usr/bin/nodejs /usr/bin/node
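Depending on permissions, the symlink may require sudo. Afterwards, verify that node resolves:

$ which node
/usr/bin/node
$ node -v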

 

Follow-up issue:

Error: EACCES: permission denied, open '/home/ubuntu/.config/configstore/bower-github.json'

Solved: see the next section.

Solved: permission denied, open '/home/ubuntu/.config/configstore/bower-github.json' in Ubuntu

Error:

Error: EACCES: permission denied, open '/home/ubuntu/.config/configstore/bower-github.json'
You don't have access to this file.

at Error (native)
at Object.fs.openSync (fs.js:549:18)
at Object.fs.readFileSync (fs.js:397:15)
at Object.create.all.get (/usr/local/lib/node_modules/bower/lib/node_modules/configstore/index.js:35:26)
at Object.Configstore (/usr/local/lib/node_modules/bower/lib/node_modules/configstore/index.js:28:44)
at readCachedConfig (/usr/local/lib/node_modules/bower/lib/config.js:19:23)
at defaultConfig (/usr/local/lib/node_modules/bower/lib/config.js:11:12)
at Object.<anonymous> (/usr/local/lib/node_modules/bower/lib/index.js:16:32)
at Module._compile (module.js:410:26)
at Object.Module._extensions..js (module.js:417:10)

 

Solution:

sudo chown -R $USER:$GROUP ~/.npm
sudo chown -R $USER:$GROUP ~/.config
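Note that $GROUP is unset in many shells; if chown complains about an invalid group, supplying your primary group via id -gn is a safer equivalent:

sudo chown -R $USER:$(id -gn) ~/.npm ~/.config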

Solved: Could not determine java version when running Gradle

Error:

21:04:36.791 [INFO] [org.gradle.internal.nativeintegration.services.NativeServices] Initialized native services in: /home/ubuntu/.gradle/native
21:04:36.817 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter]
21:04:36.819 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] FAILURE: Build failed with an exception.
21:04:36.819 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter]
21:04:36.819 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] * What went wrong:
21:04:36.822 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] Could not determine java version from '9.0.4'.
21:04:36.822 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter]
21:04:36.822 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] * Exception is:
21:04:36.823 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] java.lang.IllegalArgumentException: Could not determine java version from '9.0.4'.
21:04:36.823 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at org.gradle.api.JavaVersion.toVersion(JavaVersion.java:70)
21:04:36.823 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at org.gradle.api.JavaVersion.current(JavaVersion.java:80)
21:04:36.823 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at org.gradle.internal.jvm.UnsupportedJavaRuntimeException.assertUsingVersion(UnsupportedJavaRuntimeException.java:29)
21:04:36.824 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at org.gradle.launcher.cli.JavaRuntimeValidationAction.execute(JavaRuntimeValidationAction.java:32)
21:04:36.824 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at org.gradle.launcher.cli.JavaRuntimeValidationAction.execute(JavaRuntimeValidationAction.java:24)
21:04:36.824 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:33)
21:04:36.824 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:22)
21:04:36.824 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:210)
21:04:36.824 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:174)
21:04:36.824 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at org.gradle.launcher.Main.doAction(Main.java:33)
21:04:36.824 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at org.gradle.launcher.bootstrap.EntryPoint.run(EntryPoint.java:45)
21:04:36.824 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
21:04:36.824 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
21:04:36.824 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
21:04:36.825 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at java.base/java.lang.reflect.Method.invoke(Method.java:564)
21:04:36.825 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at org.gradle.launcher.bootstrap.ProcessBootstrap.runNoExit(ProcessBootstrap.java:60)
21:04:36.825 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at org.gradle.launcher.bootstrap.ProcessBootstrap.run(ProcessBootstrap.java:37)
21:04:36.825 [ERROR] [org.gradle.internal.buildevents.BuildExceptionReporter] at org.gradle.launcher.GradleMain.main(GradleMain.java:23)

 

Solution:

Open the wrapper properties file:

spradhan $ vi gradle/wrapper/gradle-wrapper.properties

distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-4.3-src.zip

Update the -src suffix of distributionUrl to -all or -bin:

distributionUrl=https\://services.gradle.org/distributions/gradle-4.3-all.zip
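After the change, the next wrapper invocation downloads the new distribution; you can confirm with:

spradhan $ ./gradlew --version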