
Greenplum Database Installation and Deployment (Production Environment)

Hardware configuration:

16 IBM X3650 servers.
Per-node configuration: 2 x 8-core CPUs, 128 GB RAM, 16 x 900 GB disks, 10 GbE NICs.
10 GbE switches.

Deployment requirements:

1 Master, 1 Standby Master, and 14 Segment compute nodes.

Installation steps:

1. Master node installation

First, confirm that the directory structure is consistent on all nodes:

[root@XXXGPM01 db]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda6        20G  440M   18G   3% /
tmpfs            63G   76K   63G   1% /dev/shm
/dev/sda2       485M   39M  421M   9% /boot
/dev/sda1       200M  260K  200M   1% /boot/efi
/dev/sda4       5.5T  3.1G  5.5T   1% /data1
/dev/sdb1       5.8T   34M  5.8T   1% /data2
/dev/sda8       9.7G  150M  9.0G   2% /home
/dev/sda5        49G  3.1G   43G   7% /opt
/dev/sda9       9.7G  151M  9.0G   2% /tmp
/dev/sda7        20G  5.8G   13G  32% /usr
/dev/sda10      9.7G  338M  8.8G   4% /var

Append the following to /etc/hosts on the master node (the 10 GbE IP addresses and hostnames):

172.16.99.18 XXXGPM01
172.16.99.19 XXXGPM02
172.16.99.20 XXXGPD01
172.16.99.21 XXXGPD02
172.16.99.22 XXXGPD03
172.16.99.23 XXXGPD04
172.16.99.24 XXXGPD05
172.16.99.25 XXXGPD06
172.16.99.26 XXXGPD07
172.16.99.27 XXXGPD08
172.16.99.28 XXXGPD09
172.16.99.29 XXXGPD10
172.16.99.30 XXXGPD11
172.16.99.31 XXXGPD12
172.16.99.32 XXXGPD13
172.16.99.33 XXXGPD14
  1. Unpack the installation media
    The GP installation package: /opt/db.zip
    cd /data1; unzip /opt/db.zip
  2. Installation directory
    mkdir -p /data1/gpinstall

On node XXXGPM01, unpack greenplum-db-4.x.x.x-build-5-RHEL5-x86_64.zip and, as the root user, run the resulting .bin file. Follow the prompts to complete the installation.

# /bin/bash greenplum-db-4.3.2.0-build-1-RHEL5-x86_64.bin

Select a custom installation directory: /data1/gpinstall/greenplum-db-4.3.2.0

After the installation completes, you should see the following directories:

[root@XXXGPM01 gpinstall]# pwd
/data1/gpinstall
[root@XXXGPM01 gpinstall]# ls -lh
total 4.0K
lrwxrwxrwx.  1 root root   22 Jun  4 18:51 greenplum-db -> ./greenplum-db-4.3.2.0
drwxr-xr-x. 11 root root 4.0K Jun  4 18:51 greenplum-db-4.3.2.0

2. Create the GP installation config files and set up SSH trust

mkdir -p /data1/gpinstall/config

Create two configuration files:

1. allnodes.txt

XXXGPM01
XXXGPM02
XXXGPD01
XXXGPD02
XXXGPD03
XXXGPD04
XXXGPD05
XXXGPD06
XXXGPD07
XXXGPD08
XXXGPD09
XXXGPD10
XXXGPD11
XXXGPD12
XXXGPD13
XXXGPD14

2. nodes.txt

XXXGPD01
XXXGPD02
XXXGPD03
XXXGPD04
XXXGPD05
XXXGPD06
XXXGPD07
XXXGPD08
XXXGPD09
XXXGPD10
XXXGPD11
XXXGPD12
XXXGPD13
XXXGPD14

3. Configure SSH trust for the root user across all GP nodes:

source /data1/gpinstall/greenplum-db/greenplum_path.sh
gpssh-exkeys -f /data1/gpinstall/config/allnodes.txt

Test that passwordless SSH now works between all nodes. From this point on it is easy to push configuration changes to every node.

First, write a script to sync /etc/hosts to every node:

#!/bin/bash
#Usage: copy files to other hosts in cluster.
#ex: sh bulkcp.sh /etc/hosts /etc/hosts
#Author: AlfredZhao
#Version: 1.0.0
for((i=18;i<=33;i++))
do
 scp $1 172.16.99.$i:$2
 echo "scp $1 172.16.99.$i:$2"
done

Run gpssh -f /data1/gpinstall/config/allnodes.txt -e '' and press Enter to get an interactive session on all nodes.
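As a quick sanity check of the trust setup, a minimal sketch (reusing the allnodes.txt path above) runs a harmless command on every node non-interactively:

# each host should answer with its own hostname, confirming cluster-wide passwordless SSH
gpssh -f /data1/gpinstall/config/allnodes.txt -e 'hostname'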

3. Disable the firewall and its automatic startup

Check whether these services are stopped and disabled at boot:

service iptables status
service ip6tables status
service libvirtd status
service iptables stop
service ip6tables stop
service libvirtd stop
chkconfig libvirtd off
chkconfig iptables off
chkconfig ip6tables off

4. Disable SELinux

Check the current SELinux status and configuration:

getenforce
more /etc/selinux/config | grep SELINUX=

Disable SELinux temporarily, then disable it permanently:

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
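To confirm the change on every node at once, a minimal sketch using the gpssh trust set up in step 2 (same allnodes.txt path as above):

# both commands should report Permissive/Disabled and SELINUX=disabled on every host
gpssh -f /data1/gpinstall/config/allnodes.txt -e 'getenforce'
gpssh -f /data1/gpinstall/config/allnodes.txt -e 'grep "^SELINUX=" /etc/selinux/config'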

5. Disk scheduling algorithm

Check the current disk scheduler:

cat /sys/block/sd*/queue/scheduler

The method recommended by the vendor:

Edit /boot/grub/menu.lst, find the line starting with kernel /vmlinuz-xxx, and append elevator=deadline to the end of it.

How to verify:

After the system boots normally, running cat /sys/block/*/queue/scheduler should show: noop anticipatory [deadline] cfq

Problem encountered in practice: RHEL 6.5 does not have a /boot/grub/menu.lst file.

nl /boot/grub/menu.lst
Instead, take a different approach and add the following two lines via rc.local:

echo deadline > /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sdb/queue/scheduler

To roll this out to the whole cluster quickly and consistently, write these two lines into a temporary config file, /tmp/gpconfig/1deadline.conf, and sync it to every node.

mkdir -p /tmp/gpconfig
Finally, run this command in the gpssh interactive session:
cat /tmp/gpconfig/1deadline.conf >> /etc/rc.d/rc.local
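Putting the whole rollout together, a minimal sketch that reuses the bulkcp.sh helper from step 2 (the file name 1deadline.conf matches the text above):

# write the two scheduler lines into the temporary config file
cat > /tmp/gpconfig/1deadline.conf <<'EOF'
echo deadline > /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sdb/queue/scheduler
EOF
# copy it to the same path on every node, then append it to rc.local cluster-wide
sh bulkcp.sh /tmp/gpconfig/1deadline.conf /tmp/gpconfig/1deadline.conf
gpssh -f /data1/gpinstall/config/allnodes.txt -e 'cat /tmp/gpconfig/1deadline.conf >> /etc/rc.d/rc.local'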

6. Disk read-ahead configuration

Check the disk read-ahead setting; every device should report 16384:

blockdev --getra /dev/sd*
blockdev --getra /dev/dm-*

How to set:

Edit /etc/rc.d/rc.local and add two lines:

blockdev --setra 16384 /dev/sd*
blockdev --setra 16384 /dev/dm-*

How to verify:

After the system reboots, run:
blockdev --getra /dev/sd*
blockdev --getra /dev/dm-*
Every value should be 16384.

As in step 5, write the two blockdev lines into /tmp/gpconfig/2rclocal.conf, distribute the file, and append it to /etc/rc.d/rc.local on every node:

sh bulkcp.sh /tmp/gpconfig/2rclocal.conf /tmp/gpconfig/2rclocal.conf
gpssh -f /data1/gpinstall/config/allnodes.txt -e ''
cat /tmp/gpconfig/2rclocal.conf >> /etc/rc.d/rc.local
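Since rc.local only runs at boot, the setting can be verified cluster-wide after the reboot in step 11; a minimal sketch, assuming the same allnodes.txt path:

# both should print 16384 for every host
gpssh -f /data1/gpinstall/config/allnodes.txt -e 'blockdev --getra /dev/sda'
gpssh -f /data1/gpinstall/config/allnodes.txt -e 'blockdev --getra /dev/sdb'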

7. Language and character set

Check the language and character set: echo $LANG

en_US.UTF-8
How to set: specify it during OS installation.
How to verify: log in and run locale; the result should be en_US.UTF-8.

To change it:

sed -i 's/zh_CN.UTF-8/en_US.UTF-8/g' /etc/sysconfig/i18n

8. Add settings to sysctl.conf

Configure sysctl.conf, then apply the settings with:

sysctl -p

Sometimes modprobe bridge is needed first.

How to set: edit /etc/sysctl.conf and add the following:

net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.sem = 250 64000 100 2048
kernel.shmmax = 5000000000
kernel.shmmni = 40960
kernel.shmall = 40000000000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.core.netdev_max_backlog = 10000
vm.overcommit_memory = 2
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 1025 65535

Then run sysctl -p.

vi /tmp/gpconfig/4sysctl.conf
sh bulkcp.sh /tmp/gpconfig/4sysctl.conf /tmp/gpconfig/4sysctl.conf
gpssh -f /data1/gpinstall/config/allnodes.txt -e ''
cat /tmp/gpconfig/4sysctl.conf >> /etc/sysctl.conf

How to verify: use sysctl to check that each of the above parameters matches.
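For example, a minimal spot-check of a few values across all nodes (parameter names taken from the list above):

gpssh -f /data1/gpinstall/config/allnodes.txt -e 'sysctl kernel.shmmax kernel.sem vm.overcommit_memory'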

9. User resource limits

View the current user resource limits:

ulimit -a

How to set:

Edit /etc/security/limits.d/90-nproc.conf and replace its contents with:

* soft nofile 1048576
* hard nofile 1048576
* soft nproc 1048576
* hard nproc 1048576

Apply the change on all nodes:

vi /tmp/gpconfig/5limits.conf
sh bulkcp.sh /tmp/gpconfig/5limits.conf /tmp/gpconfig/5limits.conf
nl /etc/security/limits.d/90-nproc.conf
cat /tmp/gpconfig/5limits.conf >> /etc/security/limits.d/90-nproc.conf

How to verify: log in as any non-root user, run ulimit -a, and check that the above parameters match.

10. Time synchronization

Check the time:

date

Set the time:

-- June 5, 2015, 09:17
date 060509172015
hwclock -w
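The clocks can then be compared across the cluster in one shot; a minimal sketch using the existing gpssh setup:

# print the current time on every node; the outputs should agree to within a second or two
gpssh -f /data1/gpinstall/config/allnodes.txt -e 'date'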

11. Enable NIC startup on boot

Identify the NIC that needs to be enabled: eth2

ifconfig eth2 | grep 172
more /etc/sysconfig/network-scripts/ifcfg-eth2 |grep ONBOOT

Modify the eth2 configuration to enable it at boot:

sed -i 's/ONBOOT=no/ONBOOT=yes/g' /etc/sysconfig/network-scripts/ifcfg-eth2

After all of the above changes are made, reboot all machines and verify once more that each step took effect.

12. Create the user (optional)

This can also be done by gpseginstall with the appropriate options; here, to keep the business user's uid and gid consistent, the group and user are created in advance.

Write a small script, distribute it to all nodes, and run it from the interactive session to create the group, user, and password everywhere:

#!/bin/bash
#Usage: create gpadmin
#Author: AlfredZhao
#version: 1.0.0
groupadd gpadmin -g 3030
useradd gpadmin -u 3030 -g 3030
passwd gpadmin <<EOF
gpadminpwd
gpadminpwd
EOF
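To run it cluster-wide, the script can be distributed with bulkcp.sh and executed through gpssh; the path and file name below are only an example, and gpadminpwd is the placeholder password from the script above:

sh bulkcp.sh /tmp/gpconfig/create_gpadmin.sh /tmp/gpconfig/create_gpadmin.sh
gpssh -f /data1/gpinstall/config/allnodes.txt -e 'sh /tmp/gpconfig/create_gpadmin.sh'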

13. Create directories and grant ownership

Master directories:

gpssh -h XXXGPM01 -e 'mkdir -p /data1/master'
gpssh -h XXXGPM02 -e 'mkdir -p /data1/master'
gpssh -h XXXGPM01 -e 'chown gpadmin:gpadmin /data1/master'
gpssh -h XXXGPM02 -e 'chown gpadmin:gpadmin /data1/master'

Database data file storage directories:

Note that the host file here changes to nodes.txt; the master nodes do not need these directories.

gpssh -f /data1/gpinstall/config/nodes.txt -e ''
mkdir -p /data1/primary
mkdir -p /data1/mirror
mkdir -p /data2/primary
mkdir -p /data2/mirror
chown gpadmin:gpadmin /data1/primary
chown gpadmin:gpadmin /data1/mirror
chown gpadmin:gpadmin /data2/primary
chown gpadmin:gpadmin /data2/mirror
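A quick way to confirm the directories and their ownership on every segment host is a sketch like the following, reusing nodes.txt:

gpssh -f /data1/gpinstall/config/nodes.txt -e 'ls -ld /data1/primary /data1/mirror /data2/primary /data2/mirror'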

14. Install the GP software on all nodes

gpseginstall -f allnodes.txt -c csv

If the gpadmin user was not created manually beforehand, the user and password can be specified here (-u and -p) so that gpseginstall creates the gpadmin user and password on every node.

[root@XXXGPM01 data1]# gpseginstall -f /data1/gpinstall/config/allnodes.txt -c csv
20150605:00:01:07:005656 gpseginstall:XXXGPM01:root-[INFO]:-Installation Info:
link_name greenplum-db
binary_path /data1/gpinstall/greenplum-db-4.3.2.0
binary_dir_location /data1/gpinstall
binary_dir_name greenplum-db-4.3.2.0
20150605:00:01:07:005656 gpseginstall:XXXGPM01:root-[INFO]:-check cluster password access
20150605:00:01:11:005656 gpseginstall:XXXGPM01:root-[INFO]:-de-duplicate hostnames
20150605:00:01:11:005656 gpseginstall:XXXGPM01:root-[INFO]:-master hostname: XXXGPM01
20150605:00:01:13:005656 gpseginstall:XXXGPM01:root-[INFO]:-chown -R gpadmin:gpadmin /data1/gpinstall/greenplum-db
20150605:00:01:13:005656 gpseginstall:XXXGPM01:root-[INFO]:-chown -R gpadmin:gpadmin /data1/gpinstall/greenplum-db-4.3.2.0
20150605:00:01:14:005656 gpseginstall:XXXGPM01:root-[INFO]:-rm -f /data1/gpinstall/greenplum-db-4.3.2.0.tar; rm -f /data1/gpinstall/greenplum-db-4.3.2.0.tar.gz
20150605:00:01:14:005656 gpseginstall:XXXGPM01:root-[INFO]:-cd /data1/gpinstall; tar cf greenplum-db-4.3.2.0.tar greenplum-db-4.3.2.0
20150605:00:01:17:005656 gpseginstall:XXXGPM01:root-[INFO]:-gzip /data1/gpinstall/greenplum-db-4.3.2.0.tar
20150605:00:01:36:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: mkdir -p /data1/gpinstall
20150605:00:01:37:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: rm -rf /data1/gpinstall/greenplum-db-4.3.2.0
20150605:00:01:39:005656 gpseginstall:XXXGPM01:root-[INFO]:-scp software to remote location
20150605:00:01:41:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: gzip -f -d /data1/gpinstall/greenplum-db-4.3.2.0.tar.gz
20150605:00:01:46:005656 gpseginstall:XXXGPM01:root-[INFO]:-md5 check on remote location
20150605:00:01:48:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: cd /data1/gpinstall; tar xf greenplum-db-4.3.2.0.tar
20150605:00:01:51:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: rm -f /data1/gpinstall/greenplum-db-4.3.2.0.tar
20150605:00:01:52:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: cd /data1/gpinstall; rm -f greenplum-db; ln -fs greenplum-db-4.3.2.0 greenplum-db
20150605:00:01:54:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: chown -R gpadmin:gpadmin /data1/gpinstall/greenplum-db
20150605:00:01:55:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: chown -R gpadmin:gpadmin /data1/gpinstall/greenplum-db-4.3.2.0
20150605:00:01:57:005656 gpseginstall:XXXGPM01:root-[INFO]:-rm -f /data1/gpinstall/greenplum-db-4.3.2.0.tar.gz
20150605:00:01:57:005656 gpseginstall:XXXGPM01:root-[INFO]:-version string on master: gpssh version 4.3.2.0 build 1
20150605:00:01:57:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: . /data1/gpinstall/greenplum-db/./greenplum_path.sh; /data1/gpinstall/greenplum-db/./bin/gpssh --version
20150605:00:01:59:005656 gpseginstall:XXXGPM01:root-[INFO]:-remote command: . /data1/gpinstall/greenplum-db-4.3.2.0/greenplum_path.sh; /data1/gpinstall/greenplum-db-4.3.2.0/bin/gpssh --version
20150605:00:02:06:005656 gpseginstall:XXXGPM01:root-[INFO]:-SUCCESS -- Requested commands completed

15. Initialize the GP database

Do the following work on the Master node, logged in as the gpadmin user.

cd /data1/gpinstall/config
Create a new gpinitsystem_config file with the following contents:

ARRAY_NAME="XXXGPDB"
SEG_PREFIX=gpseg
PORT_BASE=40000
declare -a DATA_DIRECTORY=(/data1/primary /data1/primary /data1/primary /data1/primary /data1/primary /data1/primary /data2/primary /data2/primary /data2/primary /data2/primary /data2/primary /data2/primary)
MASTER_HOSTNAME=XXXGPM01
MASTER_DIRECTORY=/data1/master
MASTER_PORT=5432
TRUSTED_SHELL=ssh
CHECK_POINT_SEGMENTS=256
ENCODING=UNICODE
MIRROR_PORT_BASE=50000
REPLICATION_PORT_BASE=41000
MIRROR_REPLICATION_PORT_BASE=51000
declare -a MIRROR_DATA_DIRECTORY=(/data1/mirror /data1/mirror /data1/mirror /data1/mirror /data1/mirror /data1/mirror /data2/mirror /data2/mirror /data2/mirror /data2/mirror /data2/mirror /data2/mirror)
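As a sanity check on this layout (my arithmetic, not stated in the original text): 12 primary directories per host across 14 segment hosts gives 168 primaries plus 168 mirrors, matching the 336 segment instances reported later by gpstop, gpstart, and gpstate.

echo $((12 * 14))   # 168 primary segments
echo $((168 * 2))   # 336 segment instances in total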

15.1 Configure SSH trust between all nodes for the gpadmin user:

gpssh-exkeys -f /data1/gpinstall/config/allnodes.txt

15.2 Initialize the database:

gpinitsystem -c gpinitsystem_config -h nodes.txt -B 8

15.3 Configure environment variables:

source /usr/local/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data1/master/gpseg-1
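So that these settings survive logout, they are commonly appended to the gpadmin user's shell profile as well; a minimal sketch, not something the original text spells out (the paths mirror the two lines above):

cat >> /home/gpadmin/.bashrc <<'EOF'
source /usr/local/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data1/master/gpseg-1
EOF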

15.4 Set up the standby master node:

gpinitstandby -s XXXGPM02

15.5 Adjust database parameters:

The database must be restarted after changing the following parameters.

How to change: run gpconfig -c <parameter name> -v <value> -m <master value>

How to check: after restarting the database, run gpconfig -s <parameter name>

Parameter name            Value    Master value
shared_buffers            128MB    128MB
gp_vmem_protect_limit     15360    15360
max_connections           1000     200
wal_send_client_timeout   60s      60s

gpconfig -c shared_buffers -v 128MB -m 128MB
gpconfig -c gp_vmem_protect_limit -v 15360 -m 15360
gpconfig -c max_connections -v 1000 -m 200
gpconfig --skipvalidation -c wal_send_client_timeout -v 60s -m 60s

[gpadmin@XXXGPM01 greenplum-db]$ gpconfig -c shared_buffers -v 128MB -m 128MB
20150605:14:50:53:017038 gpconfig:XXXGPM01:gpadmin-[INFO]:-completed successfully
[gpadmin@XXXGPM01 greenplum-db]$ gpconfig -c gp_vmem_protect_limit -v 15360 -m 15360
20150605:14:52:51:017179 gpconfig:XXXGPM01:gpadmin-[INFO]:-completed successfully
[gpadmin@XXXGPM01 greenplum-db]$ gpconfig -c max_connections -v 1000 -m 200
20150605:14:53:08:017271 gpconfig:XXXGPM01:gpadmin-[INFO]:-completed successfully
[gpadmin@XXXGPM01 greenplum-db]$ gpconfig --skipvalidation -c wal_send_client_timeout -v 60s -m 60s
20150605:14:53:23:017363 gpconfig:XXXGPM01:gpadmin-[INFO]:-completed successfully

15.6 Stop the database

[gpadmin@XXXGPM01 greenplum-db]$ gpstop -a
20150605:14:54:40:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Starting gpstop with args: -a
20150605:14:54:40:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Gathering information and validating the environment...
20150605:14:54:40:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20150605:14:54:40:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Obtaining Segment details from master...
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.3.2.0 build 1'
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-There are 0 connections to the database
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Master host=XXXGPM01
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=smart
20150605:14:54:42:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Master segment instance directory=/data1/master/gpseg-1
20150605:14:54:43:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Stopping master standby host XXXGPM02 mode=fast
20150605:14:54:45:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Successfully shutdown standby process on XXXGPM02
20150605:14:54:45:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Commencing parallel primary segment instance shutdown, please wait..............................
20150605:14:55:12:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Commencing parallel mirror segment instance shutdown, please wait...................
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-   Segments stopped successfully      = 336
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-   Segments with errors during stop   = 0
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Successfully shutdown 336 of 336 segment instances
20150605:14:55:28:017533 gpstop:XXXGPM01:gpadmin-[INFO]:-Database successfully shutdown with no errors reported

15.7 Start the database:

[gpadmin@XXXGPM01 greenplum-db]$ gpstart -a
20150605:14:56:39:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Starting gpstart with args: -a
20150605:14:56:39:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Gathering information and validating the environment...
20150605:14:56:39:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.3.2.0 build 1'
20150605:14:56:39:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Greenplum Catalog Version: '201310150'
20150605:14:56:39:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Starting Master instance in admin mode
20150605:14:56:40:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20150605:14:56:40:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Obtaining Segment details from master...
20150605:14:56:42:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Setting new master era
20150605:14:56:42:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Master Started...
20150605:14:56:42:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Shutting down master
20150605:14:56:46:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait.....................
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Process results...
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-   Successful segment starts                                            = 336
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-   Failed segment starts                                                = 0
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Successfully started 336 of 336 segment instances
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:14:57:04:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Starting Master instance XXXGPM01 directory /data1/master/gpseg-1
20150605:14:57:05:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Command pg_ctl reports Master XXXGPM01 instance active
20150605:14:57:06:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Starting standby master
20150605:14:57:06:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Checking if standby master is running on host: XXXGPM02  in directory: /data1/master/gpseg-1
20150605:14:57:09:017669 gpstart:XXXGPM01:gpadmin-[INFO]:-Database successfully started
[gpadmin@XXXGPM01 greenplum-db]$

15.8 Check the database parameters

gpconfig -s shared_buffers

gpconfig -s gp_vmem_protect_limit
gpconfig -s max_connections
gpconfig -s wal_send_client_timeout

[gpadmin@XXXGPM01 greenplum-db]$ gpconfig -s shared_buffers
20150605:11:37:40:011591 gpconfig:XXXGPM01:gpadmin-[ERROR]:-Failed to retrieve GUC information: error 'ERROR:  function gp_toolkit.gp_param_setting(unknown) does not exist
LINE 1: select * from gp_toolkit.gp_param_setting('shared_buffers')
                      ^
HINT:  No function matches the given name and argument types. You may need to add explicit type casts.' in 'select * from gp_toolkit.gp_param_setting('shared_buffers')'

Since gpconfig -s fails here with the error shown above, check the values directly in psql instead:

psql postgres

show shared_buffers;
show gp_vmem_protect_limit;
show max_connections;
show wal_send_client_timeout;
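The same checks can also be run non-interactively; a minimal sketch:

psql postgres -c "show shared_buffers;"
psql postgres -c "show gp_vmem_protect_limit;"
psql postgres -c "show max_connections;"
psql postgres -c "show wal_send_client_timeout;"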

15.9 Create the business database XXX

Project convention: all databases created later must be created after logging in to the postgres database with psql.

psql postgres
postgres=# create database XXX;
CREATE DATABASE

psql XXX

15.10 Adjust connection control parameters

Edit the file $MASTER_DATA_DIRECTORY/pg_hba.conf and add the line:

host all all 0/0 md5

Edit $MASTER_DATA_DIRECTORY/pg_hba.conf on the standby master as well and add the same line:

host all all 0/0 md5
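For the change to take effect without a full restart, the configuration can be reloaded and a remote login tested; a minimal sketch (the client host, user, and database name XXX below are examples, and 172.16.99.18 is the master IP from the hosts file above):

gpstop -u                               # reload pg_hba.conf and other configuration files
psql -h 172.16.99.18 -U gpadmin -d XXX  # test an md5-authenticated connection from a client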

15.11 Check database status (gpstate)

[gpadmin@XXXGPM01 ~]$ gpstate
20150605:13:48:23:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-Starting gpstate with args: 
20150605:13:48:24:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.2.0 build 1'
20150605:13:48:24:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.2.0 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on Jul 12 2014 17:02:40'
20150605:13:48:24:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-Obtaining Segment details from master...
20150605:13:48:25:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-Gathering data from segments...................
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-Greenplum instance status summary
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Master instance                                           = Active
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Master standby                                            = XXXGPM02
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Standby master state                                      = Standby host passive
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total segment instance count from metadata                = 336
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Primary Segment Status
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total primary segments                                    = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total primary segment valid (at master)                   = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total primary segment failures (at master)                = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number postmaster processes found                   = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Mirror Segment Status
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total mirror segments                                     = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total mirror segment valid (at master)                    = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total mirror segment failures (at master)                 = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of postmaster.pid files missing              = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of postmaster.pid files found                = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing               = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found                 = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of /tmp lock files missing                   = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number of /tmp lock files found                     = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number postmaster processes missing                 = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number postmaster processes found                   = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number mirror segments acting as primary segments   = 0
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-   Total number mirror segments acting as mirror segments    = 168
20150605:13:48:41:015288 gpstate:XXXGPM01:gpadmin-[INFO]:-----------------------------------------------------
[gpadmin@XXXGPM01 ~]$

