1. Installing the ZooKeeper, Hadoop, and HBase Clusters
Cluster plan:
- Hadoop: linux121, linux122, linux123
- MySQL: a single node
- ZooKeeper: linux121, linux122, linux123
- HBase: linux121, linux122, linux123
1.1 Install VMware and the Virtual Machine Cluster
1.1.1 Install VMware (VMware-workstation-full-15.5.5-16285975)
License key:
UY758-0RXEQ-M81WP-8ZM7Z-Y3HDA
1.1.2 Install CentOS 7
1.1.3 Configure a static IP
- vi /etc/sysconfig/network-scripts/ifcfg-ens33
- :wq    # save and quit
- systemctl restart network
- ip addr
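For reference, a minimal ifcfg-ens33 matching the plan above; the gateway and DNS values are assumptions for a typical VMware NAT network and should be adjusted to your environment:
- TYPE=Ethernet
- BOOTPROTO=static
- NAME=ens33
- DEVICE=ens33
- ONBOOT=yes
- IPADDR=192.168.49.121
- NETMASK=255.255.255.0
- GATEWAY=192.168.49.2
- DNS1=114.114.114.114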
- mkdir -p /opt/lagou/software    # directory for uploaded installation packages
- mkdir -p /opt/lagou/servers     # directory where software is installed
- rpm -qa | grep java
- # remove every OpenJDK package listed above, e.g.:
- sudo yum remove java-1.8.0-openjdk
- # upload jdk-8u421-linux-x64.tar.gz to /opt/lagou/software
- chmod 755 jdk-8u421-linux-x64.tar.gz
- # extract it into /opt/lagou/servers
- tar -zxvf jdk-8u421-linux-x64.tar.gz -C /opt/lagou/servers
- cd /opt/lagou/servers
- ll
- vim /etc/profile
- export JAVA_HOME=/opt/lagou/servers/jdk1.8.0_421
- export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar
- export PATH=$PATH:${JAVA_HOME}/bin
- source /etc/profile
- java -version
1.1.4 Install Xmanager
- Connect to 192.168.49.121:22
- Password: 123456
1.1.5 Clone two more machines and configure them
- vi /etc/sysconfig/network-scripts/ifcfg-ens33    # give each clone its own IP
- systemctl restart network
- ip addr
- hostnamectl
- hostnamectl set-hostname linux121    # linux122 / linux123 on the clones
- systemctl status firewalld
- systemctl stop firewalld
- systemctl disable firewalld
Add the host mappings to /etc/hosts on all three machines (vim /etc/hosts):
- 192.168.49.121 linux121
- 192.168.49.122 linux122
- 192.168.49.123 linux123
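The same three mappings must be present on every node. If you would rather not edit each file by hand, something like the following (run from linux121, entering the root password when prompted) pushes the file out:
- scp /etc/hosts root@192.168.49.122:/etc/hosts
- scp /etc/hosts root@192.168.49.123:/etc/hosts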
- Step 1: run ssh-keygen -t rsa on all three machines (linux121, linux122, linux123) to generate a public/private key pair
- ssh-keygen -t rsa
- Step 2: on each of the three machines, copy its public key to every node:
- ssh-copy-id linux121    # copy the public key to linux121
- ssh-copy-id linux122    # copy the public key to linux122
- ssh-copy-id linux123    # copy the public key to linux123
- Step 3: on linux121, distribute the merged authorized_keys file (run from /root/.ssh so that $PWD expands to /root/.ssh):
- scp /root/.ssh/authorized_keys linux121:$PWD
- scp /root/.ssh/authorized_keys linux122:$PWD
- scp /root/.ssh/authorized_keys linux123:$PWD
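A quick way to confirm that passwordless login now works in every direction (run on each node; no password prompt should appear):
- for host in linux121 linux122 linux123; do ssh $host hostname; done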
- # switch to the Aliyun yum mirror (back up the default repo first)
- sudo cp -a /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
- sudo curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
- sudo yum clean all
- sudo yum makecache
- # keep the clocks of the three nodes in sync
- sudo yum install ntpdate
- ntpdate us.pool.ntp.org
- crontab -e
- */1 * * * * /usr/sbin/ntpdate us.pool.ntp.org;
1.2 Install the ZooKeeper, Hadoop, and HBase Clusters, and MySQL
1.2.1 Install the Hadoop cluster
- mkdir -p /opt/lagou/software    # directory for uploaded installation packages
- mkdir -p /opt/lagou/servers     # directory where software is installed
- # upload the Hadoop package to /opt/lagou/software
- # download from https://archive.apache.org/dist/hadoop/common/hadoop-2.9.2/
- # hadoop-2.9.2.tar.gz
- tar -zxvf hadoop-2.9.2.tar.gz -C /opt/lagou/servers
- ll /opt/lagou/servers/hadoop-2.9.2
- yum install -y vim
- # add the Hadoop environment variables
- vim /etc/profile
- ##HADOOP_HOME
- export HADOOP_HOME=/opt/lagou/servers/hadoop-2.9.2
- export PATH=$PATH:$HADOOP_HOME/bin
- export PATH=$PATH:$HADOOP_HOME/sbin
- source /etc/profile
- hadoop version
- cd /opt/lagou/servers/hadoop-2.9.2/etc/hadoop
- vim hadoop-env.sh
- export JAVA_HOME=/opt/lagou/servers/jdk1.8.0_421
- vim core-site.xml    # add the following inside <configuration>
-
- <property>
- <name>fs.defaultFS</name>
- <value>hdfs://linux121:9000</value>
- </property>
-
- <property>
- <name>hadoop.tmp.dir</name>
- <value>/opt/lagou/servers/hadoop-2.9.2/data/tmp</value>
- </property>
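The post does not show hdfs-site.xml (it was probably in one of the missing images), although it is linked into HBase's conf directory later on. A minimal sketch, assuming a replication factor of 3 and the SecondaryNameNode on linux122; both values are assumptions, not from the source:
- vim hdfs-site.xml
-
- <property>
- <name>dfs.replication</name>
- <value>3</value>
- </property>
-
- <property>
- <name>dfs.namenode.secondary.http-address</name>
- <value>linux122:50090</value>
- </property>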
- vim slaves
-
- linux121
- linux122
- linux123
- vim mapred-env.sh
- export JAVA_HOME=/opt/lagou/servers/jdk1.8.0_421
- mv mapred-site.xml.template mapred-site.xml
- vim mapred-site.xml
-
- <property>
- <name>mapreduce.framework.name</name>
- <value>yarn</value>
- </property>
- vi mapred-site.xml
- # add the following JobHistory server configuration to the same file:
- <property>
- <name>mapreduce.jobhistory.address</name>
- <value>linux121:10020</value>
- </property>
-
- <property>
- <name>mapreduce.jobhistory.webapp.address</name>
- <value>linux121:19888</value>
- </property>
- vim yarn-env.sh
- export JAVA_HOME=/opt/lagou/servers/jdk1.8.0_421
- vim yarn-site.xml
-
- <property>
- <name>yarn.resourcemanager.hostname</name>
- <value>linux123</value>
- </property>
-
- <property>
- <name>yarn.nodemanager.aux-services</name>
- <value>mapreduce_shuffle</value>
- </property>
- vi yarn-site.xml
- # add the following log-aggregation configuration to the same file:
- <property>
- <name>yarn.log-aggregation-enable</name>
- <value>true</value>
- </property>
-
- <property>
- <name>yarn.log-aggregation.retain-seconds</name>
- <value>604800</value>
- </property>
- <property>
- <name>yarn.log.server.url</name>
- <value>http://linux121:19888/jobhistory/logs</value>
- </property>
- chown -R root:root /opt/lagou/servers/hadoop-2.9.2
- # install rsync on all three machines
- sudo yum install -y rsync
- touch rsync-script
- vim rsync-script
- #!/bin/bash
- #1 get the number of arguments; exit immediately if none were given
- paramnum=$#
- if((paramnum==0)); then
- echo no params;
- exit;
- fi
- #2 get the file name from the argument
- p1=$1
- file_name=`basename $p1`
- echo fname=$file_name
- #3 get the absolute path of the argument
- pdir=`cd -P $(dirname $p1); pwd`
- echo pdir=$pdir
- #4 get the current user name
- user=`whoami`
- #5 loop over the hosts and rsync the file or directory to each of them
- for((host=121; host<124; host++)); do
- echo ------------------- linux$host --------------
- rsync -rvl $pdir/$file_name $user@linux$host:$pdir
- done
Distribute the script, Hadoop, the JDK, and /etc/profile to the other nodes:
- chmod 777 rsync-script
- ./rsync-script /home/root/bin
- ./rsync-script /opt/lagou/servers/hadoop-2.9.2
- ./rsync-script /opt/lagou/servers/jdk1.8.0_421
- ./rsync-script /etc/profile
On the NameNode (linux121), format HDFS and start the cluster:
- hadoop namenode -format
- ssh localhost
- # whole-cluster start (stop anything that is already running first)
- stop-dfs.sh
- stop-yarn.sh
- sbin/start-dfs.sh
- # if a DataNode fails to come up, clear the temporary data, reformat, and start HDFS again:
- sudo rm -rf /opt/lagou/servers/hadoop-2.9.2/data/tmp/*
- hadoop namenode -format
- sbin/start-dfs.sh
- # note: the NameNode and ResourceManager are on different machines, so do not start YARN on the NameNode;
- # start YARN on the machine where the ResourceManager runs (linux123)
- sbin/start-yarn.sh
- # on linux121, start the JobHistory server:
- sbin/mr-jobhistory-daemon.sh start historyserver
- Web UIs:
- HDFS:
- http://linux121:50070/dfshealth.html#tab-overview
- JobHistory logs:
- http://linux121:19888/jobhistory
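With HDFS, YARN, and the history server running, a quick sanity check is jps on each node. The expected daemons below follow from the configuration above (NameNode from fs.defaultFS, ResourceManager from yarn.resourcemanager.hostname, DataNode/NodeManager from slaves); any SecondaryNameNode placement depends on hdfs-site.xml, which the original does not show:
- jps
- # linux121: NameNode, DataNode, NodeManager, JobHistoryServer
- # linux122: DataNode, NodeManager
- # linux123: ResourceManager, DataNode, NodeManager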
- hdfs dfs -mkdir /wcinput
- cd /root/
- touch wc.txt
- vi wc.txt
- hadoop mapreduce yarn
- hdfs hadoop mapreduce
- mapreduce yarn lagou
- lagou
- lagou
-
- # save and quit
- :wq!
- hdfs dfs -put wc.txt /wcinput
- hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar wordcount /wcinput /wcoutput
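If the job completes, the counts can be read straight back out of HDFS (the part file name below is the usual single-reducer output; check with ls if it differs):
- hdfs dfs -ls /wcoutput
- hdfs dfs -cat /wcoutput/part-r-00000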
1.2.2 Install the ZooKeeper cluster
- # upload zookeeper-3.4.14.tar.gz to /opt/lagou/software and extract it
- tar -zxvf zookeeper-3.4.14.tar.gz -C ../servers/
- # create the zk data directory
- mkdir -p /opt/lagou/servers/zookeeper-3.4.14/data
- # create the zk log directory
- mkdir -p /opt/lagou/servers/zookeeper-3.4.14/data/logs
- # edit the zk configuration file
- cd /opt/lagou/servers/zookeeper-3.4.14/conf
- # rename the sample file
- mv zoo_sample.cfg zoo.cfg
- vim zoo.cfg
-
- # update dataDir
- dataDir=/opt/lagou/servers/zookeeper-3.4.14/data
- # add dataLogDir
- dataLogDir=/opt/lagou/servers/zookeeper-3.4.14/data/logs
- # add the cluster configuration
- ## server.<server id>=<server host>:<peer communication port>:<leader election port>
- server.1=linux121:2888:3888
- server.2=linux122:2888:3888
- server.3=linux123:2888:3888
- # uncomment the auto-purge setting; ZK can automatically clean up old transaction logs and snapshot files, and this parameter sets the purge interval in hours
- autopurge.purgeInterval=1
- cd /opt/lagou/servers/zookeeper-3.4.14/data
- echo 1 > myid
-
- # distribute the installation and adjust the myid value on each node
- cd /opt/lagou/servers/hadoop-2.9.2/etc/hadoop
- ./rsync-script /opt/lagou/servers/zookeeper-3.4.14
-
- # set myid on linux122
- echo 2 >/opt/lagou/servers/zookeeper-3.4.14/data/myid
-
- # set myid on linux123
- echo 3 >/opt/lagou/servers/zookeeper-3.4.14/data/myid
-
- # start the three zk instances one by one
- # start command (run on all three nodes)
- /opt/lagou/servers/zookeeper-3.4.14/bin/zkServer.sh start
- # check the zk status
- /opt/lagou/servers/zookeeper-3.4.14/bin/zkServer.sh status
- # cluster start/stop script
- vim zk.sh
- #!/bin/sh
- echo "start zookeeper server..."
- if(($#==0));then
- echo "no params";
- exit;
- fi
- hosts="linux121 linux122 linux123"
- for host in $hosts
- do
- ssh $host "source /etc/profile; /opt/lagou/servers/zookeeper-3.4.14/bin/zkServer.sh $1"
- done
- chmod 777 zk.sh
- ./zk.sh start
- ./zk.sh stop
- ./zk.sh status
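Once the three instances are up, zkServer.sh status should report Mode: leader on one node and Mode: follower on the other two. As an extra check you can connect with the bundled client, for example:
- /opt/lagou/servers/zookeeper-3.4.14/bin/zkCli.sh -server linux121:2181
- # inside the client:
- ls /
- quit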
1.2.3 Install the HBase cluster (Hadoop and ZooKeeper must be started before HBase)
- # extract hbase-1.3.1-bin.tar.gz into the planned installation directory
- tar -zxvf hbase-1.3.1-bin.tar.gz -C /opt/lagou/servers
- # link Hadoop's core-site.xml and hdfs-site.xml into HBase's conf directory
- ln -s /opt/lagou/servers/hadoop-2.9.2/etc/hadoop/core-site.xml /opt/lagou/servers/hbase-1.3.1/conf/core-site.xml
- ln -s /opt/lagou/servers/hadoop-2.9.2/etc/hadoop/hdfs-site.xml /opt/lagou/servers/hbase-1.3.1/conf/hdfs-site.xml
- cd /opt/lagou/servers/hbase-1.3.1/conf
- vim hbase-env.sh
- # add the Java environment variable
- export JAVA_HOME=/opt/lagou/servers/jdk1.8.0_421
- # use the external zk cluster instead of the one HBase manages itself
- export HBASE_MANAGES_ZK=FALSE
-
- vim hbase-site.xml
- <configuration>
-
- <property>
- <name>hbase.rootdir</name>
- <value>hdfs://linux121:9000/hbase</value>
- </property>
-
- <property>
- <name>hbase.cluster.distributed</name>
- <value>true</value>
- </property>
-
- <property>
- <name>hbase.zookeeper.quorum</name>
- <value>linux121:2181,linux122:2181,linux123:2181</value>
- </property>
- </configuration>
- vim regionservers
- linux121
- linux122
- linux123
- vim backup-masters    # linux122 becomes the standby HMaster
- linux122
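The post does not show distributing HBase to the other nodes, but every RegionServer needs the same installation and configuration. A minimal sketch reusing the rsync-script from the Hadoop section (an assumption; scp -r works equally well):
- cd /opt/lagou/servers/hadoop-2.9.2/etc/hadoop
- ./rsync-script /opt/lagou/servers/hbase-1.3.1
- ./rsync-script /etc/profile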
- vim /etc/profile
- export HBASE_HOME=/opt/lagou/servers/hbase-1.3.1
- export PATH=$PATH:$HBASE_HOME/bin
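With Hadoop and ZooKeeper already running (see the note in the heading above), HBase can be started from linux121; linux122 then comes up as the standby master from backup-masters. A short verification sketch; the HMaster web UI listens on port 16010 in HBase 1.x:
- source /etc/profile
- start-hbase.sh
- jps    # expect HMaster and/or HRegionServer on each node
- hbase shell
- # in the shell: status, then list
- # web UI: http://linux121:16010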
The missing images are no great loss; this post is just an example and all of the steps are here.
Very important:
If you later need to integrate Spark, Flink, or Hive, find compatible versions in advance; version matching is a common pain point of deploying the vanilla Apache distributions.
If your servers have enough resources, it is recommended to deploy with CDH instead.