Hadoop Cluster Setup: HA Mode

2023-12-15 04:50:04

1. Hadoop environment variables

    1. Create the Hadoop install directory on node01:
        mkdir /opt/bigdata

    2. Unpack the Hadoop tarball:
        tar xf hadoop-2.6.5.tar.gz
        mv hadoop-2.6.5 /opt/bigdata/

    3. Configure the Hadoop environment variables:
        vi /etc/profile
            export JAVA_HOME=/usr/java/default
            export HADOOP_HOME=/opt/bigdata/hadoop-2.6.5
            export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

    4. Apply the changes:
        source /etc/profile

    5. Verify:

        In any directory, type hd and press Tab; if it completes to the hdfs command, the installation and configuration succeeded.
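The Tab-completion check can also be scripted. A minimal sketch (the helper name check_on_path is my own, not from the article) that verifies the Hadoop directories actually landed on PATH after sourcing /etc/profile:

```shell
check_on_path() {
  # Succeeds (and says so) if the given directory is already on $PATH.
  case ":$PATH:" in
    *":$1:"*) echo "on PATH: $1" ;;
    *)        echo "missing: $1"; return 1 ;;
  esac
}

# After `source /etc/profile` on node01, both Hadoop dirs should pass:
#   check_on_path /opt/bigdata/hadoop-2.6.5/bin
#   check_on_path /opt/bigdata/hadoop-2.6.5/sbin
```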

    6. Distribute from node01 to the other nodes (node02, node03, node04):

        cd /opt
        scp -r ./bigdata/ node02:`pwd`
        scp -r ./bigdata/ node03:`pwd`
        scp -r ./bigdata/ node04:`pwd`
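The three scp lines differ only in the target host, so they can be generated from a node list. A sketch (the distribute_bigdata helper is my own; it only prints the commands so you can preview before running):

```shell
distribute_bigdata() {
  # Print one scp command per target node; pipe the output to `sh` to run them.
  local dest_dir="$1"; shift
  local node
  for node in "$@"; do
    echo "scp -r ${dest_dir}/bigdata/ ${node}:${dest_dir}"
  done
}

# Preview what would run from node01:
distribute_bigdata /opt node02 node03 node04
```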

2. Install ZooKeeper (node02, node03, node04)

    node02:
        tar xf zookeeper-3.4.6.tar.gz
        mv zookeeper-3.4.6 /opt/bigdata
        cd /opt/bigdata/zookeeper-3.4.6
        cd conf
        cp zoo_sample.cfg zoo.cfg
        vi zoo.cfg
            dataDir=/var/bigdata/hadoop/zk
            server.1=node02:2888:3888
            server.2=node03:2888:3888
            server.3=node04:2888:3888
        mkdir /var/bigdata/hadoop/zk
        echo 1 > /var/bigdata/hadoop/zk/myid
        vi /etc/profile
            export ZOOKEEPER_HOME=/opt/bigdata/zookeeper-3.4.6
            export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin
        . /etc/profile
        cd /opt/bigdata
        scp -r ./zookeeper-3.4.6 node03:`pwd`
        scp -r ./zookeeper-3.4.6 node04:`pwd`
    node03:
        mkdir /var/bigdata/hadoop/zk
        echo 2 > /var/bigdata/hadoop/zk/myid
        *configure the environment variables as on node02
        . /etc/profile
    node04:
        mkdir /var/bigdata/hadoop/zk
        echo 3 > /var/bigdata/hadoop/zk/myid
        *configure the environment variables as on node02
        . /etc/profile
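Each node's myid must match its server.N line in zoo.cfg, which is easy to get wrong by hand. A sketch of the mapping as a function (zk_myid is my own name, mirroring the article's manual echo steps):

```shell
zk_myid() {
  # Map a hostname to its ZooKeeper myid, mirroring server.N in zoo.cfg above.
  case "$1" in
    node02) echo 1 ;;
    node03) echo 2 ;;
    node04) echo 3 ;;
    *) echo "unknown node: $1" >&2; return 1 ;;
  esac
}

# On each of node02~node04 one could then run:
#   mkdir -p /var/bigdata/hadoop/zk
#   zk_myid "$(hostname)" > /var/bigdata/hadoop/zk/myid
```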

    node02~node04:
        zkServer.sh start
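Once all three servers are started, the ensemble can be checked with zkServer.sh status, or with ZooKeeper's four-letter stat command over port 2181. A small sketch (zk_mode_of is my own helper) that extracts the role from a stat reply:

```shell
zk_mode_of() {
  # Reads a ZooKeeper `stat` reply on stdin and prints the Mode value.
  awk -F': ' '/^Mode:/ {print $2}'
}

# Against the live ensemble (client port 2181 per zoo.cfg), e.g.:
#   echo stat | nc node02 2181 | zk_mode_of   # prints leader or follower
```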

3. Hadoop configuration files

    1. Edit hadoop-env.sh:

        vi hadoop-env.sh
            export JAVA_HOME=/usr/java/default

    2. Edit core-site.xml:

        vi core-site.xml

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
</property>

<property>
    <name>ha.zookeeper.quorum</name>
    <value>node02:2181,node03:2181,node04:2181</value>
</property>

    3. Edit hdfs-site.xml:

        vi hdfs-site.xml

<property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
</property>
<property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>node01:8020</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>node02:8020</value>
</property>
<property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>node01:50070</value>
</property>
<property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>node02:50070</value>
</property>

<!-- Where the JournalNodes run, and which disk they write their edits to -->
<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://node01:8485;node02:8485;node03:8485/mycluster</value>
</property>
<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/var/bigdata/hadoop/ha/dfs/jn</value>
</property>

<!-- Failover proxy class and fencing method for HA role switching; we use passwordless SSH -->
<property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
</property>
<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_dsa</value>
</property>

<!-- Enable automatic failover: starts the ZKFC processes -->
<property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
</property>
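All of the hdfs-site.xml entries above share the same property/name/value shape, so when scripting config edits they can be generated rather than typed. A sketch (hadoop_prop is my own helper name):

```shell
hadoop_prop() {
  # Emit one <property> block in the same shape as the entries above.
  printf '<property>\n    <name>%s</name>\n    <value>%s</value>\n</property>\n' "$1" "$2"
}

# Example:
hadoop_prop dfs.ha.automatic-failover.enabled true
```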

    4. Edit slaves:

        vi slaves

node02
node03
node04

Source: https://blog.csdn.net/dongwen000/article/details/134929060