
Hadoop 3 Environment Deployment: An Authoritative, Hands-On Guide

Date: 2017/5/1 23:07:22  Author: solgle  Source: solgle.com
 
I. Set up the operating system, or a VM-based environment
 
1. Configure the hostname and network
Edit the hostname in /etc/sysconfig/network
Configure the network interface in /etc/sysconfig/network-scripts/ifcfg-eth0
 
Add or modify:
BOOTPROTO=static
ONBOOT=yes
IPADDR=129.16.10.21
NETMASK=255.255.255.0
Restart the network service for the changes to take effect: service network restart
 
Do the same on the other host nodes, configuring IPs 129.16.10.22
and 129.16.10.23 respectively.
 
[root@rac1 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=dataNode1
 
[root@rac1 network-scripts]# vi ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"
HWADDR="00:0C:29:05:80:48"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="0e8a4434-18c4-484f-bfe1-4f39b6b3768b"
IPADDR=129.16.10.21
NETMASK=255.255.255.0
[root@rac1 network-scripts]#
 
[root@rac3 network-scripts]# service network restart
Shutting down interface eth0:                              [  OK  ]
Shutting down interface eth1:                              [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface eth0:                                [  OK  ]
Bringing up interface eth1:  
Determining IP information for eth1... done.                  [  OK  ]
 
2. Configure /etc/hosts on every host
 [root@rac2 network-scripts]# vi /etc/hosts
[root@rac2 network-scripts]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
 
129.16.10.21  dataNode1
129.16.10.22  dataNode2
129.16.10.23  nameNode
 
[root@rac2 network-scripts]#
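To confirm that every name resolves before continuing, a quick check can be run from any node (a minimal sketch; this step is not part of the original transcript):

# Ping each cluster hostname once to verify /etc/hosts is correct
for h in dataNode1 dataNode2 nameNode; do
    ping -c 1 "$h" > /dev/null && echo "$h OK"
done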
 
 
II. Create the user
 
[root@rac3 ~]# mkdir /u01
[root@rac3 ~]# useradd hodp
[root@rac3 ~]# chown hodp -R /u01
[root@rac3 ~]#
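The user and directory are needed on every node, and the hodp account needs a login password before it can be reached over SSH. A minimal sketch of the per-host steps (the passwd step is an assumption; it is not shown above):

# Run as root on dataNode1, dataNode2 and nameNode
mkdir /u01
useradd hodp
passwd hodp            # set a login password for hodp (assumed step)
chown -R hodp /u01     # give hodp ownership of the install directory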
 
 
III. Configure SSH
[hodp@dataNode1 ~]$ mkdir .ssh
[hodp@dataNode1 ~]$ chmod 700 .ssh
[hodp@dataNode1 ~]$ cd .ssh
[hodp@dataNode1 .ssh]$ ssh-keygen -t rsa
 
[hodp@dataNode1 .ssh]$ ssh-keygen -t dsa
 
...
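The elided steps normally authorize the generated public keys and distribute them so that the Hadoop scripts can reach every node without a password. A minimal sketch, assuming the hodp user exists on all three hosts:

# Still in ~/.ssh on dataNode1: authorize our own keys for local logins
cat id_rsa.pub id_dsa.pub >> authorized_keys
chmod 600 authorized_keys
# Push the public keys to the other nodes (appends to their authorized_keys)
ssh-copy-id hodp@dataNode2
ssh-copy-id hodp@nameNode
# Verify that passwordless login now works
ssh hodp@nameNode hostname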
 
IV. Install the Java virtual machine (JDK)
 
[hodp@dataNode1 ~]$ tar -vxf server-jre-8u131-linux-x64.tar.gz
[hodp@dataNode1 ~]$ mv jdk1.8.0_131/ /u01/
[root@dataNode1 jdk1.8.0_131]# vi /etc/profile
## append the following lines to the end of the file
export JAVA_HOME=/u01/jdk1.8.0_131
 
export CLASSPATH=$JAVA_HOME/lib:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
 
export PATH=$JAVA_HOME/bin:$PATH
 
[hodp@dataNode2 jdk1.8.0_131]$ source /etc/profile
[hodp@dataNode2 jdk1.8.0_131]$ java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
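The JDK and the /etc/profile change are needed on all three hosts, not just one. A minimal sketch of distributing them (assuming the same paths everywhere):

# From dataNode1, copy the unpacked JDK to the other nodes
scp -r /u01/jdk1.8.0_131 hodp@dataNode2:/u01/
scp -r /u01/jdk1.8.0_131 hodp@nameNode:/u01/
# Repeat the /etc/profile edit on each node, then confirm:
ssh hodp@nameNode 'source /etc/profile && java -version'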
 
 
V. Download Hadoop and edit its configuration
 
1. Configure core-site.xml, which holds the common I/O settings used by MapReduce and HDFS
[hodp@dataNode1 hadoop]$ pwd
/u01/hadoop-3.0.0-alpha2/etc/hadoop
[hodp@dataNode1 hadoop]$ vi core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 
<configuration>
 
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hodp/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://nameNode:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
 
 
</configuration>
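hadoop.tmp.dir must point at a directory the hodp user can write to on every node; creating it up front avoids permission surprises (a small sketch, not shown in the original):

mkdir -p /home/hodp/tmp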
 
2. Configure hdfs-site.xml, which configures the HDFS daemons: the NameNode, SecondaryNameNode and DataNodes
 
[hodp@dataNode1 hadoop]$ vi hdfs-site.xml
[hodp@dataNode1 hadoop]$ cat  hdfs-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 
<configuration>
 
<property>  
        <name>dfs.nameservices</name>  
        <value>hadoop-cluster1</value>  
    </property>  
    <property>  
        <name>dfs.namenode.secondary.http-address</name>  
        <value>nameNode:50090</value>  
    </property>  
    <property>  
        <name>dfs.namenode.name.dir</name>  
        <value>file:///home/hodp/dfs/name</value>  
    </property>  
    <property>  
        <name>dfs.datanode.data.dir</name>  
        <value>file:///home/hodp/dfs/data</value>  
    </property>  
    <property>  
        <name>dfs.replication</name>  
        <value>2</value>  
    </property>  
    <property>  
        <name>dfs.webhdfs.enabled</name>  
        <value>true</value>  
    </property>  
 
</configuration>
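Likewise, the name and data directories referenced above should be writable by hodp. Hadoop generally creates them during format/startup, but pre-creating them makes permission problems visible early (a sketch, assuming the same home layout on every node):

mkdir -p /home/hodp/dfs/name    # dfs.namenode.name.dir, used on the NameNode
mkdir -p /home/hodp/dfs/data    # dfs.datanode.data.dir, used on each DataNode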
 
 
3. Configure mapred-site.xml, which selects the MapReduce framework and the job history endpoints (the JobTracker and TaskTracker daemons it once configured are gone in Hadoop 2 and later)
 
[hodp@dataNode1 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[hodp@dataNode1 hadoop]$ vi mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 
<configuration>
 
    <property>  
        <name>mapreduce.framework.name</name>  
        <value>yarn</value>  
    </property>  
    <property>  
        <name>mapreduce.jobtracker.http.address</name>  
        <value>nameNode:50030</value>  
    </property>  
    <property>  
        <name>mapreduce.jobhistory.address</name>  
        <value>nameNode:10020</value>  
    </property>  
    <property>  
        <name>mapreduce.jobhistory.webapp.address</name>  
        <value>nameNode:19888</value>  
    </property> 
 
</configuration>
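The two jobhistory addresses above are only served once the history server is running; it is not started by start-dfs.sh or start-yarn.sh. A sketch of starting it with the Hadoop 3 launcher, once the cluster is up (an extra step, not shown in the original):

/u01/hadoop-3.0.0-alpha2/bin/mapred --daemon start historyserver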
 
 
4. Configure yarn-site.xml, the YARN site configuration (ResourceManager and NodeManager settings)
 
[hodp@dataNode1 hadoop]$ vi yarn-site.xml
<?xml version="1.0"?>
<configuration>
 
<!-- Site specific YARN configuration properties -->
 
<property>  
        <name>yarn.nodemanager.aux-services</name>  
        <value>mapreduce_shuffle</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.address</name>  
        <value>nameNode:8032</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.scheduler.address</name>  
        <value>nameNode:8030</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.resource-tracker.address</name>  
        <value>nameNode:8031</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.admin.address</name>  
        <value>nameNode:8033</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.webapp.address</name>  
        <value>nameNode:8088</value>  
    </property>  
 
</configuration>
 
5. Configure the workers file: list the worker hostnames, one per line (a sketch of writing the file follows the list)
 
dataNode1
dataNode2
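A sketch of writing the file in place (the path follows from the pwd shown earlier):

cd /u01/hadoop-3.0.0-alpha2/etc/hadoop
printf 'dataNode1\ndataNode2\n' > workers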
 
 
6. Add the JAVA_HOME setting
Add the JAVA_HOME setting to both hadoop-env.sh and yarn-env.sh:
export JAVA_HOME=/u01/jdk1.8.0_131
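A sketch of appending the setting to both files (paths assumed from the layout above):

cd /u01/hadoop-3.0.0-alpha2/etc/hadoop
echo 'export JAVA_HOME=/u01/jdk1.8.0_131' >> hadoop-env.sh
echo 'export JAVA_HOME=/u01/jdk1.8.0_131' >> yarn-env.sh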
 
 
7. Copy the Hadoop installation directory to the other servers
 
[hodp@dataNode1 u01]$ scp -r  hadoop-3.0.0-alpha2/ dataNode2:/u01/
[hodp@dataNode1 u01]$ scp -r  hadoop-3.0.0-alpha2/ nameNode:/u01/
 
 
VI. Format the file system
[hodp@nameNode bin]$ ./hdfs namenode -format
2017-04-29 13:22:07,291 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   user = hodp
STARTUP_MSG:   host = nameNode/129.16.10.23
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.0.0-alpha2
STARTUP_MSG:   classpath = /u01/hadoop-3.0.0-alpha2/etc/hadoop:/u01/hadoop-3.0.0-alpha2/share/hadoop/common/lib/commons-lang3-3.3.2.jar:/u01/hadoop-3.0.0-alpha2/share/hadoop/common/lib/curator-client-2.7.1.jar:/u01/hadoop-3.0.0-alpha2/share/hadoop/common/lib/json-smart-1.1.1.jar:/u01/hadoop-3.0.0-alpha2/share/hadoop/common/lib/commons-cli-1.2.jar:/u01/hadoop-3.0.0-alpha2/share/hadoop/common/lib/zookeeper-3.4.6.jar:/u01/hadoop-3.0.0-
 
...
 
VII. Startup and environment testing
 
1. A startup error
[hodp@nameNode dfs]$ /u01/hadoop-3.0.0-alpha2/sbin/start-dfs.sh 
Starting namenodes on [nameNode]
Starting datanodes
localhost: datanode is running as process 2396.  Stop it first.
2017-04-29 12:56:39,248 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
 
--- Diagnosing the error
[hodp@nameNode native]$ ldd  libhadoop.so.1.0.0 
./libhadoop.so.1.0.0: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by ./libhadoop.so.1.0.0)
linux-vdso.so.1 =>  (0x00007fff38dff000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f78a2ceb000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f78a2acd000)
libc.so.6 => /lib64/libc.so.6 (0x00007f78a273b000)
/lib64/ld-linux-x86-64.so.2 (0x00007f78a3116000)
[hodp@nameNode native]$ 
[hodp@nameNode native]$ ldd --version
ldd (GNU libc) 2.12
Copyright (C) 2010 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.
[hodp@nameNode native]$
 
Comparing the two outputs shows that the native library requires GLIBC_2.14, while the system ships glibc 2.12.
There are two ways around this. The first is to build glibc 2.14 from source and install it just for Hadoop's use, which is somewhat risky. The second is simply to suppress the warning in the log4j configuration by adding the following line to /u01/hadoop-3.0.0-alpha2/etc/hadoop/log4j.properties:
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
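Note that this only hides the warning; Hadoop still falls back to its built-in Java implementations. To see exactly which native libraries the build can load, Hadoop ships a diagnostic command:

/u01/hadoop-3.0.0-alpha2/bin/hadoop checknative -a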
 
VIII. Test the daemons
 
[hodp@nameNode sbin]$ ./start-yarn.sh 
Starting resourcemanager
Starting nodemanagers
[hodp@nameNode sbin]$
 
 
Check the running services:
[hodp@nameNode sbin]$ jps
4562 Jps
4200 NodeManager
4107 ResourceManager
3756 DataNode
3663 NameNode
[hodp@nameNode sbin]$
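Beyond jps, HDFS itself can confirm how many DataNodes have registered with the NameNode:

/u01/hadoop-3.0.0-alpha2/bin/hdfs dfsadmin -report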
 
 
IX. DataNode startup test
 
[hodp@dataNode2 sbin]$ ./start-dfs.sh 
Starting namenodes on [nameNode]
nameNode: namenode is running as process 3663.  Stop it first.
Starting datanodes
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: WARNING: /u01/hadoop-3.0.0-alpha2/logs does not exist. Creating.
[hodp@dataNode2 sbin]$
[hodp@dataNode2 sbin]$ ./start-yarn.sh 
Starting resourcemanager
Starting nodemanagers
[hodp@dataNode2 sbin]$
 
 
X. Disable the firewall (repeat on every node)
[hodp@dataNode1 sbin]$ chkconfig iptables off
You do not have enough privileges to perform this operation.
[hodp@dataNode1 sbin]$ su - root
Password: 
[root@dataNode1 ~]# chkconfig iptables off
[root@dataNode1 ~]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
 
 
 
XI. Open the web UI
 
http://129.16.10.23:9870/
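In Hadoop 3 the NameNode web UI moved to port 9870 (it was 50070 in Hadoop 2). The ResourceManager UI configured above should likewise be reachable at http://129.16.10.23:8088/.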
 
 
 