CentOS 7.4 Hadoop 2.6 Installation and Deployment Guide, V1.3

Published 2018-02-08, source: oschina
###################################################################
#######  CentOS 7.4 Hadoop 2.6 install guide, V1.3         #######
#######  author:   li_chunli                               #######
#######  datetime: 2017.12.22 18:44                        #######
#######  note: added system resource limits and NTP setup  #######
###################################################################
=============================================
Goal:
Build a Hadoop 2.6 environment on CentOS 7.4.
Use 1 machine as the Hadoop 2.6 HDFS NameNode.
Use 3 machines as Hadoop 2.6 HDFS DataNodes.

Result:
Once the setup is complete, HDFS can be used to create, delete, upload, and download files.
=============================================
1. Prerequisites
1.1 OS version; a minimal install is enough
[root@NameNode ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
[root@NameNode ~]#
1.2 OS time zone: Asia/Shanghai (the cluster time synchronization set up later depends on it)
1.3 Make sure every node can reach the public Internet
[root@NameNode ~]# ping www.oracle.com
[root@NameNode ~]# ping hadoop.apache.org
1.4 Nodes and their roles at a glance
Hadoop NameNode:  172.16.10.103
Hadoop DataNode1: 172.16.10.93
Hadoop DataNode2: 172.16.10.94
Hadoop DataNode3: 172.16.10.102
1.5 NameNode disk status
[hadoop@NameNode ~]$ df -hT
Filesystem              Type      Size  Used  Avail Use% Mounted on
/dev/mapper/centos-root xfs        50G  4.8G    46G  10% /
devtmpfs                devtmpfs  7.9G     0   7.9G   0% /dev
tmpfs                   tmpfs     7.9G     0   7.9G   0% /dev/shm
tmpfs                   tmpfs     7.9G  9.1M   7.9G   1% /run
tmpfs                   tmpfs     7.9G     0   7.9G   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  161M   854M  16% /boot
/dev/mapper/centos-home xfs       441G   39M   441G   1% /home
tmpfs                   tmpfs     1.6G  8.0K   1.6G   1% /run/user/1000
tmpfs                   tmpfs     1.6G   12K   1.6G   1% /run/user/42
tmpfs                   tmpfs     1.6G     0   1.6G   0% /run/user/1001
[hadoop@NameNode ~]$
1.6 NameNode memory status
[hadoop@NameNode ~]$ free -m
              total    used    free  shared  buff/cache  available
Mem:          16047    1282   12830       9        1933      14394
Swap:          8063       0    8063
[hadoop@NameNode ~]$
1.7 DataNode disk status. On every DataNode in this environment the /home partition is large and will be used to store the HDFS data.
[hadoop@DataNode1 ~]$ df -hT
Filesystem              Type      Size  Used  Avail Use% Mounted on
/dev/mapper/centos-root xfs        50G  5.1G    45G  11% /
devtmpfs                devtmpfs   16G     0    16G   0% /dev
tmpfs                   tmpfs      16G     0    16G   0% /dev/shm
tmpfs                   tmpfs      16G  9.1M    16G   1% /run
tmpfs                   tmpfs      16G     0    16G   0% /sys/fs/cgroup
/dev/sda2               xfs      1014M  199M   816M  20% /boot
/dev/mapper/centos-home xfs       4.0T  201M   4.0T   1% /home   # hadoop will be installed here; this large partition stores the data
tmpfs                   tmpfs     3.2G  8.0K   3.2G   1% /run/user/1000
tmpfs                   tmpfs     3.2G   12K   3.2G   1% /run/user/42
tmpfs                   tmpfs     3.2G     0   3.2G   0% /run/user/1001
[hadoop@DataNode1 ~]$
1.8 DataNode memory status:
[root@DataNode1 ~]# free -m
              total    used    free  shared  buff/cache  available
Mem:          32175    1199   28790       9        2185      30513
Swap:         16127       0   16127
[root@DataNode1 ~]#
1.9 Internet access is all that is needed; no other special software has to be prepared.
=============================================
2.1 On all nodes, disable firewalld/iptables.
Get the cluster running first, then go back and harden security; that order avoids a lot of avoidable traps.
[root@NameNode ~]# systemctl stop firewalld.service
[root@NameNode ~]# systemctl disable firewalld.service
[root@NameNode ~]# systemctl stop iptables.service      # optional: CentOS 7 does not install iptables by default
[root@NameNode ~]# systemctl disable iptables.service   # optional: CentOS 7 does not install iptables by default
2.2 Disable SELinux
[root@NameNode ~]# setenforce 0                                    # take effect immediately
[root@NameNode ~]# echo 'SELINUX=disabled' >> /etc/selinux/config  # disable SELinux permanently (or edit the existing SELINUX= line)
2.3 Repeat the steps above on every node.
=============================================
3. Install a Java environment on every node
[root@NameNode ~]# yum install -y wget
[root@NameNode ~]# jdk_url='http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.rpm'
[root@NameNode ~]# wget --no-cookies --no-check-certificate --header "Cookie:oraclelicense=accept-securebackup-cookie" $jdk_url
[root@NameNode ~]# md5sum jdk-8u151-linux-x64.rpm   # verify file integrity
7f09893e12aadef39e0751ec657cc7d8  jdk-8u151-linux-x64.rpm
[root@NameNode ~]# yum autoremove -y java           # remove the bundled OpenJDK
[root@NameNode ~]# yum localinstall -y jdk-8u151-linux-x64.rpm
3.2 Verify Java
[root@NameNode ~]# java -version   # verify the Java installation
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
[root@NameNode ~]#
3.3 Repeat the steps above on every node.
=============================================
4. Create the hadoop user
4.1 Create an unprivileged hadoop user on every node
[root@NameNode ~]# useradd hadoop && passwd hadoop   # password used here: hadoop
4.2 Verify the new user
[root@NameNode ~]# id hadoop
uid=1001(hadoop) gid=1001(hadoop) groups=1001(hadoop)
[root@NameNode ~]#
4.3 Repeat the steps above on every node.
=============================================
5. Set up the hosts mapping file to simplify host-name/IP resolution within the cluster
5.1 On every node, add the cluster host names (Hadoop's internal scheduling needs them) by appending to /etc/hosts:
[root@NameNode ~]# vim /etc/hosts   # append at the end of the file
172.16.10.103 NameNode
172.16.10.93  DataNode1
172.16.10.94  DataNode2
172.16.10.102 DataNode3
5.2 Verify the result:
[root@NameNode ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.10.103 NameNode
172.16.10.93  DataNode1
172.16.10.94  DataNode2
172.16.10.102 DataNode3
5.3 Repeat the steps above on every node.
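The four host entries above can be generated from a single name-to-IP mapping, which keeps /etc/hosts identical on every node. A minimal sketch (a hypothetical helper, not part of the original steps; the final append to /etc/hosts is left as a comment):

```shell
# Hypothetical helper: build the /etc/hosts lines from one mapping so that
# every node gets exactly the same entries.
nodes="NameNode:172.16.10.103 DataNode1:172.16.10.93 DataNode2:172.16.10.94 DataNode3:172.16.10.102"
hosts_block=""
for n in $nodes; do
    name=${n%%:*}    # text before the colon: the host name
    ip=${n##*:}      # text after the colon: the IP address
    hosts_block="${hosts_block}${ip} ${name}
"
done
printf '%s' "$hosts_block"
# On each node you would then append it:  printf '%s' "$hosts_block" >> /etc/hosts
```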
=============================================
6. Configure passwordless SSH logins across the cluster
6.1 Generate a key pair and copy the public key to every node
[root@NameNode ~]# su - hadoop
[hadoop@NameNode ~]$ ssh-keygen -t rsa   # press Enter at every prompt
[hadoop@NameNode ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@NameNode
[hadoop@NameNode ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@DataNode1
[hadoop@NameNode ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@DataNode2
[hadoop@NameNode ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@DataNode3
[hadoop@NameNode ~]$ chmod 0600 ~/.ssh/authorized_keys
6.2 Verify passwordless login:
[hadoop@NameNode ~]$ ssh NameNode
[hadoop@NameNode ~]$ ifconfig
[hadoop@NameNode ~]$ logout       # leave the NameNode session
[hadoop@NameNode ~]$ ssh DataNode1
[hadoop@DataNode1 ~]$ ifconfig
[hadoop@DataNode1 ~]$ logout      # leave the DataNode1 session
[hadoop@NameNode ~]$ ssh DataNode2
[hadoop@DataNode2 ~]$ ifconfig
[hadoop@DataNode2 ~]$ logout      # leave the DataNode2 session
[hadoop@NameNode ~]$ ssh DataNode3
[hadoop@DataNode3 ~]$ ifconfig
[hadoop@DataNode3 ~]$ logout      # leave the DataNode3 session
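The per-node key distribution and login checks can be collapsed into one loop. A dry-run sketch (hypothetical, not part of the original steps; echo prints each command instead of running it, so nothing here needs the live cluster; drop the echo to actually execute):

```shell
# Dry run: print the key-distribution and login-check command for each node.
for host in NameNode DataNode1 DataNode2 DataNode3; do
    echo "ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@${host}"
    echo "ssh ${host} hostname"   # a passwordless login should print the host name
done
```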
Leave the hadoop user's shell:
[hadoop@NameNode ~]$ logout
[root@NameNode ~]#
6.3 Repeat the steps above on every node.
=============================================
7. Download and unpack hadoop on the NameNode node
7.1
[root@NameNode ~]# su - hadoop
[hadoop@NameNode ~]$ wget http://mirror.rise.ph/apache/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz
[hadoop@NameNode ~]$ md5sum hadoop-2.6.5.tar.gz
967c24f3c15fcdd058f34923e92ce8ac  hadoop-2.6.5.tar.gz
[hadoop@NameNode ~]$ tar xf hadoop-2.6.5.tar.gz
[hadoop@NameNode ~]$ mv hadoop-2.6.5 hadoop
[hadoop@NameNode ~]$
7.2 The other nodes do not need this step.
=============================================
8. Configuration on the NameNode node. This part is critical: whether hadoop runs at all depends on it!
8.1 Configure hadoop's core-site.xml as follows:
[hadoop@NameNode ~]$ cp ~/hadoop/etc/hadoop/core-site.xml ~/hadoop/etc/hadoop/core-site.xml.install   # back up the original config file
[hadoop@NameNode ~]$ > ~/hadoop/etc/hadoop/core-site.xml
[hadoop@NameNode ~]$ vim ~/hadoop/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://NameNode:9000/</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/data/</value>
    </property>
</configuration>
[hadoop@NameNode hadoop]$
# NOTE: the host name here must be NameNode. This matters enough to say three times.
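As an alternative to editing the file in vim, the same core-site.xml can be written non-interactively with a here-document, which makes the step scriptable. A sketch, writing to a temporary file rather than the real ~/hadoop/etc/hadoop/core-site.xml:

```shell
# Write core-site.xml from a here-document (temp path here; on the cluster
# the target would be ~/hadoop/etc/hadoop/core-site.xml).
target=$(mktemp)
cat > "$target" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://NameNode:9000/</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/data/</value>
    </property>
</configuration>
EOF
grep -c '<property>' "$target"   # quick sanity check: expect 2
```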
8.2 Configure the HDFS config file as follows:
[hadoop@NameNode ~]$ cp ~/hadoop/etc/hadoop/hdfs-site.xml ~/hadoop/etc/hadoop/hdfs-site.xml.install   # back up the original config file
[hadoop@NameNode ~]$ > ~/hadoop/etc/hadoop/hdfs-site.xml
[hadoop@NameNode ~]$ vim ~/hadoop/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.data.dir</name>
        <value>/home/hadoop/hadoop/dfs/name/data</value>
        <final>true</final>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>file:/home/hadoop/hadoop/dfs/name</value>
        <final>true</final>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>
[hadoop@NameNode ~]$
# replication factor: 2 copies of each block
# data storage path: /home/hadoop/hadoop/dfs/name/data
8.3 Configure mapred: create the file mapred-site.xml with the following content:
[hadoop@NameNode ~]$ vim ~/hadoop/etc/hadoop/mapred-site.xml
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>NameNode:9001</value>
    </property>
</configuration>
[hadoop@NameNode ~]$
# NOTE: the host name here must be NameNode. This matters enough to say three times.
8.4 Configure the Hadoop runtime environment
8.4.1 Find where the JDK is installed
[root@NameNode hadoop]# rpm -qa | grep -i jdk   # locate JAVA_HOME
jdk1.8-1.8.0_151-fcs.x86_64
[root@NameNode hadoop]# rpm -ql jdk1.8-1.8.0_151-fcs.x86_64 | more
/usr/java/jdk1.8.0_151/bin   # so JAVA_HOME is /usr/java/jdk1.8.0_151/
/usr/java/jdk1.8.0_151/bin/java
/usr/java/jdk1.8.0_151/bin/javac
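The lookup above can be captured in a small helper that derives JAVA_HOME from the full path of the java binary. This is a hypothetical convenience, not part of the original steps; on a live system the input would be `$(readlink -f "$(which java)")`, while a sample path is used here:

```shell
# Hypothetical helper: strip the trailing /bin/java (or /jre/bin/java)
# component from a java binary path to obtain JAVA_HOME.
java_home_from_binary() {
    case "$1" in
        */jre/bin/java) printf '%s\n' "${1%/jre/bin/java}" ;;
        */bin/java)     printf '%s\n' "${1%/bin/java}" ;;
        *)              return 1 ;;
    esac
}
java_home_from_binary /usr/java/jdk1.8.0_151/bin/java
```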
8.4.2 Write the JDK path into the Hadoop environment script
[hadoop@NameNode ~]$ vim ~/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_151/
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
export HADOOP_CONF_DIR=/home/hadoop/hadoop/etc/hadoop/
[hadoop@NameNode ~]$
# JAVA_HOME must be the actual JDK installation path
# HADOOP_CONF_DIR must point at the actual hadoop configuration directory
8.5 The other nodes do not need this step.
=============================================
9. Configure the cluster so the NameNode knows its DataNodes
9.1 Set the SecondaryNameNode to this machine
[hadoop@NameNode ~]$ vim ~/hadoop/etc/hadoop/master
NameNode
[hadoop@NameNode ~]$
(Note: as the startup output in section 12 shows, Hadoop 2.6 appears to place the SecondaryNameNode via dfs.namenode.secondary.http-address, which defaults to 0.0.0.0, so this file is largely informational.)
9.2 Tell the NameNode which DataNodes it has
[hadoop@NameNode ~]$ > ~/hadoop/etc/hadoop/slaves
[hadoop@NameNode ~]$ vim ~/hadoop/etc/hadoop/slaves
DataNode1
DataNode2
DataNode3
[hadoop@NameNode hadoop]$
9.3 The other nodes do not need this step.
=============================================
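The master and slaves files edited in section 9 can also be generated non-interactively from a single node list, so the two files cannot drift apart. A sketch; a temporary directory stands in for ~/hadoop/etc/hadoop/:

```shell
# Generate the master and slaves files from one node list.
conf_dir=$(mktemp -d)                                   # stand-in for ~/hadoop/etc/hadoop/
echo "NameNode" > "$conf_dir/master"                    # SecondaryNameNode host
printf '%s\n' DataNode1 DataNode2 DataNode3 > "$conf_dir/slaves"   # one DataNode per line
cat "$conf_dir/slaves"
```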
10. Push the NameNode's program and configuration to each DataNode node
In this environment the /home partition on each DataNode is very large, and the hadoop user's home directory lives on that partition.
Run the following to push the NameNode's program and configuration to the DataNodes:
[hadoop@NameNode ~]$ scp -r ~/hadoop DataNode1:~/
[hadoop@NameNode ~]$ scp -r ~/hadoop DataNode2:~/
[hadoop@NameNode ~]$ scp -r ~/hadoop DataNode3:~/
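The three scp commands can be written as one loop. Shown here as a dry run (echo prints each command instead of executing it; drop the echo on the real cluster):

```shell
# Dry run: print one scp command per DataNode.
for host in DataNode1 DataNode2 DataNode3; do
    echo scp -r '~/hadoop' "${host}:~/"
done
```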
=============================================
11. On the NameNode node, format the HDFS file system:
[hadoop@NameNode ~]$ ~/hadoop/bin/hdfs namenode -format cluster_test
************************************************************/
17/12/21 18:32:37 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/12/21 18:32:37 INFO namenode.NameNode: createNameNode [-format, cluster_test]
Formatting using clusterid: CID-7f55fd7c-e3ac-429a-985f-c7652158a219
17/12/21 18:32:38 INFO namenode.FSNamesystem: No KeyProvider found.
17/12/21 18:32:38 INFO namenode.FSNamesystem: fsLock is fair:true
17/12/21 18:32:38 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/12/21 18:32:38 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/12/21 18:32:38 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/12/21 18:32:38 INFO blockmanagement.BlockManager: The block deletion will start around 2017 十二月 21 18:32:38
17/12/21 18:32:38 INFO util.GSet: Computing capacity for map BlocksMap
17/12/21 18:32:38 INFO util.GSet: VM type= 64-bit
17/12/21 18:32:38 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/12/21 18:32:38 INFO util.GSet: capacity= 2^21 = 2097152 entries
17/12/21 18:32:38 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/12/21 18:32:38 INFO blockmanagement.BlockManager: defaultReplication= 2
17/12/21 18:32:38 INFO blockmanagement.BlockManager: maxReplication= 512
17/12/21 18:32:38 INFO blockmanagement.BlockManager: minReplication= 1
17/12/21 18:32:38 INFO blockmanagement.BlockManager: maxReplicationStreams= 2
17/12/21 18:32:38 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/12/21 18:32:38 INFO blockmanagement.BlockManager: encryptDataTransfer = false
17/12/21 18:32:38 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
17/12/21 18:32:38 INFO namenode.FSNamesystem: fsOwner= hadoop (auth:SIMPLE)
17/12/21 18:32:38 INFO namenode.FSNamesystem: supergroup = supergroup
17/12/21 18:32:38 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/12/21 18:32:38 INFO namenode.FSNamesystem: HA Enabled: false
17/12/21 18:32:38 INFO namenode.FSNamesystem: Append Enabled: true
17/12/21 18:32:38 INFO util.GSet: Computing capacity for map INodeMap
17/12/21 18:32:38 INFO util.GSet: VM type= 64-bit
17/12/21 18:32:38 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/12/21 18:32:38 INFO util.GSet: capacity= 2^20 = 1048576 entries
17/12/21 18:32:38 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/12/21 18:32:38 INFO util.GSet: Computing capacity for map cachedBlocks
17/12/21 18:32:38 INFO util.GSet: VM type= 64-bit
17/12/21 18:32:38 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/12/21 18:32:38 INFO util.GSet: capacity= 2^18 = 262144 entries
17/12/21 18:32:38 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/12/21 18:32:38 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/12/21 18:32:38 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
17/12/21 18:32:38 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/12/21 18:32:38 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/12/21 18:32:38 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/12/21 18:32:38 INFO util.GSet: VM type= 64-bit
17/12/21 18:32:38 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/12/21 18:32:38 INFO util.GSet: capacity= 2^15 = 32768 entries
17/12/21 18:32:38 INFO namenode.NNConf: ACLs enabled? false
17/12/21 18:32:38 INFO namenode.NNConf: XAttrs enabled? true
17/12/21 18:32:38 INFO namenode.NNConf: Maximum size of an xattr: 16384
17/12/21 18:32:40 INFO namenode.FSImage: Allocated new BlockPoolId: BP-900658624-127.0.0.1-1513852360055
17/12/21 18:32:40 INFO common.Storage: Storage directory /home/hadoop/hadoop/dfs/name has been successfully formatted.
17/12/21 18:32:40 INFO namenode.FSImageFormatProtobuf: Saving image file /home/hadoop/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/12/21 18:32:40 INFO namenode.FSImageFormatProtobuf: Image file /home/hadoop/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
17/12/21 18:32:40 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/12/21 18:32:40 INFO util.ExitUtil: Exiting with status 0
17/12/21 18:32:40 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at NameNode/127.0.0.1
************************************************************/
[hadoop@NameNode ~]$
# As you can see, there are no ERROR-level messages.
=============================================
12. Start hadoop
12.1 Start hadoop
[hadoop@NameNode ~]$ ~/hadoop/sbin/start-all.sh   # start hadoop
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [NameNode]
NameNode: starting namenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-namenode-NameNode.localdomain.out
DataNode3: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-NameNode.localdomain.out
DataNode1: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-NameNode.out
DataNode2: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-NameNode.localdomain.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-NameNode.localdomain.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-resourcemanager-NameNode.localdomain.out
DataNode1: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-NameNode.out
DataNode3: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-NameNode.localdomain.out
DataNode2: starting nodemanager, logging to /home/hadoop/hadoop/logs/yarn-hadoop-nodemanager-NameNode.localdomain.out
[hadoop@NameNode ~]$
[hadoop@NameNode ~]$ jps   # Java processes
21601 NameNode
21971 ResourceManager
21803 SecondaryNameNode
22237 Jps
[hadoop@NameNode ~]$
12.2 Check which ports the services are listening on
[hadoop@NameNode ~]$ netstat -tnlp | grep java
tcp    0   0 0.0.0.0:50070        0.0.0.0:*   LISTEN   21601/java
tcp    0   0 172.16.10.103:9000   0.0.0.0:*   LISTEN   21601/java   # port 9000 is listening as expected
tcp    0   0 0.0.0.0:50090        0.0.0.0:*   LISTEN   21803/java
tcp6   0   0 :::8088              :::*        LISTEN   21971/java
tcp6   0   0 :::8030              :::*        LISTEN   21971/java
tcp6   0   0 :::8031              :::*        LISTEN   21971/java
tcp6   0   0 :::8032              :::*        LISTEN   21971/java
tcp6   0   0 :::8033              :::*        LISTEN   21971/java
[hadoop@NameNode ~]$
12.3 Stop hadoop
[hadoop@NameNode ~]$ ~/hadoop/sbin/stop-all.sh   # stop hadoop
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [NameNode]
NameNode: stopping namenode
DataNode3: stopping datanode
DataNode2: stopping datanode
DataNode1: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
DataNode1: stopping nodemanager
DataNode3: stopping nodemanager
DataNode2: stopping nodemanager
DataNode1: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
DataNode3: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
DataNode2: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop
[hadoop@NameNode ~]$
[hadoop@NameNode ~]$ jps
23187 Jps
[hadoop@NameNode ~]$
=============================================
13. Verify that HDFS works:
The following demonstrates uploading, downloading, viewing, and deleting files, plus creating and deleting directories.
[hadoop@NameNode ~]$ ~/hadoop/sbin/start-all.sh      # start hadoop
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -df -h   # check HDFS capacity
Filesystem              Size  Used  Available  Use%
hdfs://NameNode:9000  11.8 T  12 K     11.8 T    0%
[hadoop@NameNode ~]$
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -ls /                   # list the HDFS root directory
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -mkdir /test_directory  # create a directory in HDFS
[hadoop@NameNode ~]$ echo 'Hello HDFS!' > /tmp/test_file
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -put /tmp/test_file /test_directory   # upload a local file to HDFS
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -cat /test_directory/test_file        # print a file stored in HDFS
Hello HDFS!
[hadoop@NameNode ~]$ rm -rf /tmp/test_file
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -get /test_directory/test_file /tmp/  # download a file from HDFS to the local disk
[hadoop@NameNode ~]$ cat /tmp/test_file
Hello HDFS!
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -ls /test_directory/   # delete a file in HDFS
Found 1 items
-rw-r--r--   1 hadoop supergroup   12 2017-12-21 15:16 /test_directory/test_file
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -rm -f /test_directory/test_file
Deleted /test_directory/test_file
[hadoop@NameNode ~]$
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -ls /test_directory/
[hadoop@NameNode ~]$
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -ls /                  # delete a directory in HDFS
Found 1 items
drwxr-xr-x   - hadoop supergroup    0 2017-12-21 15:20 /test_directory
[hadoop@NameNode ~]$
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -rm -r -f /test_directory
17/12/21 15:21:34 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /test_directory
[hadoop@NameNode ~]$ ~/hadoop/bin/hadoop fs -ls /
=============================================
14. System tuning: raise resource limits
14.1 Check the current limits on each node
[root@NameNode ~]# ulimit -Sn   # soft limit on open file descriptors
1024
[root@NameNode ~]# ulimit -Hn   # hard limit on open file descriptors
4096
[root@NameNode ~]#
14.2 Raise the limits on each node
[root@NameNode ~]# ulimit -n 10000
[root@NameNode ~]# ulimit -Sn   # soft limit on open file descriptors
10000
[root@NameNode ~]# ulimit -Hn   # hard limit on open file descriptors
10000
[root@NameNode ~]#
14.3 Make the limits persistent (ulimit -n only affects the current shell; limits.conf is a PAM setting, not a kernel boot parameter)
[root@NameNode ~]# vim /etc/security/limits.conf   # append at the end of the file
* soft nofile 10000
* hard nofile 10000
14.4 Reboot and verify the settings
[root@NameNode ~]# reboot
[root@NameNode ~]# ulimit -Sn   # soft limit on open file descriptors
10000
[root@NameNode ~]# ulimit -Hn   # hard limit on open file descriptors
10000
[root@NameNode ~]#
14.5 Repeat the steps above on every node.
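A small helper can make the before/after comparison explicit: given a current limit and the required value, report PASS or FAIL. A hypothetical sketch using the numbers from this section:

```shell
# Hypothetical check: does a limit meet the required minimum?
check_limit() {   # usage: check_limit <current> <required>
    if [ "$1" -ge "$2" ]; then echo PASS; else echo FAIL; fi
}
check_limit 1024 10000    # the CentOS default soft limit: FAIL
check_limit 10000 10000   # after the change in 14.2: PASS
```

On a live node the first argument would come from `ulimit -Sn` or `ulimit -Hn`.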
=============================================
15. System tuning: set up an NTP time-synchronization server
15.1 On the NameNode node, install the NTP packages
[root@NameNode ~]# yum install -y ntp
[root@NameNode ~]# yum install -y ntpdate
[root@NameNode ~]# ntpdate -u cn.pool.ntp.org   # sync once against a public pool server
[root@NameNode ~]# date                         # show the current time
Fri Dec 22 10:24:07 CST 2017
[root@NameNode ~]#
15.2 On the NameNode node, edit the configuration file
[root@NameNode ~]# cp /etc/ntp.conf /etc/ntp.conf.install
[root@NameNode ~]# > /etc/ntp.conf     # empty the file
[root@NameNode ~]# vim /etc/ntp.conf   # write the new contents
driftfile /var/lib/ntp/drift
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
# allow machines on the internal 172.16.10.0/24 network to sync from this host
restrict 172.16.10.0 mask 255.255.255.0 nomodify notrap
# allow the upstream time servers to adjust this machine's clock
restrict time1.aliyun.com nomodify notrap noquery
restrict ntp1.aliyun.com nomodify notrap noquery
# upstream ntp servers to use
server time1.aliyun.com
server ntp1.aliyun.com
# fall back to the local clock when the upstream servers are unreachable
server 127.127.1.0
fudge 127.127.1.0 stratum 10
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
[root@NameNode ~]#
15.3 On the NameNode node, start the NTP service
[root@NameNode ~]# /bin/systemctl enable ntpd.service    # start NTP at boot
[root@NameNode ~]# /bin/systemctl restart ntpd.service   # (re)start the NTP service
[root@NameNode ~]# ps -ef | grep -i ntp
root      4569     1  0 11:00 ?  00:00:00 /usr/sbin/ntpd -u ntp:ntp -g
[root@NameNode ~]#
15.4 On the NameNode node, list the upstream time servers
[root@NameNode ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
 time6.aliyun.co 10.137.38.86     2 u   50   64    1   25.753   16.920   0.000
 time5.aliyun.co 10.137.38.86     2 u   49   64    1   27.693   17.074   0.000
*LOCAL(0)        .LOCL.          10 l   48   64    1    0.000    0.000   0.000
[root@NameNode ~]#
15.5 Run the following once on every other node:
[root@DataNode1 ~]# yum install -y ntpdate
[root@DataNode1 ~]# ntpdate NameNode
[root@DataNode1 ~]# date
Fri Dec 22 11:09:13 CST 2017
[root@DataNode1 ~]#
15.6 Verify the time synchronization
On the NameNode node, write a four-line script that prints
each node's clock with sub-second precision:
[hadoop@NameNode ~]$ vim test.sh
ssh NameNode 'date +%s.%N' &
ssh DataNode1 'date +%s.%N' &
ssh DataNode2 'date +%s.%N' &
ssh DataNode3 'date +%s.%N' &
[hadoop@NameNode ~]$
15.7 Run the script:
[hadoop@NameNode ~]$ bash test.sh
[hadoop@NameNode ~]$ 1513913513.057121708
1513913513.039545463
1513913513.097342009
1513913513.093183005
[hadoop@NameNode ~]$
15.8 The four timestamps are within a few tens of milliseconds of each other, so the node clocks are closely synchronized (note that date +%s.%N prints nanosecond resolution).
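The worst-case clock skew can be computed directly from the timestamps that test.sh prints. A sketch using awk, fed here with the four sample values from the run above (the result is roughly 0.058 seconds):

```shell
# Read one timestamp per line on stdin, print max-min in seconds.
max_skew() {
    awk 'NR==1 {min=$1; max=$1}
         $1 < min {min=$1}
         $1 > max {max=$1}
         END {printf "%.6f\n", max-min}'
}
printf '%s\n' 1513913513.057121708 1513913513.039545463 \
              1513913513.097342009 1513913513.093183005 | max_skew
```

On the cluster you would pipe the real output: `bash test.sh 2>/dev/null | max_skew`.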
=============================================
