Setting up Hadoop 2.7.3 + Hive 2.1.1 with MySQL (Configuring Hive) (Part 3)

2017-12-30 11:33:26 · Source: oschina · Author: xiaomin0322


Continued from the previous part:

Setting up Hadoop 2.7.3 + Hive 2.1.1 with MySQL (Configuring Hive) (Part 2)

Preparation: download the latest MySQL Connector/J from:

https://dev.mysql.com/downloads/connector/j/


Example: download mysql-connector-java-5.1.41.tar.gz

1. Extract the connector archive

1.1 Extract:

[root@localhost Software]# tar xzf mysql-connector-java-5.1.41.tar.gz
[root@localhost Software]# cd mysql-connector-java-5.1.41/

1.2 List the directory contents:

[root@localhost mysql-connector-java-5.1.41]# ll

1.3 Copy the driver JAR into hive/lib:

[root@localhost Software]# cp mysql-connector-java-5.1.41/mysql-connector-java-5.1.41-bin.jar /usr/hive/lib/mysql-connector-java-5.1.41-bin.jar

2. Log in to MySQL and create the database hive_db (note: this name is referenced when configuring hive-site.xml)

2.1 Username: root, password: password. In another terminal, log in to MySQL and create the database hive_db:
[root@localhost hive]# mysql -u root -ppassword

mysql> create database hive_db;
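If Hive will connect to MySQL from another host, or under a dedicated account, that account may also need explicit privileges on hive_db. A sketch for MySQL 5.x; the user, host pattern, and password here are assumptions to adapt to your setup:

```sql
-- Hypothetical grant: adjust user, host pattern, and password before running.
GRANT ALL PRIVILEGES ON hive_db.* TO 'root'@'%' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
```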

3. Edit hive-site.xml. Only the changed properties are listed below; everything else keeps its default value.

<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/usr/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
<property>
  <name>hive.metastore.local</name>
  <value>true</value>
  <description>Use false if a production metastore server is used</description>
</property>
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>
  <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir ${hive.exec.scratchdir}/ is created, with ${hive.scratch.dir.permission}.</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive_db?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore. To use SSL to encrypt/authenticate the connection, provide a database-specific SSL flag in the connection URL, e.g. jdbc:postgresql://myhost/db?ssl=true for PostgreSQL.</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore (user-defined)</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>Username to use against metastore database (user-defined)</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>password</value>
  <description>Password to use against metastore database (user-defined)</description>
</property>
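One caveat on these settings: `hive.metastore.local` has been ignored since Hive 0.10; Hive now decides local vs. remote from `hive.metastore.uris`. If the metastore is later run as a standalone service, a property like the following would point clients at it (the host is an assumption; 9083 is the default metastore port):

```xml
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://localhost:9083</value>
  <description>Thrift URI for clients of a remote metastore; leave unset for an embedded metastore</description>
</property>
```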

4. Initialize the metastore schema with schematool

[root@localhost hive]# schematool -dbType mysql -initSchema

-- on success it prints:


schemaTool completed

5. Start the Hive services

5.1 Start the Hive metastore server:
[root@localhost hive]# hive --service metastore &

-- once the screen stops printing messages, press Ctrl+C to return to the shell

5.2 Check the process list:
[root@localhost hive]# jps

-- the process list now shows an extra entry (RunJar)


51280 Jps
5985 SecondaryNameNode
6226 ResourceManager
45766 DataNode
5753 NameNode
51194 RunJar
6348 NodeManager
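A scripted variant of this check greps the jps listing for the metastore's RunJar entry. A sketch, with the jps output simulated by the listing above (on a live node you would pipe `jps` itself):

```shell
# has_proc: succeed if the named JVM process appears in jps-style output.
has_proc() {
  printf '%s\n' "$1" | awk '{print $2}' | grep -qx "$2"
}

# Simulated `jps` output, taken from the listing above.
jps_out="51280 Jps
5985 SecondaryNameNode
6226 ResourceManager
45766 DataNode
5753 NameNode
51194 RunJar
6348 NodeManager"

has_proc "$jps_out" RunJar && echo "metastore RunJar found"
```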

5.3 If needed, start the Hive remote service (port 10000):


[root@localhost hive]# hive --service hiveserver2 &

(HiveServer1 was removed in Hive 2.x, so the service name is hiveserver2.)

6. Test whether the environment is configured correctly

6.1 Prepare a text file to import: /root/桌面/Test/wc-in/a.txt

Format:

1,h
2,i
3,v
4,e

6.2 After logging in to Hive, test creating a table
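The four-line sample file can be created from the shell before entering Hive. A sketch that writes to /tmp instead of the Desktop path used in step 6.1:

```shell
# Create the comma-delimited sample file (same contents as step 6.1).
mkdir -p /tmp/wc-in
printf '%s\n' 1,h 2,i 3,v 4,e > /tmp/wc-in/a.txt
cat /tmp/wc-in/a.txt
```

With this path, the load statement in step 6.2.2 would become: load data local inpath '/tmp/wc-in/a.txt' into table a;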
[root@localhost hadoop]# hive

6.2.1 Create a table, with comma (,) as the field delimiter:
hive> create table a(id int, name string)
    > row format delimited fields terminated by ',';

-- output:


OK
Time taken: 0.288 seconds

6.2.2 Load the file a.txt:
hive> load data local inpath '/root/桌面/Test/wc-in/a.txt' into table a;

-- output:


Loading data to table default.a
OK
Time taken: 0.763 seconds

6.2.3 Check the result:
hive> select * from a;

-- output:


OK
1 h
2 i
3 v
4 e
Time taken: 0.309 seconds, Fetched: 4 row(s)

6.3 Using dfs commands inside Hive

6.3.1 Check table a's storage path in HDFS:
hive> dfs -ls /usr/hive/warehouse/a;

-- output:


Found 1 items
-rw-r--r-- 1 root supergroup 16 2017-03-08 17:46 /usr/hive/warehouse/a/a.txt

6.3.2 View the file contents:
hive> dfs -cat /usr/hive/warehouse/a/*;

-- output:


1,h
2,i
3,v
4,e

7. Log in to MySQL and inspect the created table
[root@localhost conf]# mysql -u root -ppassword
mysql> use hive_db;
mysql> select TBL_ID, CREATE_TIME, DB_ID, OWNER, TBL_NAME, TBL_TYPE from TBLS;

-- output:


+--------+-------------+-------+-------+----------+---------------+
| TBL_ID | CREATE_TIME | DB_ID | OWNER | TBL_NAME | TBL_TYPE      |
+--------+-------------+-------+-------+----------+---------------+
|     37 |  1488966386 |     1 | root  | a        | MANAGED_TABLE |
+--------+-------------+-------+-------+----------+---------------+
1 row in set (0.03 sec)

8. Inspect the generated file in HDFS (same as step 6.3)

8.1 Check table a's storage path:
[root@localhost hadoop]# hdfs dfs -ls /usr/hive/warehouse/a

-- output:


Found 1 items
-rw-r--r-- 1 root supergroup 16 2017-03-08 17:46 /usr/hive/warehouse/a/a.txt

8.2 View the contents:
[root@localhost hadoop]# hdfs dfs -cat /usr/hive/warehouse/a/*

-- output:


1,h
2,i
3,v
4,e

Troubleshooting common problems:


1. Error when starting Hive


[root@localhost hive]# hive

-- error message:


Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/root/24f1d91f-f32b-47e1-824d-ba26b02bd13e. Name node is in safe mode.

Cause: Hadoop is in safe mode.


-- Solution:

Leave safe mode:


[root@localhost hadoop]# hadoop dfsadmin -safemode leave

-- output:


DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Safe mode is OFF
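As the deprecation notice says, the hdfs command is the modern entry point: `hdfs dfsadmin -safemode get` queries the state and `hdfs dfsadmin -safemode leave` exits it. A sketch of scripting against the reply text; the reply is simulated here, since the real command needs a live NameNode:

```shell
# Reply text as printed by `hdfs dfsadmin -safemode get` (simulated).
reply="Safe mode is OFF"

case "$reply" in
  *"Safe mode is OFF"*) echo "ready" ;;
  *)                    echo "still in safe mode" ;;
esac
```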

2. Error when loading data


hive> load data local inpath '/root/桌面/Test/wc-in/a.txt' into table a;

-- error message:


FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask. org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /usr/hive/warehouse/a/a_copy_2.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.

Cause: the Hadoop DataNode is not running.


Solution:


[root@localhost hive]# start-dfs.sh
[root@localhost hive]# jps

-- output:


51152 Jps
5985 SecondaryNameNode
6226 ResourceManager
45766 DataNode
5753 NameNode
6348 NodeManager

Source: http://www.tuicool.com/articles/fQZnAnf
