
Hive Installation, Configuration, and HWI

Shared on 2016-03-15


1. Installing Hive

  • Environment: CentOS 7
  • Hadoop 2.7.3 already installed
wget https://mirrors.cnnic.cn/apache/hive/hive-2.1.1/apache-hive-2.1.1-bin.tar.gz --no-check-certificate
mkdir -p /usr/local/hadoop
mv apache-hive-2.1.1-bin.tar.gz /usr/local/hadoop
cd /usr/local/hadoop
tar -zxvf apache-hive-2.1.1-bin.tar.gz
mv apache-hive-2.1.1-bin apache-hive-2.1.1

echo 'export HIVE_HOME=/usr/local/hadoop/apache-hive-2.1.1/' >> /etc/profile
echo 'export PATH=$PATH:$HIVE_HOME/bin' >> /etc/profile

source /etc/profile
chown -R hadoop:hadoop /usr/local/hadoop/apache-hive-2.1.1/
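One subtlety in the two echo lines above: they must be single-quoted (or escaped), otherwise the shell expands $PATH and $HIVE_HOME immediately and bakes the current values into /etc/profile instead of a deferred reference. A quick sketch against a throwaway file rather than the real /etc/profile:

```shell
# Demonstrate deferred expansion: single quotes keep $PATH/$HIVE_HOME literal,
# so they are expanded at login time, not when echo runs.
profile=$(mktemp)
echo 'export HIVE_HOME=/usr/local/hadoop/apache-hive-2.1.1/' >> "$profile"
echo 'export PATH=$PATH:$HIVE_HOME/bin' >> "$profile"
# The variable references survive as literal text in the file:
grep -F '$HIVE_HOME' "$profile"
```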

Install MySQL

wget http://dev.mysql.com/get/mysql-community-release-el7-5.noarch.rpm
rpm -ivh mysql-community-release-el7-5.noarch.rpm
yum install mysql-community-server
service mysqld status
service mysqld start
mysql -uroot -p
use mysql; 
update user set password=PASSWORD("hadoop") where user="root";
flush privileges; 
quit 
service mysqld restart 
mysql -uroot -phadoop 
or: mysql -uroot -hmaster -phadoop
If the login succeeds, the MySQL installation is working.
Create the Hive user:
mysql>CREATE USER 'hive' IDENTIFIED BY 'hive'; 
mysql>GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION; 
mysql>GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' IDENTIFIED BY 'hive'; 
mysql>flush privileges; 
Create the Hive database:
mysql>create database hive;
mysql>quit;
service mysqld restart
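The interactive MySQL steps above can also be batched into a single script so the metastore database setup is repeatable; the file name /tmp/init-hive-db.sql below is just an example:

```shell
# Write the user/database setup as one SQL script.
cat > /tmp/init-hive-db.sql <<'EOF'
CREATE USER 'hive' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' IDENTIFIED BY 'hive';
FLUSH PRIVILEGES;
CREATE DATABASE hive;
EOF
# Then feed it to MySQL on the database host (not executed here):
# mysql -uroot -phadoop < /tmp/init-hive-db.sql
```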
cd /usr/local/hadoop/apache-hive-2.1.1/conf
cp hive-default.xml.template hive-site.xml
vi hive-site.xml
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost:3306/hive</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value> <!-- MySQL username -->
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hive</value> <!-- MySQL password -->
    </property>
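A quick way to sanity-check the edit is to grep for the four JDBC property names. The snippet below runs against a throwaway sample so it works anywhere; in practice point conf at the real /usr/local/hadoop/apache-hive-2.1.1/conf/hive-site.xml:

```shell
# Build a minimal sample hive-site.xml, then verify all four JDBC properties.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<configuration>
  <property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:mysql://localhost:3306/hive</value></property>
  <property><name>javax.jdo.option.ConnectionDriverName</name><value>com.mysql.jdbc.Driver</value></property>
  <property><name>javax.jdo.option.ConnectionUserName</name><value>hive</value></property>
  <property><name>javax.jdo.option.ConnectionPassword</name><value>hive</value></property>
</configuration>
EOF
for p in ConnectionURL ConnectionDriverName ConnectionUserName ConnectionPassword; do
  grep -q "javax.jdo.option.$p" "$conf" || { echo "$p missing"; exit 1; }
done
echo "all JDBC properties present"
```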
  • Install the MySQL JDBC connector
cd /usr/local/hadoop/apache-hive-2.1.1/lib
wget http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar
Distribute to the slave nodes (do not run this step):
scp -r /usr/local/hadoop/apache-hive-2.1.1 slave1.whr.com:/usr/local/hadoop/
scp -r /usr/local/hadoop/apache-hive-2.1.1 slave2.whr.com:/usr/local/hadoop/
scp -r /usr/local/hadoop/apache-hive-2.1.1 slave3.whr.com:/usr/local/hadoop/
  • Configure environment variables on the slaves… (do not run this step)

  • Start Hive

cd /usr/local/hadoop/apache-hive-2.1.1/bin
./hive --service metastore &
./hive --service hiveserver2 &
# start the Hive CLI
./hive
hive>show tables;
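Because the metastore and hiveserver2 are backgrounded with &, the CLI can race ahead of them. A small helper (bash-only, via /dev/tcp; the ports 9083 for the metastore and 10000 for hiveserver2 are the usual defaults and are an assumption about an unmodified config) waits for a port to open before continuing:

```shell
# Poll a TCP port until it accepts connections, or give up after $3 tries
# (default 30, one second apart). Requires bash for /dev/tcp.
wait_for_port() {
  local host=$1 port=$2 tries=${3:-30} i
  for i in $(seq "$tries"); do
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0   # port is accepting connections
    fi
    sleep 1
  done
  return 1
}
# Typical use (not run here): wait_for_port localhost 9083 && ./hive
```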

Other ways to start

# CLI mode
./hive --service cli
# web UI (HWI)
./hive --service hwi
# JDBC server (HiveServer)
nohup hive --service hiveserver &

2. Common Problems and Fixes

Required table missing : "`DBS`" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.schema.autoCreateTables"

Fix:


cd /usr/local/hadoop/apache-hive-2.1.1/bin
./schematool -dbType mysql -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/apache-hive-2.1.1/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-….jar!/org/slf4j/impl/StaticLoggerBinder.class]

Fix:

rm /usr/local/hadoop/apache-hive-2.1.1/lib/log4j-slf4j-impl-2.4.1.jar
Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D

Fix:
Create a directory:

cd /usr/local/hadoop/apache-hive-2.1.1/
mkdir tmpdir

vi conf/hive-site.xml

  <property>
    <name>hive.querylog.location</name>
    <value>${system:java.io.tmpdir}/${system:user.name}</value>
    <description>Location of Hive run time structured log file</description>
  </property>

Then change every occurrence of

${system:java.io.tmpdir}/${system:user.name}

in the file, for example:
  <property>
    <name>hive.querylog.location</name>
    <value>/usr/local/hadoop/apache-hive-2.1.1/tmpdir</value>
    <description>Location of Hive run time structured log file</description>
  </property>
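The same substitution can be done in one sed pass instead of hand-editing; shown here on a throwaway copy, while in practice you would run it against conf/hive-site.xml (keep a backup first):

```shell
# Replace every ${system:...} tmp-dir reference with the fixed path.
f=$(mktemp)
echo '<value>${system:java.io.tmpdir}/${system:user.name}</value>' > "$f"
sed -i 's|${system:java.io.tmpdir}/${system:user.name}|/usr/local/hadoop/apache-hive-2.1.1/tmpdir|g' "$f"
cat "$f"   # the value now points at /usr/local/hadoop/apache-hive-2.1.1/tmpdir
```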

3. Basic Operations

hive> show tables;    # list all tables
hive> dfs -ls /;      # list the HDFS root directory

4. Installing Hive HWI

wget http://apache.fayea.com/hive/hive-2.1.1/apache-hive-2.1.1-src.tar.gz
tar -xzf apache-hive-2.1.1-src.tar.gz
cd apache-hive-2.1.1-src/hwi
jar cfM hive-hwi-2.1.1.war -C web .
cp hive-hwi-2.1.1.war /usr/local/hadoop/apache-hive-2.1.1/lib
wget https://www.apache.org/dist/ant/binaries/apache-ant-1.10.1-bin.tar.gz
tar -xzf apache-ant-1.10.1-bin.tar.gz
cp apache-ant-1.10.1/lib/ant.jar ${HIVE_HOME}/lib
chmod 777 ${HIVE_HOME}/lib/ant.jar
vim /usr/local/hadoop/apache-hive-2.1.1/conf/hive-site.xml
  <property>
    <name>hive.hwi.war.file</name>
    <value>lib/hive-hwi-2.1.1.war</value>
    <description>This sets the path to the HWI war file, relative to ${HIVE_HOME}. </description>
  </property>
ln -s $JAVA_HOME/lib/tools.jar $HIVE_HOME/lib/
cd /usr/local/hadoop/apache-hive-2.1.1/bin
./hive --service hwi 2>/tmp/hwi2.log &

Open:
http://192.10.200.81:9999/hwi
Nearly every page needs to be refreshed a few times before it renders.

Further troubleshooting
Reference: http://www.cnblogs.com/xinlingyoulan/p/6025692.html
Official docs: https://cwiki.apache.org/confluence/display/Hive/HiveWebInterface

