Pseudo-Distributed Installation of HBase

HBase Installation Modes

HBase installation generally falls into three modes: standalone, pseudo-distributed, and fully distributed.

Standalone mode
1> HBase does not use HDFS; it uses only the local filesystem
2> ZooKeeper and HBase run in the same JVM

Distributed mode
– Pseudo-distributed mode
1> All processes run on the same node, but each process runs in its own JVM
2> Mainly suitable for experimentation and testing
– Fully distributed mode
1> Processes run across a cluster of multiple servers
2> The distributed modes depend on HDFS, so a working HDFS cluster must be in place before deploying HBase

Preparing the Linux Environment

Disable the firewall and SELinux

# service iptables stop

# chkconfig iptables off

# vim /etc/sysconfig/selinux

SELINUX=disabled
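
Note that the SELINUX=disabled setting in /etc/sysconfig/selinux only takes effect after a reboot; setenforce 0 switches SELinux to permissive mode immediately. As an optional sanity check (the output shown is what CentOS 6 is expected to print, for reference only):

# setenforce 0

# getenforce
Permissive

# service iptables status
iptables: Firewall is not running.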

 

Configure the hostname and hostname resolution

# vim /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=hbase

# vim /etc/hosts

192.168.244.30 hbase
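
To make the new hostname effective without a reboot and to verify that the hosts entry resolves, the following optional check can be used:

# hostname hbase

# hostname
hbase

# ping -c 1 hbase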

 

Set up passwordless SSH login

# ssh-keygen

Press Enter at every prompt

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.244.30
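
A quick test confirms that passwordless login now works; the command should return the remote hostname without asking for a password:

# ssh root@192.168.244.30 hostname
hbase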

 

Install the JDK

# tar xvf jdk-7u79-linux-x64.tar.gz -C /usr/local/

# vim /etc/profile

export JAVA_HOME=/usr/local/jdk1.7.0_79
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

# source /etc/profile
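
Verify that the JDK is picked up from the new PATH:

# java -version
java version "1.7.0_79"
...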

 

Installing and Configuring Hadoop

Download page: http://hadoop.apache.org/releases.html

Version 2.5.2 is used here.

# wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.5.2/hadoop-2.5.2.tar.gz

# tar xvf hadoop-2.5.2.tar.gz -C /usr/local/

# cd /usr/local/hadoop-2.5.2/

# vim etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/local/jdk1.7.0_79

 

Configure HDFS

# mkdir /usr/local/hadoop-2.5.2/data/

# vim etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.244.30:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-2.5.2/data/tmp</value>
    </property>
</configuration>
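
The fs.defaultFS URI (hdfs://192.168.244.30:8020) is the address that HBase's hbase.rootdir will later have to point at, so it is worth confirming the value Hadoop resolves (optional check):

# bin/hdfs getconf -confKey fs.defaultFS
hdfs://192.168.244.30:8020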

 

# vim etc/hadoop/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
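
dfs.replication is set to 1 because a pseudo-distributed deployment has only a single DataNode. The value Hadoop actually reads can be checked with (optional):

# bin/hdfs getconf -confKey dfs.replication
1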

 

Configure YARN

# mv etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml

# vim etc/hadoop/mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

 

# vim etc/hadoop/yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

 

Start HDFS

Initialize the filesystem

# bin/hdfs namenode -format

...
16/09/25 20:33:02 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
...

The message above indicates that the filesystem was initialized successfully.

 

Start the NameNode and DataNode processes

# sbin/start-dfs.sh

16/10/09 19:35:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hbase]
The authenticity of host 'hbase (192.168.244.30)' can't be established.
RSA key fingerprint is 1a:12:f5:e3:5d:e1:2c:5c:8c:56:52:ba:42:1c:ac:ba.
Are you sure you want to continue connecting (yes/no)? yes
hbase: Warning: Permanently added 'hbase' (RSA) to the list of known hosts.
hbase: starting namenode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-root-namenode-hbase.out
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 1a:12:f5:e3:5d:e1:2c:5c:8c:56:52:ba:42:1c:ac:ba.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: starting datanode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-root-datanode-hbase.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is 1a:12:f5:e3:5d:e1:2c:5c:8c:56:52:ba:42:1c:ac:ba.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-root-secondarynamenode-hbase.out
16/10/09 19:35:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

 

Start YARN

# sbin/start-yarn.sh 

starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.5.2/logs/yarn-root-resourcemanager-hbase.out
localhost: starting nodemanager, logging to /usr/local/hadoop-2.5.2/logs/yarn-root-nodemanager-hbase.out
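
To confirm YARN is up, list the registered NodeManagers; a single-node setup should report exactly one node (output abbreviated). By default the ResourceManager web UI at http://192.168.244.30:8088/ shows the same information.

# bin/yarn node -list
Total Nodes:1
...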

 

Use jps to check whether each process started successfully

# jps

2421 NodeManager
2339 ResourceManager
1924 NameNode
2029 DataNode
2170 SecondaryNameNode
2721 Jps

 

You can also visit http://192.168.244.30:50070/ to check whether HDFS started successfully

[Figure 1: HDFS web UI]
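
A simple read/write test from the HDFS command line also confirms the filesystem is usable (the /test path below is just an example; output abbreviated):

# bin/hdfs dfs -mkdir /test

# bin/hdfs dfs -put etc/hadoop/core-site.xml /test/

# bin/hdfs dfs -ls /test
Found 1 items
-rw-r--r--   1 root supergroup       ... /test/core-site.xml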

 

Installing and Configuring HBase

Download URL: http://mirror.bit.edu.cn/apache/hbase/1.2.3/hbase-1.2.3-bin.tar.gz

Version 1.2.3 is downloaded here.

For the supported HBase/Hadoop version combinations, refer to the official documentation:

http://hbase.apache.org/book/configuration.html#basic.prerequisites

# wget http://mirror.bit.edu.cn/apache/hbase/1.2.3/hbase-1.2.3-bin.tar.gz

# tar xvf hbase-1.2.3-bin.tar.gz -C /usr/local/

# cd /usr/local/hbase-1.2.3/

# vim conf/hbase-env.sh

export JAVA_HOME=/usr/local/jdk1.7.0_79

 

Configure HBase

# mkdir /usr/local/hbase-1.2.3/data

# vim conf/hbase-site.xml 

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://192.168.244.30:8020/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/hbase-1.2.3/data/zookeeper</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>

 

# vim conf/regionservers 

192.168.244.30

 

Start HBase

# bin/hbase-daemon.sh start zookeeper

starting zookeeper, logging to /usr/local/hbase-1.2.3/bin/../logs/hbase-root-zookeeper-hbase.out

# bin/hbase-daemon.sh start master

starting master, logging to /usr/local/hbase-1.2.3/bin/../logs/hbase-root-master-hbase.out

# bin/hbase-daemon.sh start regionserver

starting regionserver, logging to /usr/local/hbase-1.2.3/bin/../logs/hbase-root-regionserver-hbase.out
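
Once the RegionServer is running, HBase should have created its root directory at the hbase.rootdir path in HDFS. An optional check, run from the Hadoop installation directory (output abbreviated):

# /usr/local/hadoop-2.5.2/bin/hdfs dfs -ls /
...
drwxr-xr-x   - root supergroup          0 ... /hbase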

Use jps to view the newly added Java processes

# jps

2421 NodeManager
2975 HQuorumPeer
3302 HRegionServer
3051 HMaster
2339 ResourceManager
1924 NameNode
2029 DataNode
2170 SecondaryNameNode
3332 Jps

As shown above, three new processes have appeared: HQuorumPeer, HRegionServer, and HMaster.

 

The HBase web page can be accessed at http://192.168.244.30:16030/

[Figure 2: HBase web UI]
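
As a final smoke test, the HBase shell can create, write to, and scan a small table (the table and column family names below are arbitrary examples; output abbreviated):

# bin/hbase shell

hbase(main):001:0> create 'test', 'cf'
hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):003:0> scan 'test'
ROW                   COLUMN+CELL
 row1                 column=cf:a, timestamp=..., value=value1
1 row(s) in ... seconds
hbase(main):004:0> exit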

 

At this point, the pseudo-distributed HBase cluster setup is complete.

 

References

1. http://hadoop.apache.org/docs/r2.5.2/hadoop-project-dist/hadoop-common/SingleCluster.html

2. http://hbase.apache.org/book.html#quickstart
