
1. Build and install

1.1 Download redis

       

# cd /tmp/
# wget http://download.redis.io/releases/redis-3.2.1.tar.gz
# tar zxvf redis-3.2.1.tar.gz
# cd redis-3.2.1/


1.2 Build redis


# make

       

Error:

       

Cannot find ...


Solution:


# make distclean


Then run make again; the redis source tarball ships with its own bundled jemalloc.


1.3 Test

       

# yum install tcl

 


# make test


Error 1:


You need tcl 8.5 or newer in order to run the Redis test

Solution:

# yum install tcl.x86_64

Error 2:

[exception]: Executing test client: NOREPLICAS Not enough good slaves
to write..
NOREPLICAS Not enough good slaves to write.

……

Killing still running Redis server 63439
Killing still running Redis server 63486
Killing still running Redis server 63519
Killing still running Redis server 63546
Killing still running Redis server 63574
Killing still running Redis server 63591
I/O error reading reply

……

Solution:

vim tests/integration/replication-2.tcl

- after 1000

+ after 10000

Error 3:

[err]: Slave should be able to synchronize with the master in
tests/integration/replication-psync.tcl
Replication not started.

Solution:

I have run into this a few times; simply retrying make test fixed it.

1.4 Install redis

 

# make install
# cp redis.conf /usr/local/etc/
# cp src/redis-trib.rb /usr/local/bin/

2. standalone mode

2.1 Configure redis

 

# vim etc/redis.conf
daemonize yes
logfile "/var/run/redis/log/redis.log"
pidfile /var/run/redis/pid/redis_6379.pid
dbfilename redis.rdb
dir /var/run/redis/rdb/

 

2.2 Start redis

 

# mkdir -p /var/run/redis/log
# mkdir -p /var/run/redis/rdb
# mkdir -p /var/run/redis/pid
# /usr/local/bin/redis-server /usr/local/etc/redis.conf
# ps -ef | grep redis
root      71021      1  0 15:46 ?        00:00:00 /usr/local/bin/redis-server 127.0.0.1:6379

 

2.3 Test redis

 

# /usr/local/bin/redis-cli
127.0.0.1:6379> set country china
OK
127.0.0.1:6379> get country
"china"
127.0.0.1:6379> set country america
OK
127.0.0.1:6379> get country
"america"
127.0.0.1:6379> exists country
(integer) 1
127.0.0.1:6379> del country
(integer) 1
127.0.0.1:6379> exists country
(integer) 0
127.0.0.1:6379> exit

 

2.4 Stop redis

 

# /usr/local/bin/redis-cli shutdown  

 

3. master-slave mode

3.1 Configure redis

To test master-slave mode, I need to run two redis instances on a single host (with enough hardware you could of course use two hosts, one redis instance per host). To that end, make several copies of redis.conf:

 

# cp /usr/local/etc/redis.conf /usr/local/etc/redis_6379.conf
# cp /usr/local/etc/redis.conf /usr/local/etc/redis_6389.conf

Configure instance 6379:

 

# vim /usr/local/etc/redis_6379.conf
daemonize yes
port 6379
logfile "/var/run/redis/log/redis_6379.log"
pidfile /var/run/redis/pid/redis_6379.pid
dbfilename redis_6379.rdb
dir /var/run/redis/rdb/
min-slaves-to-write 1
min-slaves-max-lag 10

 

The last two settings mean: within the last 10 seconds, at least one slave must have PINGed the master; otherwise the master refuses writes.

 

Configure instance 6389:

 

# vim /usr/local/etc/redis_6389.conf
daemonize yes
port 6389
slaveof 127.0.0.1 6379
logfile "/var/run/redis/log/redis_6389.log"
pidfile /var/run/redis/pid/redis_6389.pid
dbfilename redis_6389.rdb
dir /var/run/redis/rdb/
repl-ping-slave-period 10

 

As you can see, I am about to start two redis instances: one on port 6379 (the default) and the other on 6389. The former is the master, the latter the slave.

repl-ping-slave-period is the interval, in seconds, at which the slave sends PING to the master.

 

In addition, the following settings in 6389's config file can be changed (usually you don't need to):

 

slave-read-only yes
slave-serve-stale-data yes

 

The first means: the slave is read-only.

The second means: while the slave is syncing new data (from the master), it serves clients with its old data set. This keeps the slave non-blocking.

In 6379's config file, the following settings can be changed (usually you don't need to):

 

# repl-backlog-size 1mb
repl-diskless-sync no
repl-diskless-sync-delay 5

 

The story behind these settings:

  1. A slave that stays connected can be kept consistent with the master through incremental synchronization.

  2. A slave that disconnects and reconnects may reach consistency through partial resynchronization (available since redis 2.8; before that, a reconnecting slave had to do a full resync, just like a brand-new slave). The mechanism: the master keeps a replication backlog in memory; on reconnect, the slave and master negotiate over the replication offset and the master run id. If the master run id is unchanged (i.e. the master has not restarted) and the slave's requested replication offset is still inside the backlog, partial resync can proceed from that offset. If either condition fails, a full resync is required. repl-backlog-size 1mb configures the size of that replication backlog.

  3. A brand-new slave, or a reconnecting slave that cannot use partial resync, must do a full resync, i.e. receive an RDB file. The master has two ways to deliver that RDB file:

 

disk-backed: generate the RDB file on disk, then send it to the slave;
diskless: do not create an RDB file on disk; instead, write the RDB data directly to the socket as it is generated;

 

repl-diskless-sync selects which strategy to use. With disk-backed, one RDB file generated on disk can serve several slaves; with diskless, once a transfer has started, newly arriving slaves must queue (until the current slaves finish syncing). So with diskless, the master may want to delay a moment before starting, hoping more slaves arrive so it can transfer the generated data to them in parallel. repl-diskless-sync-delay configures that delay, in seconds.

A master with slow disks may want to consider diskless transfer.
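The decision between partial and full resync described above can be sketched as follows. This is a simplified illustrative model in Python, not Redis's actual internals; the names and offsets are made up:

```python
def resync_mode(master_run_id, backlog_first, backlog_last, slave_run_id, slave_offset):
    """Decide between partial and full resync (simplified model).

    Partial resync is possible only if the slave last spoke to this same
    master process (run ids match) and everything it missed is still inside
    the replication backlog.
    """
    if slave_run_id != master_run_id:
        return "full"      # master restarted, or a different master entirely
    if backlog_first <= slave_offset <= backlog_last:
        return "partial"   # replay the missing bytes from the backlog
    return "full"          # slave fell too far behind the backlog window

# A backlog currently holding offsets 9_000_000 .. 10_048_576:
print(resync_mode("runid-A", 9_000_000, 10_048_576, "runid-A", 9_500_000))  # partial
print(resync_mode("runid-A", 9_000_000, 10_048_576, "runid-A", 1_000_000))  # full
print(resync_mode("runid-A", 9_000_000, 10_048_576, "runid-B", 9_500_000))  # full
```

Note how a larger repl-backlog-size widens the window in which a reconnecting slave can still take the cheap partial path.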

3.2 Start the master

 

# /usr/local/bin/redis-server /usr/local/etc/redis_6379.conf
# /usr/local/bin/redis-cli
127.0.0.1:6379> set country Japan
(error) NOREPLICAS Not enough good slaves to write.
127.0.0.1:6379>

 

As you can see, since the slave is not running, the condition "at least one slave has PINGed the master within 10 seconds" is not met, so the write fails.

3.3 Start the slave

 

# /usr/local/bin/redis-server /usr/local/etc/redis_6389.conf
# /usr/local/bin/redis-cli
127.0.0.1:6379> set country Japan
OK

 

3.4 Stop redis

 

# /usr/local/bin/redis-cli -p 6389 shutdown
# /usr/local/bin/redis-cli -p 6379 shutdown

4. cluster + master-slave mode

What Cluster gives you: it automatically distributes the data set across nodes, and it keeps serving when some nodes fail.

Data distribution: redis does not use consistent hashing, but another form of sharding: there are 16384 hash slots in total, distributed across the nodes (for example node A holds 0-4999, node B holds 5000-9999, node C holds 10000-16383). Keys are mapped to hash slots. For instance, if key_foo maps to slot 1000, and slot 1000 lives on node A, then key_foo (and its value) will be stored on node A. Also, you can force different keys to map to the same slot by wrapping part of the key in {}; only the part inside {} is considered in the key-to-slot mapping. For example, for this{foo}key and another{foo}key, only "foo" is hashed, so they are guaranteed to map to the same slot.

TCP ports: each node uses two ports: a client-serving port and a cluster bus port. Cluster bus port = client port + 10000. You only configure the client port; redis derives the bus port by this rule. The cluster bus port is used for failure detection, configuration updates, fail over authorization, and so on; the client port, besides serving clients, is also used for migrating data between nodes. The client port must be open to clients and to all other nodes; the cluster bus port must be open to all other nodes.

Consistency guarantees: a redis cluster cannot guarantee strong consistency. That is, under certain conditions, redis may lose writes it has already acknowledged to the client.

Data loss through asynchronous replication: 1. a client writes to master node B; 2. master B replies OK to the client; 3. master B replicates the write to its slaves. Because B does not wait for slave acknowledgement before replying OK, if B fails after step 2, the write is lost.

Data loss through a network partition: say there are masters A, B, C with slaves A1, B1, C1, and a partition puts B on the same side as a client. Within cluster-node-timeout, the client can keep writing to B; once cluster-node-timeout is exceeded, the majority side fails over and B1 is elected master. Everything the client wrote to B after the partition is lost.

redis itself (non-cluster mode) can also lose data: 1. RDB takes periodic snapshots, so data written within a snapshot interval can be lost; 2. with AOF, every write is logged, but the log is synced periodically, so data within a sync interval can also be lost.

In what follows, "node" and "instance" are interchangeable (because I am testing the cluster on a single host, each node is represented by an instance).

I will experiment with redis cluster mode on a single machine. For that I need to create six redis instances: three masters and three slaves, on ports 7000-7005.

4.1 Configure the redis cluster:

 

# cp /usr/local/etc/redis.conf /usr/local/etc/redis_7000.conf
# vim /usr/local/etc/redis_7000.conf
daemonize yes
port 7000
pidfile /var/run/redis/pid/redis_7000.pid
logfile "/var/run/redis/log/redis_7000.log"
dbfilename redis_7000.rdb
dir /var/run/redis/rdb/
min-slaves-to-write 0
cluster-enabled yes
cluster-config-file /var/run/redis/nodes/nodes-7000.conf
cluster-node-timeout 5000
cluster-slave-validity-factor 10
repl-ping-slave-period 10

Here I set min-slaves-to-write to 0 so that, after the fail over demonstrated later, the cluster can still be read and written (otherwise, when a master crashes and its slave takes over as master, the new master has no slave of its own and so could not accept writes).

 

 

 

# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7001.conf
# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7002.conf
# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7003.conf
# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7004.conf
# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7005.conf
# sed -i -e 's/7000/7001/' /usr/local/etc/redis_7001.conf
# sed -i -e 's/7000/7002/' /usr/local/etc/redis_7002.conf
# sed -i -e 's/7000/7003/' /usr/local/etc/redis_7003.conf
# sed -i -e 's/7000/7004/' /usr/local/etc/redis_7004.conf
# sed -i -e 's/7000/7005/' /usr/local/etc/redis_7005.conf

 

4.2 Start the redis instances

 

# mkdir -p /var/run/redis/log
# mkdir -p /var/run/redis/rdb
# mkdir -p /var/run/redis/pid
# mkdir -p /var/run/redis/nodes

 

 

# /usr/local/bin/redis-server /usr/local/etc/redis_7000.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7001.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7002.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7003.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7004.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7005.conf

 

Now, in each instance's log, you can see a line like this (of course, the hex string differs per instance):

 

3125:M 12 Jul 15:24:16.937 * No cluster configuration found, I'm b6be6eb409d0207e698997d79bab9adaa90348f0

 

That hex string is the redis instance's ID. In a cluster it uniquely identifies an instance. Each instance records the others by this ID rather than by IP and port (because IP and port can change). The instances we speak of here are the cluster's nodes, and this ID is the Node ID.

4.3 Create the redis cluster

We use redis-trib.rb, copied earlier from the redis source tree, to create the cluster. It is a ruby script; to make it runnable, some preparation is needed:

 

# yum install gem
# gem install redis

 

Now we can create the cluster:

 

# /usr/local/bin/redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
127.0.0.1:7000
127.0.0.1:7001
127.0.0.1:7002
Adding replica 127.0.0.1:7003 to 127.0.0.1:7000
Adding replica 127.0.0.1:7004 to 127.0.0.1:7001
Adding replica 127.0.0.1:7005 to 127.0.0.1:7002
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:0-5460 (5461 slots) master
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:10923-16383 (5461 slots) master
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:0-5460 (5461 slots) master
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:10923-16383 (5461 slots) master
M: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) master
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) master
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) master
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

The last line:

[OK] All 16384 slots covered.

means that every one of the 16384 slots is served by some master, so the cluster can be considered successfully created. From the command output we can see:

Instance 7000 ID: b6be6eb409d0207e698997d79bab9adaa90348f0

Instance 7001 ID: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9

Instance 7002 ID: 6b92f63f64d9683e2090a28ebe9eac60d05dc756

Instance 7003 ID: ebfa6b5ab54e1794df5786694fcabca6f9a37f42

Instance 7004 ID: 026e747386106ad2f68e1c89543b506d5d96c79e

Instance 7005 ID: 441896afc76d8bc06eba1800117fa97d59453564

Where:

7000, 7001 and 7002 are masters;

7000 holds slots 0-5460; its slave is 7003

7001 holds slots 5461-10922; its slave is 7004

7002 holds slots 10923-16383; its slave is 7005

If you don't want slaves, all six instances can be masters (with no replication); just drop the "--replicas 1" option.

Before going further, let's look at what these configuration items mean:

cluster-enabled: enable cluster mode.

cluster-config-file: the file in which an instance persists the cluster configuration: the other instances, their states, and so on. It is not meant to be edited by hand; the instance rewrites it when it receives messages (for example, state changes of other instances).

cluster-node-timeout: when a node has been unreachable for longer than this value (in milliseconds), it is considered failing. This has two important effects: 1. a master unreachable for longer than this may be failed over to one of its slaves; 2. a node that cannot reach a majority of masters for longer than this stops accepting requests (for example, when a master is isolated by a network partition and loses contact with the majority of masters for longer than this value, it stops serving, because on the other side of the partition a fail over to its slave may already have happened).

cluster-slave-validity-factor: it's a long story. A slave that finds its data too old will not attempt a fail over. How does it determine the age of its data? There are two mechanisms:

Mechanism 1. If several slaves are all eligible to fail over, they exchange messages and establish a rank based on replication offset (which reflects how much data each received from the master), then delay their fail over according to that rank. Yuanguo: probably it works like this: the replication offset determines which slave's data is newer (received more from the master) and which is older, giving a rank; the slave with the freshest data fails over soonest, and the staler a slave's data, the longer it delays. Mechanism 1 has nothing to do with this configuration item.

Mechanism 2. Each slave records the time elapsed since its last interaction with the master (a PING or a command). If that time is too large, the data is considered old. How do we decide it is "too large"? That is exactly what this configuration item controls: if the elapsed time exceeds

(node-timeout * slave-validity-factor) + repl-ping-slave-period

the data is considered too old.
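With the values used in this article (cluster-node-timeout 5000, cluster-slave-validity-factor 10, repl-ping-slave-period 10), the threshold works out as below. Note the mixed units: node-timeout is in milliseconds, repl-ping-slave-period in seconds; the helper name is mine, for illustration:

```python
def max_data_age_ms(node_timeout_ms, slave_validity_factor, repl_ping_slave_period_s):
    # (node-timeout * slave-validity-factor) + repl-ping-slave-period, in milliseconds
    return node_timeout_ms * slave_validity_factor + repl_ping_slave_period_s * 1000

# cluster-node-timeout 5000, cluster-slave-validity-factor 10, repl-ping-slave-period 10:
print(max_data_age_ms(5000, 10, 10))  # 60000 ms: a slave idle longer than this
                                      # considers its data too old to fail over
```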

repl-ping-slave-period: the interval, in seconds, at which a slave sends PING to its master.

The following two items we did not set:

cluster-migration-barrier: suppose one master has two slaves and another master has none. The first master can then migrate one of its slaves to the second. But a master donating a slave must itself retain a minimum number of slaves, and that minimum is cluster-migration-barrier. For example, setting it to 3 here would prevent the migration, because the donor would be left with fewer than 3 slaves. So if you want to forbid slave migration entirely, just set this very high.

cluster-require-full-coverage: if set to yes, the cluster stops accepting requests as soon as any hash slot is left uncovered. In that case, a partial outage that leaves some slots uncovered makes the whole cluster unavailable. If you want the slots that are still covered to keep serving while some nodes are down, set it to no.

4.4 Test the cluster

4.4.1 redis-cli in cluster mode

 

# /usr/local/bin/redis-cli -p 7000
127.0.0.1:7000> set country China
(error) MOVED 12695 127.0.0.1:7002
127.0.0.1:7000> get country
(error) MOVED 12695 127.0.0.1:7002
127.0.0.1:7000> exit
# /usr/local/bin/redis-cli -p 7002
127.0.0.1:7002> set country China
OK
127.0.0.1:7002> get country
"China"
127.0.0.1:7002> set testKey testValue
(error) MOVED 5203 127.0.0.1:7000
127.0.0.1:7002> exit
# /usr/local/bin/redis-cli -p 7000
127.0.0.1:7000> set testKey testValue
OK
127.0.0.1:7000> exit

 

So can a given key only be served by one specific master?

No. It turns out redis-cli needs the -c flag to enable cluster mode. In cluster mode you can read and write data on any node, master or slave:

 

# /usr/local/bin/redis-cli -c -p 7002
127.0.0.1:7002> set country America
OK
127.0.0.1:7002> set testKey testValue
-> Redirected to slot [5203] located at 127.0.0.1:7000
OK
127.0.0.1:7000> exit
# /usr/local/bin/redis-cli -c -p 7005
127.0.0.1:7005> get country
-> Redirected to slot [12695] located at 127.0.0.1:7002
"America"
127.0.0.1:7002> get testKey
-> Redirected to slot [5203] located at 127.0.0.1:7000
"testValue"
127.0.0.1:7000> set foo bar
-> Redirected to slot [12182] located at 127.0.0.1:7002
OK
127.0.0.1:7002> exit

In fact, redis-cli's cluster support is fairly basic: it merely relies on the nodes' ability to redirect by slot. In the example above, when setting testKey on instance 7002, it computes that the corresponding slot is 5203, which belongs to instance 7000, so it redirects to instance 7000.

 

A better client would cache the mapping from hash slot to node address, and then go directly to the correct node, avoiding redirects. The mapping only needs refreshing when the cluster configuration changes: for example after a fail over (a slave replaced its master, so the node address changed), or after an administrator added or removed nodes (the hash slot distribution changed).
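That caching idea can be sketched in a few lines. This is a toy model in Python with no networking; the class and method names are mine, not any real client library's:

```python
class SlotCachingClient:
    """Toy model of a cluster-aware client: cache slot -> node, refresh on MOVED."""

    def __init__(self, any_node):
        self.slot_map = {}       # slot number -> "host:port" of its owner
        self.default = any_node  # where to send requests for unknown slots

    def route(self, slot):
        """Pick the node to contact for this slot."""
        return self.slot_map.get(slot, self.default)

    def on_moved(self, slot, node):
        """A -MOVED reply is authoritative: remember the slot's new owner."""
        self.slot_map[slot] = node

client = SlotCachingClient("127.0.0.1:7000")
print(client.route(12182))                 # not cached yet -> ask the default node
client.on_moved(12182, "127.0.0.1:7002")   # server replied: MOVED 12182 127.0.0.1:7002
print(client.route(12182))                 # now goes straight to 7002, no redirect
```

A real client would also refetch the whole map (e.g. after several redirects in a row), which is how it recovers quickly after a fail over or a reshard.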

4.4.2 The ruby client: redis-rb-cluster

Install

 

# cd /usr/local/
# wget https://github.com/antirez/redis-rb-cluster/archive/master.zip
# unzip master.zip
# cd redis-rb-cluster-master

 

Test

After installation, enter the redis-rb-cluster-master directory; inside there is an example.rb. Run it:

 

# ruby example.rb 127.0.0.1 7000
1
2
3
4
5
6
^C

 

It loops, setting key-value pairs like these into the redis cluster:

foo1 => 1

foo2 => 2

……

You can verify the result of this run with redis-cli.

4.4.3 Resharding hash slots between nodes

To show that IO continues uninterrupted during resharding, open another terminal and run

 

# ruby example.rb 127.0.0.1 7000;

 

and, in the original terminal, test resharding:

 

# /usr/local/bin/redis-trib.rb reshard 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 2000                    <---- how many hash slots to migrate?
What is the receiving node ID? 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9       <---- destination: instance 7001
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all                                                            <---- source: all
Do you want to proceed with the proposed reshard plan (yes/no)? yes           <---- confirm

 

During the migration, IO in the other terminal continued without interruption. After the migration, check the new hash slot distribution:

 

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

The slot distribution has indeed changed:

7000: 4461 slots

1000-5460

7001: 7462 slots

0-999

5461-11922

7002: 4461 slots

11923-16383

4.4.4 fail over

Before testing fail over, let's look at another tool included in redis-rb-cluster: consistency-test.rb. It is a consistency checker: it increments counters, then checks that their values are correct.

 

# ruby consistency-test.rb 127.0.0.1 7000
198 R (0 err) | 198 W (0 err) |
685 R (0 err) | 685 W (0 err) |
1174 R (0 err) | 1174 W (0 err) |
1675 R (0 err) | 1675 W (0 err) |
2514 R (0 err) | 2514 W (0 err) |
3506 R (0 err) | 3506 W (0 err) |
4501 R (0 err) | 4501 W (0 err) |

 

The two (N err) fields count IO errors, not data inconsistencies. Inconsistencies are printed in the last column (there are none in the example above). To demonstrate an inconsistency, I modified consistency-test.rb to print each key, and then changed a key's value from another terminal via redis-cli.

 

# vim consistency-test.rb
            # Report
            sleep @delay
            if Time.now.to_i != last_report
                report = "#{@reads} R (#{@failed_reads} err) | " +
                         "#{@writes} W (#{@failed_writes} err) | "
                report += "#{@lost_writes} lost | " if @lost_writes > 0
                report += "#{@not_ack_writes} noack | " if @not_ack_writes > 0
                last_report = Time.now.to_i
+               puts key
                puts report
            end

 

Run the script, and we can see the key of each counter the script touches:

 

# ruby consistency-test.rb 127.0.0.1 7000
81728|502047|15681480|key_8715
568 R (0 err) | 568 W (0 err) |
81728|502047|15681480|key_3373
1882 R (0 err) | 1882 W (0 err) |
81728|502047|15681480|key_89
3441 R (0 err) | 3441 W (0 err) |

 

In another terminal, change the value of key 81728|502047|15681480|key_8715:

 

127.0.0.1:7001> set 81728|502047|15681480|key_8715 0
-> Redirected to slot [12146] located at 127.0.0.1:7002
OK
127.0.0.1:7002>

 

Then consistency-test.rb detects the mismatch:

 

# ruby consistency-test.rb 127.0.0.1 7000
81728|502047|15681480|key_8715
568 R (0 err) | 568 W (0 err) |
81728|502047|15681480|key_3373
1882 R (0 err) | 1882 W (0 err) |
81728|502047|15681480|key_89
......
81728|502047|15681480|key_2841
7884 R (0 err) | 7884 W (0 err) |
81728|502047|15681480|key_308
8869 R (0 err) | 8869 W (0 err) | 2 lost |
81728|502047|15681480|key_6771
9856 R (0 err) | 9856 W (0 err) | 2 lost |

The counter's value should have been 2, but those writes were lost (I overwrote them myself).
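The checker's core idea is simple: compare what was acknowledged against what the store actually holds. A minimal sketch, with a plain dict standing in for the Redis cluster (names are illustrative, not consistency-test.rb's actual code):

```python
acked = {}  # counter values the checker believes were acknowledged
store = {}  # stands in for the Redis cluster

def incr(key):
    """One checker write: increment, recording the value only after the 'OK'."""
    store[key] = store.get(key, 0) + 1
    acked[key] = store[key]

def lost_writes(key):
    """Acknowledged value minus actually stored value = writes that were lost."""
    return max(0, acked.get(key, 0) - store.get(key, 0))

incr("key_8715")
incr("key_8715")                 # counter acknowledged at 2
store["key_8715"] = 0            # someone else overwrites it, as in the session above
print(lost_writes("key_8715"))   # 2, matching the "2 lost" column
```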

4.4.4.1 Automatic fail over

When a master crashes, after a while (the 5 seconds configured earlier) it is automatically failed over to its slave.

In one terminal, start the consistency checker consistency-test.rb (with the key-printing line removed). Then, in another terminal, simulate a master crash:

 

# ./redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001        <---- 7001 is a master
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# /usr/local/bin/redis-cli -p 7001 debug segfault                 <---- simulate a crash of 7001
Error: Server closed the connection
# ./redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
M: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004        <---- 7001 failed over to 7004; 7004 is now a master with no slave
   slots:0-999,5461-11922 (7462 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

On the consistency-checker side, you can see some IO errors appear during the fail over.

 

 

7379 R (0 err) | 7379 W (0 err) |
8499 R (0 err) | 8499 W (0 err) |
9586 R (0 err) | 9586 W (0 err) |
10736 R (0 err) | 10736 W (0 err) |
12416 R (0 err) | 12416 W (0 err) |
Reading: Too many Cluster redirections? (last error: MOVED 11451 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 11451 127.0.0.1:7001)
13426 R (1 err) | 13426 W (1 err) |
Reading: Too many Cluster redirections? (last error: MOVED 5549 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 5549 127.0.0.1:7001)
13426 R (2 err) | 13426 W (2 err) |
Reading: Too many Cluster redirections? (last error: MOVED 9678 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 9678 127.0.0.1:7001)
13427 R (3 err) | 13427 W (3 err) |
Reading: Too many Cluster redirections? (last error: MOVED 10649 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 10649 127.0.0.1:7001)
13427 R (4 err) | 13427 W (4 err) |
Reading: Too many Cluster redirections? (last error: MOVED 9313 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 9313 127.0.0.1:7001)
13427 R (5 err) | 13427 W (5 err) |
Reading: Too many Cluster redirections? (last error: MOVED 8268 127.0.0.1:7001)
Writing: Too many Cluster redirections? (last error: MOVED 8268 127.0.0.1:7001)
13428 R (6 err) | 13428 W (6 err) |
Reading: CLUSTERDOWN The cluster is down
Writing: CLUSTERDOWN The cluster is down
13432 R (661 err) | 13432 W (661 err) |
14786 R (661 err) | 14786 W (661 err) |
15987 R (661 err) | 15987 W (661 err) |
17217 R (661 err) | 17217 W (661 err) |
18320 R (661 err) | 18320 W (661 err) |
18737 R (661 err) | 18737 W (661 err) |
18882 R (661 err) | 18882 W (661 err) |
19284 R (661 err) | 19284 W (661 err) |
20121 R (661 err) | 20121 W (661 err) |
21433 R (661 err) | 21433 W (661 err) |
22998 R (661 err) | 22998 W (661 err) |
24805 R (661 err) | 24805 W (661 err) |

 

Note two things:

 

1. Once the fail over completed, the IO error count stopped growing and the cluster resumed normal service.

2. No inconsistency errors appeared. A master crash can cause inconsistency (the slave's data lags behind the master's; when the master crashes and the slave replaces it, it serves the stale data), but this is not very likely to happen, because the master replicates a completed write to its slaves at roughly the same time it replies to the client. It is not impossible, though.

 

Restart 7001; it becomes a slave of 7004:

 

# /usr/local/bin/redis-server /usr/local/etc/redis_7001.conf
# ./redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
M: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
S: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001             <---- 7001 is now a slave of 7004
   slots: (0 slots) slave
   replicates 026e747386106ad2f68e1c89543b506d5d96c79e
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.


4.4.4.2 Manual fail over

Sometimes you may want to fail over proactively. For example, to upgrade a master it is best to first demote it to a slave, minimizing the impact on cluster availability. That requires a manual fail over.

A manual fail over must be executed on a slave:

 

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
M: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
S: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001                <---- 7001 is a slave of 7004
   slots: (0 slots) slave
   replicates 026e747386106ad2f68e1c89543b506d5d96c79e
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# /usr/local/bin/redis-cli -p 7001 CLUSTER FAILOVER                       <---- run the fail over on slave 7001
OK
# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004               <---- 7004 is now a slave of 7001
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001               <---- 7001 is a master again
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

The manual fail over procedure:

 

1. The slave tells the master to stop processing client requests;
2. The master stops processing client requests and replies to the slave with its replication offset;
3. The slave waits until its replication offset matches the master's, i.e. until it has received all the data. At that point master and slave hold identical data, and the master accepts no new writes;
4. The slave starts the fail over: it obtains a configuration epoch from the majority of masters (a configuration version number, presumably the version of the information held in cluster-config-file) and broadcasts the new configuration (in which the slave has become master);
5. The old master receives the new configuration and starts processing client requests again: it redirects them to the new master, since it is now a slave itself.

 

The fail over command has two options:

 

FORCE: the procedure above requires the master's participation. If the master is unreachable (a network failure, or the master crashed but automatic fail over has not completed), adding the FORCE option makes the fail over skip the handshake with the master and start directly at step 4.

TAKEOVER: the procedure above requires the authorization of a majority of masters, with that majority generating a new configuration epoch. Sometimes we do not want to reach agreement with the other masters but just fail over immediately; that requires the TAKEOVER option. A real use case: the masters are in one datacenter and all the slaves in another; when all the masters are down or the network is partitioned, you promote all the slaves in the other datacenter to masters in one stroke, achieving a datacenter switch.

 

4.4.5 Adding nodes

4.4.5.1 Adding a master

 

# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7006.conf# sed -i -e 's/7000/7006/' /usr/local/etc/redis_7006.conf# /usr/local/bin/redis-server /usr/local/etc/redis_7006.conf             <---- 1. 拷贝、修改conf,并启动一个redis实例# /usr/local/bin/redis-trib.rb add-node 127.0.0.1:7006 127.0.0.1:7000    <---- 2. 把实例加入集群>>> Adding node 127.0.0.1:7006 to cluster 127.0.0.1:7000>>> Performing Cluster Check (using node 127.0.0.1:7000)M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000   slots:1000-5460 (4461 slots) master   1 additional replica(s)S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005   slots: (0 slots) slave   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003   slots: (0 slots) slave   replicates b6be6eb409d0207e698997d79bab9adaa90348f0M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002   slots:11923-16383 (4461 slots) master   1 additional replica(s)S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004   slots: (0 slots) slave   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001   slots:0-999,5461-11922 (7462 slots) master   1 additional replica(s)[OK] All nodes agree about slots configuration.>>> Check for open slots...>>> Check slots coverage...[OK] All 16384 slots covered.>>> Send CLUSTER MEET to node 127.0.0.1:7006 to make it join the cluster.[OK] New node added correctly.                                           <---- 新节点添加成功# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000                      <---- 3. 
check
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006               <---- the new node has no slots yet, so a manual reshard is needed
   slots: (0 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# /usr/local/bin/redis-trib.rb reshard 127.0.0.1:7000                    <---- 4. reshard
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots: (0 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:0-999,5461-11922 (7462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 3000                  <---- move 3000 slots
What is the receiving node ID? 6147326f5c592aff26f822881b552888a23711c6     <---- destination is the newly added node
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.                    <---- source is 7001; it holds the most slots after the previous reshard, so move 3000 away from it
Source node #1:23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
Source node #2:done
......
    Moving slot 7456 from 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
    Moving slot 7457 from 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
    Moving slot 7458 from 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
    Moving slot 7459 from 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
    Moving slot 7460 from 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
Do you want to proceed with the proposed reshard plan (yes/no)? yes         <---- confirm
# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000                         <---- 5. check again
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006                   <---- the new node now has 3000 slots
   slots:0-999,5461-7460 (3000 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
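Resharding moves hash slots between masters; a key always maps to one of the 16384 slots as CRC16(key) mod 16384, where CRC16 is the CCITT/XModem variant and a non-empty hash tag in braces (e.g. {user1000}) is hashed instead of the whole key. A minimal Python sketch of this documented mapping (an illustration, not the redis source):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM: poly 0x1021, init 0, no reflection, no xor-out."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc


def key_slot(key: str) -> int:
    """Hash slot of a key, honoring {hash tags} as redis cluster does."""
    start = key.find('{')
    if start != -1:
        end = key.find('}', start + 1)
        if end != -1 and end != start + 1:   # tag must be non-empty
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384
```

Keys sharing a hash tag land in the same slot, which is why multi-key operations require a common tag in cluster mode.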

 

4.4.5.2 Adding a slave

 

1. Copy and modify the conf, then start a redis instance

# cp /usr/local/etc/redis_7000.conf /usr/local/etc/redis_7007.conf
# sed -i -e 's/7000/7007/' /usr/local/etc/redis_7007.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7007.conf

2. Add it as a slave, specifying its master

[root@localhost ~]# /usr/local/bin/redis-trib.rb add-node --slave --master-id 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7007 127.0.0.1:7000
>>> Adding node 127.0.0.1:7007 to cluster 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460 (3000 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:7007 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 127.0.0.1:7006.
[OK] New node added correctly.

3. Check

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: 6d8675118da6b492c28844395ee6915506c73b3a 127.0.0.1:7007      <---- the new node was added successfully, as a slave of 7006
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460 (3000 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

Above, we specified the master when adding the slave. You can also omit the master; in that case the new node becomes the slave of a randomly chosen master, and you can later move it under the intended master with the CLUSTER REPLICATE command. Alternatively, you can add the node as an empty master first, and then turn it into a slave with CLUSTER REPLICATE.
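As a sketch of those two alternatives (using 7006's node ID from the checks above; these commands need the live cluster, so they are illustrative only):

```
# Alternative 1: add with --slave but no --master-id; redis-trib picks a master,
# then move the replica under the intended master:
/usr/local/bin/redis-trib.rb add-node --slave 127.0.0.1:7007 127.0.0.1:7000
/usr/local/bin/redis-cli -p 7007 CLUSTER REPLICATE 6147326f5c592aff26f822881b552888a23711c6

# Alternative 2: add as an empty master, then demote it:
/usr/local/bin/redis-trib.rb add-node 127.0.0.1:7007 127.0.0.1:7000
/usr/local/bin/redis-cli -p 7007 CLUSTER REPLICATE 6147326f5c592aff26f822881b552888a23711c6
```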

 

4.4.6 Removing nodes

Before removing anything, let's look at the current layout:

 

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: 6d8675118da6b492c28844395ee6915506c73b3a 127.0.0.1:7007
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460 (3000 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

As we can see:

master  slave
7000    7003
7001    7004
7002    7005
7006    7007

We will remove 7007 (a slave) and 7002 (a master).

4.4.6.1 Removing a slave node

Removing a slave (7007) is fairly easy; just use del-node:

 

# /usr/local/bin/redis-trib.rb del-node 127.0.0.1:7000 6d8675118da6b492c28844395ee6915506c73b3a
>>> Removing node 6d8675118da6b492c28844395ee6915506c73b3a from cluster 127.0.0.1:7000
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006       <---- 7006 has no slave now
   slots:0-999,5461-7460 (3000 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

4.4.6.2 Removing a master node

Before removing a master node, you must make sure the master is empty (holds no slots), which can be done with a reshard. Only then can the master be removed.
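One way to confirm a master is empty is to count its slots from the CLUSTER NODES output. A hedged Python sketch (the sample output below is abbreviated and hypothetical, with shortened node IDs, but follows the documented CLUSTER NODES line format):

```python
def slot_counts(cluster_nodes_output: str) -> dict:
    """Map each node address to the number of hash slots it serves.

    Expects `redis-cli CLUSTER NODES` output: eight fixed fields
    (id, address, flags, master, ping-sent, pong-recv, config-epoch,
    link-state) followed by the slot ranges served by masters.
    """
    counts = {}
    for line in cluster_nodes_output.strip().splitlines():
        fields = line.split()
        addr = fields[1].split('@')[0]  # newer redis appends @cluster-bus-port
        total = 0
        for token in fields[8:]:
            if token.startswith('['):   # slot being migrated, e.g. [93->-<id>]
                continue
            if '-' in token:
                lo, hi = token.split('-')
                total += int(hi) - int(lo) + 1
            else:
                total += 1              # single slot
        counts[addr] = total
    return counts


# Abbreviated, hypothetical sample in CLUSTER NODES format:
sample = """\
aaa 127.0.0.1:7006 master - 0 0 7 connected 0-999 5461-7460
bbb 127.0.0.1:7002 master - 0 0 2 connected
ccc 127.0.0.1:7001 master - 0 0 1 connected 7461-11922
"""
print(slot_counts(sample))
# -> {'127.0.0.1:7006': 3000, '127.0.0.1:7002': 0, '127.0.0.1:7001': 4462}
```

A count of 0 for a master (like 7002 here) means it is safe to del-node it.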

 

# /usr/local/bin/redis-trib.rb reshard 127.0.0.1:7000                   <---- reshard
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6b92f63f64d9683e2090a28ebe9eac60d05dc756
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002
   slots:11923-16383 (4461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460 (3000 slots) master
   0 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4461              <---- we plan to empty 7002, so move all 4461 of its slots
What is the receiving node ID? 6147326f5c592aff26f822881b552888a23711c6 <---- destination 7006
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:6b92f63f64d9683e2090a28ebe9eac60d05dc756                 <---- source 7002
Source node #2:done
......
Moving slot 16382 from 6b92f63f64d9683e2090a28ebe9eac60d05dc756
Moving slot 16383 from 6b92f63f64d9683e2090a28ebe9eac60d05dc756
Do you want to proceed with the proposed reshard plan (yes/no)? yes

 

Check whether 7002 has been emptied:

 

 

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005           <---- 7002's slave now replicates 7006 (what if cluster-migration-barrier were set?)
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
M: 6b92f63f64d9683e2090a28ebe9eac60d05dc756 127.0.0.1:7002           <---- 7002 has been emptied
   slots: (0 slots) master
   0 additional replica(s)                                           <---- and its slave is gone (a slave of a dataless master would be wasted) !!!!!!
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460,11923-16383 (7461 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Now 7002 can be removed:

 

 

# /usr/local/bin/redis-trib.rb del-node 127.0.0.1:7000 6b92f63f64d9683e2090a28ebe9eac60d05dc756
>>> Removing node 6b92f63f64d9683e2090a28ebe9eac60d05dc756 from cluster 127.0.0.1:7000
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

Look at the layout now:

 

 

# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots:0-999,5461-7460,11923-16383 (7461 slots) master
   1 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

As we can see:

master  slave
7000    7003
7001    7004
7006    7005

4.4.7 Slave migration

The current topology is:

master  slave
7000    7003
7001    7004
7006    7005

We can reassign a slave to another master with a command:

 

# /usr/local/bin/redis-cli -p 7003 CLUSTER REPLICATE 6147326f5c592aff26f822881b552888a23711c6    <---- make 7003 a slave of 7006
OK
# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000           <---- 7000 has no slave
   slots:1000-5460 (4461 slots) master
   0 additional replica(s)
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005           <---- 7005 is a slave of 7006
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003           <---- 7003 is a slave of 7006
   slots: (0 slots) slave
   replicates 6147326f5c592aff26f822881b552888a23711c6
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006           <---- 7006 has two slaves
   slots:0-999,5461-7460,11923-16383 (7461 slots) master
   2 additional replica(s)
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Besides manual migration, redis also migrates slaves automatically. This was briefly explained earlier when the cluster-migration-barrier option was introduced:

 

At certain moments, redis tries to migrate a slave away from the master that has the most slaves, over to a master that has none. With this mechanism you can simply add a few slaves to the system without caring which master they belong to; when a master loses all of its slaves (they fail one by one), the system automatically migrates one to it. cluster-migration-barrier sets how many slaves a master must keep for automatic migration to happen. For example, with this value set to 2: if I have 3 slaves and you have none, I will give you one; if I have only 2 slaves and you have none, I will not.
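In redis.conf the option looks like this (a sketch; 1 is the value shipped in the stock config):

```
# A master must keep at least this many slaves before one of its
# slaves is allowed to migrate to an orphaned master.
cluster-migration-barrier 1
```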

 

4.4.8 Upgrading nodes

4.4.8.1 Upgrading a slave

Stop it; restart it with the new version of redis.

 

4.4.8.2 Upgrading a master

Manually fail over to one of its slaves; wait for the old master to become a slave; then upgrade it as a slave (stop it, restart it with the new version of redis); optionally fail over back.
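Using the topology above as a hypothetical example (7001 master, 7004 its slave), the manual failover is done with the CLUSTER FAILOVER command, issued on the slave. These commands need the live cluster, so this is only a sketch:

```
# 1. Trigger a manual failover from the slave side:
/usr/local/bin/redis-cli -p 7004 CLUSTER FAILOVER
# 2. Wait until the old master reports role:slave:
/usr/local/bin/redis-cli -p 7001 INFO replication | grep role
# 3. Upgrade the former master as a slave: stop it, replace the binary, restart:
/usr/local/bin/redis-cli -p 7001 SHUTDOWN NOSAVE
/usr/local/bin/redis-server /usr/local/etc/redis_7001.conf
# 4. (Optional) fail back to restore the original roles:
/usr/local/bin/redis-cli -p 7001 CLUSTER FAILOVER
```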

 

4.4.9 Cluster migration

Not needed for now.

4.4.10 Stopping/starting the cluster

To stop the cluster, just shut down the instances one by one:

 

# /usr/local/bin/redis-cli -p 7000 shutdown
# /usr/local/bin/redis-cli -p 7001 shutdown
# /usr/local/bin/redis-cli -p 7003 shutdown
# /usr/local/bin/redis-cli -p 7004 shutdown
# /usr/local/bin/redis-cli -p 7005 shutdown
# /usr/local/bin/redis-cli -p 7006 shutdown
# ps -ef | grep redis
root      26266  23339  0 17:24 pts/2    00:00:00 grep --color=auto redis
[root@localhost ~]#

 

To start the cluster, just start the instances one by one (no need to run redis-trib.rb create again):

 

# /usr/local/bin/redis-server /usr/local/etc/redis_7000.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7001.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7003.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7004.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7005.conf
# /usr/local/bin/redis-server /usr/local/etc/redis_7006.conf
# /usr/local/bin/redis-trib.rb check 127.0.0.1:7000
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: b6be6eb409d0207e698997d79bab9adaa90348f0 127.0.0.1:7000
   slots:1000-5460 (4461 slots) master
   1 additional replica(s)
M: ebfa6b5ab54e1794df5786694fcabca6f9a37f42 127.0.0.1:7003
   slots:0-999,5461-7460,11923-16383 (7461 slots) master
   1 additional replica(s)
S: 026e747386106ad2f68e1c89543b506d5d96c79e 127.0.0.1:7004
   slots: (0 slots) slave
   replicates 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9
M: 23149e93876a0d1d2cd7962d1b8fbdf42ecc64e9 127.0.0.1:7001
   slots:7461-11922 (4462 slots) master
   1 additional replica(s)
S: 6147326f5c592aff26f822881b552888a23711c6 127.0.0.1:7006
   slots: (0 slots) slave
   replicates ebfa6b5ab54e1794df5786694fcabca6f9a37f42
S: 441896afc76d8bc06eba1800117fa97d59453564 127.0.0.1:7005
   slots: (0 slots) slave
   replicates b6be6eb409d0207e698997d79bab9adaa90348f0
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
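The per-instance start commands above can be generated with a small loop (a sketch; pipe the output to sh to actually run it):

```shell
# Ports of the surviving instances (7002 and 7007 were removed earlier).
ports="7000 7001 7003 7004 7005 7006"
for port in $ports; do
  echo "/usr/local/bin/redis-server /usr/local/etc/redis_${port}.conf"
done
```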

Test that I/O works normally:

 

# ruby consistency-test.rb 127.0.0.1 7000
109 R (0 err) | 109 W (0 err) |
661 R (0 err) | 661 W (0 err) |
1420 R (0 err) | 1420 W (0 err) |
2321 R (0 err) | 2321 W (0 err) |
……

 

5. Summary

This article mainly documents the deployment of redis (standalone mode, master/slave mode, and cluster mode). Along the way it tries to explain how the redis system works, though not exhaustively. Hopefully it can serve as introductory material.
