Kafka 0.10.0 Official Documentation Translation (1): Getting Started


1.1 Introduction
Kafka is a distributed streaming platform. What exactly does that mean?

We think of a streaming platform as having three key capabilities:
  It lets you publish and subscribe to streams of records. In this respect it is similar to a message queue or enterprise messaging system.
  It lets you store streams of records in a fault-tolerant way.
  It lets you process streams of records as they occur.

Blog: intsmaze

What is Kafka good for?
It gets used for two broad classes of application:
  Building real-time streaming data pipelines that reliably get data between systems or applications
  Building real-time streaming applications that transform or react to the streams of data


To understand how Kafka does these things, let's dive in and explore Kafka's capabilities from the bottom up.

First a few concepts:
  Kafka is run as a cluster on one or more servers.
  The Kafka cluster stores streams of records in categories called topics.
  Each record consists of a key, a value, and a timestamp.

Kafka has four core APIs:
  The Producer API allows an application to publish a stream of records to one or more Kafka topics.
  The Consumer API allows an application to subscribe to one or more topics and process the stream of records produced to them.
  The Streams API allows an application to act as a stream processor, consuming an input stream from one or more topics and producing an output stream to one or more output topics, effectively transforming the input streams to output streams.
  The Connector API allows building and running reusable producers or consumers that connect Kafka topics to existing applications or data systems. For example, a connector to a relational database might capture every change to a table.


In Kafka the communication between the clients and the servers is done with a simple, high-performance, language agnostic TCP protocol. This protocol is versioned and maintains backwards compatibility with older versions. We provide a Java client for Kafka, but clients are available in many languages.


Topics and Logs
Let's first dive into the core abstraction Kafka provides for a stream of records: the topic.


A topic is a category or feed name to which records are published. Topics in Kafka are always multi-subscriber; that is, a topic can have zero, one, or many consumers that subscribe to the data written to it.
For each topic, the Kafka cluster maintains a partitioned log that looks like this:
(figure: anatomy of a topic)
Each partition is an ordered, immutable sequence of records that is continually appended to—a structured commit log. The records in the partitions are each assigned a sequential id number called the offset that uniquely identifies each record within the partition.
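As a rough illustration of this structure, here is a minimal Python sketch of an append-only partition that assigns each record a sequential offset. This is a toy model of the concept, not Kafka's actual implementation:

```python
class Partition:
    """Toy model of one Kafka partition: an ordered, immutable,
    append-only sequence of records, each tagged with an offset."""

    def __init__(self):
        self._records = []

    def append(self, key, value):
        # The offset is simply the next sequential id in this partition.
        offset = len(self._records)
        self._records.append((offset, key, value))
        return offset

    def read(self, offset):
        # Records are never modified in place; reads address them by offset.
        return self._records[offset]

p = Partition()
first = p.append("k1", "v1")   # assigned offset 0
second = p.append("k2", "v2")  # assigned offset 1
```

The key property to notice is that an offset is meaningful only within its own partition; two different partitions each start counting from zero.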

The Kafka cluster retains all published records—whether or not they have been consumed—using a configurable retention period. For example, if the retention policy is set to two days, then for the two days after a record is published, it is available for consumption, after which it will be discarded to free up space. Kafka's performance is effectively constant with respect to data size, so storing data for a long time is not a problem.
In fact, the only metadata retained on a per-consumer basis is the offset or position of that consumer in the log. This offset is controlled by the consumer: normally a consumer will advance its offset linearly as it reads records, but, in fact, since the position is controlled by the consumer it can consume records in any order it likes. For example a consumer can reset to an older offset to reprocess data from the past or skip ahead to the most recent record and start consuming from "now".
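The idea that the position belongs to the consumer can be sketched with a toy Python consumer. Note this models the concept only; the real client API (for example, `seek` in the Java consumer) has a different shape:

```python
class ToyConsumer:
    """Tracks its own position in a partition's log; the broker
    (here, just a shared list) keeps no other per-consumer state."""

    def __init__(self, log):
        self.log = log        # shared, append-only list of records
        self.position = 0     # offset of the next record to read

    def poll(self):
        # Normal case: read the next record and advance linearly.
        if self.position < len(self.log):
            record = self.log[self.position]
            self.position += 1
            return record
        return None

    def seek(self, offset):
        # Rewind to reprocess old data, or jump ahead to "now".
        self.position = offset

log = ["r0", "r1", "r2"]
c = ToyConsumer(log)
c.poll()             # reads "r0"
c.poll()             # reads "r1"
c.seek(0)            # rewind: reprocess from the beginning
replayed = c.poll()  # reads "r0" again
c.seek(len(log))     # skip to the end and consume from "now"
```

Because the broker stores nothing but this one offset per consumer, rewinding or skipping costs the cluster nothing.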
This combination of features means that Kafka consumers are very cheap—they can come and go without much impact on the cluster or on other consumers. For example, you can use our command line tools to "tail" the contents of any topic without changing what is consumed by any existing consumers.
The partitions in the log serve several purposes. First, they allow the log to scale beyond a size that will fit on a single server. Each individual partition must fit on the servers that host it, but a topic may have many partitions so it can handle an arbitrary amount of data. Second, they act as the unit of parallelism—more on that in a bit.

Distribution
The partitions of the log are distributed over the servers in the Kafka cluster, with each server handling data and requests for a share of the partitions. Each partition is replicated across a configurable number of servers for fault tolerance.

Each partition has one server which acts as the "leader" and zero or more servers which act as "followers". The leader handles all read and write requests for the partition while the followers passively replicate the leader. If the leader fails, one of the followers will automatically become the new leader. Each server acts as a leader for some of its partitions and a follower for others, so load is well balanced within the cluster.

Producers
Producers publish data to the topics of their choice. The producer is responsible for choosing which record to assign to which partition within the topic. This can be done in a round-robin fashion simply to balance load, or it can be done according to some semantic partition function (say, based on some key in the record). More on the use of partitioning in a second!
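The two assignment strategies just mentioned, key-based and round-robin, might be sketched like this. This is a simplification: the real Java client's default partitioner uses murmur2 hashing, and its behavior for keyless records varies by version:

```python
import itertools

def make_partitioner(num_partitions):
    """Toy producer-side partition assignment:
    records with a key hash to a deterministic partition, so all
    records for that key land in the same (ordered) partition;
    records without a key are spread round-robin for load balance."""
    counter = itertools.count()

    def assign(key):
        if key is not None:
            # Semantic partitioning: same key -> same partition.
            return hash(key) % num_partitions
        # No key: simple round-robin across partitions.
        return next(counter) % num_partitions

    return assign

assign = make_partitioner(4)
a = assign("user-42")
b = assign("user-42")                        # same key, same partition
keyless = [assign(None) for _ in range(4)]   # cycles through 0, 1, 2, 3
```

Sending all records with the same key to the same partition is what makes per-key ordering possible, as the Guarantees section below explains.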

Consumers
Consumers label themselves with a consumer group name, and each record published to a topic is delivered to one consumer instance within each subscribing consumer group. Consumer instances can be in separate processes or on separate machines.


If all the consumer instances have the same consumer group, then the records will effectively be load balanced over the consumer instances.

If all the consumer instances have different consumer groups, then each record will be broadcast to all the consumer processes.
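These two delivery modes can be sketched together: each group receives every record exactly once, on one of its instances, so a single group behaves like a queue and many groups behave like publish-subscribe. A toy model of the semantics (real Kafka assigns whole partitions to instances rather than routing individual records):

```python
import itertools

def deliver(record, groups):
    """Deliver one record: every subscribing consumer group gets it,
    but only on one instance within that group (round-robin here)."""
    receivers = {}
    for name, state in groups.items():
        instances = state["instances"]
        receivers[name] = instances[next(state["rr"]) % len(instances)]
    return receivers

groups = {
    "A": {"instances": ["a1", "a2"], "rr": itertools.count()},
    "B": {"instances": ["b1"], "rr": itertools.count()},
}
first = deliver("m1", groups)   # group A gets it on a1, group B on b1
second = deliver("m2", groups)  # group A load-balances to a2; B still b1
```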


A two server Kafka cluster hosting four partitions (P0-P3) with two consumer groups. Consumer group A has two consumer instances and group B has four.

More commonly, however, we have found that topics have a small number of consumer groups, one for each "logical subscriber". Each group is composed of many consumer instances for scalability and fault tolerance. This is nothing more than publish-subscribe semantics where the subscriber is a cluster of consumers instead of a single process.




The way consumption is implemented in Kafka is by dividing up the partitions in the log over the consumer instances so that each instance is the exclusive consumer of a "fair share" of partitions at any point in time. This process of maintaining membership in the group is handled by the Kafka protocol dynamically. If new instances join the group they will take over some partitions from other members of the group; if an instance dies, its partitions will be distributed to the remaining instances.


Kafka only provides a total order over records within a partition, not between different partitions in a topic. Per-partition ordering combined with the ability to partition data by key is sufficient for most applications. However, if you require a total order over records this can be achieved with a topic that has only one partition, though this will mean only one consumer process per consumer group.


Guarantees
At a high level, Kafka gives the following guarantees:
  Messages sent by a producer to a particular topic partition will be appended in the order they are sent. That is, if a record M1 is sent by the same producer as a record M2, and M1 is sent first, then M1 will have a lower offset than M2 and appear earlier in the log.
  A consumer instance sees records in the order they are stored in the log.
  For a topic with replication factor N, we will tolerate up to N-1 server failures without losing any records committed to the log.
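The last guarantee is simple arithmetic: a committed record exists on N brokers, so as long as at least one of them survives, no data is lost. A one-line sketch:

```python
def max_tolerated_failures(replication_factor):
    """With replication factor N, every committed record exists on N
    brokers, so any N-1 of them can fail and one copy still survives."""
    if replication_factor < 1:
        raise ValueError("replication factor must be at least 1")
    return replication_factor - 1

typical = max_tolerated_failures(3)  # a common production setting
minimal = max_tolerated_failures(1)  # no redundancy at all
```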


More details on these guarantees are given in the design section of the documentation.


 
