[Learning Hadoop] Can't find JAVA_HOME when configuring $HADOOP_HOME/etc/hadoop/hadoop-env.sh on Linux?

Before this post

I was seriously frustrated the other day!! I want to learn something new, but I keep getting boxed in by the environment. This damned Hadoop setup: I was only trying to run standalone mode, and even that wouldn't work. After painstakingly hunting down every missing piece, it still failed in the end. For now I really don't want to read the failure log; it's nearly bedtime, so it can wait until tomorrow. By the way, I have the Chinese translation of "Hadoop: The Definitive Guide", 3rd edition (translated at East China Normal University). Browsing a bookstore after dinner, I also spotted a pirated copy of the 4th edition, and on impulse I bought it. If any comrade needs a particular page, I can photograph it for you, but not the whole book; go read the e-book yourself!!



Main text

The big problem I hit today: I could not find where Java was installed. What a giant pit!! When configuring that damned $HADOOP_HOME/etc/hadoop/hadoop-env.sh, JAVA_HOME was nowhere to be found??? Wouldn't that be game over? So I searched left and right until I finally nailed it down! The source blog is below; thanks to its author!!

How to find the Java installation path and set environment variables on Linux

Also, I used another cloud server to personally verify the fairly recent Hadoop tutorial by a foreign author that I introduced earlier. See the link below:

[Learning Hadoop] Setting up distributed Hadoop (Ubuntu 17.04)

One more thing: the download resources have been updated, see the comments!!


Step 1: whereis java

[root@Hadoop Master java]# whereis java
java: /usr/bin/java /etc/java /usr/lib/java /usr/share/java /usr/share/man/man1/java.1.gz


Step 2: ls -lrt /usr/bin/java

[root@Hadoop Master java]# ls -lrt /usr/bin/java
lrwxrwxrwx. 1 root root 22 Nov  2 23:38 /usr/bin/java -> /etc/alternatives/java
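That link points into the alternatives system. On distributions that use it, you can also inspect the whole chain directly instead of following links by hand:

update-alternatives --display java
# shows what /etc/alternatives/java currently points to,
# plus any other registered Java candidates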


Step 3: ls -lrt /etc/alternatives/java

[root@Hadoop Master java]# ls -lrt /etc/alternatives/java
lrwxrwxrwx. 1 root root 46 Nov  2 23:38 /etc/alternatives/java -> /usr/lib/jvm/java
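Or skip the hop-by-hop tracing entirely. With GNU coreutils, readlink resolves every symlink in one go:

readlink -f "$(which java)"
# follows all the links above; on this box the chain ends at /usr/lib/jvm/java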


Step 4: set the environment variables

Edit the file /etc/profile with vi and append the following at the end:

export JAVA_HOME=/usr/lib/jvm/java
(Add the line above to the hadoop-env.sh file named in the title as well, remembering to delete the original export there.)
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
(As I recall, these lines may also live in ~/.bashrc; the same applies there: re-source the file so they take effect.)
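For hadoop-env.sh itself the change is a single line. A minimal sketch, using the path we just resolved:

# $HADOOP_HOME/etc/hadoop/hadoop-env.sh
# replace the existing "export JAVA_HOME=..." line with an explicit path:
export JAVA_HOME=/usr/lib/jvm/java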


Step 5: make it take effect: source /etc/profile
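A quick way to confirm the variables actually took effect in the current shell (generic checks, nothing specific to this box):

source /etc/profile
echo $JAVA_HOME     # should print /usr/lib/jvm/java
java -version       # should print the JDK version, not "command not found"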


Putting the steps above together, that completes updating the Java environment variables~~


(Screenshots: jps output on my server, listing the Hadoop daemons)

As the screenshots above show, when I run the jps command, every component Hadoop needs does show up. In theory that should have settled it, but for some reason there are always bugs!!!
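For reference, a healthy single-node Hadoop 2.x setup typically shows something like this under jps (an illustrative sketch; the PIDs and the exact set of daemons depend on what you started):

root@VM-161-78-ubuntu:~# jps
2721 NameNode
2864 DataNode
3059 SecondaryNameNode
3215 ResourceManager
3322 NodeManager
3650 Jps

With that as the baseline, here is the pi example run that triggered the trouble: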

root@VM-161-78-ubuntu:/home/ubuntu/hadoop/hadoop# bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.jar pi 16 1000

And here is the problem it ran into:

Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF-8
Picked up _JAVA_OPTIONS: -Xmx512M
Number of Maps  = 16
Samples per Map = 1000
18/01/10 00:41:47 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/root/QuasiMonteCarlo_1515516105834_522663899/in/part0 could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1728)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2515)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:828)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:507)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)

    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1483)
    at org.apache.hadoop.ipc.Client.call(Client.java:1429)
    at org.apache.hadoop.ipc.Client.call(Client.java:1339)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:440)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
    at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1809)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1609)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:704)
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/root/QuasiMonteCarlo_1515516105834_522663899/in/part0 could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1728)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2515)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:828)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:507)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:847)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:790)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2486)

    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1483)
    at org.apache.hadoop.ipc.Client.call(Client.java:1429)
    at org.apache.hadoop.ipc.Client.call(Client.java:1339)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:440)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
    at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1809)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1609)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:704)

That was my run log above. I honestly don't want to figure out where it went wrong right now. Tomorrow I'll keep reading my book; this annoying problem can wait until later. My understanding of this stuff just isn't deep enough yet to see what the problem is!


Now take a look at the same operation on my other cloud server. That server is overseas, so back when I downloaded the release from the official site it was unbelievably fast. But all sorts of trouble followed; lately I've even found that its NameNode keeps shutting itself down, and YARN, the little beast, behaves exactly the same way!!
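A note for future me: the error above, "could only be replicated to 0 nodes ... There are 0 datanode(s) running", simply means HDFS has no live DataNode, so the job has nowhere to write its input files. A rough sketch of the usual checks, assuming a stock single-node install with default paths (adjust hadoop.tmp.dir and the log locations to your setup):

# 1. Is a DataNode process actually alive?
jps | grep DataNode

# 2. If not, read why it died; each daemon logs under $HADOOP_HOME/logs
tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log

# 3. A classic cause after re-running "hdfs namenode -format": the DataNode's
#    stored clusterID no longer matches the NameNode's. On a THROWAWAY
#    single-node sandbox (this destroys all HDFS data!) you can reset:
stop-dfs.sh
rm -rf /tmp/hadoop-root     # default hadoop.tmp.dir for user root
hdfs namenode -format
start-dfs.sh

Incidentally, hadoop.tmp.dir living under /tmp would also be a plausible explanation for a NameNode that "shuts itself down" over time, since the OS may clean /tmp. That is only a guess without the logs, but it is worth ruling out by pointing hadoop.tmp.dir at a persistent directory in core-site.xml.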


After this post

Enough said! Someone is nagging me to go to bed. I'll stick with Hadoop these next few days; my graduation project has to be in this area after all, so it looks like a long campaign. Consider this the start of a series. Flag planted~ off I go


Ganglia overview:

Ganglia is a scalable, cross-platform distributed monitoring system for high-performance computing systems such as clusters and grids. It is based on a hierarchical design and builds on widely used technologies: XML for data representation, portable data transport, and RRDtool for data storage and visualization.

Environment: Ubuntu 12.04 x64, Hadoop 1.0.4.

First install Ganglia; on Ubuntu, apt-get is enough:

sudo apt-get install ganglia-monitor ganglia-webfrontend gmetad

After installation, edit the /etc/ganglia/gmond.conf file.

In the globals block, change setuid = yes to setuid = no; in the cluster block, change name to "hadoop".
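After the edit, the relevant parts of gmond.conf look roughly like this (everything else stays at its shipped defaults; the elided lines stand for the stock settings):

/* /etc/ganglia/gmond.conf: only the two edited settings shown */
globals {
  setuid = no
  ...
}

cluster {
  name = "hadoop"
  ...
}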

Next, edit the /etc/ganglia/gmetad.conf file.

Find the data_source line and change it to data_source "hadoop" 127.0.0.1.
If you have a cluster, just list the nodes' addresses separated by spaces. You can also specify a custom listen port; if you don't, the default port 8649 is used. If a firewall is running, remember to open that port.
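For a multi-node cluster the same line just lists more hosts, for example with three hypothetical node addresses and an explicit port on the first:

data_source "hadoop" 192.168.1.101:8649 192.168.1.102 192.168.1.103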

After making the changes, restart Ganglia:

sudo /etc/init.d/ganglia-monitor restart

sudo /etc/init.d/gmetad restart

Once they have restarted, copy ganglia-webfrontend into Apache's www directory:

sudo cp -r /usr/share/ganglia-webfrontend /var/www/ganglia

Then restart the Apache service:

sudo /etc/init.d/apache2 restart

After that you can visit http://localhost/ganglia to view the interface.

Configuring Hadoop:

Find the hadoop-metrics2.properties file under the conf directory of your Hadoop installation:

#
# Below are for sending metrics to Ganglia
#
# for Ganglia 3.0 support
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink30
#
# for Ganglia 3.1 support
# *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31

*.sink.ganglia.period=10

# default for supportsparse is false
*.sink.ganglia.supportsparse=true

*.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both
*.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40

namenode.sink.ganglia.servers=239.2.11.71:8649
datanode.sink.ganglia.servers=239.2.11.71:8649
jobtracker.sink.ganglia.servers=239.2.11.71:8649
tasktracker.sink.ganglia.servers=239.2.11.71:8649
maptask.sink.ganglia.servers=239.2.11.71:8649
reducetask.sink.ganglia.servers=239.2.11.71:8649

All you need to do is adjust the comments as shown (keep the sink class matching your Ganglia version uncommented) and point the servers entries at the Ganglia address, here 239.2.11.71 (gmond's default multicast address). Restart Hadoop and you should see a view like the one below, which means the Ganglia setup succeeded.

(Screenshot: the Ganglia web UI showing the Hadoop cluster's metrics)
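If the view does not appear, one way to sanity-check that metrics are flowing at all: gmond serves its full XML metric tree over TCP on port 8649 by default, so Hadoop metric names should show up there once the daemons have restarted:

nc localhost 8649 | grep -i hadoop
# or interactively: telnet localhost 8649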

