MongoDB Java Driver

  This article uses Java to walk through common operations against MongoDB. The database version is 3.2.8 and the driver version is 3.2.2.

  This article covers:

  • How to connect to MongoDB
  • CRUD operations on documents
  • Uploading and downloading files

1. Connecting to MongoDB

  First make sure authentication is enabled on mongodb (if it is not, you can connect with just the IP and port). Connection targets fall into three kinds: standalone, cluster, and replica set.

1.1 Connecting to a standalone server or cluster

  From the code's point of view, the only difference between a standalone server and a cluster is the port number. Assuming the server address and port are 192.168.0.8 and 27017, the connection code is as follows:

String user = "admin";
String pwd = "111111";
String authDb = "admin";
String host = "192.168.0.8";
int port = 27017;
// 1. Connect directly
MongoCredential credential = MongoCredential.createCredential(user, authDb, pwd.toCharArray());
MongoClient mongoClient = new MongoClient(new ServerAddress(host, port), Arrays.asList(credential));

// 2. Connect with a connection string (an alternative to the direct connection above)
// mongodb://user:pwd@host:port/?authSource=db
String conString = "mongodb://{0}:{1}@{2}:{3}/?authSource={4}";
MongoClientURI uri = new MongoClientURI(MessageFormat.format(conString, user, pwd, host, port + "", authDb)); // pass the port as a string so MessageFormat does not insert grouping separators
MongoClient mongoClient = new MongoClient(uri);

1.2 Connecting to a replica set

// 1. Connect directly
MongoClient mongoClient = new MongoClient(
          Arrays.asList(new ServerAddress("localhost", 27017),
                        new ServerAddress("localhost", 27018)),
          Arrays.asList(MongoCredential.createCredential(user, authDb, pwd.toCharArray())));

// 2. Connect with a connection string (an alternative to the direct connection above)
// mongodb://user:pwd@host:port, host:port/?authSource=db&replicaSet=rs&slaveOk=true
String conStr = "mongodb://{0}:{1}@{2}:{3},{4}:{5}/?authSource={6}&replicaSet={7}&slaveOk=true";
MongoClientURI uri=new MongoClientURI(MessageFormat.format(conStr,"admin","111","host1","27017","host2","27018","admin","rs0"));
MongoClient mongoClient = new MongoClient(uri);

1.3 Connection options

  Whether you connect with a connection string or build the client directly, you can pass additional options. For a direct connection, construct them with MongoClientOptions.builder(); with a connection string, simply append them after & in the URI. The common options are:

  • replicaSet=name: replica set name
  • ssl=true|false: whether to use SSL
  • connectTimeoutMS=ms: connection timeout
  • socketTimeoutMS=ms: socket timeout
  • maxPoolSize=n: connection pool size
  • safe=true|false: whether the driver sends getLastError
  • journal=true|false: whether to wait for the journal to be flushed to disk
  • authMechanism=: authentication mechanism, e.g. SCRAM-SHA-1, MONGODB-X509, MONGODB-CR
  • authSource=string: authentication database, default admin; since 3.0 the default mechanism is SCRAM-SHA-1

  For more parameters, see: MongoClientURI
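  As a sketch of how the direct-connection options fit together (reusing the variables from section 1.1; the option values below are purely illustrative):

// Illustrative option values only; see MongoClientOptions for the full list.
MongoClientOptions options = MongoClientOptions.builder()
        .connectTimeout(10000)      // connectTimeoutMS
        .socketTimeout(60000)       // socketTimeoutMS
        .connectionsPerHost(100)    // maxPoolSize
        .sslEnabled(false)          // ssl
        .build();
MongoCredential cred = MongoCredential.createCredential(user, authDb, pwd.toCharArray());
MongoClient clientWithOptions = new MongoClient(new ServerAddress(host, port), Arrays.asList(cred), options);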

2. CRUD operations on documents

  When performing CRUD operations there are a few commonly used static helper classes: Filters, Sorts, Projections and Updates. For detailed usage see: Builders
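  For example, a query that combines several of these helpers might look like the following (a minimal sketch; the name/age fields are illustrative, and the collection is the myDb/myCol pair used below):

// Fetch name and age only (no _id) for documents with age > 10, sorted by age descending.
MongoCollection<Document> col = mongoClient.getDatabase("myDb").getCollection("myCol");
Document first = col.find(Filters.gt("age", 10))
        .projection(Projections.fields(Projections.include("name", "age"), Projections.excludeId()))
        .sort(Sorts.descending("age"))
        .first();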

(1) Inserting documents

// Get the target collection mycol
MongoDatabase mydb = mongoClient.getDatabase("myDb");
MongoCollection<Document> mycol = mydb.getCollection("myCol");
// Build a document
Document doc = new Document();
doc.put("name", "cyhe");
doc.put("blog", "http://www.cnblogs.com/cyhe/");
// A document can also be built fluently
new Document("id", 1).append("name", "cyhe"); // ... etc
// Or parsed directly from JSON into BSON
doc = Document.parse("{\"name\":\"cyhe\",\"blog\":\"http://www.cnblogs.com/cyhe/\"}");
// Insert a single document
mycol.insertOne(doc);
// Insert in bulk (pass a non-empty list of documents)
mycol.insertMany(new ArrayList<Document>());

(2) Querying documents

// Return all documents in the collection
List<Document> docs = mycol.find().into(new ArrayList<Document>());
// With import static com.mongodb.client.model.Filters.*; you can call and, eq, etc. directly
// Find a document with age = 20
Document doc = mycol.find(Filters.eq("age", 20)).first();
// Find documents with 10 < age < 20, returned as a cursor
MongoCursor<Document> cur = mycol.find(Filters.and(Filters.gt("age", 10), Filters.lt("age", 20))).iterator();

(3) Updating and deleting documents

// Find the document named cyhe and delete it
mycol.findOneAndDelete(Filters.eq("name", "cyhe"));
// Find and rename
mycol.findOneAndUpdate(Filters.eq("name", "cyhe"), Updates.set("name", "wqq"));
// Increment size by 1 for every document with size < 10
mycol.updateMany(Filters.lt("size", 10), Updates.inc("size", 1));
// Delete one document whose age is greater than 110
mycol.deleteOne(Filters.gt("age", 110));
3. Uploading and downloading files

  In MongoDB an ordinary document is limited to 16 MB, which is quite small for images and attachments. MongoDB's answer is GridFS, which splits files into chunks for storage.

// The default GridFS bucket name is fs; connect using the default name
GridFSBucket gridFSBucket = GridFSBuckets.create(mydb);
// Connect using a custom bucket name
GridFSBucket gridFSBucket1 = GridFSBuckets.create(mydb, "images");
// Upload a file
// ==============================================================================================
InputStream streamToUploadFrom = new FileInputStream(new File("/tmp/mongodb-tutorial.pdf"));
// Custom upload options
GridFSUploadOptions options = new GridFSUploadOptions()
                                    .chunkSizeBytes(1024)
                                    .metadata(new Document("type", "presentation"));
ObjectId fileId = gridFSBucket.uploadFromStream("mongodb-tutorial", streamToUploadFrom, options);
// ===============================================================================================
// Or use a GridFSUploadStream
byte[] data = "Data to upload into GridFS".getBytes(StandardCharsets.UTF_8);
GridFSUploadStream uploadStream = gridFSBucket.openUploadStream("sampleData", options);
uploadStream.write(data);
uploadStream.close();
System.out.println("The fileId of the uploaded file is: " + uploadStream.getFileId().toHexString());
// ===============================================================================================
// Download a file
// ===============================================================================================
// Download by the ObjectId returned from the upload
FileOutputStream streamToDownloadTo = new FileOutputStream("/tmp/mongodb-tutorial.pdf");
gridFSBucket.downloadToStream(fileId, streamToDownloadTo);
streamToDownloadTo.close();
System.out.println("Downloaded the file to /tmp/mongodb-tutorial.pdf");
// ===============================================================================================
// Download by filename
FileOutputStream streamToDownloadTo = new FileOutputStream("/tmp/mongodb-tutorial.pdf");
GridFSDownloadByNameOptions downloadOptions = new GridFSDownloadByNameOptions().revision(0);
gridFSBucket.downloadToStreamByName("mongodb-tutorial", streamToDownloadTo, downloadOptions);
streamToDownloadTo.close();
// ===============================================================================================
// Download via a GridFSDownloadStream, by ObjectId
GridFSDownloadStream downloadStream = gridFSBucket.openDownloadStream(fileId);
int fileLength = (int) downloadStream.getGridFSFile().getLength();
byte[] bytesToWriteTo = new byte[fileLength];
downloadStream.read(bytesToWriteTo);
downloadStream.close();
System.out.println(new String(bytesToWriteTo, StandardCharsets.UTF_8));
// ===============================================================================================
// Download via a GridFSDownloadStream, by filename
GridFSDownloadStream downloadStream = gridFSBucket.openDownloadStreamByName("sampleData");
int fileLength = (int) downloadStream.getGridFSFile().getLength();
byte[] bytesToWriteTo = new byte[fileLength];
downloadStream.read(bytesToWriteTo);
downloadStream.close();
System.out.println(new String(bytesToWriteTo, StandardCharsets.UTF_8));

  This article only scratches the surface of MongoDB; more usage will be added and refined in later hands-on posts. For more usage and API details, see MongoDB-Java-3.2

FastDFS Web Access via Nginx

  Because the FastDFS author removed the built-in HTTP service in versions after V4.05, even if your upload test succeeds you will not be able to access the file over HTTP. The recommended setup is therefore Nginx together with the fastdfs-nginx-module. Once the fastdfs system itself is running, web access can be added on top of it. There are roughly two approaches:

  • build nginx with the fastdfs extension module fastdfs-nginx-module (or apache with the corresponding fastdfs-apache-module), or
  • install a web server and point it directly at the fastdfs storage directory (covered in section 2 below).

1. Installing nginx with fastdfs-nginx-module

  For the first (recommended) approach, download two packages: nginx and fastdfs-nginx-module-master.zip.
  (If nginx is already installed, it has to be recompiled once with the module added; if it was installed via yum, download the source package of the same version and recompile.)

Here is the installation from scratch:

Unpack the archives:

# tar -zxvf nginx-1.13.5.tar.gz
# unzip fastdfs-nginx-module-master.zip 

Configure nginx with the fastdfs module:
 ./configure --prefix=/usr/local/nginx --add-module=/home/packages/fastdfs-nginx-module-master/src

Errors during configure:

Error: the HTTP rewrite module requires the PCRE library.
Fix: yum install pcre-devel.x86_64

Error: the HTTP gzip module requires the zlib library.
Fix: yum install zlib-devel.x86_64

Configuration succeeds:

adding module in /home/packages/fastdfs-nginx-module-master/src
 + ngx_http_fastdfs_module was configured

Configuration summary
  + using system PCRE library
  + OpenSSL library is not used
  + using system zlib library

  nginx path prefix: "/usr/local/nginx"
  nginx binary file: "/usr/local/nginx/sbin/nginx"
  nginx modules path: "/usr/local/nginx/modules"
  nginx configuration prefix: "/usr/local/nginx/conf"
  nginx configuration file: "/usr/local/nginx/conf/nginx.conf"
  nginx pid file: "/usr/local/nginx/logs/nginx.pid"
  nginx error log file: "/usr/local/nginx/logs/error.log"
  nginx http access log file: "/usr/local/nginx/logs/access.log"
  nginx http client request body temporary files: "client_body_temp"
  nginx http proxy temporary files: "proxy_temp"
  nginx http fastcgi temporary files: "fastcgi_temp"
  nginx http uwsgi temporary files: "uwsgi_temp"
  nginx http scgi temporary files: "scgi_temp"

Continue the build:

# make

# make install

Go to the nginx installation directory and edit its configuration file:

# cd /usr/local/nginx/conf

# vi nginx.conf

Add the following:

 location /M00 {
            root /home/yuqing/fastdfs/data;
            ngx_fastdfs_module;
        }

Create a symlink:

# ln -s /home/yuqing/fastdfs/data /home/yuqing/fastdfs/data/M00

Copy and edit the mod_fastdfs.conf file:

# cp mod_fastdfs.conf /etc/fdfs/

# vi /etc/fdfs/mod_fastdfs.conf

The change made here:

tracker_server=10.0.0.42:22122

(Copy these two files into the fdfs configuration directory, otherwise nginx cannot serve the files:)

(# cp /home/fastdfs-5.11/conf/http.conf /etc/fdfs/)
(# cp /home/fastdfs-5.11/conf/mime.types /etc/fdfs/)

Restart nginx:
# /usr/local/nginx/sbin/nginx -s stop;
# /usr/local/nginx/sbin/nginx

Problem: the browser cannot reach the server. Check the nginx error log, which shows:

 ERROR - file: ini_file_reader.c, line: 631, include file "http.conf" not exists, line: "#include http.conf"

 ERROR - file: /home/packages/fastdfs-nginx-module-master/src/common.c, line: 163, load conf file "/etc/fdfs/mod_fastdfs.conf" fail, ret code: 2

Fix: # cp /home/fastdfs-5.11/conf/http.conf /etc/fdfs/

 ERROR - file: shared_func.c, line: 968, file /etc/fdfs/mime.types not exist

Fix: # cp /home/fastdfs-5.11/conf/mime.types /etc/fdfs/

Accessing nginx now succeeds.

Now let's walk through the mod_fastdfs.conf file:

# connect timeout in seconds
# default value is 30s
connect_timeout=2

Connection timeout.

# network recv and send timeout in seconds
# default value is 30s
network_timeout=30

Timeout for sending and receiving data.

# the base path to store log files
base_path=/tmp

Location of the log files.

# if load FastDFS parameters from tracker server
# since V1.12
# default value is false
load_fdfs_parameters_from_tracker=true

Whether to load FastDFS parameters from the tracker server.
If they are not loaded from the tracker, the following parameters take effect.

# storage sync file max delay seconds
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.12
# default value is 86400 seconds (one day)
storage_sync_file_max_delay = 86400

Takes effect only when load_fdfs_parameters_from_tracker is false.
Maximum delay for syncing files, the same as the parameter in tracker.conf; the default is one day.

# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V1.13
use_storage_id = false

Whether to use storage IDs instead of IP addresses.

# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.13
storage_ids_filename = storage_ids.conf

Location of the storage_ids file.

# FastDFS tracker_server can ocur more than once, and tracker_server format is
#  "host:port", host can be hostname or ip address
# valid only when load_fdfs_parameters_from_tracker is true
tracker_server=10.0.0.42:22122

Takes effect when parameters are loaded from the tracker server.
Address and port of the tracker server.

# the port of the local storage server
# the default value is 23000
storage_server_port=23000

Port this storage server listens on.

# the group name of the local storage server
group_name=group1

Name of the group.

# if the url / uri including the group name
# set to false when uri like /M00/00/00/xxx
# set to true when uri like ${group_name}/M00/00/00/xxx, such as group1/M00/xxx
# default value is false
url_have_group_name = false

Whether the URL contains the group name,
i.e. /M00/00/00/xxx versus group1/M00/00/00/xxx.
If set to true (the URL contains the group name), the nginx configuration must be changed from "location /M00" to "location /group1/M00".

# path(disk or mount point) count, default value is 1
# must same as storage.conf
store_path_count=1

Number of storage paths; must match storage.conf.

# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
# must same as storage.conf
store_path0=/home/yuqing/fastdfs
#store_path1=/home/yuqing/fastdfs1

Storage path(s); must match storage.conf.

# standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

Log level.

# set the log filename, such as /usr/local/apache2/logs/mod_fastdfs.log
# empty for output to stderr (apache and nginx error_log file)
log_filename=

Name of the log file; leave it empty to log into the web server's error log.

# response mode when the file not exist in the local file system
## proxy: get the content from other storage server, then send to client
## redirect: redirect to the original storage server (HTTP Header is Location)
response_mode=proxy

How to respond when the file does not exist on the local file system:

  • proxy: fetch the content from another storage server and send it to the client
  • redirect: redirect to the original storage server (via the HTTP Location header)

# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# this paramter used to get all ip address of the local host
# default values is empty
if_alias_prefix=

NIC alias prefix. Multiple aliases are separated by commas; an empty value means it is set automatically based on the OS type. This parameter is used to obtain all IP addresses of the local host.

# use "#include" directive to include HTTP config file
# NOTE: #include is an include directive, do NOT remove the # before include
#include http.conf

Use the #include directive to include the HTTP configuration file.
Note: here #include is an include directive; do not remove the # in front of include.

# if support flv
# default value is false
# since v1.15
flv_support = true

Whether flv files are supported; not supported by default.

# flv file extension name
# default value is flv
# since v1.15
flv_extension = flv

Extension of flv files; the default is flv.

# set the group count
# set to none zero to support multi-group on this storage server
# set to 0 for single group only
# groups settings section as [group1], [group2], …, [groupN]
# default value is 0
# since v1.14
group_count = 0

Number of groups.
Set it to a non-zero value to host multiple groups on this storage server; set it to 0 for a single group only.
When the server hosts multiple groups, each group gets a section such as [group1], [group2], …, [groupN].
The default is 0, i.e. a single group.

# group settings for group #1
# since v1.14
# when support multi-group on this storage server, uncomment following section
#[group1]
#group_name=group1
#storage_server_port=23000
#store_path_count=2
#store_path0=/home/yuqing/fastdfs
#store_path1=/home/yuqing/fastdfs1

Settings for group 1.
If this storage server hosts multiple groups, uncomment the section above.

# group settings for group #2
# since v1.14
# when support multi-group, uncomment following section as neccessary
#[group2]
#group_name=group2
#storage_server_port=23000
#store_path_count=1
#store_path0=/home/yuqing/fastdfs

Settings for group 2.
If this storage server hosts multiple groups, uncomment the section above.

2. Direct web access

  This approach is simpler: just point the web server's document root at the fastdfs storage directory.
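  As a minimal sketch of this approach (it reuses the storage path and the data -> data/M00 symlink from the installation above; the domain name is illustrative):

server {
    listen 80;
    server_name picture.example.com;   # illustrative domain

    # Serve the FastDFS data directory as plain static files.
    location /M00 {
        root /home/yuqing/fastdfs/data;
    }
}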

3. Three ways to serve fastdfs over the web

3.1 Using only the group name
Suitable when there are many images, many groups, and every group has a backup replica.

Access URLs look like: picture.xxx.com/groupx/M00/……jpg

Only one domain name is needed; after uploading an image, prepend the domain to the returned path and save that as the image URL: picture.xxx.com/groupx/M00/……jpg

The front end uses one nginx as a reverse proxy with load balancing, configured as follows:

upstream group1 {
   server group1_storage1:80;
   server group1_storage2:80;
}

upstream group2 {
   server group2_storage1:80;
   server group2_storage2:80;
}

location /group1/M00 {
    proxy_pass http://group1;
}

location /group2/M00 {
    proxy_pass http://group2;
}

Note: the machines in group1 and group2 must have nginx.conf, storage.conf and mod_fastdfs.conf configured correctly.

In nginx.conf:

 location /group1/M00 {
            root /home/yuqing/fastdfs/data;
            ngx_fastdfs_module;
        }

 Pay particular attention to these two items in mod_fastdfs.conf: the group name, and whether the URL contains the group name:
group_name=
url_have_group_name =
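 For the layout in this section, where URLs do contain the group name, those two settings on a group1 storage server would presumably be (values inferred from the description above, not quoted from the original configuration):

group_name=group1
url_have_group_name = true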

3.2 Using only domain names (the layout deployed in production)
Suitable when there are few images and each group runs on a single machine with no backup replica.

Access URLs look like: picturex.xxx.com/M00/……jpg

Several domain names are needed; after uploading an image, split the returned path and combine it with the matching domain to build the image URL: picturex.xxx.com/M00/……jpg

The front end uses one nginx as a reverse proxy, configured as follows:

upstream group1 {
   server group1_storage1:80;
}

upstream group2 {
   server group2_storage1:80;
}

location /group1/M00 {
    proxy_pass http://group1;
}

location /group2/M00 {
    proxy_pass http://group2;
}

Note: the machines in group1 and group2 must have nginx.conf and storage.conf configured correctly.
Since each group has no replica, mod_fastdfs.conf does not need to be set up.

In nginx.conf:

 location /group1/M00 {
            root /home/yuqing/fastdfs/data;
        }

3.3 Using both domain names and group names

Suitable when there are many images, heavy traffic, many groups, and every group has a backup replica.

Access URLs look like: picturex.xxx.com/groupx/M00/……jpg

Several domain names are needed; after uploading an image, prepend the appropriate domain to the returned path and save that as the image URL: picturex.xxx.com/groupx/M00/……jpg

Each domain name maps to one nginx acting as reverse proxy and load balancer, configured as follows:

 

upstream group1 {
   server group1_storage1:80;
   server group1_storage2:80;
}

upstream group2 {
   server group2_storage1:80;
   server group2_storage2:80;
}

server {
        listen       80;
        server_name  picture1.xxx.com;
        ……

        location /group1/M00 {
            proxy_pass http://group1;
        }

        location /group2/M00 {
            proxy_pass http://group2;
        }
}

server {
        listen       80;
        server_name  picture2.xxx.com;
        ……
}

Note: the machines in group1 and group2 must have nginx.conf, storage.conf and mod_fastdfs.conf configured correctly.

In nginx.conf:

 location /group1/M00 {
            root /home/yuqing/fastdfs/data;
            ngx_fastdfs_module;
        }

 Pay particular attention to these two items in mod_fastdfs.conf: the group name, and whether the URL contains the group name:
group_name=
url_have_group_name =

 
