FastDFS is an open-source distributed file system written in C. It is tailored for Internet applications: it builds in redundancy, load balancing, and linear scaling, and emphasizes high availability and high performance. With FastDFS it is easy to set up a high-performance file-server cluster providing upload and download services, which makes it well suited for storing user files such as images, videos, and documents.
The FastDFS server side has two roles: the tracker and the storage node. The tracker mainly does scheduling and acts as a load balancer for access.
Storage nodes store the files and implement all file-management functions: storage, synchronization, and access interfaces. FastDFS also manages file metadata, i.e. the file's attributes, expressed as key-value pairs — for example width=1024, where the key is width and the value is 1024. A file's metadata is a list of attributes and can contain multiple key-value pairs.
The FastDFS architecture comprises tracker servers and storage servers. A client asks a tracker server to upload or download a file; the tracker schedules the request, and a storage server ultimately carries out the upload or download.
The tracker server's job is load balancing and scheduling: when a file is uploaded, the tracker applies a policy to pick a storage server to handle the upload. The tracker can be thought of as the tracking or scheduling server.
The storage server's job is file storage: files uploaded by clients end up on storage servers. A storage server does not implement its own file system; it manages files with the operating system's file system. It can be thought of as the storage server proper.
Both the tracker and the storage node may consist of one or more servers. Servers in either role can be added or taken offline at any time without affecting the online service. All tracker servers are peers, so their number can be increased or decreased at will according to load.
To support large capacity, storage nodes are organized into volumes (also called groups). The storage system consists of one or more volumes; files in different volumes are independent of one another, and the system's total file capacity is the sum of the capacities of all volumes. A volume consists of one or more storage servers, and all servers within a volume hold the same files; the multiple servers of a volume thus provide both redundant backup and load balancing.
When a server is added to a volume, the system synchronizes the existing files automatically; once synchronization finishes, the new server is switched online to serve requests.
When storage space runs low or is about to be exhausted, volumes can be added dynamically: simply add one or more servers and configure them as a new volume, which enlarges the capacity of the whole storage system.
A file identifier in FastDFS has two parts, the volume name and the file name, and both are required.
A FastDFS cluster can contain multiple tracker servers. Trackers are peers that serve requests simultaneously, so the tracker is not a single point of failure. Clients poll the trackers in round-robin order; if the chosen tracker cannot serve the request, the client switches to another one.
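The round-robin failover just described — try trackers in turn, move to the next on failure — can be sketched as follows. This is a minimal illustration; `TrackerPicker` is our own name, not part of the FastDFS client API.

```java
import java.util.List;

// Illustrative round-robin selection over a list of tracker addresses.
// On a connection failure the caller simply calls pick() again to get
// the next tracker, which is the failover behavior described above.
public class TrackerPicker {
    private final List<String> trackers;
    private int next = 0;

    public TrackerPicker(List<String> trackers) {
        this.trackers = trackers;
    }

    // Return the next tracker address in round-robin order.
    public synchronized String pick() {
        String addr = trackers.get(next);
        next = (next + 1) % trackers.size();
        return addr;
    }
}
```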
The storage cluster uses grouped storage. It consists of one or more groups, and the cluster's total capacity is the sum of the capacities of all its groups. A group consists of one or more storage servers; servers within a group are peers. Servers in different groups never communicate with one another, while servers in the same group connect to each other to synchronize files, so that every storage server in a group holds exactly the same files. A group's capacity is that of its smallest-capacity server, so the hardware and software configuration of the servers within a group should ideally be identical.
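The capacity rules above — a group's capacity is the smallest capacity among its servers, and the cluster's capacity is the sum over all groups — reduce to a small calculation. A sketch, with illustrative names of our own:

```java
import java.util.Arrays;

// Capacity arithmetic for grouped storage as described in the text.
public class CapacityCalc {
    // Capacity of one group = minimum capacity among its servers (e.g. in MB),
    // because every server in the group holds a full copy of the group's files.
    public static long groupCapacity(long[] serverCapacities) {
        return Arrays.stream(serverCapacities).min().getAsLong();
    }

    // Cluster capacity = sum of all group capacities.
    public static long clusterCapacity(long[][] groups) {
        long total = 0;
        for (long[] g : groups) {
            total += groupCapacity(g);
        }
        return total;
    }
}
```

For example, a group of servers with 100, 80, and 120 MB free effectively contributes 80 MB, which is why mixed hardware inside one group wastes the larger disks.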
Grouped storage is flexible and easy to control. For instance, when uploading a file the client may either specify the target group directly or let the tracker choose one. When a group comes under heavy access pressure, storage servers can be added to that group to expand its service capacity (vertical scaling); when the system runs out of capacity, new groups can be added (horizontal scaling).
Every storage server connects to all tracker servers in the cluster and periodically reports its status, including remaining disk space, file-synchronization status, and upload/download counts.
The tracker locates a file quickly from the requested file path, i.e. the file ID.
For example, consider requesting a file ID such as group1/M00/00/00/rBWod11BOECAQjjjAAOMYl1argw387.png (the ID produced by the upload test later in this article).
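Such a file ID decomposes into the volume (group) name, the virtual disk path (M00, which maps to store_path0 on the storage server), a two-level data directory, and the generated file name. A minimal parsing sketch — our own illustration of the format, not code from FastDFS:

```java
// Split a FastDFS file ID into its documented components:
//   group1 -> volume (group) name, used by the tracker to find the storage group
//   M00    -> virtual disk path, mapped to store_path0 on the storage server
//   00/00  -> two-level data directory
//   rest   -> file name generated by the storage server
public class FileIdParser {
    public static String[] parse(String fileId) {
        // Limit 5 keeps any '/' inside the final file name intact.
        return fileId.split("/", 5); // [group, storePath, dir1, dir2, filename]
    }
}
```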
# GitHub: https://github.com/happyfish100
wget https://github.com/happyfish100/fastdfs/archive/V5.11.tar.gz
wget https://github.com/happyfish100/libfastcommon/archive/V1.0.39.tar.gz
wget https://github.com/happyfish100/fastdfs-nginx-module/archive/V1.20.tar.gz
wget https://github.com/happyfish100/fastdfs-client-java/archive/master.zip
# openresty (nginx + lua)
wget https://openresty.org/download/openresty-1.15.8.1.tar.gz
FastDFS is written in C, so installing it requires compiling the source downloaded from the official site. The build depends on gcc; if gcc is not available, install it:
[sandu@bogon ~]$ sudo yum install -y gcc gcc-c++
[sandu@bogon ~]$ sudo yum -y groupinstall 'Development Tools'
[sandu@bogon ~]$ sudo yum -y install wget
If a desktop GUI is installed this step is unnecessary. FastDFS also depends on the libevent library, which must be installed:
[sandu@bogon ~]$ sudo yum -y install libevent
libfastcommon is provided by the FastDFS project and contains the base libraries that FastDFS needs at runtime.
GitHub: https://github.com/happyfish100/libfastcommon/
[sandu@bogon ~]$ sudo wget https://github.com/happyfish100/libfastcommon/archive/master.zip
[sandu@bogon ~]$ sudo unzip master.zip
[sandu@bogon ~]$ cd libfastcommon-master
[sandu@bogon ~]$ sudo ./make.sh
[sandu@bogon ~]$ sudo ./make.sh install
# After installation the library files are generated under /usr/lib64
[sandu@bogon fdfs]$ ll /usr/lib64 | grep fdfs
-rwxr-xr-x. 1 root root 316136 Jul 31 09:53 libfdfsclient.so
# FastDFS programs reference /usr/lib, so copy libfdfsclient.so from /usr/lib64 to /usr/lib
# (skip this if the file is already there).
[sandu@bogon src]$ sudo cp /usr/lib64/libfdfsclient.so /usr/lib/
[sandu@bogon fdfs]$ ll /usr/lib | grep fdfs
-rwxr-xr-x. 1 root root 316136 Jul 31 09:53 libfdfsclient.so
Official GitHub: https://github.com/happyfish100/fastdfs
[sandu@bogon ~]$ sudo wget https://github.com/happyfish100/fastdfs/archive/V5.11.tar.gz
[sandu@bogon ~]$ sudo tar -zxv -f V5.11.tar.gz
[sandu@bogon ~]$ cd fastdfs-5.11/
[sandu@bogon ~]$ sudo ./make.sh
[sandu@bogon ~]$ sudo ./make.sh install
If `sudo ./make.sh` fails with the following error:
cc -Wall -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE -g -O -DDEBUG_FLAG -c -o ../common/fdfs_global.o ../common/fdfs_global.c -I../common -I/usr/include/fastcommon
../common/fdfs_global.c:20:20: fatal error: logger.h: No such file or directory
 #include "logger.h"
compilation terminated.
make: *** [../common/fdfs_global.o] Error 1
then libfastcommon must be installed first.
After a successful installation, copy the two files from the conf directory of the source tree to /etc/fdfs/:
[sandu@bogon conf]$ sudo cp /usr/local/src/fastdfs-5.11/conf/http.conf /etc/fdfs/
[sandu@bogon conf]$ sudo cp /usr/local/src/fastdfs-5.11/conf/mime.types /etc/fdfs/
# The remaining configuration files have .sample templates under /etc/fdfs/; edit copies of those.
After installation the sample configuration files live under /etc/fdfs. Copy one and rename it to use as the actual configuration:
[sandu@bogon ~]$ sudo cp tracker.conf.sample tracker.conf
Edit the configuration file /etc/fdfs/tracker.conf, pointing base_path at /opt/fastdfs:
[sandu@bogon fdfs]$ sudo mkdir -p /opt/{fastdfs,fdfs_storage}
[sandu@bogon fdfs]$ sudo vim tracker.conf
# the base path to store data and log files
#base_path=/home/yuqing/fastdfs
base_path=/opt/fastdfs
# HTTP port on this tracker server
#http.server_port=8080
http.server_port=80
Start the tracker and check its listening port:
[sandu@bogon ~]$ sudo /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf start
[sandu@bogon fdfs_data]$ ps -ef | grep fdfs
root 2397 1 0 10:23 ? 00:00:00 /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf start
[sandu@bogon fdfs]$ sudo netstat -tulnp      # (yum install net-tools if missing)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22122 0.0.0.0:* LISTEN 2313/fdfs_trackerd
......
# Command-line usage
Usage: /usr/bin/fdfs_trackerd <config_file> [start | stop | restart]
Note: two directories are generated under /opt/fastdfs (the configured base_path), one for data and one for logs.
To start the tracker at boot, add the following line:
[sandu@bogon ~]$ sudo vim /etc/rc.d/rc.local
/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf restart
Again copy a sample configuration file from /etc/fdfs and rename it for use:
Edit /etc/fdfs/storage.conf: set base_path and store_path0 to the /opt directories created earlier, and configure the tracker_server address.
[sandu@bogon ~]$ sudo cp storage.conf.sample storage.conf
[sandu@bogon fdfs]$ sudo vim storage.conf
# the base path to store data and log files
#base_path=/home/yuqing/fastdfs
base_path=/opt/fastdfs
# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
#store_path0=/home/yuqing/fastdfs
store_path0=/opt/fdfs_storage
#store_path1=/home/yuqing/fastdfs2
# If several disks are mounted, define one store_path per disk:
#store_path1=.....
#store_path2=......
# tracker_server can ocur more than once, and tracker_server format is
# "host:port", host can be hostname or ip address
tracker_server=172.21.168.119:22122      # the service will not start with 127.0.0.1 here
# the port of the web server on this storage server
#http.server_port=8888
http.server_port=88
Under /opt/fdfs_storage/data there are 256 first-level directories, each containing 256 second-level subdirectories — 65,536 directories in total. A newly written file is routed by hash to one of these subdirectories and stored there directly as a local file.
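A quick sanity check of that fan-out (256 × 256 = 65,536 leaf directories) and of how a two-level directory name like the 00/00 seen above is formed. The helper names are ours; the exact hash FastDFS applies is internal to the storage server:

```java
// Illustrates the data-directory layout described in the text.
public class DirLayout {
    // 256 first-level directories, each with 256 second-level directories.
    public static int totalDirs() {
        return 256 * 256; // 65536 leaf directories
    }

    // Format a two-level directory path such as "00/00" from two indexes,
    // matching the two-hex-digit directory names visible on disk.
    public static String dirPath(int level1, int level2) {
        return String.format("%02x/%02x", level1, level2);
    }
}
```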
Start the storage server and check its port:
[sandu@bogon fdfs]$ sudo /usr/bin/fdfs_storaged /etc/fdfs/storage.conf start
[sandu@bogon fdfs]$ sudo ps -ef | grep fdfs
root 2397 1 0 10:23 ? 00:00:00 /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf start
root 16694 1 99 10:51 ? 00:00:02 /usr/bin/fdfs_storaged /etc/fdfs/storage.conf start
[sandu@bogon fdfs]$ sudo netstat -tulnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22122 0.0.0.0:* LISTEN 2397/fdfs_trackerd
tcp 0 0 0.0.0.0:23000 0.0.0.0:* LISTEN 16694/fdfs_storaged
......
# Command-line usage
Usage: /usr/bin/fdfs_storaged <config_file> [start | stop | restart]
To start the storage server at boot, add the following line:
[sandu@bogon ~]$ sudo vim /etc/rc.d/rc.local
/usr/bin/fdfs_storaged /etc/fdfs/storage.conf restart
Verify that the storage server has registered with the tracker.
On success you should see: ip_addr = 172.21.168.119 (bogon) ACTIVE
[sandu@bogon src]$ sudo /usr/bin/fdfs_monitor /etc/fdfs/storage.conf
[2019-07-31 15:37:00] DEBUG - base_path=/opt/fastdfs, connect_timeout=30, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0

server_count=1, server_index=0

tracker server is 172.21.168.119:22122

group count: 1

Group 1:
group name = group1
disk total space = 48096 MB
disk free space = 45996 MB
trunk free space = 0 MB
storage server count = 1
active server count = 1
storage server port = 23000
storage HTTP port = 88
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0

Storage 1:
id = 172.21.168.119
ip_addr = 172.21.168.119 (bogon)  ACTIVE
http domain =
version = 5.11
join time = 2019-07-31 14:33:43
up time = 2019-07-31 14:33:43
total storage = 48096 MB
free storage = 45996 MB
upload priority = 10
store_path_count = 1
subdir_count_per_path = 256
storage_port = 23000
storage_http_port = 88
current_write_path = 0
source storage id =
if_trunk_server = 0
connection.alloc_count = 256
connection.current_count = 0
connection.max_count = 2
total_upload_count = 2
success_upload_count = 2
total_append_count = 0
success_append_count = 0
total_modify_count = 0
success_modify_count = 0
total_truncate_count = 0
success_truncate_count = 0
total_set_meta_count = 2
success_set_meta_count = 2
total_delete_count = 0
success_delete_count = 0
total_download_count = 0
success_download_count = 0
total_get_meta_count = 0
success_get_meta_count = 0
total_create_link_count = 0
success_create_link_count = 0
total_delete_link_count = 0
success_delete_link_count = 0
total_upload_bytes = 465092
success_upload_bytes = 465092
total_append_bytes = 0
success_append_bytes = 0
total_modify_bytes = 0
success_modify_bytes = 0
total_download_bytes = 0
success_download_bytes = 0
total_sync_in_bytes = 0
success_sync_in_bytes = 0
total_sync_out_bytes = 0
success_sync_out_bytes = 0
total_file_open_count = 2
success_file_open_count = 2
total_file_read_count = 0
success_file_read_count = 0
total_file_write_count = 2
success_file_write_count = 2
last_heart_beat_time = 2019-07-31 15:36:40
last_source_update = 2019-07-31 14:42:07
last_sync_update = 1970-01-01 08:00:00
last_synced_timestamp = 1970-01-01 08:00:00
Switch to the /etc/fdfs/ directory and copy a fresh client configuration file:
# Client configuration, mainly for local testing
[sandu@bogon fdfs]$ cd /etc/fdfs
[sandu@bogon fdfs]$ sudo cp client.conf.sample client.conf
[sandu@bogon fdfs]$ sudo vim client.conf
# the base path to store log files
#base_path=/home/yuqing/fastdfs
base_path=/opt/fastdfs
# tracker_server can ocur more than once, and tracker_server format is
# "host:port", host can be hostname or ip address
#tracker_server=192.168.0.197:22122
tracker_server=172.21.168.119:22122
Upload an image, 1.png, to the /tmp directory of the CentOS server, then test with the following command:
[sandu@bogon fdfs]$ sudo /usr/bin/fdfs_test /etc/fdfs/client.conf upload /tmp/1.png
This is FastDFS client test program v5.11
Copyright (C) 2008, Happy Fish / YuQing
FastDFS may be copied only under the terms of the GNU General Public License V3, which may be found in the FastDFS source kit.
Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
[2019-07-31 14:42:08] DEBUG - base_path=/opt/fastdfs, connect_timeout=30, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0

tracker_query_storage_store_list_without_group:
        server 1. group_name=, ip_addr=172.21.168.119, port=23000

group_name=group1, ip_addr=172.21.168.119, port=23000
storage_upload_by_filename
group_name=group1, remote_filename=M00/00/00/rBWod11BOECAQjjjAAOMYl1argw387.png    # file path
source ip address: 172.21.168.119
file timestamp=2019-07-31 14:42:08
file size=232546
file crc32=1566223884
example file url: http://172.21.168.119/group1/M00/00/00/rBWod11BOECAQjjjAAOMYl1argw387.png
storage_upload_slave_by_filename
group_name=group1, remote_filename=M00/00/00/rBWod11BOECAQjjjAAOMYl1argw387_big.png
source ip address: 172.21.168.119
file timestamp=2019-07-31 14:42:08
file size=232546
file crc32=1566223884
example file url: http://172.21.168.119/group1/M00/00/00/rBWod11BOECAQjjjAAOMYl1argw387_big.png    # file URL
The file URL above, http://172.21.168.119/group1/M00/00/00/rBWod11BOECAQjjjAAOMYl1argw387_big.png, corresponds to the file /opt/fdfs_storage/data/00/00/rBWod11BOECAQjjjAAOMYl1argw387_big.png on the storage server.
Listing that directory shows four files:
[sandu@bogon fdfs]$ tree /opt/fdfs_storage/data/00/00
/opt/fdfs_storage/data/00/00
├── rBWod11BOECAQjjjAAOMYl1argw387_big.png
├── rBWod11BOECAQjjjAAOMYl1argw387_big.png-m
├── rBWod11BOECAQjjjAAOMYl1argw387.png
└── rBWod11BOECAQjjjAAOMYl1argw387.png-m
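The mapping from the example URL to the on-disk path — group name stripped, M00 replaced by store_path0 plus /data — can be sketched as follows. `urlToDiskPath` and its parameters are our own illustration, assuming a single store path as configured above:

```java
// Sketch of how a FastDFS file ID maps onto the storage server's disk,
// per the example above (M00 corresponds to store_path0).
public class UrlMapper {
    // e.g. urlToDiskPath("/opt/fdfs_storage", "group1/M00/00/00/abc.png")
    //      -> "/opt/fdfs_storage/data/00/00/abc.png"
    public static String urlToDiskPath(String storePath0, String fileId) {
        String[] parts = fileId.split("/", 3); // [group, M00, dir1/dir2/filename]
        return storePath0 + "/data/" + parts[2];
    }
}
```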
Because FastDFS has not yet been integrated with nginx, the files cannot be downloaded over HTTP yet.
The main purpose of installing nginx on each tracker is load balancing and high availability. With only a single tracker server, nginx there can be omitted.
One tracker serves multiple storage servers, and nginx load-balances across those storage servers.
FastDFS versions must be paired with matching fastdfs-nginx-module versions:
FastDFS Version 5.11 pairs with fastdfs-nginx-module Version 1.20
FastDFS Version 5.10 pairs with fastdfs-nginx-module Version 1.19
GitHub: https://github.com/happyfish100/fastdfs-nginx-module
Download the fastdfs-nginx-module module and make the necessary pre-build changes:
[sandu@bogon src]$ sudo wget https://github.com/happyfish100/fastdfs-nginx-module/archive/V1.20.tar.gz
[sandu@bogon src]$ sudo tar -zxv -f V1.20.tar.gz
[sandu@bogon src]$ cd /usr/local/src/fastdfs-nginx-module-1.20/src
[sandu@bogon src]$ sudo vim config
# Change these two lines:
ngx_module_incs="/usr/local/include"
CORE_INCS="$CORE_INCS /usr/local/include"
# to:
ngx_module_incs="/usr/include/fastdfs /usr/include/fastcommon/"
CORE_INCS="$CORE_INCS /usr/include/fastdfs /usr/include/fastcommon/"

# Note: if you skip this step, compiling openresty later fails with:
In file included from /usr/local/src/fastdfs-nginx-module-1.20/src/common.c:26:0,
                 from /usr/local/src/fastdfs-nginx-module-1.20/src/ngx_http_fastdfs_module.c:6:
/usr/include/fastdfs/fdfs_define.h:15:27: fatal error: common_define.h: No such file or directory
 #include "common_define.h"
compilation terminated.
gmake[2]: *** [objs/addon/src/ngx_http_fastdfs_module.o] Error 1
gmake[2]: Leaving directory `/usr/local/src/openresty-1.15.8.1/build/nginx-1.15.8'
gmake[1]: *** [build] Error 2
gmake[1]: Leaving directory `/usr/local/src/openresty-1.15.8.1/build/nginx-1.15.8'
gmake: *** [all] Error 2

# Copy mod_fastdfs.conf from the same directory to /etc/fdfs/ and adjust it:
[sandu@bogon src]$ cd /usr/local/src/fastdfs-nginx-module-1.20/src
[sandu@bogon src]$ sudo cp mod_fastdfs.conf /etc/fdfs/
[sandu@bogon fdfs]$ sudo cp mod_fastdfs.conf mod_fastdfs.conf.simple
[sandu@bogon fdfs]$ sudo vim mod_fastdfs.conf
# the base path to store log files
#base_path=/tmp
base_path=/opt/fastdfs
# FastDFS tracker_server can ocur more than once, and tracker_server format is
# "host:port", host can be hostname or ip address
# valid only when load_fdfs_parameters_from_tracker is true
tracker_server=172.21.168.119:22122
# if the url / uri including the group name
# set to false when uri like /M00/00/00/xxx
# set to true when uri like ${group_name}/M00/00/00/xxx, such as group1/M00/xxx
# default value is false
#url_have_group_name = false
url_have_group_name = true        # URLs include the group name
# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
# must same as storage.conf
#store_path0=/home/yuqing/fastdfs
store_path0=/opt/fdfs_storage
#store_path1=/home/yuqing/fastdfs1

# Copy libfdfsclient.so to /usr/lib (already done earlier):
[sandu@bogon src]$ sudo cp /usr/lib64/libfdfsclient.so /usr/lib/

# Create the nginx/client directory for nginx's temporary files; if you skip this,
# the defaults are used — then also omit the temp-path options when configuring openresty below.
[sandu@bogon src]$ sudo mkdir -p /var/temp/nginx/client
Download and install openresty:
[sandu@bogon src]$ sudo wget https://openresty.org/download/openresty-1.15.8.1.tar.gz
[sandu@bogon src]$ sudo tar -zxv -f openresty-1.15.8.1.tar.gz
[sandu@bogon src]$ sudo yum -y install pcre-devel openssl openssl-devel
[sandu@bogon src]$ cd openresty-1.15.8.1
[sandu@bogon openresty-1.15.8.1]$ sudo ./configure \
  --with-luajit \
  --with-http_stub_status_module \
  --with-http_ssl_module \
  --with-http_realip_module \
  --with-http_gzip_static_module \
  --http-client-body-temp-path=/var/temp/nginx/client \
  --http-proxy-temp-path=/var/temp/nginx/proxy \
  --http-fastcgi-temp-path=/var/temp/nginx/fastcgi \
  --http-uwsgi-temp-path=/var/temp/nginx/uwsgi \
  --http-scgi-temp-path=/var/temp/nginx/scgi \
  --add-module=/usr/local/src/fastdfs-nginx-module-1.20/src
nginx path prefix: "/usr/local/openresty/nginx"
nginx binary file: "/usr/local/openresty/nginx/sbin/nginx"
nginx modules path: "/usr/local/openresty/nginx/modules"
nginx configuration prefix: "/usr/local/openresty/nginx/conf"
nginx configuration file: "/usr/local/openresty/nginx/conf/nginx.conf"
nginx pid file: "/usr/local/openresty/nginx/logs/nginx.pid"
nginx error log file: "/usr/local/openresty/nginx/logs/error.log"
nginx http access log file: "/usr/local/openresty/nginx/logs/access.log"
nginx http client request body temporary files: "/var/temp/nginx/client"
nginx http proxy temporary files: "/var/temp/nginx/proxy"
nginx http fastcgi temporary files: "/var/temp/nginx/fastcgi"
nginx http uwsgi temporary files: "/var/temp/nginx/uwsgi"
nginx http scgi temporary files: "/var/temp/nginx/scgi"
[sandu@bogon openresty-1.15.8.1]$ sudo gmake
[sandu@bogon openresty-1.15.8.1]$ sudo gmake install
Set up the files required by the fastdfs-nginx-module module:
# Copy the files required by fastdfs-nginx-module
[sandu@bogon conf]$ cd /usr/local/src/fastdfs-5.11/conf
[sandu@bogon conf]$ sudo cp http.conf mime.types /etc/fdfs/
# Without this step nginx fails to start with:
[2019-07-31 13:29:00] ERROR - file: ini_file_reader.c, line: 1029, include file "http.conf" not exists, line: "#include http.conf"
[2019-07-31 13:29:00] ERROR - file: /usr/local/src/fastdfs-nginx-module-1.20/src/common.c, line: 163, load conf file "/etc/fdfs/mod_fastdfs.conf" fail, ret code: 2
2019/07/31 13:29:00 [alert] 43690#0: worker process 43691 exited with fatal code 2 and cannot be respawned
# Edit the nginx.conf configuration file
error_log logs/error.log;
pid logs/nginx.pid;
server {
    server_name 172.21.168.119;   # this machine's IP
    # group1 is the FastDFS group name served by nginx; M00 is generated by FastDFS
    # and corresponds to store_path0=/opt/fdfs_storage. If FastDFS defines store_path1,
    # that becomes M01. A regular-expression location could be used later to match all groups.
    location /group1/M00/ {
        root /opt/fdfs_storage/data;
        ngx_fastdfs_module;
    }
}
Start nginx:
[sandu@bogon logs]$ sudo /usr/local/openresty/nginx/sbin/nginx
[sandu@bogon logs]$ sudo /usr/local/openresty/nginx/sbin/nginx -s reload
# To start nginx at boot, add the following:
[sandu@bogon logs]$ sudo vim /etc/rc.d/rc.local
# nginx start
/usr/local/openresty/nginx/sbin/nginx
All of the boot-time startup entries together:
# On CentOS 7, /etc/rc.d/rc.local has reduced permissions; make it executable first
[sandu@bogon logs]$ sudo chmod +x /etc/rc.d/rc.local
[sandu@bogon logs]$ sudo vim /etc/rc.d/rc.local
# fastdfs start
/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf restart
/usr/bin/fdfs_storaged /etc/fdfs/storage.conf restart
# nginx start
/usr/local/openresty/nginx/sbin/nginx
CentOS ships with a firewall; it must be stopped (or port 80 opened) before the files can be accessed from a browser.
# CentOS 7.0 uses firewalld by default; if iptables is not in use as the firewall, stop firewalld:
[sandu@bogon logs]$ sudo systemctl stop firewalld.service      # stop firewalld
[sandu@bogon logs]$ sudo systemctl disable firewalld.service   # disable firewalld at boot
[sandu@bogon logs]$ sudo firewall-cmd --state                  # check status ("not running" when stopped, "running" when active)
# If iptables is used as the firewall instead:
[sandu@bogon logs]$ sudo service iptables stop       # stop the firewall temporarily
[sandu@bogon logs]$ sudo chkconfig iptables off      # disable the firewall permanently
# Alternatively, just open port 80 in firewalld instead of disabling it:
[sandu@bogon logs]$ sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
[sandu@bogon logs]$ sudo firewall-cmd --reload
Now the image can be opened in a browser: http://172.21.168.119/group1/M00/00/00/rBWod11BOECAQjjjAAOMYl1argw387_big.png
There is a dedicated client library, fastdfs-client-java.jar, for integrating the file server with Java; download it yourself.
Create a Maven web project with the following pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.zlh</groupId>
  <artifactId>fastdfs</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>war</packaging>
  <name>fastdfs</name>
  <url>http://maven.apache.org</url>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <dependencies>
    <dependency><groupId>junit</groupId><artifactId>junit</artifactId><version>3.8.1</version><scope>test</scope></dependency>
    <dependency><groupId>org.csource</groupId><artifactId>fastdfs-client-java</artifactId><version>5.0.5</version></dependency>
    <dependency><groupId>commons-io</groupId><artifactId>commons-io</artifactId><version>2.4</version></dependency>
    <!-- Basic Spring dependencies: begin -->
    <dependency><groupId>org.springframework</groupId><artifactId>spring-core</artifactId><version>4.1.2.RELEASE</version></dependency>
    <dependency><groupId>org.springframework</groupId><artifactId>spring-expression</artifactId><version>4.1.2.RELEASE</version></dependency>
    <dependency><groupId>org.springframework</groupId><artifactId>spring-beans</artifactId><version>4.1.2.RELEASE</version></dependency>
    <dependency><groupId>org.springframework</groupId><artifactId>spring-aop</artifactId><version>4.1.2.RELEASE</version></dependency>
    <dependency><groupId>org.springframework</groupId><artifactId>spring-context</artifactId><version>4.1.2.RELEASE</version></dependency>
    <dependency><groupId>org.springframework</groupId><artifactId>spring-context-support</artifactId><version>4.1.2.RELEASE</version></dependency>
    <dependency><groupId>org.springframework</groupId><artifactId>spring-tx</artifactId><version>4.1.2.RELEASE</version></dependency>
    <dependency><groupId>org.springframework</groupId><artifactId>spring-web</artifactId><version>4.1.2.RELEASE</version></dependency>
    <dependency><groupId>org.springframework</groupId><artifactId>spring-jdbc</artifactId><version>4.1.2.RELEASE</version></dependency>
    <dependency><groupId>org.springframework</groupId><artifactId>spring-webmvc</artifactId><version>4.1.2.RELEASE</version></dependency>
    <dependency><groupId>org.springframework</groupId><artifactId>spring-aspects</artifactId><version>4.1.2.RELEASE</version></dependency>
    <dependency><groupId>org.springframework</groupId><artifactId>spring-test</artifactId><version>4.1.2.RELEASE</version></dependency>
    <!-- Basic Spring dependencies: end -->
    <!-- json -->
    <dependency><groupId>com.alibaba</groupId><artifactId>fastjson</artifactId><version>1.2.6</version></dependency>
    <dependency><groupId>com.sun.jersey.contribs</groupId><artifactId>jersey-multipart</artifactId><version>1.19.4</version></dependency>
    <dependency><groupId>com.sun.jersey</groupId><artifactId>jersey-client</artifactId><version>1.19.4</version></dependency>
    <dependency><groupId>commons-fileupload</groupId><artifactId>commons-fileupload</artifactId><version>1.3.3</version></dependency>
    <dependency><groupId>javax.servlet</groupId><artifactId>servlet-api</artifactId><version>2.5</version><scope>provided</scope></dependency>
    <dependency><groupId>org.slf4j</groupId><artifactId>slf4j-log4j12</artifactId><version>1.6.4</version></dependency>
    <dependency><groupId>net.sf.json-lib</groupId><artifactId>json-lib</artifactId><version>2.2.3</version></dependency>
    <dependency><groupId>org.apache.ant</groupId><artifactId>ant</artifactId><version>1.6.5</version></dependency>
    <dependency><groupId>java.unrar</groupId><artifactId>unrar</artifactId><version>0.5</version></dependency>
    <dependency><groupId>commons-lang</groupId><artifactId>commons-lang</artifactId><version>2.6</version></dependency>
  </dependencies>
  <build>
    <finalName>fastdfs</finalName>
    <!-- Compiler level for the project -->
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>1.7</source>
          <target>1.7</target>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
The controller class:
package com.zlh.fastdfs.controller;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.csource.common.NameValuePair;
import org.csource.fastdfs.ClientGlobal;
import org.csource.fastdfs.StorageClient;
import org.csource.fastdfs.StorageServer;
import org.csource.fastdfs.TrackerClient;
import org.csource.fastdfs.TrackerServer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

import com.zlh.fastdfs.common.BaseView;
import com.zlh.fastdfs.common.FileView;
import com.zlh.fastdfs.common.PropertitesUtil;

@RestController
@RequestMapping("")
public class FastdfsController {

    private Logger logger = LoggerFactory.getLogger(FastdfsController.class);

    private static StorageClient storageClient = null;

    static {
        try {
            ClientGlobal.init(PropertitesUtil.conf_filename);
            TrackerClient tracker = new TrackerClient();
            TrackerServer trackerServer = tracker.getConnection();
            StorageServer storageServer = null;
            storageClient = new StorageClient(trackerServer, storageServer);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @RequestMapping(value = "upload", method = RequestMethod.POST)
    public Object upload(MultipartFile attach, HttpServletRequest request, HttpServletResponse response) {
        FileView fileView = new FileView();
        try {
            NameValuePair nvp[] = new NameValuePair[] {
                    new NameValuePair("fileName", attach.getOriginalFilename()),
                    new NameValuePair("type", attach.getContentType()),
                    new NameValuePair("ext", PropertitesUtil.getFilenameExt(attach.getOriginalFilename())),
                    new NameValuePair("size", Long.valueOf(attach.getSize()).toString()) };
            String fileIds[] = storageClient.upload_file(attach.getBytes(),
                    PropertitesUtil.getFilenameExt(attach.getOriginalFilename()), nvp);
            fileView.setFileSize(Long.valueOf(attach.getSize()).toString());
            fileView.setFileName(attach.getOriginalFilename());
            fileView.setFilePath(fileIds[0] + "/" + fileIds[1]);
            logger.info(fileIds.length + "");
            logger.info("group name: " + fileIds[0]);
            logger.info("path: " + fileIds[1]);
        } catch (Exception e) {
            e.printStackTrace();
            return new BaseView(false, "upload failed!");
        }
        return new BaseView(fileView);
    }
}
The common response entity returned by the server:
package com.zlh.fastdfs.common;

import java.io.Serializable;

/**
 * File: com.sdzkpt.common.utils.BaseView.java</br>
 * Purpose: wraps the status description and entity data the controller layer returns to the front end. <br/>
 */
public class BaseView implements Serializable {

    private static final long serialVersionUID = -3312282922207239793L;

    /** Whether the request succeeded: true on success, false on failure. */
    private boolean isSuccess;

    /** Status description. */
    private String msgCode;

    /** The data view to return. */
    private Object data;

    /** Default no-arg constructor. */
    public BaseView() {
        isSuccess = true;
        msgCode = "";
    }

    /**
     * Constructor specifying status and description.
     *
     * @param isSuccess success flag, true on success, false on failure
     * @param msgCode   status code
     */
    public BaseView(boolean isSuccess, String msgCode) {
        this.isSuccess = isSuccess;
        this.msgCode = msgCode;
    }

    /**
     * Constructor specifying a data view of some type. Since data is given, the status
     * defaults to success with an empty status code; use another constructor to set a
     * custom status code.
     *
     * @param data the data view to return
     */
    public BaseView(Object data) {
        isSuccess = true;
        msgCode = "";
        this.data = data;
    }

    /**
     * Constructor specifying status, description, and data view.
     *
     * @param isSuccess success flag, true on success, false on failure
     * @param msgCode   status code
     * @param data      the data view to return
     */
    public BaseView(boolean isSuccess, String msgCode, Object data) {
        this.isSuccess = isSuccess;
        this.msgCode = msgCode;
        this.data = data;
    }

    /** @return whether the request succeeded: true on success, false on failure */
    public boolean getIsSuccess() {
        return isSuccess;
    }

    /** @return the status description */
    public String getMsgCode() {
        return msgCode;
    }

    /** @return the data view of the specified type */
    public Object getData() {
        return data;
    }
}
The return-value entity class:
package com.zlh.fastdfs.common;

import java.io.Serializable;

/**
 * File: com.zlh.fastdfs.common.FileView.java</br>
 */
public class FileView implements Serializable {

    private static final long serialVersionUID = 1L;

    private String fileSize;
    private String fileName;
    private String filePath;

    public String getFileSize() {
        return fileSize;
    }

    public void setFileSize(String fileSize) {
        this.fileSize = fileSize;
    }

    public String getFileName() {
        return fileName;
    }

    public void setFileName(String fileName) {
        this.fileName = fileName;
    }

    public String getFilePath() {
        return filePath;
    }

    public void setFilePath(String filePath) {
        this.filePath = filePath;
    }
}
A utility class for reading the configuration:
package com.zlh.fastdfs.common;

/**
 * File: com.zlh.fastdfs.common.PropertitesUtil.java</br>
 */
public class PropertitesUtil extends ConfigurableContants {

    // Static initializer: read the settings from client.conf
    static {
        init("/client.conf");
    }

    public static final String conf_filename = PropertitesUtil.class.getResource("/client.conf").getPath();

    // public String conf_filename = getProperty(key, defaultValue);

    // Return the extension of the given file name (at most six characters after the dot).
    public static String getFilenameExt(String fileName) {
        String fileExtName = "";
        if (fileName == null || "".equals(fileName)) {
            return null;
        }
        int nPos = fileName.lastIndexOf('.');
        if (nPos > 0 && fileName.length() - nPos <= 7) {
            fileExtName = fileName.substring(nPos + 1);
        }
        return fileExtName;
    }
}
The integration configuration file client.conf:
connect_timeout = 2
network_timeout = 30
charset = UTF-8
http.tracker_http_port = 80
http.anti_steal_token = no
tracker_server = 172.21.168.119:22122
#tracker_server = 192.168.0.119:22122
The Spring MVC dispatcher-servlet.xml:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:p="http://www.springframework.org/schema/p"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:mvc="http://www.springframework.org/schema/mvc"
    xmlns:util="http://www.springframework.org/schema/util"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-4.0.xsd
        http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-4.0.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
        http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-4.0.xsd">
    <!-- Scan the controller package -->
    <context:component-scan base-package="com.zlh.fastdfs.controller" />
    <!-- Enable file uploads -->
    <bean id="multipartResolver" class="org.springframework.web.multipart.commons.CommonsMultipartResolver"/>
    <!-- Application parameters -->
    <context:property-placeholder location="classpath:client.conf" ignore-unresolvable="true"/>
    <!-- Annotation-driven MVC -->
    <mvc:annotation-driven>
        <mvc:message-converters register-defaults="true">
            <!-- Prevent IE from downloading the JSON response as a file on AJAX requests -->
            <bean id="fastJsonHttpMessageConverter" class="com.alibaba.fastjson.support.spring.FastJsonHttpMessageConverter">
                <property name="supportedMediaTypes">
                    <list>
                        <value>application/json;charset=UTF-8</value>
                        <value>text/plain;charset=UTF-8</value>
                        <value>text/html;charset=UTF-8</value>
                    </list>
                </property>
                <property name="features">
                    <list>
                        <value>WriteDateUseDateFormat</value>
                    </list>
                </property>
            </bean>
        </mvc:message-converters>
    </mvc:annotation-driven>
    <!-- View resolver -->
    <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
        <property name="prefix" value="/WEB-INF/jsp/" />
        <property name="suffix" value=".jsp" />
    </bean>
    <mvc:default-servlet-handler/>
</beans>
web.xml:
<?xml version="1.0" encoding="UTF-8"?>
<web-app id="WebApp_ID" version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
    <context-param>
        <param-name>fastdfs</param-name>
        <param-value>webapp</param-value>
    </context-param>
    <!-- Character-encoding filter -->
    <filter>
        <filter-name>encodingFilter</filter-name>
        <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class>
        <init-param>
            <param-name>encoding</param-name>
            <param-value>UTF-8</param-value>
        </init-param>
    </filter>
    <filter-mapping>
        <filter-name>encodingFilter</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>
    <!-- Spring MVC DispatcherServlet -->
    <servlet>
        <servlet-name>springmvc</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <!-- Initialization parameters -->
        <init-param>
            <!-- Load the Spring MVC XML into Spring's application context -->
            <param-name>contextConfigLocation</param-name>
            <param-value>classpath:dispatcher-servlet.xml</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>springmvc</servlet-name>
        <url-pattern>/fdfs/*</url-pattern>
    </servlet-mapping>
</web-app>
Cross-origin access
The approach that finally solved it: the local nginx proxies the request to the nginx on the cloud host, and that nginx in turn proxies the request to the actual web application.
The front-end JS URL: http://127.0.0.1:8000/my-fdfs/
The local nginx listens on port 8000:
location /my-fdfs/ {
    proxy_pass http://118.145.79.12/fastdfs/fdfs/upload/;
}
The cloud nginx listens on port 80, which is why no port is written in the proxy_pass above. Its configuration:
location /fastdfs/fdfs/upload/ {
    proxy_pass http://118.145.79.12:8081/fastdfs/fdfs/upload/;
}
This second proxy reaches the web application directly via its ip + port.
1 Basic configuration

disable
#func: whether this configuration file is disabled
#valu: true or false
disable=false

bind_addr
#func: IP address to bind
#valu: an IP address
bind_addr=192.168.6.102

port
#func: service port
#valu: integer port number
port=22122

connect_timeout
#func: connection timeout
#valu: positive integer, in seconds
connect_timeout=30

network_timeout
#func: network timeout
#valu: positive integer, in seconds
network_timeout=60

base_path
#func: tracker data/log directory
#valu: a path
base_path=/home/michael/fdfs/base4tracker

max_connections
#func: maximum number of connections
#valu: positive integer
max_connections=256

work_threads
#func: number of worker threads, usually set to the number of CPUs
#valu: positive integer
work_threads=4

store_lookup
#func: how the group is chosen for uploads
#valu: 0, 1 or 2
# 0: round robin
# 1: a specified group
# 2: load balancing (choose the group with the most free space)
store_lookup=2

store_group
#func: the group to upload to. Ineffective if the application layer specifies a concrete group, and also ineffective when store_lookup is 0 or 2.
#valu: e.g. group1
store_group=group1

store_server
#func: how the upload server is chosen. (Once a file is uploaded, that storage server becomes the file's source server and pushes the file to the other servers in the group for synchronization.)
#valu: 0, 1 or 2
# 0: round robin (default)
# 1: sort by IP address and pick the first (smallest IP)
# 2: sort by priority (set on each storage server via upload_priority; smaller value = higher priority)
store_server=0

store_path
#func: how the upload path is chosen. A storage server can have several base paths for storing files (think of them as several disks).
#valu:
# 0: round robin — the directories take turns storing files
# 2: load balancing — choose the directory with the most free space (note: free disk space is dynamic, so the chosen directory or disk may change)
store_path=0

download_server
#func: how the download server is chosen
#valu:
# 0: round robin (default)
# 1: smallest IP
# 2: by priority (smallest value = highest priority)
download_server=0

reserved_storage_space
#func: reserved space. If any server in a group has less free space than this value, no files are uploaded to that group.
#valu:
# G or g for gigabyte
# M or m for megabyte
# K or k for kilobyte
reserved_storage_space=1GB

log_level
#func: log level
#valu:
# emerg for emergency
# alert
# crit for critical
# error
# warn for warning
# notice
# info for information
# debug for debugging
log_level=info

run_by_group / run_by_user
#func: the group and user the program runs as
#valu: group/user name, or empty
run_by_group=
run_by_user=

allow_hosts
#func: IP range allowed to connect to the tracker server; multiple values may be set
#valu:
allow_hosts=

check_active_interval
#func: interval, in seconds, for checking that storage servers are alive.
# Storage servers send heartbeats to the tracker periodically; if the tracker receives no heartbeat from a
# storage server within one check_active_interval, it considers that server offline. This value must therefore
# be greater than the heartbeat interval configured on the storage servers — usually 2 or 3 times that interval.
check_active_interval=120

thread_stack_size
#func: thread stack size. The larger the stack, the more system resources one thread consumes.
# To start more threads (max_connections in V1.x, work_threads in V2.0), this value can be lowered as appropriate.
#valu: e.g. 64KB; the default is 64, and the tracker's thread stack should not be below 64KB
thread_stack_size=64KB

storage_ip_changed_auto_adjust
#func: whether the cluster adjusts automatically when a storage server's IP address changes. Note: the adjustment only happens when the storage server process restarts.
#valu: true or false
storage_ip_changed_auto_adjust=true

2 Synchronization

storage_sync_file_max_delay
#func: maximum synchronization delay between storage servers in the same group; adjust to actual conditions
#valu: in seconds; default one day (24*3600)
#sinc: v2.0
storage_sync_file_max_delay=86400

storage_sync_file_max_time
#func: maximum time a storage server may spend synchronizing one file; default 300 s, i.e. 5 minutes
#sinc: v2.0
storage_sync_file_max_time=300

sync_log_buff_interval
#func: interval for syncing/flushing log entries to disk. Note: the tracker does not write its log to disk immediately; it buffers it in memory first.
#valu: in seconds
sync_log_buff_interval=10

3 Trunk and slot

#func: whether to store several small files in one trunk file
#valu: true or false
#sinc: v3.0
use_trunk_file=false

#func: minimum slot size
#valu: <= 4KB; default 256 bytes
#sinc: v3.0
slot_min_size=256

#func: maximum slot size
#valu: >= slot_min_size; files smaller than this are stored in a trunk file. Default 16MB.
#sinc: v3.0
slot_max_size=16MB

#func: trunk file size
#valu: >= 4MB; default 64MB
#sinc: v3.0
trunk_file_size=64MB

4 HTTP settings

Whether HTTP is enabled
#func: whether HTTP is enabled
#valu: true or false
http.disabled=false

HTTP server port
#func: http port on the tracker server
#valu:
#note: effective only when http.disabled=false
http.server_port=7271

Interval for checking storage liveness (heartbeat)
#func: interval for checking that the storage http server is alive
#valu: in seconds
#note: effective only when http.disabled=false
http.check_alive_interval=30

Protocol used for the liveness check
#func: how to check that the storage http server is alive
#valu:
# tcp: connect to the storage server's http port without sending a request or reading a response
# http: the storage check-alive URL must return HTTP status 200
#note: effective only when http.disabled=false
http.check_alive_type=tcp

URI used to check storage status
#func: the uri/url used to check whether the storage http server is alive
#note: effective only when http.disabled=false
http.check_alive_uri=/status.html