FastDFS is an open-source distributed file system written in C, created and open-sourced by Yu Qing, a senior architect at Taobao.
FastDFS is tailored for Internet applications: it builds in redundancy, load balancing, and linear scaling, and focuses on high availability and high performance.
With FastDFS it is easy to build a high-performance file server cluster that provides file upload and download services.
Tracker Server
Role: the Tracker Server handles load balancing and scheduling. When a file is uploaded, the tracker selects a Storage Server according to certain policies to handle the upload, which is why the tracker is also called the tracking or scheduling server.
Cluster: a FastDFS cluster can contain multiple Tracker Servers. Trackers are peers of one another and serve requests at the same time. Clients access the trackers in round-robin order; if the requested tracker cannot provide service, the client switches to another one.
Storage Server
Role: the Storage Server stores files; files uploaded by clients end up on the storage servers.
Cluster: the storage cluster uses grouped storage. It consists of one or more groups, and the total capacity of the cluster is the sum of the capacities of all its groups. A group consists of one or more storage servers. Storage servers within a group are peers, while storage servers in different groups never communicate with each other. Servers in the same group connect to one another and synchronize files, so every storage server in a group holds exactly the same files. The capacity of a group equals that of its smallest storage server.
Benefits of grouped storage: it is flexible and easy to control. For example, when uploading a file the client can either specify the target group directly or let the tracker choose one (see the sketch below). When a group is under heavy access pressure, storage servers can be added to that group to increase its serving capacity (vertical scaling); when overall capacity runs out, new groups can be added (horizontal scaling).
Storage status reporting: each Storage Server connects to every Tracker Server in the cluster and periodically reports its status, including free disk space, file synchronization progress, and upload/download counters.
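The two scheduling options mentioned above can be illustrated with the Java client used later in this article. This is only a minimal sketch: the config path, the file content, and the group name "group1" are assumptions, and it assumes the group-name overload of upload_file1 is available in the client version in use.

import org.csource.fastdfs.*;

public class GroupUploadSketch {
    public static void main(String[] args) throws Exception {
        // Load the client config that lists the tracker servers (path is an assumption)
        ClientGlobal.init("/etc/fdfs/fdfs_client.conf");

        // getConnection() picks one of the configured trackers
        TrackerClient trackerClient = new TrackerClient();
        TrackerServer trackerServer = trackerClient.getConnection();
        StorageClient1 storageClient = new StorageClient1(trackerServer, null);

        byte[] content = "hello fastdfs".getBytes("UTF-8");

        // Let the tracker schedule the upload to whichever group it selects
        String autoId = storageClient.upload_file1(content, "txt", null);

        // Or pin the upload to a specific group ("group1" is assumed to exist)
        String pinnedId = storageClient.upload_file1("group1", content, "txt", null);

        System.out.println(autoId);
        System.out.println(pinnedId);
    }
}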
1. Install gcc
yum install gcc-c++
2. Install libevent (FastDFS depends on the libevent library)
yum -y install libevent
3. Install libfastcommon (provided by the FastDFS project; it contains the basic libraries FastDFS needs at runtime)
Copy libfastcommonV1.0.7.tar.gz to /usr/local/:
cd /usr/local/
tar -zxvf libfastcommonV1.0.7.tar.gz
cd libfastcommon-1.0.7
./make.sh
./make.sh install
After installation, libfastcommon automatically copies its library files to /usr/lib64.
4. Install libevent from source
cd /usr/local/
tar -zxvf libevent-2.0.15-stable.tar.gz
cd libevent-2.0.15-stable/
./configure
make && make install
ln -s /usr/local/lib/libevent-2.0.so.5 /usr/lib/libevent-2.0.so.5
5. Compile and install the tracker
Copy FastDFS_v5.05.tar.gz to /usr/local/:
tar -zxvf FastDFS_v5.05.tar.gz
cd FastDFS
./make.sh
./make.sh install
After a successful install, copy the files under the conf directory of the source tree to /etc/fdfs/:
cp -ri conf/* /etc/fdfs
Enter /etc/fdfs:
cd /etc/fdfs
Edit tracker.conf:
vim tracker.conf
base_path=/home/fastdfs
http.server_port=80
Create the directory:
mkdir -p /home/fastdfs
Start the tracker:
/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf restart
6. Enter /etc/fdfs and configure the storage server
cd /etc/fdfs
vi storage.conf
group_name=group1
# change base_path=/home/yuqing/FastDFS to:
base_path=/home/fastdfs
store_path0=/home/fastdfs/fdfs_storage
# configure the tracker server; if there are multiple trackers, add one line per tracker
tracker_server=192.168.1.88:22122
http.server_port=80
Create the storage directory and start the storage server:
mkdir -p /home/fastdfs/fdfs_storage
/usr/bin/fdfs_storaged /etc/fdfs/storage.conf restart
Check whether it started:
ps aux|grep dfs
After the client uploads a file, the storage server returns a file ID to the client; this file ID is the index used to access the file later. The file index information consists of: group name, virtual disk path, two-level data directories, and file name.
Group name: the name of the storage group the file was uploaded to. It is returned by the storage server after a successful upload and must be saved by the client itself.
Virtual disk path: the virtual path configured on the storage server, corresponding to the store_path* options. store_path0 maps to M00, store_path1 maps to M01, and so on.
Two-level data directories: two levels of directories that the storage server creates under each virtual disk path to store data files.
File name: generated by the storage server; it encodes information such as the source storage server's IP address, the file creation timestamp, the file size, a random number, and the file extension.
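To make this layout concrete: the Java client used below returns the file ID as a single string such as group1/M00/00/00/... A minimal sketch that splits it back into the group name and the remote path (the file ID shown here is just an illustrative placeholder):

public class FileIdParts {
    public static void main(String[] args) {
        // Example file ID as returned by upload_file1 (placeholder value)
        String fileId = "group1/M00/00/00/wKgBWFExampleExample.jpg";

        // The part before the first '/' is the group name; the rest is the path
        // on the storage server: virtual disk (M00) + two-level dirs + file name
        int slash = fileId.indexOf('/');
        String groupName = fileId.substring(0, slash);          // group1
        String remoteFileName = fileId.substring(slash + 1);    // M00/00/00/...

        System.out.println("group: " + groupName);
        System.out.println("remote file: " + remoteFileName);
    }
}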
<dependencies>
    <dependency>
        <groupId>org.csource.fastdfs</groupId>
        <artifactId>fastdfs</artifactId>
        <version>1.2</version>
    </dependency>
</dependencies>
# connect timeout in seconds
# default value is 30s
connect_timeout=30
# network timeout in seconds
# default value is 30s
network_timeout=60
# the base path to store log files
base_path=/home/fastdfs
# tracker_server can occur more than once, and tracker_server format is
# "host:port", host can be hostname or ip address
tracker_server=192.168.1.88:22122
# standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info
# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false
# connections whose idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600
# if load FastDFS parameters from tracker server
# since V4.05
# default value is false
load_fdfs_parameters_from_tracker=false
# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V4.05
use_storage_id = false
# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V4.05
storage_ids_filename = storage_ids.conf
# HTTP settings
http.tracker_server_port=80
# use "#include" directive to include other HTTP settings
##include http.conf
public static void main(String[] args) throws Exception {
    // 1. Load the client configuration file
    ClientGlobal.init("D:\\Java\\testcode\\fastDFSProject\\src\\main\\resources\\fdfs_client.conf");
    // 2. Create the tracker client
    TrackerClient trackerClient = new TrackerClient();
    // 3. Get a connection through the tracker client
    TrackerServer connection = trackerClient.getConnection();
    // 4. Create the storage client
    StorageClient1 storageClient = new StorageClient1(connection, null);
    // Create the file metadata array
    NameValuePair[] meta_list = new NameValuePair[3];
    meta_list[0] = new NameValuePair("fileName", "idea");
    meta_list[1] = new NameValuePair("ExtName", "jpg");
    meta_list[2] = new NameValuePair("zuozhe", "gaowei");
    // 5. Upload the file
    String path = storageClient.upload_file1("E:\\idea.jpg", "jpg", meta_list);
    System.out.println("======" + path);
}
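To verify the upload, the same StorageClient1 can fetch the file and its metadata back by the returned file ID. This is a minimal continuation sketch, assuming storageClient and path are the client and file ID from the example above:

// Continuing the upload example: download the file back by its file ID
byte[] bytes = storageClient.download_file1(path);
System.out.println("downloaded " + bytes.length + " bytes");

// Read back the metadata (NameValuePair list) attached during upload
NameValuePair[] metas = storageClient.get_metadata1(path);
if (metas != null) {
    for (NameValuePair meta : metas) {
        System.out.println(meta.getName() + " = " + meta.getValue());
    }
}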
Upload fastdfs-nginx-module_v1.16.tar.gz to /usr/local:
cd /usr/local
tar -zxvf fastdfs-nginx-module_v1.16.tar.gz
rm -rf fastdfs-nginx-module_v1.16.tar.gz
cd fastdfs-nginx-module/src
Edit the config file and change the paths containing /usr/local/ to /usr/:
vi config
Press Esc, then save and quit with :wq
Copy mod_fastdfs.conf from fastdfs-nginx-module/src to /etc/fdfs/:
cp mod_fastdfs.conf /etc/fdfs/
Edit mod_fastdfs.conf:
vim /etc/fdfs/mod_fastdfs.conf
base_path=/home/fastdfs
tracker_server=192.168.1.88:22122
url_have_group_name=true
store_path0=/home/fastdfs/fdfs_storage
Press Esc, then save and quit with :wq
Copy libfdfsclient.so to /usr/lib:
cp /usr/lib64/libfdfsclient.so /usr/lib/
Copy some of the FastDFS configuration files to /etc/fdfs (adjust the path to match your install location):
cd /usr/local/FastDFS/conf/
cp http.conf mime.types /etc/fdfs/
Copy nginx-1.8.1.tar.gz to /usr/local:
cd /usr/local
Unpack nginx-1.8.1.tar.gz:
tar -zxvf nginx-1.8.1.tar.gz
rm -rf nginx-1.8.1.tar.gz
Install the dependencies:
sudo yum -y install pcre pcre-devel zlib zlib-devel openssl openssl-devel
cd nginx-1.8.1/
Run configure:
./configure --prefix=/opt/nginx --sbin-path=/usr/bin/nginx --add-module=/usr/local/fastdfs-nginx-module/src
make
make install
useradd -s /sbin/nologin -M nginx
id nginx
Start: nginx
Stop: nginx -s stop
Reload the configuration: nginx -s reload
Check whether it is running: ps -ef|grep nginx
Edit the configuration file and add the following:
vim /opt/nginx/conf/nginx.conf
# URIs containing a group name are handled by the FastDFS module
location ~/group([0-9])/ {
    ngx_fastdfs_module;
}
/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf restart
/usr/bin/fdfs_storaged /etc/fdfs/storage.conf restart
nginx
"imageActionName": "upload/uploadImage.do", /* 执行上传图片的action名称 */ "imageFieldName": "upfile", /* 提交的图片表单名称 */ "imageMaxSize": 2048000, /* 上传大小限制,单位B */ "imageAllowFiles": [".png", ".jpg", ".jpeg", ".gif", ".bmp"], /* 上传图片格式显示 */ "imageCompressEnable": true, /* 是否压缩图片,默认是true */ "imageCompressBorder": 1600, /* 图片压缩最长边限制 */ "imageInsertAlign": "none", /* 插入的图片浮动方式 */ "imageUrlPrefix": "", /* 图片访问路径前缀 */ "imagePathFormat": "", /* 上传保存路径,能够自定义保存路径和文件名格式 */
ue.ready(function() {
    UE.Editor.prototype._bkGetActionUrl = UE.Editor.prototype.getActionUrl;
    UE.Editor.prototype.getActionUrl = function (action) {
        if (action == 'upload/uploadImage.do') {
            return "http://localhost:8082/upload/uploadImage.do";
        } else {
            return this._bkGetActionUrl.call(this, action);
        }
    };
});
Define the image upload endpoint:
@RequestMapping("/uploadImage") public Map uploadImage(MultipartFile upfile) throws Exception { try { FastDFSClient fastDFS = new FastDFSClient("classpath:fastDFS/fdfs_client.conf"); //上传文件返回文件保存的路径和文件名 String path = fastDFS.uploadFile(upfile.getBytes(), upfile.getOriginalFilename(), upfile.getSize()); //拼接上服务器的地址返回给前端 String url = FILE_SERVER + path; Map<String ,Object > result = new HashMap<>(); result.put("state","SUCCESS"); result.put("url",url); result.put("title",upfile.getOriginalFilename()); result.put("original",upfile.getOriginalFilename()); return result; } catch (Exception e) { e.printStackTrace(); } return null; }
<dependency>
    <groupId>org.csource.fastdfs</groupId>
    <artifactId>fastdfs</artifactId>
    <version>1.2</version>
</dependency>
FILE_SERVER_URL=http://192.168.1.88/
# connect timeout in seconds
# default value is 30s
connect_timeout=30
# network timeout in seconds
# default value is 30s
network_timeout=60
# the base path to store log files
base_path=/home/fastdfs
# tracker_server can occur more than once, and tracker_server format is
# "host:port", host can be hostname or ip address
tracker_server=192.168.1.88:22122
# standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info
# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false
# connections whose idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600
# if load FastDFS parameters from tracker server
# since V4.05
# default value is false
load_fdfs_parameters_from_tracker=false
# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V4.05
use_storage_id = false
# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V4.05
storage_ids_filename = storage_ids.conf
# HTTP settings
http.tracker_server_port=80
# use "#include" directive to include other HTTP settings
##include http.conf
<context:property-placeholder location="classpath:config/application.properties" />
import org.apache.commons.io.FilenameUtils;
import org.csource.common.NameValuePair;
import org.csource.fastdfs.*;

public class FastDFSClient {

    private TrackerClient trackerClient = null;
    private TrackerServer trackerServer = null;
    private StorageServer storageServer = null;
    private StorageClient1 storageClient = null;

    public FastDFSClient(String conf) throws Exception {
        if (conf.contains("classpath:")) {
            conf = conf.replace("classpath:", this.getClass().getResource("/").getPath());
        }
        ClientGlobal.init(conf);
        trackerClient = new TrackerClient();
        trackerServer = trackerClient.getConnection();
        storageServer = null;
        storageClient = new StorageClient1(trackerServer, storageServer);
    }

    /**
     * @param file     file content as a byte array
     * @param fileName file name
     * @param fileSize file size
     * @return the file ID returned by the storage server
     * @throws Exception
     */
    public String uploadFile(byte[] file, String fileName, long fileSize) throws Exception {
        NameValuePair[] metas = new NameValuePair[3];
        metas[0] = new NameValuePair("fileName", fileName);
        metas[1] = new NameValuePair("fileSize", String.valueOf(fileSize));
        metas[2] = new NameValuePair("fileExt", FilenameUtils.getExtension(fileName));
        String result = storageClient.upload_file1(file, FilenameUtils.getExtension(fileName), metas);
        return result;
    }

    /**
     * @param storagePath full path of the file, e.g. group1/M00/00/00/wKgRsVjtwpSAXGwkAAAweEAzRjw471.jpg
     * @return -1 on failure, 0 on success
     */
    public Integer delete_file(String storagePath) {
        int result = -1;
        try {
            result = storageClient.delete_file1(storagePath);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return result;
    }
}
@Value("${FILE_SERVER_URL}") private String FILE_SERVER; @RequestMapping("/uploadFile") public Result uploadFile(MultipartFile file) throws Exception { try { FastDFSClient fastDFS = new FastDFSClient("classpath:fastDFS/fdfs_client.conf"); //上传文件返回文件保存的路径和文件名 String path = fastDFS.uploadFile(file.getBytes(), file.getOriginalFilename(), file.getSize()); //拼接上服务器的地址返回给前端 return new Result(true, FILE_SERVER + path); } catch (Exception e) { e.printStackTrace(); return new Result(false, "上传失败!"); } }
An SPU (Standard Product Unit) is the smallest unit for aggregating product information: a reusable, easily searchable set of standardized attributes.
This set describes the characteristics of a product.
SPU attributes are attributes that do not affect stock or price; they are also called key attributes.
For example, "Oppo R17" is the SPU of a product.
But "Oppo R17" by itself is just a name; the name alone carries no meaning.
An SPU is the combination of attributes shared by a group of products.
[Hardware]
CPU model: Qualcomm Snapdragon™ 670
CPU frequency: 2.0GHz
Cores: octa-core
Processor: 64-bit
GPU model: Adreno™ 615
Battery capacity: 3500mAh (typical)*
[Dimensions]
Length: about 157.5mm
Width: about 74.9mm
Thickness: about 7.5mm
Weight: about 182g
Gross weight: 420.00 g
Place of origin: mainland China
The name of this SPU attribute combination is "Oppo R17".
SKU attributes are attributes that do affect stock and price; they are also called sales attributes.
An SKU (Stock Keeping Unit) refers to a concrete, fully specified item.
Buyers purchase, merchants restock, suppliers prepare inventory, and factories produce all on the basis of SKUs.
An SKU is the set of attributes that affects price and stock; SKUs have a many-to-one relationship with a product, i.e., one product has multiple SKUs.
For example, Streamer Blue (three colors: Streamer Blue, Neon Purple, Neon Gradient) + 8G+128G (two configurations: 8G+128G and 6G+128G): the Oppo R17 therefore has one SPU and 3 × 2 = 6 SKUs. Or take a Zara women's trench coat in size M (four sizes: S, M, L, X) and pink (three colors: pink, yellow, black); size M + pink is one SKU combination. When SKUs are generated, the Cartesian product of the attribute values is taken, and a given SKU determines the stock of that specific item, so the Zara trench coat above has 4 × 3 = 12 SKU combinations, as sketched below.
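As a rough illustration of the Cartesian-product idea (not taken from any code in this article), a minimal Java sketch that expands the sizes and colors of the trench coat example into SKU combinations:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SkuCartesianSketch {
    public static void main(String[] args) {
        // Sales attributes of the Zara trench coat example
        List<String> sizes = Arrays.asList("S", "M", "L", "X");
        List<String> colors = Arrays.asList("pink", "yellow", "black");

        // Cartesian product: every size combined with every color is one SKU
        List<String> skus = new ArrayList<>();
        for (String size : sizes) {
            for (String color : colors) {
                skus.add(size + " / " + color);
            }
        }

        System.out.println(skus.size() + " SKUs: " + skus); // 4 * 3 = 12
    }
}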