IP: 192.168.4.101  Environment: CentOS 6.6, JDK 7
1. Install the JDK and configure environment variables (omitted)
JAVA_HOME=/usr/local/java/jdk1.7.0_72
2. Download the Linux build of ActiveMQ (latest release at the time: apache-activemq-5.11.1-bin.tar.gz)
$ wget http://apache.fayea.com/activemq/5.11.1/apache-activemq-5.11.1-bin.tar.gz
3. Extract and install
$ tar -zxvf apache-activemq-5.11.1-bin.tar.gz
$ mv apache-activemq-5.11.1 activemq-01
If the activemq startup script lacks execute permission, grant it (optional):
$ cd /home/wusc/activemq-01/bin/
$ chmod 755 ./activemq
4. Open the required ports in the firewall
ActiveMQ needs two ports open:
One is the message transport port (default 61616).
The other is the web console port (default 8161), which can be changed in conf/jetty.xml as follows:
<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
    <!-- the default port number for the web console -->
    <property name="host" value="0.0.0.0"/>
    <property name="port" value="8161"/>
</bean>
# vi /etc/sysconfig/iptables
Add:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 61616 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8161 -j ACCEPT
Restart the firewall:
# service iptables restart
5. Start ActiveMQ
$ cd /home/wusc/activemq-01/bin
$ ./activemq start
6. Open the web console: http://192.168.4.101:8161
The default username and password are admin/admin; log in to reach the console.
7. Security configuration (message security)
Without any security mechanism, anyone who knows the broker's address (IP, port, and destination name, i.e. queue or topic) can send and receive messages at will. See the ActiveMQ security documentation:
http://activemq.apache.org/security.html
ActiveMQ offers several security strategies; here we use simple authentication as an example.
Add the following at the end of the broker element in conf/activemq.xml:
This defines the connection username and password that Java client programs will use.
$ vi /home/wusc/activemq-01/conf/activemq.xml

<plugins>
    <simpleAuthenticationPlugin>
        <users>
            <authenticationUser username="wusc" password="wusc.123" groups="users,admins"/>
        </users>
    </simpleAuthenticationPlugin>
</plugins>
This defines a user wusc with password wusc.123 and the roles users and admins.
Set the console admin username and password:
$ vi /home/wusc/activemq-01/conf/jetty.xml

<bean id="securityConstraint" class="org.eclipse.jetty.util.security.Constraint">
    <property name="name" value="BASIC" />
    <property name="roles" value="admin" />
    <property name="authenticate" value="true" />
</bean>
Make sure authenticate is set to true (the default); this is what protects the web console.
The console login credentials are stored in conf/jetty-realm.properties:
$ vi /home/wusc/activemq-01/conf/jetty-realm.properties

# Defines users that can access the web (console, demo, etc.)
# username: password [,rolename ...]
admin: wusc.123, admin
Note: each entry has the format username: password [,rolename ...]
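Each entry is an ordinary Java properties line: the key is the username, and the value is the password followed by optional role names. A quick stdlib sketch of that parsing (the `RealmLineDemo` class name is made up for illustration; it is not how Jetty itself loads the file):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class RealmLineDemo {
    // Parses one "username: password [,rolename ...]" line into
    // { username, password, role... } using java.util.Properties.
    static String[] parse(String line) throws IOException {
        Properties p = new Properties();
        p.load(new StringReader(line));
        String user = p.stringPropertyNames().iterator().next();
        String[] parts = p.getProperty(user).split("\\s*,\\s*");
        String[] out = new String[parts.length + 1];
        out[0] = user;                       // username
        System.arraycopy(parts, 0, out, 1, parts.length); // password, then roles
        return out;
    }

    public static void main(String[] args) throws IOException {
        String[] r = parse("admin: wusc.123, admin");
        System.out.println(r[0] + " / " + r[1] + " / role: " + r[2]);
    }
}
```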
Restart:
$ /home/wusc/activemq-01/bin/activemq restart
Enable start on boot:
# vi /etc/rc.local
Append the following:
## ActiveMQ
su - wusc -c '/home/wusc/activemq-01/bin/activemq start'
8. MQ producer and consumer demo: walkthrough and demonstration
A small demo that sends mail asynchronously through the queue.
pom.xml (parent project edu-common-parent)
<!-- 基于Dubbo的分布式系统架构视频教程,吴水成,wu-sc@foxmail.com,学习交流QQ群:367211134 --> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>wusc.edu.common</groupId> <artifactId>edu-common-parent</artifactId> <version>1.0-SNAPSHOT</version> <packaging>pom</packaging> <name>edu-common-parent</name> <url>http://maven.apache.org</url> <distributionManagement> <repository> <id>nexus-releases</id> <name>Nexus Release Repository</name> <url>http://192.168.4.221:8081/nexus/content/repositories/releases/</url> </repository> <snapshotRepository> <id>nexus-snapshots</id> <name>Nexus Snapshot Repository</name> <url>http://192.168.4.221:8081/nexus/content/repositories/snapshots/</url> </snapshotRepository> </distributionManagement> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <!-- common projects --> <edu-common.version>1.0-SNAPSHOT</edu-common.version> <edu-common-config.version>1.0-SNAPSHOT</edu-common-config.version> <edu-common-core.version>1.0-SNAPSHOT</edu-common-core.version> <edu-common-web.version>1.0-SNAPSHOT</edu-common-web.version> <edu-demo.version>1.0-SNAPSHOT</edu-demo.version> <!-- facade projects --> <!-- 用户服务接口 --> <edu-facade-user.version>1.0-SNAPSHOT</edu-facade-user.version> <!-- 帐户服务接口 --> <edu-facade-account.version>1.0-SNAPSHOT</edu-facade-account.version> <!-- 订单服务接口 --> <edu-facade-order.version>1.0-SNAPSHOT</edu-facade-order.version> <!-- 运营服务接口 --> <edu-facade-operation.version>1.0-SNAPSHOT</edu-facade-operation.version> <!-- 消息队列服务接口 --> <edu-facade-queue.version>1.0-SNAPSHOT</edu-facade-queue.version> <!-- service projects --> <!-- 用户服务 --> <edu-service-user.version>1.0-SNAPSHOT</edu-service-user.version> <!-- 帐户服务 --> <edu-service-account.version>1.0-SNAPSHOT</edu-service-account.version> <!-- 订单服务 --> 
<edu-service-order.version>1.0-SNAPSHOT</edu-service-order.version> <!-- 运营服务 --> <edu-service-operation.version>1.0-SNAPSHOT</edu-service-operation.version> <!-- 消息队列服务 --> <edu-service-queue.version>1.0-SNAPSHOT</edu-service-queue.version> <!-- web projects --> <!-- 运营 --> <edu-web-operation.version>1.0-SNAPSHOT</edu-web-operation.version> <!-- 门户 --> <edu-web-portal.version>1.0-SNAPSHOT</edu-web-portal.version> <!-- 网关 --> <edu-web-gateway.version>1.0-SNAPSHOT</edu-web-gateway.version> <!-- 模拟商城 --> <edu-web-shop.version>1.0-SNAPSHOT</edu-web-shop.version> <!-- app projects --> <!-- timer projects --> <!-- frameworks --> <org.springframework.version>3.2.4.RELEASE</org.springframework.version> <org.apache.struts.version>2.3.15.1</org.apache.struts.version> </properties> <dependencies> <!-- Test Dependency Begin --> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11</version> </dependency> <!-- Test Dependency End --> </dependencies> <dependencyManagement> <dependencies> <!-- Common Dependency Begin --> <dependency> <groupId>xalan</groupId> <artifactId>xalan</artifactId> <version>2.7.1</version> </dependency> <dependency> <groupId>antlr</groupId> <artifactId>antlr</artifactId> <version>2.7.6</version> </dependency> <dependency> <groupId>aopalliance</groupId> <artifactId>aopalliance</artifactId> <version>1.0</version> </dependency> <dependency> <groupId>org.aspectj</groupId> <artifactId>aspectjweaver</artifactId> <version>1.7.3</version> </dependency> <dependency> <groupId>cglib</groupId> <artifactId>cglib</artifactId> <version>2.2.2</version> </dependency> <dependency> <groupId>asm</groupId> <artifactId>asm</artifactId> <version>3.3.1</version> </dependency> <dependency> <groupId>net.sf.json-lib</groupId> <artifactId>json-lib</artifactId> <version>2.3</version> <classifier>jdk15</classifier> <scope>compile</scope> </dependency> <dependency> <groupId>org.codehaus.jackson</groupId> <artifactId>jackson-core-asl</artifactId> 
<version>1.9.13</version> </dependency> <dependency> <groupId>org.codehaus.jackson</groupId> <artifactId>jackson-mapper-asl</artifactId> <version>1.9.13</version> </dependency> <dependency> <groupId>ognl</groupId> <artifactId>ognl</artifactId> <version>3.0.6</version> </dependency> <dependency> <groupId>oro</groupId> <artifactId>oro</artifactId> <version>2.0.8</version> </dependency> <dependency> <groupId>commons-net</groupId> <artifactId>commons-net</artifactId> <version>3.2</version> </dependency> <dependency> <groupId>commons-beanutils</groupId> <artifactId>commons-beanutils</artifactId> <version>1.8.0</version> </dependency> <dependency> <groupId>commons-codec</groupId> <artifactId>commons-codec</artifactId> <version>1.8</version> </dependency> <dependency> <groupId>commons-collections</groupId> <artifactId>commons-collections</artifactId> <version>3.2</version> </dependency> <dependency> <groupId>commons-digester</groupId> <artifactId>commons-digester</artifactId> <version>2.0</version> </dependency> <dependency> <groupId>commons-fileupload</groupId> <artifactId>commons-fileupload</artifactId> <version>1.3.1</version> </dependency> <dependency> <groupId>commons-io</groupId> <artifactId>commons-io</artifactId> <version>2.0.1</version> </dependency> <dependency> <groupId>org.apache.commons</groupId> <artifactId>commons-lang3</artifactId> <version>3.1</version> </dependency> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> <version>1.1.3</version> </dependency> <dependency> <groupId>commons-validator</groupId> <artifactId>commons-validator</artifactId> <version>1.1.4</version> </dependency> <dependency> <groupId>commons-cli</groupId> <artifactId>commons-cli</artifactId> <version>1.2</version> </dependency> <dependency> <groupId>dom4j</groupId> <artifactId>dom4j</artifactId> <version>1.6.1</version> </dependency> <dependency> <groupId>net.sf.ezmorph</groupId> <artifactId>ezmorph</artifactId> <version>1.0.6</version> 
</dependency> <dependency> <groupId>javassist</groupId> <artifactId>javassist</artifactId> <version>3.12.1.GA</version> </dependency> <dependency> <groupId>jstl</groupId> <artifactId>jstl</artifactId> <version>1.2</version> </dependency> <dependency> <groupId>javax.transaction</groupId> <artifactId>jta</artifactId> <version>1.1</version> </dependency> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1.2.17</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>1.7.5</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> <version>1.7.5</version> </dependency> <dependency> <groupId>net.sourceforge.jexcelapi</groupId> <artifactId>jxl</artifactId> <version>2.6.12</version> </dependency> <!-- <dependency> <groupId>com.alibaba.external</groupId> <artifactId>sourceforge.spring</artifactId> <version>2.0.1</version> </dependency> <dependency> <groupId>com.alibaba.external</groupId> <artifactId>jakarta.commons.poolg</artifactId> <version>1.3</version> </dependency> --> <dependency> <groupId>org.jdom</groupId> <artifactId>jdom</artifactId> <version>1.1.3</version> </dependency> <dependency> <groupId>jaxen</groupId> <artifactId>jaxen</artifactId> <version>1.1.1</version> </dependency> <dependency> <groupId>com.alibaba</groupId> <artifactId>dubbo</artifactId> <version>2.5.3</version> </dependency> <dependency> <groupId>redis.clients</groupId> <artifactId>jedis</artifactId> <version>2.4.2</version> </dependency> <!-- Common Dependency End --> <!-- Zookeeper 用于分布式服务管理 --> <dependency> <groupId>org.apache.zookeeper</groupId> <artifactId>zookeeper</artifactId> <version>3.4.5</version> </dependency> <dependency> <groupId>com.101tec</groupId> <artifactId>zkclient</artifactId> <version>0.3</version> </dependency> <!-- Zookeeper 用于分布式服务管理 end --> <!-- Spring Dependency Begin --> <dependency> <groupId>org.springframework</groupId> 
<artifactId>spring-aop</artifactId> <version>${org.springframework.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-aspects</artifactId> <version>${org.springframework.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-beans</artifactId> <version>${org.springframework.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>${org.springframework.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context-support</artifactId> <version>${org.springframework.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>${org.springframework.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-expression</artifactId> <version>${org.springframework.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-instrument</artifactId> <version>${org.springframework.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-instrument-tomcat</artifactId> <version>${org.springframework.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-jdbc</artifactId> <version>${org.springframework.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-jms</artifactId> <version>${org.springframework.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-orm</artifactId> <version>${org.springframework.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-oxm</artifactId> <version>${org.springframework.version}</version> </dependency> <dependency> 
<groupId>org.springframework</groupId> <artifactId>spring-struts</artifactId> <version>${org.springframework.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-test</artifactId> <version>${org.springframework.version}</version> <scope>test</scope> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-tx</artifactId> <version>${org.springframework.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-web</artifactId> <version>${org.springframework.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-webmvc</artifactId> <version>${org.springframework.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-webmvc-portlet</artifactId> <version>${org.springframework.version}</version> </dependency> <!-- Spring Dependency End --> <!-- MyBatis Dependency Begin --> <dependency> <groupId>org.mybatis</groupId> <artifactId>mybatis</artifactId> <version>3.2.8</version> </dependency> <dependency> <groupId>org.mybatis</groupId> <artifactId>mybatis-spring</artifactId> <version>1.2.2</version> </dependency> <!-- MyBatis Dependency End --> <!-- Mysql Driver Begin --> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.32</version> </dependency> <!-- Mysql Driver End --> <!-- Struts2 Dependency Begin --> <dependency> <groupId>org.apache.struts</groupId> <artifactId>struts2-json-plugin</artifactId> <version>${org.apache.struts.version}</version> </dependency> <dependency> <groupId>org.apache.struts</groupId> <artifactId>struts2-convention-plugin</artifactId> <version>${org.apache.struts.version}</version> </dependency> <dependency> <groupId>org.apache.struts</groupId> <artifactId>struts2-core</artifactId> <version>${org.apache.struts.version}</version> </dependency> <dependency> 
<groupId>org.apache.struts</groupId> <artifactId>struts2-spring-plugin</artifactId> <version>${org.apache.struts.version}</version> </dependency> <dependency> <groupId>org.apache.struts.xwork</groupId> <artifactId>xwork-core</artifactId> <version>${org.apache.struts.version}</version> </dependency> <!-- Struts2 Dependency End --> <!-- Others Begin --> <dependency> <groupId>google.code</groupId> <artifactId>kaptcha</artifactId> <version>2.3.2</version> </dependency> <dependency> <groupId>org.apache.tomcat</groupId> <artifactId>servlet-api</artifactId> <version>6.0.37</version> </dependency> <dependency> <groupId>org.apache.tomcat</groupId> <artifactId>jsp-api</artifactId> <version>6.0.37</version> </dependency> <dependency> <groupId>org.freemarker</groupId> <artifactId>freemarker</artifactId> <version>2.3.19</version> </dependency> <dependency> <groupId>com.alibaba</groupId> <artifactId>druid</artifactId> <version>1.0.12</version> </dependency> <dependency> <groupId>com.alibaba</groupId> <artifactId>fastjson</artifactId> <version>1.1.41</version> </dependency> <dependency> <groupId>org.apache.httpcomponents</groupId> <artifactId>httpclient</artifactId> <version>4.3.3</version> </dependency> <dependency> <groupId>org.jboss.netty</groupId> <artifactId>netty</artifactId> <version>3.2.5.Final</version> </dependency> <dependency> <groupId>org.apache.activemq</groupId> <artifactId>activemq-all</artifactId> <version>5.11.1</version> </dependency> <dependency> <groupId>org.apache.activemq</groupId> <artifactId>activemq-pool</artifactId> <version>5.11.1</version> </dependency> <!-- Others End --> <dependency> <groupId>org.jsoup</groupId> <artifactId>jsoup</artifactId> <version>1.7.3</version> </dependency> </dependencies> </dependencyManagement> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-deploy-plugin</artifactId> <version>2.7</version> <configuration> <uniqueVersion>false</uniqueVersion> </configuration> </plugin> <plugin> 
<groupId>org.apache.maven.plugins</groupId> <artifactId>maven-eclipse-plugin</artifactId> <version>2.8</version> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>2.3.2</version> <configuration> <failOnError>true</failOnError> <verbose>true</verbose> <fork>true</fork> <compilerArgument>-nowarn</compilerArgument> <source>1.6</source> <target>1.6</target> <encoding>UTF-8</encoding> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-source-plugin</artifactId> <version>2.1.2</version> <executions> <execution> <id>attach-sources</id> <goals> <goal>jar</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project>
pom.xml (producer project edu-demo-mqproducer)
<!-- 基于Dubbo的分布式系统架构视频教程,吴水成,wu-sc@foxmail.com,学习交流QQ群:367211134 --> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>wusc.edu.common</groupId> <artifactId>edu-common-parent</artifactId> <version>1.0-SNAPSHOT</version> <relativePath>../edu-common-parent</relativePath> </parent> <groupId>wusc.edu.mqtest</groupId> <artifactId>edu-demo-mqproducer</artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <name>edu-demo-mqproducer</name> <url>http://maven.apache.org</url> <build> <finalName>edu-demo-mqproducer</finalName> <resources> <resource> <targetPath>${project.build.directory}/classes</targetPath> <directory>src/main/resources</directory> <filtering>true</filtering> <includes> <include>**/*.xml</include> <include>**/*.properties</include> </includes> </resource> </resources> </build> <dependencies> <!-- Common Dependency Begin --> <dependency> <groupId>antlr</groupId> <artifactId>antlr</artifactId> </dependency> <dependency> <groupId>aopalliance</groupId> <artifactId>aopalliance</artifactId> </dependency> <dependency> <groupId>org.aspectj</groupId> <artifactId>aspectjweaver</artifactId> </dependency> <dependency> <groupId>cglib</groupId> <artifactId>cglib</artifactId> </dependency> <dependency> <groupId>net.sf.json-lib</groupId> <artifactId>json-lib</artifactId> <classifier>jdk15</classifier> <scope>compile</scope> </dependency> <dependency> <groupId>ognl</groupId> <artifactId>ognl</artifactId> </dependency> <dependency> <groupId>oro</groupId> <artifactId>oro</artifactId> </dependency> <dependency> <groupId>commons-beanutils</groupId> <artifactId>commons-beanutils</artifactId> </dependency> <dependency> <groupId>commons-codec</groupId> <artifactId>commons-codec</artifactId> </dependency> <dependency> <groupId>commons-collections</groupId> 
<artifactId>commons-collections</artifactId> </dependency> <dependency> <groupId>commons-digester</groupId> <artifactId>commons-digester</artifactId> </dependency> <dependency> <groupId>commons-fileupload</groupId> <artifactId>commons-fileupload</artifactId> </dependency> <dependency> <groupId>commons-io</groupId> <artifactId>commons-io</artifactId> </dependency> <dependency> <groupId>org.apache.commons</groupId> <artifactId>commons-lang3</artifactId> </dependency> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> </dependency> <dependency> <groupId>commons-validator</groupId> <artifactId>commons-validator</artifactId> </dependency> <dependency> <groupId>dom4j</groupId> <artifactId>dom4j</artifactId> </dependency> <dependency> <groupId>net.sf.ezmorph</groupId> <artifactId>ezmorph</artifactId> </dependency> <dependency> <groupId>javassist</groupId> <artifactId>javassist</artifactId> </dependency> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> </dependency> <dependency> <groupId>com.alibaba</groupId> <artifactId>fastjson</artifactId> </dependency> <!-- Common Dependency End --> <!-- Spring Dependency Begin --> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-aop</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-aspects</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-beans</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context-support</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> 
</dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-jms</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-orm</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-oxm</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-tx</artifactId> </dependency> <!-- Spring Dependency End --> <dependency> <groupId>org.apache.activemq</groupId> <artifactId>activemq-all</artifactId> </dependency> <dependency> <groupId>org.apache.activemq</groupId> <artifactId>activemq-pool</artifactId> </dependency> </dependencies> </project>
mq.properties
## MQ
mq.brokerURL=tcp\://192.168.4.101\:61616
mq.userName=wusc
mq.password=wusc.123
mq.pool.maxConnections=10
#queueName
queueName=wusc.edu.mqtest.v1
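Note the escaped colons (`\:`) in `mq.brokerURL`: `java.util.Properties` treats a bare `:` as a key/value separator, so the backslashes protect it, and the loaded value comes back as a plain URL. A small stdlib sketch (the `MqPropsDemo` class name is hypothetical):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class MqPropsDemo {
    // Loads a fragment of the mq.properties shown above and returns
    // the broker URL; Properties strips the backslash escapes on load.
    static String loadBrokerURL() throws IOException {
        String file = "mq.brokerURL=tcp\\://192.168.4.101\\:61616\n"
                    + "mq.userName=wusc\n"
                    + "mq.password=wusc.123\n";
        Properties p = new Properties();
        p.load(new StringReader(file));
        return p.getProperty("mq.brokerURL");
    }

    public static void main(String[] args) throws IOException {
        System.out.println(loadBrokerURL()); // tcp://192.168.4.101:61616
    }
}
```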
spring-mq.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:p="http://www.springframework.org/schema/p"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
        http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.2.xsd
        http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.2.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.2.xsd"
    default-autowire="byName" default-lazy-init="false">

    <!-- The real Connection-producing ConnectionFactory, supplied by the JMS vendor -->
    <bean id="targetConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
        <!-- ActiveMQ broker address; the values are read from mq.properties -->
        <property name="brokerURL" value="${mq.brokerURL}" />
        <property name="userName" value="${mq.userName}" />
        <property name="password" value="${mq.password}" />
    </bean>

    <!-- ActiveMQ's PooledConnectionFactory wraps an ActiveMQConnectionFactory and pools
         Connections, Sessions and MessageProducers, which greatly reduces resource
         consumption. Requires the activemq-pool dependency. -->
    <bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory">
        <property name="connectionFactory" ref="targetConnectionFactory" />
        <!-- connection pool size -->
        <property name="maxConnections" value="${mq.pool.maxConnections}" />
    </bean>

    <!-- Spring's ConnectionFactory wrapper that manages the real ConnectionFactory -->
    <bean id="connectionFactory" class="org.springframework.jms.connection.SingleConnectionFactory">
        <!-- the target is the real JMS-Connection-producing ConnectionFactory -->
        <property name="targetConnectionFactory" ref="pooledConnectionFactory" />
    </bean>

    <!-- Spring's JMS helper for sending and receiving messages -->
    <!-- queue template -->
    <bean id="activeMqJmsTemplate" class="org.springframework.jms.core.JmsTemplate">
        <!-- this connectionFactory is the Spring-managed one defined above -->
        <property name="connectionFactory" ref="connectionFactory"/>
        <property name="defaultDestinationName" value="${queueName}"></property>
    </bean>
</beans>
spring-context.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:p="http://www.springframework.org/schema/p"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
        http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.2.xsd
        http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.2.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.2.xsd"
    default-autowire="byName" default-lazy-init="false">

    <!-- enable annotation-based bean configuration -->
    <context:annotation-config />
    <!-- packages to scan for components -->
    <context:component-scan base-package="wusc.edu.demo" />
    <!-- load the property file -->
    <context:property-placeholder location="classpath:mq.properties" />
    <!-- proxy-target-class defaults to "false"; set it to "true" to use CGLib proxies -->
    <aop:aspectj-autoproxy proxy-target-class="true" />

    <import resource="spring-mq.xml" />
</beans>
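The `context:property-placeholder` element makes Spring substitute every `${...}` token in the bean definitions (such as `${mq.brokerURL}` in spring-mq.xml) with the matching key from mq.properties. Conceptually it behaves like this simplified stdlib sketch (the `PlaceholderDemo` class is hypothetical and not Spring's actual implementation):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PlaceholderDemo {
    private static final Pattern TOKEN = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replaces ${key} tokens with values from the given map; unknown
    // keys are left as-is (Spring would instead fail fast by default).
    static String resolve(String text, Map<String, String> props) {
        Matcher m = TOKEN.matcher(text);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String value = props.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> props = Map.of("mq.brokerURL", "tcp://192.168.4.101:61616");
        System.out.println(resolve("<property name=\"brokerURL\" value=\"${mq.brokerURL}\" />", props));
    }
}
```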
MailParam.java
package wusc.edu.demo.mqtest.params;

public class MailParam {
    /** sender **/
    private String from;
    /** recipient **/
    private String to;
    /** subject **/
    private String subject;
    /** mail body **/
    private String content;

    public MailParam() {
    }

    public MailParam(String to, String subject, String content) {
        this.to = to;
        this.subject = subject;
        this.content = content;
    }

    public String getFrom() {
        return from;
    }

    public void setFrom(String from) {
        this.from = from;
    }

    public String getTo() {
        return to;
    }

    public void setTo(String to) {
        this.to = to;
    }

    public String getSubject() {
        return subject;
    }

    public void setSubject(String subject) {
        this.subject = subject;
    }

    public String getContent() {
        return content;
    }

    public void setContent(String content) {
        this.content = content;
    }
}
MQProducer.java
package wusc.edu.demo.mqtest;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.core.MessageCreator;
import org.springframework.stereotype.Service;

import com.alibaba.fastjson.JSONObject;

import wusc.edu.demo.mqtest.params.MailParam;

@Service("mqProducer")
public class MQProducer {

    // injected by name with the activeMqJmsTemplate bean defined in spring-mq.xml
    @Autowired
    private JmsTemplate activeMqJmsTemplate;

    /**
     * Send a message.
     * @param mail
     */
    public void sendMessage(final MailParam mail) {
        activeMqJmsTemplate.send(new MessageCreator() {
            public Message createMessage(Session session) throws JMSException {
                return session.createTextMessage(JSONObject.toJSONString(mail));
            }
        });
    }
}
MQProducerTest.java
package wusc.edu.demo.mqtest;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import wusc.edu.demo.mqtest.params.MailParam;

public class MQProducerTest {
    private static final Log log = LogFactory.getLog(MQProducerTest.class);

    public static void main(String[] args) {
        try {
            ClassPathXmlApplicationContext context =
                    new ClassPathXmlApplicationContext("classpath:spring/spring-context.xml");
            context.start();
            MQProducer mqProducer = (MQProducer) context.getBean("mqProducer");
            // send a mail message
            MailParam mail = new MailParam();
            mail.setTo("wu-sc@foxmail.com");
            mail.setSubject("ActiveMQ test");
            mail.setContent("Send mail asynchronously via ActiveMQ!");
            mqProducer.sendMessage(mail);
            context.stop();
        } catch (Exception e) {
            log.error("==>MQ context start error:", e);
            System.exit(0);
        } finally {
            log.info("===>System.exit");
            System.exit(0);
        }
    }
}
pom.xml (consumer project edu-demo-mqconsumer)
<!-- 基于Dubbo的分布式系统架构视频教程,吴水成,wu-sc@foxmail.com,学习交流QQ群:367211134 --> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <parent> <groupId>wusc.edu.common</groupId> <artifactId>edu-common-parent</artifactId> <version>1.0-SNAPSHOT</version> <relativePath>../edu-common-parent</relativePath> </parent> <groupId>wusc.edu.mqtest</groupId> <artifactId>edu-demo-mqconsumer</artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <name>edu-demo-mqconsumer</name> <url>http://maven.apache.org</url> <build> <finalName>edu-demo-mqconsumer</finalName> <resources> <resource> <targetPath>${project.build.directory}/classes</targetPath> <directory>src/main/resources</directory> <filtering>true</filtering> <includes> <include>**/*.xml</include> <include>**/*.properties</include> </includes> </resource> </resources> </build> <dependencies> <!-- Common Dependency Begin --> <dependency> <groupId>antlr</groupId> <artifactId>antlr</artifactId> </dependency> <dependency> <groupId>aopalliance</groupId> <artifactId>aopalliance</artifactId> </dependency> <dependency> <groupId>org.aspectj</groupId> <artifactId>aspectjweaver</artifactId> </dependency> <dependency> <groupId>cglib</groupId> <artifactId>cglib</artifactId> </dependency> <dependency> <groupId>net.sf.json-lib</groupId> <artifactId>json-lib</artifactId> <classifier>jdk15</classifier> <scope>compile</scope> </dependency> <dependency> <groupId>ognl</groupId> <artifactId>ognl</artifactId> </dependency> <dependency> <groupId>oro</groupId> <artifactId>oro</artifactId> </dependency> <dependency> <groupId>commons-beanutils</groupId> <artifactId>commons-beanutils</artifactId> </dependency> <dependency> <groupId>commons-codec</groupId> <artifactId>commons-codec</artifactId> </dependency> <dependency> <groupId>commons-collections</groupId> 
<artifactId>commons-collections</artifactId> </dependency> <dependency> <groupId>commons-digester</groupId> <artifactId>commons-digester</artifactId> </dependency> <dependency> <groupId>commons-fileupload</groupId> <artifactId>commons-fileupload</artifactId> </dependency> <dependency> <groupId>commons-io</groupId> <artifactId>commons-io</artifactId> </dependency> <dependency> <groupId>org.apache.commons</groupId> <artifactId>commons-lang3</artifactId> </dependency> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> </dependency> <dependency> <groupId>commons-validator</groupId> <artifactId>commons-validator</artifactId> </dependency> <dependency> <groupId>dom4j</groupId> <artifactId>dom4j</artifactId> </dependency> <dependency> <groupId>net.sf.ezmorph</groupId> <artifactId>ezmorph</artifactId> </dependency> <dependency> <groupId>javassist</groupId> <artifactId>javassist</artifactId> </dependency> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> </dependency> <dependency> <groupId>com.alibaba</groupId> <artifactId>fastjson</artifactId> </dependency> <!-- Common Dependency End --> <!-- Spring Dependency Begin --> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-aop</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-aspects</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-beans</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context-support</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> 
</dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-jms</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-orm</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-oxm</artifactId> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-tx</artifactId> </dependency> <!-- Spring Dependency End --> <dependency> <groupId>org.apache.activemq</groupId> <artifactId>activemq-all</artifactId> </dependency> <dependency> <groupId>org.apache.activemq</groupId> <artifactId>activemq-pool</artifactId> </dependency> <dependency> <groupId>javax.mail</groupId> <artifactId>mail</artifactId> <version>1.4.7</version> </dependency> </dependencies> </project>
mq.properties
## MQ
mq.brokerURL=tcp\://192.168.4.101\:61616
mq.userName=wusc
mq.password=wusc.123
mq.pool.maxConnections=10
# queue name
queueName=wusc.edu.mqtest.v1
mail.properties
# SMTP server configuration
mail.host=smtp.qq.com
mail.port=25
mail.username=XXX@qq.com
mail.password=XXXX
mail.smtp.auth=true
mail.smtp.timeout=30000
mail.default.from=XXXXX@qq.com
spring-mq.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:p="http://www.springframework.org/schema/p"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
        http://www.springframework.org/schema/aop
        http://www.springframework.org/schema/aop/spring-aop-3.2.xsd
        http://www.springframework.org/schema/tx
        http://www.springframework.org/schema/tx/spring-tx-3.2.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.2.xsd"
    default-autowire="byName" default-lazy-init="false">

    <!-- The ConnectionFactory that actually produces Connections, provided by the JMS vendor -->
    <bean id="targetConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
        <!-- ActiveMQ broker address -->
        <property name="brokerURL" value="${mq.brokerURL}" />
        <property name="userName" value="${mq.userName}"></property>
        <property name="password" value="${mq.password}"></property>
    </bean>

    <!-- ActiveMQ's PooledConnectionFactory wraps an ActiveMQConnectionFactory and pools
         Connections, Sessions and MessageProducers, greatly reducing resource consumption.
         It requires the activemq-pool dependency. -->
    <bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory">
        <property name="connectionFactory" ref="targetConnectionFactory" />
        <property name="maxConnections" value="${mq.pool.maxConnections}" />
    </bean>

    <!-- The ConnectionFactory Spring uses to manage the real ConnectionFactory -->
    <bean id="connectionFactory" class="org.springframework.jms.connection.SingleConnectionFactory">
        <!-- The target ConnectionFactory that can actually produce JMS Connections -->
        <property name="targetConnectionFactory" ref="pooledConnectionFactory" />
    </bean>

    <!-- Spring's JMS helper class for sending and receiving messages -->
    <!-- Queue template -->
    <bean id="activeMqJmsTemplate" class="org.springframework.jms.core.JmsTemplate">
        <!-- This refers to the Spring-managed connectionFactory defined above -->
        <property name="connectionFactory" ref="connectionFactory"/>
        <property name="defaultDestinationName" value="${queueName}"></property>
    </bean>

    <!-- The sessionAwareQueue destination -->
    <bean id="sessionAwareQueue" class="org.apache.activemq.command.ActiveMQQueue">
        <constructor-arg>
            <value>${queueName}</value>
        </constructor-arg>
    </bean>

    <!-- A MessageListener that can access the Session -->
    <bean id="consumerSessionAwareMessageListener" class="wusc.edu.demo.mqtest.listener.ConsumerSessionAwareMessageListener"></bean>

    <!-- Spring's listener container -->
    <bean id="sessionAwareListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
        <property name="connectionFactory" ref="connectionFactory" />
        <!-- The queue to listen on -->
        <property name="destination" ref="sessionAwareQueue" />
        <property name="messageListener" ref="consumerSessionAwareMessageListener" />
    </bean>
</beans>
The key points in this configuration: the sessionAwareQueue destination and the session-aware MessageListener.
spring-mail.xml
<?xml version="1.0" encoding="UTF-8" ?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:p="http://www.springframework.org/schema/p"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xmlns:cache="http://www.springframework.org/schema/cache"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.2.xsd
        http://www.springframework.org/schema/tx
        http://www.springframework.org/schema/tx/spring-tx-3.2.xsd
        http://www.springframework.org/schema/aop
        http://www.springframework.org/schema/aop/spring-aop-3.2.xsd
        http://www.springframework.org/schema/cache
        http://www.springframework.org/schema/cache/spring-cache-3.2.xsd">

    <!-- Spring's high-level abstraction for sending e-mail -->
    <bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl">
        <property name="host" value="${mail.host}" />
        <property name="username" value="${mail.username}" />
        <property name="password" value="${mail.password}" />
        <property name="defaultEncoding" value="UTF-8"></property>
        <property name="javaMailProperties">
            <props>
                <prop key="mail.smtp.auth">${mail.smtp.auth}</prop>
                <prop key="mail.smtp.timeout">${mail.smtp.timeout}</prop>
            </props>
        </property>
    </bean>

    <bean id="simpleMailMessage" class="org.springframework.mail.SimpleMailMessage">
        <property name="from">
            <value>${mail.default.from}</value>
        </property>
    </bean>

    <!-- Thread pool configuration -->
    <bean id="threadPool" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
        <!-- Minimum number of threads kept in the pool -->
        <property name="corePoolSize" value="5" />
        <!-- How long an idle thread is kept alive -->
        <property name="keepAliveSeconds" value="30000" />
        <!-- Maximum number of threads in the pool -->
        <property name="maxPoolSize" value="50" />
        <!-- Capacity of the pool's buffer queue -->
        <property name="queueCapacity" value="100" />
    </bean>
</beans>
spring-context.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:p="http://www.springframework.org/schema/p"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
        http://www.springframework.org/schema/aop
        http://www.springframework.org/schema/aop/spring-aop-3.2.xsd
        http://www.springframework.org/schema/tx
        http://www.springframework.org/schema/tx/spring-tx-3.2.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.2.xsd"
    default-autowire="byName" default-lazy-init="false">

    <!-- Configure beans via annotations -->
    <context:annotation-config />
    <!-- Packages to scan -->
    <context:component-scan base-package="wusc.edu.demo" />
    <!-- Load the property files -->
    <context:property-placeholder location="classpath:mq.properties,classpath:mail.properties" />
    <!-- proxy-target-class defaults to "false"; set it to "true" to use CGLib dynamic proxies -->
    <aop:aspectj-autoproxy proxy-target-class="true" />

    <import resource="spring-mq.xml" />
    <import resource="spring-mail.xml" />
</beans>
ConsumerSessionAwareMessageListener.java
package wusc.edu.demo.mqtest.listener;

import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;

import org.apache.activemq.command.ActiveMQTextMessage;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.core.MessageCreator;
import org.springframework.jms.listener.SessionAwareMessageListener;
import org.springframework.stereotype.Component;

import wusc.edu.demo.mqtest.biz.MailBiz;
import wusc.edu.demo.mqtest.params.MailParam;

import com.alibaba.fastjson.JSONObject;

// A custom listener class
@Component
public class ConsumerSessionAwareMessageListener implements SessionAwareMessageListener<Message> {

    private static final Log log = LogFactory.getLog(ConsumerSessionAwareMessageListener.class);

    @Autowired
    private JmsTemplate activeMqJmsTemplate;
    @Autowired
    private Destination sessionAwareQueue;
    @Autowired
    private MailBiz mailBiz;

    // onMessage is invoked for every message received from the queue
    public synchronized void onMessage(Message message, Session session) {
        try {
            ActiveMQTextMessage msg = (ActiveMQTextMessage) message;
            final String ms = msg.getText();
            log.info("==>receive message:" + ms);
            // Convert the received JSON into a MailParam object
            MailParam mailParam = JSONObject.parseObject(ms, MailParam.class);
            if (mailParam == null) {
                return;
            }
            try {
                // Send the mail
                mailBiz.mailSend(mailParam);
            } catch (Exception e) {
                // On failure, the message could be put back on the queue:
                // activeMqJmsTemplate.send(sessionAwareQueue, new MessageCreator() {
                //     public Message createMessage(Session session) throws JMSException {
                //         return session.createTextMessage(ms);
                //     }
                // });
                log.error("==>MailException:", e);
            }
        } catch (Exception e) {
            log.error("==>", e);
        }
    }
}
MailBiz.java
package wusc.edu.demo.mqtest.biz;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.mail.MailException;
import org.springframework.mail.SimpleMailMessage;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.stereotype.Component;

import wusc.edu.demo.mqtest.params.MailParam;

@Component("mailBiz")
public class MailBiz {

    @Autowired
    private JavaMailSender mailSender; // defined in the Spring config
    @Autowired
    private SimpleMailMessage simpleMailMessage; // defined in the Spring config
    @Autowired
    private ThreadPoolTaskExecutor threadPool;

    /**
     * Send a mail asynchronously on the thread pool.
     *
     * @param mailParam carries the recipient, subject and content
     */
    public void mailSend(final MailParam mailParam) {
        threadPool.execute(new Runnable() {
            public void run() {
                try {
                    simpleMailMessage.setFrom(simpleMailMessage.getFrom()); // sender, taken from the config file
                    simpleMailMessage.setTo(mailParam.getTo()); // recipient
                    simpleMailMessage.setSubject(mailParam.getSubject());
                    simpleMailMessage.setText(mailParam.getContent());
                    mailSender.send(simpleMailMessage);
                } catch (MailException e) {
                    throw e;
                }
            }
        });
    }
}
MailParam.java
package wusc.edu.demo.mqtest.params;

public class MailParam {
    /** Sender **/
    private String from;
    /** Recipient **/
    private String to;
    /** Subject **/
    private String subject;
    /** Mail body **/
    private String content;

    public MailParam() {
    }

    public MailParam(String to, String subject, String content) {
        this.to = to;
        this.subject = subject;
        this.content = content;
    }

    public String getFrom() {
        return from;
    }

    public void setFrom(String from) {
        this.from = from;
    }

    public String getTo() {
        return to;
    }

    public void setTo(String to) {
        this.to = to;
    }

    public String getSubject() {
        return subject;
    }

    public void setSubject(String subject) {
        this.subject = subject;
    }

    public String getContent() {
        return content;
    }

    public void setContent(String content) {
        this.content = content;
    }
}
MQConsumer.java
package wusc.edu.demo.mqtest;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MQConsumer {
    private static final Log log = LogFactory.getLog(MQConsumer.class);

    public static void main(String[] args) {
        try {
            ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("classpath:spring/spring-context.xml");
            context.start();
        } catch (Exception e) {
            log.error("==>MQ context start error:", e);
            System.exit(0);
        }
    }
}
Start the producer:
MqProducerTest.java
Name: the queue name
Number Of Pending Messages: messages waiting to be delivered
Number Of Consumers: the number of consumers
Messages Enqueued: messages that have entered the queue
Messages Dequeued: messages that have left the queue
Views:
Operations:
Start the consumer:
MQConsumer.java
There is now one consumer; one message has entered the queue, one message has left it, and the consumer receives the message.
This example shows that the producer and the consumer never call each other directly: the producer just puts messages on a queue, and how the consumer processes them is of no concern to the producer. That is what gives us asynchrony and decoupling.
IP:192.168.4.111
Environment: CentOS 6.6
Redis version: redis-3.0 (chosen for its cluster support and performance improvements; the RC is the candidate for the final release, which was expected shortly)
Install directory: /usr/local/redis
User: root
Packages required for compiling and installing:
# yum install gcc tcl
Download Redis 3.0 (the latest at the time was redis-3.0.0-rc5.tar.gz; pick the latest version available when you install):
# cd /usr/local/src
# wget https://github.com/antirez/redis/archive/3.0.0-rc5.tar.gz
Create the install directory:
# mkdir /usr/local/redis
Unpack:
# tar -zxvf 3.0.0-rc5.tar.gz
# mv redis-3.0.0-rc5 redis3.0
# cd redis3.0
Install (use PREFIX to specify the install directory):
# make PREFIX=/usr/local/redis install
After installation, /usr/local/redis contains a bin directory, and bin holds the Redis executables:
redis-benchmark redis-check-aof redis-check-dump redis-cli redis-server
Register Redis as a service:
Following the steps above, the Redis startup script is /usr/local/src/redis3.0/utils/redis_init_script
Copy the startup script to the /etc/rc.d/init.d/ directory and name it redis:
# cp /usr/local/src/redis3.0/utils/redis_init_script /etc/rc.d/init.d/redis
Edit /etc/rc.d/init.d/redis and adjust the settings so that it can be registered as a service:
# vi /etc/rc.d/init.d/redis

#!/bin/sh
#
# Simple Redis init.d script conceived to work on Linux systems
# as it does use of the /proc filesystem.

REDISPORT=6379
EXEC=/usr/local/bin/redis-server
CLIEXEC=/usr/local/bin/redis-cli
# file that records the process id
PIDFILE=/var/run/redis_${REDISPORT}.pid
CONF="/etc/redis/${REDISPORT}.conf"

case "$1" in
    start)
        if [ -f $PIDFILE ]
        then
            echo "$PIDFILE exists, process is already running or crashed"
        else
            echo "Starting Redis server..."
            $EXEC $CONF
        fi
        ;;
    stop)
        if [ ! -f $PIDFILE ]
        then
            echo "$PIDFILE does not exist, process is not running"
        else
            PID=$(cat $PIDFILE)
            echo "Stopping ..."
            $CLIEXEC -p $REDISPORT shutdown
            while [ -x /proc/${PID} ]
            do
                echo "Waiting for Redis to shutdown ..."
                sleep 1
            done
            echo "Redis stopped"
        fi
        ;;
    *)
        echo "Please use start or stop as first argument"
        ;;
esac
In the service script above, note the following properties and prepare these changes:
(1) Add a line after the first line of the script:
#chkconfig: 2345 80 90
(Without this line, registering the service fails with: service redis does not support chkconfig)
(2) Keep REDISPORT at 6379. (Note: the port number determines the name of the config file used below.)
(3) Change EXEC=/usr/local/bin/redis-server to EXEC=/usr/local/redis/bin/redis-server (the server executable path; adjust it to your own install path).
(4) Change CLIEXEC=/usr/local/bin/redis-cli to CLIEXEC=/usr/local/redis/bin/redis-cli (the client executable path; adjust it to your own install path).
(5) Config file setup:
Create the Redis config directory:
# mkdir /usr/local/redis/conf
Copy the Redis config file /usr/local/src/redis3.0/redis.conf to the /usr/local/redis/conf directory and rename it after the port as 6379.conf (this naming scheme makes a later cluster setup easier):
# cp /usr/local/src/redis3.0/redis.conf /usr/local/redis/conf/6379.conf
With the preparation above done, adjust the CONF property:
Change CONF="/etc/redis/${REDISPORT}.conf" to CONF="/usr/local/redis/conf/${REDISPORT}.conf" (use your own path; REDISPORT is set near the top of the script).
(6) Change the start command so that the server runs in the background: $EXEC $CONF & (the trailing "&" puts the process in the background, which is why the settings above must be correct).
The modified /etc/rc.d/init.d/redis service script:
#!/bin/sh
#chkconfig: 2345 80 90
#
# Simple Redis init.d script conceived to work on Linux systems
# as it does use of the /proc filesystem.

REDISPORT=6379
EXEC=/usr/local/redis/bin/redis-server
CLIEXEC=/usr/local/redis/bin/redis-cli
PIDFILE=/var/run/redis_${REDISPORT}.pid
CONF="/usr/local/redis/conf/${REDISPORT}.conf"

case "$1" in
    start)
        if [ -f $PIDFILE ]
        then
            echo "$PIDFILE exists, process is already running or crashed"
        else
            echo "Starting Redis server..."
            $EXEC $CONF &
        fi
        ;;
    stop)
        if [ ! -f $PIDFILE ]
        then
            echo "$PIDFILE does not exist, process is not running"
        else
            PID=$(cat $PIDFILE)
            echo "Stopping ..."
            $CLIEXEC -p $REDISPORT shutdown
            while [ -x /proc/${PID} ]
            do
                echo "Waiting for Redis to shutdown ..."
                sleep 1
            done
            echo "Redis stopped"
        fi
        ;;
    *)
        echo "Please use start or stop as first argument"
        ;;
esac
With the configuration above complete, register Redis as a service:
# chkconfig --add redis
Open the corresponding port in the firewall:
# vi /etc/sysconfig/iptables
Add:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 6379 -j ACCEPT
Restart the firewall:
# service iptables restart
Edit the Redis config file:
# vi /usr/local/redis/conf/6379.conf
Make the following changes:
# If daemonize stays no, no pid file is generated; without the pid file the init script's
# start/stop logic (which relies on PIDFILE=/var/run/redis_${REDISPORT}.pid) cannot work.
daemonize no      ->  daemonize yes
# The init script names the pid file by port, so change the pidfile setting to match:
pidfile /var/run/redis.pid  ->  pidfile /var/run/redis_6379.pid
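The two edits above can also be scripted. A minimal sketch with sed, run here against a scratch copy standing in for /usr/local/redis/conf/6379.conf:

```shell
# Work on a scratch copy so a mistake does not clobber the real config.
conf=$(mktemp)
printf 'daemonize no\npidfile /var/run/redis.pid\n' > "$conf"

# daemonize yes: required so the server forks and writes its pid file.
sed -i 's/^daemonize no/daemonize yes/' "$conf"
# Name the pid file by port, matching PIDFILE in the init script.
sed -i 's|^pidfile /var/run/redis\.pid|pidfile /var/run/redis_6379.pid|' "$conf"

cat "$conf"
```

When satisfied, run the same two sed commands against the real 6379.conf.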
Start the Redis service:
# service redis start
Add Redis to the environment variables:
# vi /etc/profile
Append the following at the end:
## Redis env
export PATH=$PATH:/usr/local/redis/bin
Apply the change:
# source /etc/profile
The commands in bin can now be run from any directory; that is, redis-cli and the other Redis commands are directly available, and with them you can issue Redis reads, writes and deletes.
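Why the PATH edit makes the commands available everywhere can be seen with a throwaway directory standing in for /usr/local/redis/bin (the stub command name below is hypothetical):

```shell
# A scratch bin directory with one executable stub in it.
bindir=$(mktemp -d)
printf '#!/bin/sh\necho stub-ok\n' > "$bindir/redis-stub"
chmod +x "$bindir/redis-stub"

# Before the PATH change the shell cannot find the command.
command -v redis-stub || echo "not found"

# This is exactly what the /etc/profile line does for /usr/local/redis/bin.
export PATH=$PATH:$bindir
command -v redis-stub
redis-stub
```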
A quick test of a couple of commands:
> set name xiaoming
OK
> get name
"xiaoming"
Stop the Redis service:
# service redis stop
By default Redis does not require authentication; you can set a password with the requirepass directive in /usr/local/redis/conf/6379.conf.
A demo of using Redis, explained and demonstrated:
Directory structure
spring-context.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:p="http://www.springframework.org/schema/p"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xmlns:tx="http://www.springframework.org/schema/tx"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.2.xsd
        http://www.springframework.org/schema/aop
        http://www.springframework.org/schema/aop/spring-aop-3.2.xsd
        http://www.springframework.org/schema/tx
        http://www.springframework.org/schema/tx/spring-tx-3.2.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.2.xsd"
    default-autowire="byName" default-lazy-init="false">

    <!-- Configure beans via annotations -->
    <context:annotation-config />
    <!-- Packages to scan -->
    <context:component-scan base-package="wusc.edu.demo" />
    <!-- proxy-target-class defaults to "false"; set it to "true" to use CGLib dynamic proxies -->
    <aop:aspectj-autoproxy proxy-target-class="true" />

    <import resource="spring-redis.xml" />
</beans>
spring-redis.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:p="http://www.springframework.org/schema/p"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- Jedis connection pool configuration -->
    <bean id="jedisPoolConfig" class="redis.clients.jedis.JedisPoolConfig">
        <property name="testWhileIdle" value="true" />
        <property name="minEvictableIdleTimeMillis" value="60000" />
        <property name="timeBetweenEvictionRunsMillis" value="30000" />
        <property name="numTestsPerEvictionRun" value="-1" />
        <property name="maxTotal" value="8" />
        <property name="maxIdle" value="8" />
        <property name="minIdle" value="0" />
    </bean>

    <!-- The key configuration: a sharded Jedis pool pointing at the Redis server -->
    <bean id="shardedJedisPool" class="redis.clients.jedis.ShardedJedisPool">
        <constructor-arg index="0" ref="jedisPoolConfig" />
        <constructor-arg index="1">
            <list>
                <bean class="redis.clients.jedis.JedisShardInfo">
                    <constructor-arg index="0" value="192.168.4.111" />
                    <constructor-arg index="1" value="6379" type="int" />
                </bean>
            </list>
        </constructor-arg>
    </bean>
</beans>
RedisTest.java (a test class that does not use Spring)
package wusc.edu.demo.redis;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import redis.clients.jedis.Jedis;

/**
 * Description: Redis test.
 * Author: WuShuicheng.
 * Created: 2015-3-23, 1:30:40 AM.
 * Version: V1.0.
 */
public class RedisTest {
    private static final Log log = LogFactory.getLog(RedisTest.class);

    public static void main(String[] args) {
        Jedis jedis = new Jedis("192.168.4.111");
        String key = "wusc";
        String value = "";
        jedis.del(key); // delete the key
        jedis.set(key, "WuShuicheng"); // store a value
        value = jedis.get(key); // read it back
        log.info(key + "=" + value);
        jedis.set(key, "WuShuicheng2"); // overwrite the value
        value = jedis.get(key); // read it back
        log.info(key + "=" + value);
        // jedis.del(key); // delete the key
        // value = jedis.get(key); // read it back
        // log.info(key + "=" + value);
    }
}
RedisSpringTest.java (a test class that uses Spring)
package wusc.edu.demo.redis;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import redis.clients.jedis.ShardedJedis;
import redis.clients.jedis.ShardedJedisPool;

/**
 * Description: Redis test (with Spring).
 * Author: WuShuicheng.
 * Created: 2015-3-23, 1:30:40 AM.
 * Version: V1.0.
 */
public class RedisSpringTest {
    private static final Log log = LogFactory.getLog(RedisSpringTest.class);

    public static void main(String[] args) {
        try {
            ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("classpath:spring/spring-context.xml");
            context.start();
            ShardedJedisPool pool = (ShardedJedisPool) context.getBean("shardedJedisPool");
            ShardedJedis jedis = pool.getResource();
            String key = "wusc";
            String value = "";
            jedis.del(key); // delete the key
            jedis.set(key, "WuShuicheng"); // store a value
            value = jedis.get(key); // read it back
            log.info(key + "=" + value);
            jedis.set(key, "WuShuicheng2"); // overwrite the value
            value = jedis.get(key); // read it back
            log.info(key + "=" + value);
            jedis.del(key); // delete the key
            value = jedis.get(key); // read it back
            log.info(key + "=" + value);
            context.stop();
        } catch (Exception e) {
            log.error("==>RedisSpringTest context start error:", e);
            System.exit(0);
        } finally {
            log.info("===>System.exit");
            System.exit(0);
        }
    }
}
Run either of the two classes above, then go back to the Redis directory on the Linux server and verify the result yourself.
1. The client asks the tracker for a storage server to download from, passing the file identifier (group name and file name);
2. The tracker returns an available storage server;
3. The client talks to that storage server directly to complete the download.
${base_path}
  |__data
  |    |__storage_groups.dat: storage group information
  |    |__storage_servers.dat: list of storage servers
  |__logs
       |__trackerd.log: tracker server log file
${base_path}
  |__data
  |    |__.data_init_flag: initialization state of this storage server
  |    |__storage_stat.dat: statistics of this storage server
  |    |__sync: data-sync related files
  |    |    |__binlog.index: index number of the current binlog file
  |    |    |__binlog.###: update operation records (logs)
  |    |    |__${ip_addr}_${port}.mark: sync progress towards a peer
  |    |__first-level dirs: 256 directories holding data files, e.g. 00, 1F
  |         |__second-level dirs: 256 directories holding data files
  |__logs
       |__storaged.log: storage server log file
#step 1. download FastDFS source package and unpack it,
# if you use HTTP to download file, please download libevent 1.4.x and install it
tar xzf FastDFS_v1.x.tar.gz
#for example:
tar xzf FastDFS_v1.20.tar.gz

#step 2. enter the FastDFS dir
cd FastDFS

#step 3. if HTTP supported, modify make.sh, uncomment the line:
# WITH_HTTPD=1, then execute:
./make.sh

#step 4. make install
./make.sh install

#step 5. edit/modify the config file of tracker and storage

#step 6. run server programs
#start the tracker server:
/usr/local/bin/fdfs_trackerd <tracker_conf_filename>
#start the storage server:
/usr/local/bin/fdfs_storaged <storage_conf_filename>
| Metric | FastDFS | NFS | Dedicated storage (e.g. NetApp, NAS) |
|---|---|---|---|
| Linear scalability | High | Poor | Poor |
| High-concurrency file access performance | High | Poor | Average |
| File access method | Proprietary API | POSIX | POSIX-compliant |
| Hardware cost | Low | Medium | High |
| Identical files stored only once | Supported | Not supported | Not supported |
| Metric | FastDFS | mogileFS |
|---|---|---|
| System simplicity | Simple: only two roles, tracker and storage | Average: three roles — tracker, storage, and a MySQL DB storing file metadata |
| System performance | Very high (no database; files sync point-to-point without going through the tracker) | High (MySQL stores file indexes; file sync is scheduled and relayed by the tracker) |
| System stability | High (written in C; handles high concurrency and heavy load) | Average (written in Perl; limited under high concurrency and heavy load) |
| RAID approach | Grouping (intra-group redundancy), quite flexible | Dynamic redundancy, less flexible |
| Protocol | Proprietary protocol; downloads also supported over HTTP | HTTP |
| Documentation | Fairly detailed | Sparse |
| File metadata (attached attributes) | Supported | Not supported |
| Identical files stored only once | Supported | Not supported |
| Download with a file offset | Supported | Not supported |
Tracker server: 192.168.4.121 (edu-dfs-tracker-01)
Storage server: 192.168.4.125 (edu-dfs-storage-01)
Environment: CentOS 6.6
User: root
Data directory: /fastdfs (choose the path according to where your data disk is mounted; uploaded files will be stored here)
Packages:
FastDFS v5.05
libfastcommon-master.zip (a common C library extracted from FastDFS and FastDHT)
fastdfs-nginx-module_v1.16.tar.gz
nginx-1.6.2.tar.gz
fastdfs_client_java._v1.25.tar.gz
Source code: https://github.com/happyfish100/
Downloads: http://sourceforge.net/projects/fastdfs/files/
Official forum: http://bbs.chinaunix.net/forum-240-1.html
Part 1: Perform the following on all tracker and storage servers
1. Install the build dependencies:
# yum install make cmake gcc gcc-c++
2. Install libfastcommon:
(1) Upload or download libfastcommon-master.zip to the /usr/local/src directory
(2) Unpack:
# cd /usr/local/src/
# unzip libfastcommon-master.zip
# cd libfastcommon-master
(3) Compile and install:
# ./make.sh
# ./make.sh install
By default libfastcommon installs:
/usr/lib64/libfastcommon.so
/usr/lib64/libfdfsclient.so
(4) The FastDFS main program expects its libraries under /usr/local/lib, so create symlinks:
# ln -s /usr/lib64/libfastcommon.so /usr/local/lib/libfastcommon.so
# ln -s /usr/lib64/libfastcommon.so /usr/lib/libfastcommon.so
# ln -s /usr/lib64/libfdfsclient.so /usr/local/lib/libfdfsclient.so
# ln -s /usr/lib64/libfdfsclient.so /usr/lib/libfdfsclient.so
3. Install FastDFS
(1) Upload or download the FastDFS source package (FastDFS_v5.05.tar.gz) to the /usr/local/src directory
(2) Unpack:
# cd /usr/local/src/
# tar -zxvf FastDFS_v5.05.tar.gz
# cd FastDFS
(3) Compile and install (make sure libfastcommon installed successfully first):
# ./make.sh
# ./make.sh install
With the default installation, the relevant files and directories are:
A. Service scripts:
/etc/init.d/fdfs_storaged
/etc/init.d/fdfs_tracker
B. Sample configuration files:
/etc/fdfs/client.conf.sample
/etc/fdfs/storage.conf.sample
/etc/fdfs/tracker.conf.sample
C. Command-line tools, in /usr/bin:
fdfs_appender_test fdfs_appender_test1 fdfs_append_file fdfs_crc32 fdfs_delete_file fdfs_download_file fdfs_file_info fdfs_monitor fdfs_storaged fdfs_test fdfs_test1 fdfs_trackerd fdfs_upload_appender fdfs_upload_file stop.sh restart.sh
(4) The FastDFS service scripts assume the binaries live in /usr/local/bin, but they are actually installed in /usr/bin; you can list the fdfs commands there with:
# cd /usr/bin/
# ls | grep fdfs
So the command paths in the service scripts must be corrected, i.e. change /usr/local/bin to /usr/bin in both /etc/init.d/fdfs_storaged and /etc/init.d/fdfs_tracker:
# vi fdfs_trackerd
Apply a global search-and-replace:
%s+/usr/local/bin+/usr/bin
# vi fdfs_storaged
Apply the same global search-and-replace:
%s+/usr/local/bin+/usr/bin
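The same replacement can be done non-interactively. A minimal sketch with sed, run here against a scratch file standing in for the init scripts (the variable names in the sample content are hypothetical):

```shell
script=$(mktemp)
printf 'PRG=/usr/local/bin/fdfs_trackerd\nCMD=/usr/local/bin/stop.sh\n' > "$script"

# Same substitution as the vi command %s+/usr/local/bin+/usr/bin, applied to every line.
sed -i 's+/usr/local/bin+/usr/bin+g' "$script"

cat "$script"
```

Against the real files it would be `sed -i 's+/usr/local/bin+/usr/bin+g' /etc/init.d/fdfs_trackerd /etc/init.d/fdfs_storaged`.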
Part 2: Configure the FastDFS tracker (192.168.4.121)
1. Copy the tracker sample config file and rename it:
# cd /etc/fdfs/
# cp tracker.conf.sample tracker.conf
2. Edit the tracker config file:
# vi /etc/fdfs/tracker.conf
Make the following changes:
disabled=false
port=22122
base_path=/fastdfs/tracker
(Leave the other parameters at their defaults; for details see the official documentation: http://bbs.chinaunix.net/thread-1941456-1-1.html)
3. Create the base data directory (matching the base_path setting):
# mkdir -p /fastdfs/tracker
4. Open the tracker port (default 22122) in the firewall:
# vi /etc/sysconfig/iptables
Add the following rule:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22122 -j ACCEPT
Restart the firewall:
# service iptables restart
5. Start the tracker:
# /etc/init.d/fdfs_trackerd start
(On the first successful start, data and logs directories are created under /fastdfs/tracker.)
Check whether the FastDFS tracker started successfully:
# ps -ef | grep fdfs
6. Stop the tracker:
# /etc/init.d/fdfs_trackerd stop
7. Start the FastDFS tracker on boot:
# vi /etc/rc.d/rc.local
Add the following:
## FastDFS Tracker
/etc/init.d/fdfs_trackerd start
Part 3: Configure FastDFS storage (192.168.4.125)
1. Copy the storage sample config file and rename it:
# cd /etc/fdfs/
# cp storage.conf.sample storage.conf
2. Edit the storage config file:
# vi /etc/fdfs/storage.conf
Make the following changes:
disabled=false
port=23000
base_path=/fastdfs/storage
store_path0=/fastdfs/storage
tracker_server=192.168.4.121:22122
http.server_port=8888
(Leave the other parameters at their defaults; for details see the official documentation: http://bbs.chinaunix.net/thread-1941456-1-1.html)
3. Create the base data directory (matching the base_path setting):
# mkdir -p /fastdfs/storage
4. Open the storage port (default 23000) in the firewall:
# vi /etc/sysconfig/iptables
Add the following rule:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 23000 -j ACCEPT
Restart the firewall:
# service iptables restart
5. Start the storage server:
# /etc/init.d/fdfs_storaged start
(On the first successful start, data and logs directories are created under /fastdfs/storage.)
Check whether the FastDFS storage server started successfully:
# ps -ef | grep fdfs
6. Stop the storage server:
# /etc/init.d/fdfs_storaged stop
7. Start the FastDFS storage server on boot:
# vi /etc/rc.d/rc.local
Add:
## FastDFS Storage
/etc/init.d/fdfs_storaged start
Part 4: Upload test (192.168.4.121)
1. Edit the client config file on the tracker server:
# cp /etc/fdfs/client.conf.sample /etc/fdfs/client.conf
# vi /etc/fdfs/client.conf
base_path=/fastdfs/tracker
tracker_server=192.168.4.121:22122
2. Run a file upload command:
# /usr/bin/fdfs_upload_file /etc/fdfs/client.conf /usr/local/src/FastDFS_v5.05.tar.gz
It returns a file ID: group1/M00/00/00/wKgEfVUYNYeAb7XFAAVFOL7FJU4.tar.gz
(Getting a file ID back like this means the upload succeeded.)
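The returned file ID is the group name followed by the path inside that group; splitting it, as a download client must, can be sketched in shell:

```shell
file_id="group1/M00/00/00/wKgEfVUYNYeAb7XFAAVFOL7FJU4.tar.gz"

# The group name is the first path component; the rest is the remote file name.
group=${file_id%%/*}
remote=${file_id#*/}

echo "$group"   # group1
echo "$remote"  # M00/00/00/wKgEfVUYNYeAb7XFAAVFOL7FJU4.tar.gz
```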
Part 6: Install nginx on every storage node
1. What fastdfs-nginx-module does
FastDFS places files on storage servers via the tracker, but storage servers in the same group must replicate files to each other, which introduces sync delay. Suppose the tracker uploads a file to 192.168.4.125; once the upload succeeds, the file ID is returned to the client, and the FastDFS replication mechanism then copies the file to the group peer 192.168.4.126. If, before replication finishes, a client requests that file ID on 192.168.4.126, the file is not there yet and the access fails. fastdfs-nginx-module redirects such requests to the source server, avoiding errors caused by replication lag. (The unpacked fastdfs-nginx-module is used when nginx is built.)
Note: this module only needs to be installed on the storage nodes.
2. Upload fastdfs-nginx-module_v1.16.tar.gz to /usr/local/src
3. Unpack:
# cd /usr/local/src/
# tar -zxvf fastdfs-nginx-module_v1.16.tar.gz
4. Edit the fastdfs-nginx-module config file:
# cd fastdfs-nginx-module/src
# vi config
Change:
CORE_INCS="$CORE_INCS /usr/local/include/fastdfs /usr/local/include/fastcommon/"
to:
CORE_INCS="$CORE_INCS /usr/include/fastdfs /usr/include/fastcommon/"
(libfastcommon was installed under /usr, not /usr/local)
(Note: this path change is important; without it the nginx build fails.)
5. Upload the current stable nginx release (nginx-1.6.2.tar.gz) to the /usr/local/src directory
6. Install the packages needed to build nginx:
# yum install gcc gcc-c++ make automake autoconf libtool pcre* zlib openssl openssl-devel
7. Build and install nginx, adding the fastdfs-nginx-module:
# cd /usr/local/src/
# tar -zxvf nginx-1.6.2.tar.gz
# cd nginx-1.6.2
# ./configure --add-module=/usr/local/src/fastdfs-nginx-module/src
# make && make install
8. Copy the config file from the fastdfs-nginx-module sources to the /etc/fdfs directory, then edit it:
# cp /usr/local/src/fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/
# vi /etc/fdfs/mod_fastdfs.conf
Change the following settings (the key items are tracker_server, url_have_group_name, and store_path0):
connect_timeout=10
base_path=/tmp
tracker_server=192.168.4.121:22122
storage_server_port=23000
group_name=group1
url_have_group_name = true
store_path0=/fastdfs/storage
9. Copy some of the FastDFS config files to the /etc/fdfs directory:
# cd /usr/local/src/FastDFS/conf
# cp http.conf mime.types /etc/fdfs/
10. In the /fastdfs/storage file-storage directory, create a symlink pointing to the directory where the data actually lives:
# ln -s /fastdfs/storage/data/ /fastdfs/storage/data/M00
11. Configure nginx
A minimal nginx config example:
The key points:

user root;
listen 8888;
location ~/group([0-9])/M00 {
    #alias /fastdfs/storage/data;
    ngx_fastdfs_module;
}

listen 8888: changed from 80 to match http.server_port=8888 configured for the storage server above.
~/group([0-9])/M00: the group name appears in the URL because url_have_group_name is set to true; M00 is the symlink created in the previous step.
ngx_fastdfs_module: pulls the fastdfs module into this location.
user root;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 8888;
        server_name localhost;

        location ~/group([0-9])/M00 {
            #alias /fastdfs/storage/data;
            ngx_fastdfs_module;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Notes:
A. The 8888 port must match http.server_port=8888 in /etc/fdfs/storage.conf; http.server_port defaults to 8888, and if you want to use port 80 instead, change both places accordingly.
B. When a storage server belongs to multiple groups, the access path carries the group name, e.g. /group1/M00/00/00/xxx; the corresponding nginx config (which also prepares for scaling out to more groups later) is:
location ~/group([0-9])/M00 {
    ngx_fastdfs_module;
}
C. If downloads keep returning 404, change user nobody on the first line of nginx.conf to user root and restart nginx.
12. Open nginx's port 8888 in the firewall:
# vi /etc/sysconfig/iptables
Add:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8888 -j ACCEPT
# service iptables restart
13. Start nginx:
# /usr/local/nginx/sbin/nginx
ngx_http_fastdfs_set pid=xxx
(To reload nginx: /usr/local/nginx/sbin/nginx -s reload)
14. Access the test file uploaded earlier from a browser:
http://192.168.4.125:8888/group1/M00/00/00/wKgEfVUYNYeAb7XFAAVFOL7FJU4.tar.gz
The browser starts downloading the file straight away.
Part 7: FastDFS usage demo, explained and demonstrated:
See the sample code and the video tutorial for details.
Note: never use kill -9 to force-kill FastDFS processes, or binlog data may be lost.
File structure
The common and fastdfs packages contain some of the official Java client sources.
pom.xml
<!-- 基于Dubbo的分布式系统架构视频教程,吴水成,wu-sc@foxmail.com,学习交流QQ群:367211134 --> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>wusc.edu.demo</groupId> <artifactId>edu-demo-fdfs</artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <name>edu-demo-fdfs</name> <url>http://maven.apache.org</url> <build> <finalName>edu-demo-fdfs</finalName> <resources> <resource> <targetPath>${project.build.directory}/classes</targetPath> <directory>src/main/resources</directory> <filtering>true</filtering> <includes> <include>**/*.xml</include> <include>**/*.properties</include> </includes> </resource> </resources> </build> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11</version> </dependency> <dependency> <groupId>commons-fileupload</groupId> <artifactId>commons-fileupload</artifactId> <version>1.3.1</version> </dependency> <dependency> <groupId>commons-io</groupId> <artifactId>commons-io</artifactId> <version>2.0.1</version> </dependency> <dependency> <groupId>org.apache.commons</groupId> <artifactId>commons-lang3</artifactId> <version>3.1</version> </dependency> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> <version>1.1.3</version> </dependency> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1.2.17</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>1.7.5</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> <version>1.7.5</version> </dependency> </dependencies> </project>
fdfs_client.conf
connect_timeout = 10
network_timeout = 30
charset = UTF-8
http.tracker_http_port = 8080
http.anti_steal_token = no
http.secret_key = FastDFS1234567890
tracker_server = 192.168.4.121:22122
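fdfs_client.conf is a plain key=value file that the client loads once via ClientGlobal.init(...). As a rough illustration of the format only (this is not the FastDFS client's actual parser, and the values are the ones from this tutorial), java.util.Properties can read the same layout:

```java
import java.io.StringReader;
import java.util.Properties;

public class FdfsConfDemo {
    public static void main(String[] args) throws Exception {
        // Same key=value layout as fdfs_client.conf above
        String conf = "connect_timeout = 10\n"
                + "network_timeout = 30\n"
                + "charset = UTF-8\n"
                + "tracker_server = 192.168.4.121:22122\n";
        Properties p = new Properties();
        p.load(new StringReader(conf));
        // tracker_server (host:port) is the entry the client cannot do without
        String[] hostPort = p.getProperty("tracker_server").split(":");
        System.out.println(hostPort[0] + " port=" + hostPort[1]);
    }
}
```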
FastDFSClient.java
package wusc.edu.demo.fdfs;

import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.commons.lang3.StringUtils;
import org.apache.log4j.Logger;
import org.csource.common.NameValuePair;
import org.csource.fastdfs.ClientGlobal;
import org.csource.fastdfs.StorageClient1;
import org.csource.fastdfs.StorageServer;
import org.csource.fastdfs.TrackerClient;
import org.csource.fastdfs.TrackerServer;

/**
 * Description: operations client for the FastDFS distributed file system.
 * Author: WuShuicheng.
 * Created: 2015-3-29, 8:13:49 PM.
 * Version: V1.0.
 */
public class FastDFSClient {

    // private static final String CONF_FILENAME = Thread.currentThread().getContextClassLoader().getResource("").getPath() + "fdfs_client.conf";
    private static final String CONF_FILENAME = "src/main/resources/fdfs/fdfs_client.conf";

    private static StorageClient1 storageClient1 = null;

    private static Logger logger = Logger.getLogger(FastDFSClient.class);

    /**
     * Loaded only once.
     */
    static {
        try {
            logger.info("=== CONF_FILENAME:" + CONF_FILENAME);
            ClientGlobal.init(CONF_FILENAME);
            TrackerClient trackerClient = new TrackerClient(ClientGlobal.g_tracker_group);
            TrackerServer trackerServer = trackerClient.getConnection();
            if (trackerServer == null) {
                logger.error("getConnection return null");
            }
            StorageServer storageServer = trackerClient.getStoreStorage(trackerServer);
            if (storageServer == null) {
                logger.error("getStoreStorage return null");
            }
            storageClient1 = new StorageClient1(trackerServer, storageServer);
        } catch (Exception e) {
            logger.error(e);
        }
    }

    /**
     * @param file
     *            the file to upload
     * @param fileName
     *            the file name
     * @return the fileId, or null on failure
     */
    public static String uploadFile(File file, String fileName) {
        FileInputStream fis = null;
        try {
            NameValuePair[] meta_list = null; // new NameValuePair[0];
            fis = new FileInputStream(file);
            byte[] file_buff = null;
            if (fis != null) {
                int len = fis.available();
                file_buff = new byte[len];
                fis.read(file_buff);
            }
            String fileid = storageClient1.upload_file1(file_buff, getFileExt(fileName), meta_list);
            return fileid;
        } catch (Exception ex) {
            logger.error(ex);
            return null;
        } finally {
            if (fis != null) {
                try {
                    fis.close();
                } catch (IOException e) {
                    logger.error(e);
                }
            }
        }
    }

    /**
     * Delete a file by group name and remote file name.
     *
     * @param groupName
     *            e.g. "group1"; defaults to "group1" when not specified
     * @param fileName
     *            e.g. "M00/00/00/wKgxgk5HbLvfP86RAAAAChd9X1Y736.jpg"
     * @return 0 on success; a non-zero error code on failure
     */
    public static int deleteFile(String groupName, String fileName) {
        try {
            int result = storageClient1.delete_file(groupName == null ? "group1" : groupName, fileName);
            return result;
        } catch (Exception ex) {
            logger.error(ex);
            return -1; // non-zero: 0 would wrongly signal success
        }
    }

    /**
     * Delete a file by its fileId (this is how we use it in practice: the fileId
     * is saved to the database right after upload). Per the client source, a
     * file_id includes the group name and the file name, e.g.
     * group1/M00/00/00/ooYBAFM6MpmAHM91AAAEgdpiRC0012.xml
     *
     * @return 0 on success; a non-zero error code on failure
     */
    public static int deleteFile(String fileId) {
        try {
            int result = storageClient1.delete_file1(fileId);
            return result;
        } catch (Exception ex) {
            logger.error(ex);
            return -1; // non-zero: 0 would wrongly signal success
        }
    }

    /**
     * Replace an existing file: upload the new file first, then delete the old one.
     *
     * @param oldFileId
     *            fileId of the old file (including group name and file name), e.g.
     *            group1/M00/00/00/ooYBAFM6MpmAHM91AAAEgdpiRC0012.xml
     * @param file
     *            the new file
     * @param filePath
     *            path of the new file
     * @return the new fileId, or null on failure
     */
    public static String modifyFile(String oldFileId, File file, String filePath) {
        String fileid = null;
        try {
            // Upload the new file first
            fileid = uploadFile(file, filePath);
            if (fileid == null) {
                return null;
            }
            // Then delete the old one
            int delResult = deleteFile(oldFileId);
            if (delResult != 0) {
                return null;
            }
        } catch (Exception ex) {
            logger.error(ex);
            return null;
        }
        return fileid;
    }

    /**
     * Download a file.
     *
     * @param fileId
     * @return an InputStream over the file content, or null on failure
     */
    public static InputStream downloadFile(String fileId) {
        try {
            byte[] bytes = storageClient1.download_file1(fileId);
            InputStream inputStream = new ByteArrayInputStream(bytes);
            return inputStream;
        } catch (Exception ex) {
            logger.error(ex);
            return null;
        }
    }

    /**
     * Get the file extension (without the dot).
     *
     * @return e.g. "jpg", or "" when there is none
     */
    private static String getFileExt(String fileName) {
        if (StringUtils.isBlank(fileName) || !fileName.contains(".")) {
            return "";
        } else {
            return fileName.substring(fileName.lastIndexOf(".") + 1); // without the trailing dot
        }
    }
}
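The two deleteFile overloads above differ only in whether the group name is passed separately or embedded in the fileId. A fileId is "group name / remote file name", with the group before the first slash; the split can be illustrated like this (the helper name is ours, not part of the FastDFS client API):

```java
public class FileIdDemo {
    // Split "group1/M00/.../x.jpg" into {groupName, remoteFileName}
    static String[] splitFileId(String fileId) {
        int slash = fileId.indexOf('/');
        return new String[] { fileId.substring(0, slash), fileId.substring(slash + 1) };
    }

    public static void main(String[] args) {
        String fileId = "group1/M00/00/00/wKgEfVUYPieAd6a0AAP3btxj__E335.jpg";
        String[] parts = splitFileId(fileId);
        System.out.println(parts[0]); // the group, usable as deleteFile's groupName
        System.out.println(parts[1]); // the remote file name within the group
    }
}
```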
FastDFSTest.java
package wusc.edu.demo.fdfs.test;

import java.io.File;
import java.io.InputStream;

import org.apache.commons.io.FileUtils;

import wusc.edu.demo.fdfs.FastDFSClient;

/**
 * Description: FastDFS test.
 * Author: WuShuicheng.
 * Created: 2015-3-29, 8:11:36 PM.
 * Version: V1.0.
 */
public class FastDFSTest {

    /**
     * Upload test.
     * @throws Exception
     */
    public static void upload() throws Exception {
        String filePath = "E:/WorkSpaceSpr10.6/edu-demo-fdfs/TestFile/DubboVideo.jpg";
        File file = new File(filePath);
        String fileId = FastDFSClient.uploadFile(file, filePath);
        System.out.println("Upload local file " + filePath + " ok, fileid=" + fileId);
        // fileId: group1/M00/00/00/wKgEfVUYPieAd6a0AAP3btxj__E335.jpg
        // url: http://192.168.4.125:8888/group1/M00/00/00/wKgEfVUYPieAd6a0AAP3btxj__E335.jpg
    }

    /**
     * Download test.
     * @throws Exception
     */
    public static void download() throws Exception {
        String fileId = "group1/M00/00/00/wKgEfVUYPieAd6a0AAP3btxj__E335.jpg";
        InputStream inputStream = FastDFSClient.downloadFile(fileId);
        File destFile = new File("E:/WorkSpaceSpr10.6/edu-demo-fdfs/TestFile/DownloadTest.jpg");
        FileUtils.copyInputStreamToFile(inputStream, destFile);
    }

    /**
     * Delete test.
     * @throws Exception
     */
    public static void delete() throws Exception {
        String fileId = "group1/M00/00/00/wKgEfVUYPieAd6a0AAP3btxj__E335.jpg";
        int result = FastDFSClient.deleteFile(fileId);
        System.out.println(result == 0 ? "Delete succeeded" : "Delete failed: " + result);
    }

    /**
     * @param args
     * @throws Exception
     */
    public static void main(String[] args) throws Exception {
        // upload();
        // download();
        delete();
    }
}
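As the comments in the upload test note, the browsable URL is just the nginx front end's web root plus the fileId returned by upload. A minimal sketch of that mapping, using the host/port from this tutorial's environment:

```java
public class FdfsUrlDemo {
    public static void main(String[] args) {
        // nginx (fastdfs-nginx-module) front end from this tutorial's environment
        String webRoot = "http://192.168.4.125:8888/";
        // fileId returned by FastDFSClient.uploadFile(...)
        String fileId = "group1/M00/00/00/wKgEfVUYPieAd6a0AAP3btxj__E335.jpg";
        // The download URL is simply the concatenation of the two
        System.out.println(webRoot + fileId);
    }
}
```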
Run the FastDFSTest class above for a simple end-to-end test.
Note: after a successful upload, the file can be fetched in a browser (with nginx integrated), accessing the file system through the tracker.
Why introduce a distributed file system?
Whether you are building a payment system or an ordinary website, what do you do with images and other files? They must live in one centrally managed place; that is a prerequisite for clustering.
Clustering first requires solving two sharing problems: session sharing and file/image sharing. Only when both are moved out of the individual application nodes can the cluster be built cleanly.
The annotated tracker.conf and storage.conf walkthrough below is based on this ChinaUnix post:
http://bbs.chinaunix.net/thread-1941456-1-1.html
首先是 tracker.conf # is this config file disabled # false for enabled # true for disabled disabled=false # 这个配置文件是否不生效,呵呵(改为是否生效是否是会让人感受好点呢?) false 为生效(否 则不生效) true 反之 # bind an address of this host # empty for bind all addresses of this host bind_addr= # 是否绑定IP, # bind_addr= 后面为绑定的 IP 地址 (经常使用于服务器有多个 IP 但只但愿一个 IP 提供服务)。如 果不填则表示全部的(通常不填就 OK),相信较熟练的 SA 都经常使用到相似功能,不少系统和应用 都有 # the tracker server port port=22122 # 提供服务的端口,不做过多解释了 # connect timeout in seconds # default value is 30s connect_timeout=30 #链接超时时间,针对 socket 套接字函数 connect # network timeout in seconds network_timeout=60 # tracker server 的网络超时,单位为秒。发送或接收数据时,若是在超时时间后还不能发 送或接收数据,则本次网络通讯失败。 # the base path to store data and log files base_path=/home/yuqing/fastdfs # base_path 目录地址(根目录必须存在,子目录会自动建立) # 附目录说明: tracker server 目录及文件结构: ${base_path} |__data | |__storage_groups.dat:存储分组信息 | |__storage_servers.dat:存储服务器列表 |__logs |__trackerd.log:tracker server 日志文件 数据文件 storage_groups.dat 和 storage_servers.dat 中的记录之间以换行符(\n)分隔,字段 之间以西文逗号(,)分隔。 storage_groups.dat 中的字段依次为: 1. group_name:组名 2. storage_port:storage server 端口号 storage_servers.dat 中记录 storage server 相关信息,字段依次为: 1. group_name:所属组名 2. ip_addr:ip 地址 3. status:状态 4. sync_src_ip_addr:向该 storage server 同步已有数据文件的源服务器 5. sync_until_timestamp:同步已有数据文件的截至时间(UNIX 时间戳) 6. stat.total_upload_count:上传文件次数 7. stat.success_upload_count:成功上传文件次数 8. stat.total_set_meta_count:更改 meta data 次数 9. stat.success_set_meta_count:成功更改 meta data 次数 10. stat.total_delete_count:删除文件次数 11. stat.success_delete_count:成功删除文件次数 12. stat.total_download_count:下载文件次数 13. stat.success_download_count:成功下载文件次数 14. stat.total_get_meta_count:获取 meta data 次数 15. stat.success_get_meta_count:成功获取 meta data 次数 16. stat.last_source_update:最近一次源头更新时间(更新操做来自客户端) 17. 
stat.last_sync_update:最近一次同步更新时间(更新操做来自其余 storage server 的同 步) # max concurrent connections this server supported # max_connections worker threads start when this service startup max_connections=256 # 系统提供服务时的最大链接数。对于 V1.x,因一个链接由一个线程服务,也就是工做线程 数。 # 对于 V2.x,最大链接数和工做线程数没有任何关系 # work thread count, should <= max_connections # default value is 4 # since V2.00 # V2.0 引入的这个参数,工做线程数,一般设置为 CPU 数 work_threads=4 # the method of selecting group to upload files # 0: round robin # 1: specify group # 2: load balance, select the max free space group to upload file store_lookup=2 # 上传组(卷) 的方式 0:轮询方式 1: 指定组 2: 平衡负载(选择最大剩余空间的组(卷)上传) # 这里若是在应用层指定了上传到一个固定组,那么这个参数被绕过 # which group to upload file # when store_lookup set to 1, must set store_group to the group name store_group=group2 # 当上一个参数设定为 1 时 (store_lookup=1,即指定组名时),必须设置本参数为系统中存 在的一个组名。若是选择其余的上传方式,这个参数就没有效了。 # which storage server to upload file # 0: round robin (default) # 1: the first server order by ip address # 2: the first server order by priority (the minimal) store_server=0 # 选择哪一个 storage server 进行上传操做(一个文件被上传后,这个 storage server 就至关于 这个文件的 storage server 源,会对同组的 storage server 推送这个文件达到同步效果) # 0: 轮询方式 # 1: 根据 ip 地址进行排序选择第一个服务器(IP 地址最小者) # 2: 根据优先级进行排序(上传优先级由 storage server 来设置,参数名为 upload_priority) # which path(means disk or mount point) of the storage server to upload file # 0: round robin # 2: load balance, select the max free space path to upload file store_path=0 # 选择 storage server 中的哪一个目录进行上传。storage server 能够有多个存放文件的 base path(能够理解为多个磁盘)。 # 0: 轮流方式,多个目录依次存放文件 #2: 选择剩余空间最大的目录存放文件(注意:剩余磁盘空间是动态的,所以存储到的目录 或磁盘可能也是变化的) # which storage server to download file # 0: round robin (default) # 1: the source storage server which the current file uploaded to download_server=0 # 选择哪一个 storage server 做为下载服务器 # 0: 轮询方式,能够下载当前文件的任一 storage server # 1: 哪一个为源 storage server 就用哪个 (前面说过了这个 storage server 源 是怎样产生的) 就是以前上传到哪一个 storage server 服务器就是哪一个了 # reserved storage space for system or other applications. 
# if the free(available) space of any stoarge server in # a group <= reserved_storage_space, # no file can be uploaded to this group. # bytes unit can be one of follows: ### G or g for gigabyte(GB) ### M or m for megabyte(MB) ### K or k for kilobyte(KB) ### no unit for byte(B) ### XX.XX% as ratio such as reserved_storage_space = 10% reserved_storage_space = 10% # storage server 上保留的空间,保证系统或其余应用需求空间。能够用绝对值或者百分比 (V4 开始支持百分比方式)。 #(指出 若是同组的服务器的硬盘大小同样,以最小的为准,也就是只要同组中有一台服务器 达到这个标准了,这个标准就生效,缘由就是由于他们进行备份) #standard log level as syslog, case insensitive, value list: ### emerg for emergency ### alert ### crit for critical ### error ### warn for warning ### notice ### info ### debug log_level=info # 选择日志级别(日志写在哪?看前面的说明了,有目录介绍哦 呵呵) #unix group name to run this program, #not set (empty) means run by the group of current user run_by_group= # 操做系统运行 FastDFS 的用户组 (不填 就是当前用户组,哪一个启动进程就是哪一个) #unix username to run this program, #not set (empty) means run by current user run_by_user= # 操做系统运行 FastDFS 的用户 (不填 就是当前用户,哪一个启动进程就是哪一个) # allow_hosts can ocur more than once, host can be hostname or ip address, # "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or # host[01-08,20-25].domain.com, for example: # allow_hosts=10.0.1.[1-15,20] # allow_hosts=host[01-08,20-25].domain.com allow_hosts=* # 能够链接到此 tracker server 的 ip 范围(对全部类型的链接都有影响,包括客户端,storage server) # sync log buff to disk every interval seconds # default value is 10 seconds sync_log_buff_interval = 10 # 同步或刷新日志信息到硬盘的时间间隔,单位为秒 # 注意:tracker server 的日志不是时时写硬盘的,而是先写内存。 # check storage server alive interval check_active_interval = 120 # 检测 storage server 存活的时间隔,单位为秒。 #storageserver按期向trackerserver 发心跳,若是trackerserver在一个check_active_interval 内尚未收到 storage server 的一次心跳,那边将认为该 storage server 已经下线。因此本参 数值必须大于 storage server 配置的心跳时间间隔。一般配置为 storage server 心跳时间间隔 的 2 倍或 3 倍。 # thread stack size, should > 512KB # default value is 1MB thread_stack_size=1MB # 线程栈的大小。FastDFSserver端采用了线程方式。更正一下,trackerserver线程栈不该小 于 
64KB,不是 512KB。 # 线程栈越大,一个线程占用的系统资源就越多。若是要启动更多的线程(V1.x 对应的参数 为 max_connections, V2.0 为 work_threads),能够适当下降本参数值。 # auto adjust when the ip address of the storage server changed # default value is true storage_ip_changed_auto_adjust=true # 这个参数控制当 storage server IP 地址改变时,集群是否自动调整。注:只有在 storage server 进程重启时才完成自动调整。 # storage sync file max delay seconds # default value is 86400 seconds (one day) # since V2.00 storage_sync_file_max_delay = 86400 # V2.0 引入的参数。存储服务器之间同步文件的最大延迟时间,缺省为 1 天。根据实际状况 进行调整 # 注:本参数并不影响文件同步过程。本参数仅在下载文件时,判断文件是否已经被同步完 成的一个阀值(经验值) # the max time of storage sync a file # default value is 300 seconds # since V2.00 storage_sync_file_max_time = 300 # V2.0 引入的参数。存储服务器同步一个文件须要消耗的最大时间,缺省为 300s,即 5 分 钟。 # 注:本参数并不影响文件同步过程。本参数仅在下载文件时,做为判断当前文件是否被同 步完成的一个阀值(经验值) # if use a trunk file to store several small files # default value is false # since V3.00 use_trunk_file = false # V3.0 引入的参数。是否使用小文件合并存储特性,缺省是关闭的。 # the min slot size, should <= 4KB # default value is 256 bytes # since V3.00 slot_min_size = 256 # V3.0 引入的参数。 # trunk file 分配的最小字节数。好比文件只有 16 个字节,系统也会分配 slot_min_size 个字 节。 # the max slot size, should > slot_min_size # store the upload file to trunk file when it's size <= # default value is 16MB # since V3.00 slot_max_size = 16MB # V3.0 引入的参数。 # 只有文件大小<=这个参数值的文件,才会合并存储。若是一个文件的大小大于这个参数值, 将直接保存到一个文件中(即不采用合并存储方式)。 # the trunk file size, should >= 4MB # default value is 64MB # since V3.00 trunk_file_size = 64MB # V3.0 引入的参数。 # 合并存储的 trunk file 大小,至少 4MB,缺省值是 64MB。不建议设置得过大。 # if create trunk file advancely # default value is false trunk_create_file_advance = false # 是否提早建立 trunk file。只有当这个参数为 true,下面 3 个以 trunk_create_file_打头的参 数才有效。 # the time base to create trunk file # the time format: HH:MM # default value is 02:00 trunk_create_file_time_base = 02:00 # 提早建立 trunk file 的起始时间点(基准时间),02:00 表示第一次建立的时间点是凌晨 2 点。 # the interval of create trunk file, unit: second # default value is 38400 (one day) trunk_create_file_interval = 86400 # 建立 trunk file 
的时间间隔,单位为秒。若是天天只提早建立一次,则设置为 86400 # the threshold to create trunk file # when the free trunk file size less than the threshold, will create # the trunk files # default value is 0 trunk_create_file_space_threshold = 20G # 提早建立 trunk file 时,须要达到的空闲 trunk 大小 # 好比本参数为 20G,而当前空闲 trunk 为 4GB,那么只须要建立 16GB 的 trunk file 便可。 # if check trunk space occupying when loading trunk free spaces # the occupied spaces will be ignored # default value is false # since V3.09 # NOTICE: set this parameter to true will slow the loading of trunk spaces # when startup. you should set this parameter to true when neccessary. trunk_init_check_occupying = false #trunk 初始化时,是否检查可用空间是否被占用 # if ignore storage_trunk.dat, reload from trunk binlog # default value is false # since V3.10 # set to true once for version upgrade when your version less than V3.10 trunk_init_reload_from_binlog = false # 是否无条件从 trunk binlog 中加载 trunk 可用空间信息 # FastDFS 缺省是从快照文件 storage_trunk.dat 中加载 trunk 可用空间, # 该文件的第一行记录的是 trunk binlog 的 offset,而后从 binlog 的 offset 开始加载 # if use storage ID instead of IP address # default value is false # since V4.00 use_storage_id = false # 是否使用 server ID 做为 storage server 标识 # specify storage ids filename, can use relative or absolute path # since V4.00 storage_ids_filename = storage_ids.conf # use_storage_id 设置为 true,才须要设置本参数 # 在文件中设置组名、server ID 和对应的 IP 地址,参见源码目录下的配置示例: conf/storage_ids.conf # if store slave file use symbol link # default value is false # since V4.01 store_slave_file_use_link = false # 存储从文件是否采用 symbol link(符号连接)方式 # 若是设置为 true,一个从文件将占用两个文件:原始文件及指向它的符号连接。 # if rotate the error log every day # default value is false # since V4.02 rotate_error_log = false # 是否认期轮转 error log,目前仅支持一天轮转一次 # rotate error log time base, time format: Hour:Minute # Hour from 0 to 23, Minute from 0 to 59 # default value is 00:00 # since V4.02 error_log_rotate_time=00:00 # error log 按期轮转的时间点,只有当 rotate_error_log 设置为 true 时有效 # rotate error log when the log file exceeds this size # 0 means never 
rotates log file by log file size # default value is 0 # since V4.02 rotate_error_log_size = 0 # error log 按大小轮转 # 设置为 0 表示不按文件大小轮转,不然当 error log 达到该大小,就会轮转到新文件中 # 如下是关于 http 的设置了 默认编译是不生效的 要求更改 #WITH_HTTPD=1 将 注释#去 掉 再编译 # 关于 http 的应用 说实话 不是很了解 没有见到 相关说明 ,望 版主能够完善一下 如下 是字面解释了 #HTTP settings http.disabled=false # HTTP 服务是否不生效 h ttp.server_port=8080 # HTTP 服务端口 #use "#include" directive to include http other settiongs ##include http.conf # 若是加载 http.conf 的配置文件 去掉第一个# 哈哈 完成了一个 下面是 storage.conf # is this config file disabled # false for enabled # true for disabled disabled=false #同上文了 就很少说了 # the name of the group this storage server belongs to group_name=group1 # 指定 此 storage server 所在 组(卷) # bind an address of this host # empty for bind all addresses of this host bind_addr= # 同上文 # if bind an address of this host when connect to other servers # (this storage server as a client) # true for binding the address configed by above parameter: "bind_addr" # false for binding any address of this host client_bind=true # bind_addr 一般是针对 server 的。当指定 bind_addr 时,本参数才有效。 # 本 storage server 做为 client 链接其余服务器(如 tracker server、其余 storage server),是 否绑定 bind_addr。 # the storage server port port=23000 # storage server 服务端口 # connect timeout in seconds # default value is 30s connect_timeout=30 #链接超时时间,针对 socket 套接字函数 connect # network timeout in seconds network_timeout=60 # storageserver 网络超时时间,单位为秒。发送或接收数据时,若是在超时时间后还不能 发送或接收数据,则本次网络通讯失败。 # heart beat interval in seconds heart_beat_interval=30 # 心跳间隔时间,单位为秒 (这里是指主动向 tracker server 发送心跳) # disk usage report interval in seconds stat_report_interval=60 # storage server 向 tracker server 报告磁盘剩余空间的时间间隔,单位为秒。 # the base path to store data and log files base_path=/home/yuqing/fastdfs # base_path 目录地址,根目录必须存在 子目录会自动生成 (注 :这里不是上传的文件存放 的地址,以前是的,在某个版本后更改了) # 目录结构 由于 版主没有更新到 论谈上 这里就不发了 你们能够看一下置顶贴: # max concurrent connections server supported # max_connections worker threads start when this service startup max_connections=256 # 同上文 # work thread 
count, should <= max_connections # default value is 4 # since V2.00 # V2.0 引入的这个参数,工做线程数,一般设置为 CPU 数 work_threads=4 # the buff size to recv / send data # default value is 64KB # since V2.00 buff_size = 256KB # V2.0 引入本参数。设置队列结点的 buffer 大小。工做队列消耗的内存大小 = buff_size * max_connections # 设置得大一些,系统总体性能会有所提高。 # 消耗的内存请不要超过系统物理内存大小。另外,对于 32 位系统,请注意使用到的内存 不要超过 3GB # if read / write file directly # if set to true, open file will add the O_DIRECT flag to avoid file caching # by the file system. be careful to set this parameter. # default value is false disk_rw_direct = false # V2.09 引入本参数。设置为 true,表示不使用操做系统的文件内容缓冲特性。 # 若是文件数量不少,且访问很分散,能够考虑将本参数设置为 true # if disk read / write separated ## false for mixed read and write ## true for separated read and write # default value is true # since V2.00 disk_rw_separated = true # V2.0 引入本参数。磁盘 IO 读写是否分离,缺省是分离的。 # disk reader thread count per store base path # for mixed read / write, this parameter can be 0 # default value is 1 # since V2.00 disk_reader_threads = 1 # V2.0 引入本参数。针对单个存储路径的读线程数,缺省值为 1。 # 读写分离时,系统中的读线程数 = disk_reader_threads * store_path_count # 读写混合时,系统中的读写线程数 = (disk_reader_threads + disk_writer_threads) * store_path_count # disk writer thread count per store base path # for mixed read / write, this parameter can be 0 # default value is 1 # since V2.00 disk_writer_threads = 1 # V2.0 引入本参数。针对单个存储路径的写线程数,缺省值为 1。 # 读写分离时,系统中的写线程数 = disk_writer_threads * store_path_count # 读写混合时,系统中的读写线程数 = (disk_reader_threads + disk_writer_threads) * store_path_count # when no entry to sync, try read binlog again after X milliseconds # 0 for try again immediately (not need to wait) sync_wait_msec=200 # 同步文件时,若是从 binlog 中没有读到要同步的文件,休眠 N 毫秒后从新读取。0 表示不 休眠,当即再次尝试读取。 # 出于 CPU 消耗考虑,不建议设置为 0。如何但愿同步尽量快一些,能够将本参数设置得 小一些,好比设置为 10ms # after sync a file, usleep milliseconds # 0 for sync successively (never call usleep) sync_interval=0 # 同步上一个文件后,再同步下一个文件的时间间隔,单位为毫秒,0 表示不休眠,直接同 步下一个文件。 # sync start time of a day, time format: Hour:Minute # Hour from 0 to 23, 
Minute from 0 to 59 sync_start_time=00:00 # sync end time of a day, time format: Hour:Minute # Hour from 0 to 23, Minute from 0 to 59 sync_end_time=23:59 # 上面二个一块儿解释。容许系统同步的时间段 (默认是全天) 。通常用于避免高峰同步产生 一些问题而设定,相信 sa 都会明白 # write to the mark file after sync N files # default value is 500 write_mark_file_freq=500 # 同步完 N 个文件后,把 storage 的 mark 文件同步到磁盘 # 注:若是 mark 文件内容没有变化,则不会同步 # path(disk or mount point) count, default value is 1 store_path_count=1 # 存放文件时 storage server 支持多个路径(例如磁盘)。这里配置存放文件的基路径数目, 一般只配一个目录。 # store_path#, based 0, if store_path0 not exists, it's value is base_path # the paths must be exist store_path0=/home/yuqing/fastdfs #store_path1=/home/yuqing/fastdfs2 # 逐一配置 store_path 个路径,索引号基于 0。注意配置方法后面有 0,1,2 ......,须要配置 0 到 store_path - 1。 # 若是不配置 base_path0,那边它就和 base_path 对应的路径同样。 # subdir_count * subdir_count directories will be auto created under each # store_path (disk), value can be 1 to 256, default value is 256 subdir_count_per_path=256 # FastDFS 存储文件时,采用了两级目录。这里配置存放文件的目录个数 (系统的存储机制, 你们看看文件存储的目录就知道了) # 若是本参数只为 N(如:256),那么 storage server 在初次运行时,会自动建立 N * N 个 存放文件的子目录。 # tracker_server can ocur more than once, and tracker_server format is # "host:port", host can be hostname or ip address tracker_server=10.62.164.84:22122 tracker_server=10.62.245.170:22122 # tracker_server 的列表 要写端口的哦 (再次提醒是主动链接 tracker_server ) # 有多个 tracker server 时,每一个 tracker server 写一行 #standard log level as syslog, case insensitive, value list: ### emerg for emergency ### alert ### crit for critical ### error ### warn for warning ### notice ### info ### debug log_level=info # 日志级别很少说 #unix group name to run this program, #not set (empty) means run by the group of current user run_by_group= # 同上文了 #unix username to run this program, #not set (empty) means run by current user run_by_user= # 同上文了 (提醒注意权限 若是和 webserver 不搭 能够会产生错误 哦) # allow_hosts can ocur more than once, host can be hostname or ip address, # "*" means match all ip addresses, can use range like this: 10.0.1.[1-15,20] or 
# host[01-08,20-25].domain.com, for example: # allow_hosts=10.0.1.[1-15,20] # allow_hosts=host[01-08,20-25].domain.com allow_hosts=* # 容许链接本 storage server 的 IP 地址列表 (不包括自带 HTTP 服务的全部链接) # 能够配置多行,每行都会起做用 # the mode of the files distributed to the data path # 0: round robin(default) # 1: random, distributted by hash code file_distribute_path_mode=0 # 文件在 data 目录下分散存储策略。 # 0: 轮流存放,在一个目录下存储设置的文件数后(参数 file_distribute_rotate_count 中设置 文件数),使用下一个目录进行存储。 # 1: 随机存储,根据文件名对应的 hash code 来分散存储。 # valid when file_distribute_to_path is set to 0 (round robin), # when the written file count reaches this number, then rotate to next path # default value is 100 file_distribute_rotate_count=100 # 当上面的参数 file_distribute_path_mode 配置为 0(轮流存放方式)时,本参数有效。 # 当一个目录下的文件存放的文件数达到本参数值时,后续上传的文件存储到下一个目录 中。 # call fsync to disk when write big file # 0: never call fsync # other: call fsync when written bytes >= this bytes # default value is 0 (never call fsync) fsync_after_written_bytes=0 # 当写入大文件时,每写入 N 个字节,调用一次系统函数 fsync 将内容强行同步到硬盘。0 表示从不调用 fsync # sync log buff to disk every interval seconds # default value is 10 seconds sync_log_buff_interval=10 # 同步或刷新日志信息到硬盘的时间间隔,单位为秒 # 注意:storage server 的日志信息不是时时写硬盘的,而是先写内存。 # sync binlog buff / cache to disk every interval seconds # this parameter is valid when write_to_binlog set to 1 # default value is 60 seconds sync_binlog_buff_interval=60 # 同步 binglog(更新操做日志)到硬盘的时间间隔,单位为秒 # 本参数会影响新上传文件同步延迟时间 # sync storage stat info to disk every interval seconds # default value is 300 seconds sync_stat_file_interval=300 # 把 storage 的 stat 文件同步到磁盘的时间间隔,单位为秒。 # 注:若是 stat 文件内容没有变化,不会进行同步 # thread stack size, should >= 512KB # default value is 512KB thread_stack_size=512KB # 线程栈的大小。FastDFS server 端采用了线程方式。 # 对于 V1.x,storage server 线程栈不该小于 512KB;对于 V2.0,线程栈大于等于 128KB 即 可。 # 线程栈越大,一个线程占用的系统资源就越多。 # 对于 V1.x,若是要启动更多的线程(max_connections),能够适当下降本参数值。 # the priority as a source server for uploading file. # the lower this value, the higher its uploading priority. 
# default value is 10 upload_priority=10 # 本 storage server 做为源服务器,上传文件的优先级,能够为负数。值越小,优先级越高。 这里就和 tracker.conf 中 store_server= 2 时的配置相对应了 # if check file duplicate, when set to true, use FastDHT to store file indexes # 1 or yes: need check # 0 or no: do not check # default value is 0 check_file_duplicate=0 # 是否检测上传文件已经存在。若是已经存在,则不存在文件内容,创建一个符号连接以节 省磁盘空间。 # 这个应用要配合 FastDHT 使用,因此打开前要先安装 FastDHT #1或yes 是检测,0或no 是不检测 # file signature method for check file duplicate ## hash: four 32 bits hash code ## md5: MD5 signature # default value is hash # since V4.01 file_signature_method=hash # 文件去重时,文件内容的签名方式: ## hash: 4 个 hash code ## md5:MD5 # namespace for storing file indexes (key-value pairs) # this item must be set when check_file_duplicate is true / on key_namespace=FastDFS # 当上个参数设定为 1 或 yes 时 (true/on 也是能够的) , 在 FastDHT 中的命名空间。 # set keep_alive to 1 to enable persistent connection with FastDHT servers # default value is 0 (short connection) keep_alive=0 # 与 FastDHT servers 的链接方式 (是否为持久链接) ,默认是 0(短链接方式)。能够考虑使 用长链接,这要看 FastDHT server 的链接数是否够用。 # 下面是关于 FastDHT servers 的设定 须要对 FastDHT servers 有所了解,这里只说字面意思 了 # you can use "#include filename" (not include double quotes) directive to # load FastDHT server list, when the filename is a relative path such as # pure filename, the base path is the base path of current/this config file. 
# must set FastDHT server list when check_file_duplicate is true / on # please see INSTALL of FastDHT for detail ##include /home/yuqing/fastdht/conf/fdht_servers.conf # 能够经过 #include filename 方式来加载 FastDHT servers 的配置,装上 FastDHT 就知道 该如何配置啦。 # 一样要求 check_file_duplicate=1 时才有用,否则系统会忽略 # fdht_servers.conf 记载的是 FastDHT servers 列表 # if log to access log # default value is false # since V4.00 use_access_log = false # 是否将文件操做记录到 access log # if rotate the access log every day # default value is false # since V4.00 rotate_access_log = false # 是否认期轮转 access log,目前仅支持一天轮转一次 # rotate access log time base, time format: Hour:Minute # Hour from 0 to 23, Minute from 0 to 59 # default value is 00:00 # since V4.00 access_log_rotate_time=00:00 # access log 按期轮转的时间点,只有当 rotate_access_log 设置为 true 时有效 # if rotate the error log every day # default value is false # since V4.02 rotate_error_log = false # 是否认期轮转 error log,目前仅支持一天轮转一次 # rotate error log time base, time format: Hour:Minute # Hour from 0 to 23, Minute from 0 to 59 # default value is 00:00 # since V4.02 error_log_rotate_time=00:00 # error log 按期轮转的时间点,只有当 rotate_error_log 设置为 true 时有效 # rotate access log when the log file exceeds this size # 0 means never rotates log file by log file size # default value is 0 # since V4.02 rotate_access_log_size = 0 # access log 按文件大小轮转 # 设置为 0 表示不按文件大小轮转,不然当 access log 达到该大小,就会轮转到新文件中 # rotate error log when the log file exceeds this size # 0 means never rotates log file by log file size # default value is 0 # since V4.02 rotate_error_log_size = 0 # error log 按文件大小轮转 # 设置为 0 表示不按文件大小轮转,不然当 error log 达到该大小,就会轮转到新文件中 # if skip the invalid record when sync file # default value is false # since V4.02 file_sync_skip_invalid_record=false # 文件同步的时候,是否忽略无效的 binlog 记录 下面是 http 的配置了。若是系统较大,这个服务有可能支持不了,能够自行换一个 webserver, 我喜欢 lighttpd,固然 ng 也很好了。具体不说明了。相应这一块的说明你们都懂,不明白见 上文。 #HTTP settings http.disabled=false # the port of the web server on this storage server http.server_port=8888 http.trunk_size=256KB 
# http.trunk_size 表示读取文件内容的 buffer 大小(一次读取的文件内容大小),也就是回复 给 HTTP client 的块大小。 # use the ip address of this storage server if domain_name is empty, # else this domain name will ocur in the url redirected by the tracker server http.domain_name= # storage server 上 web server 域名,一般仅针对单独部署的 web server。这样 URL 中就能够 经过域名方式来访问 storage server 上的文件了, # 这个参数为空就是 IP 地址的方式。 #use "#include" directive to include HTTP other settiongs ##include http.conf 补充: storage.conf 中影响 storage server 同步速度的参数有以下几个: # when no entry to sync, try read binlog again after X milliseconds # 0 for try again immediately (not need to wait) sync_wait_msec=200 # 同步文件时,若是从 binlog 中没有读到要同步的文件,休眠 N 毫秒后从新读取。0 表示 不休眠,当即再次尝试读取。 # 不建议设置为0,如何但愿同步尽量快一些,能够将本参数设置得小一些,好比设置为 10ms # after sync a file, usleep milliseconds # 0 for sync successively (never call usleep) sync_interval=0 # 同步上一个文件后,再同步下一个文件的时间间隔,单位为毫秒,0 表示不休眠,直接同 步下一个文件。 # sync start time of a day, time format: Hour:Minute # Hour from 0 to 23, Minute from 0 to 59 sync_start_time=00:00 # sync end time of a day, time format: Hour:Minute # Hour from 0 to 23, Minute from 0 to 59 sync_end_time=23:59 # 上面二个一块儿解释。容许系统同步的时间段 (默认是全天) 。通常用于避免高峰同步产 生一些问题而设定,相信 sa 都会明白 # sync binlog buff / cache to disk every interval seconds # this parameter is valid when write_to_binlog set to 1 # default value is 60 seconds sync_binlog_buff_interval=60 # 同步 binglog(更新操做日志)到硬盘的时间间隔,单位为秒 # 本参数会影响新上传文件同步延迟时间
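As the tracker.conf notes above describe, reserved_storage_space = 10% blocks uploads to a group as soon as any one storage server's free space falls to or below the reserved amount (servers in a group mirror each other, so the tightest server governs the whole group). A rough sketch of that check, as our own illustration rather than the FastDFS source:

```java
public class ReservedSpaceDemo {
    // Uploads to the group stay allowed only while EVERY server's free space
    // is above the reserved fraction of its total capacity
    static boolean uploadAllowed(long[] freeBytes, long[] totalBytes, double reservedRatio) {
        for (int i = 0; i < freeBytes.length; i++) {
            if (freeBytes[i] <= (long) (totalBytes[i] * reservedRatio)) {
                return false; // one server at/below the reserve blocks the group
            }
        }
        return true;
    }

    public static void main(String[] args) {
        long gb = 1024L * 1024 * 1024;
        // Two servers in one group, 100 GB disks, reserved_storage_space = 10%
        System.out.println(uploadAllowed(new long[] { 50 * gb, 20 * gb },
                new long[] { 100 * gb, 100 * gb }, 0.10)); // both above the 10 GB reserve
        System.out.println(uploadAllowed(new long[] { 50 * gb, 8 * gb },
                new long[] { 100 * gb, 100 * gb }, 0.10)); // second server below it
    }
}
```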