nginx hmux combined with resin (3.0) session sticky
Download:
svn checkout http://nginx-upstream-jvm-route.googlecode.com/svn/trunk/ nginx-upstream-jvm-route-read-only
(For now, do not take the session sticky code from the Downloads page; it has a bug.)
wget http://nginx-hmux-module.googlecode.com/files/nginx_hmux_module_v0.2.tar.gz
Apply the patches:
patch -p1 <hmux/hmux.patch
patch -p0 <nginx-upstream-jvm-route-read-only/jvm_route.patch
./configure --add-module=hmux/ --add-module=/home/wangbin/work/memcached/keepalive/ \
    --add-module=nginx-upstream-jvm-route-read-only/ --with-debug
Remove the optimization flags from objs/Makefile
make
make install
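The Makefile tweak above can be scripted; a minimal sketch, assuming GCC-style flags and nginx's default output path objs/Makefile (the helper name is ours):

```shell
# strip_opt_flags: remove GCC optimization flags (-O, -O1, -O2, -Os, ...)
# from a generated Makefile so the --with-debug build is not optimized away.
strip_opt_flags() {
    # edit in place, keeping a .bak backup of the original
    sed -i.bak 's/-O[0-9s]*//g' "$1"
}

# typical use, right after ./configure:
# strip_opt_flags objs/Makefile
```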
Edit resin.conf (this must be done on every machine).
Instance a:
<http server-id="a" host="61.135.250.217" port="18080"/>
<cluster>
    <srun server-id="a" host="61.135.250.217" port="6800"/>
    <srun server-id="b" host="61.135.250.217" port="6801"/>
</cluster>
Instance b:
<http server-id="b" host="61.135.250.217" port="18081"/>
<cluster>
    <srun server-id="a" host="61.135.250.217" port="6800"/>
    <srun server-id="b" host="61.135.250.217" port="6801"/>
</cluster>
Start resin:
sh httpd.sh -server a start
sh httpd.sh -server b start
Edit nginx.conf:
upstream resins {
    server 61.135.250.217:6800 srun_id=a;
    server 61.135.250.217:6801 srun_id=b;
    jvm_route $cookie_JSESSIONID;
    keepalive 1024;
}
server {
    location / {
        hmux_pass resins;
    }
}
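jvm_route decides by comparing the text after the last '.' of the JSESSIONID cookie against each server's srun_id. The matching rule can be sketched in shell (the helper is ours, not part of the module):

```shell
# route_from_session_id: print the route suffix of a session id, i.e.
# everything after the last '.', which jvm_route compares with each
# upstream server's srun_id to pick the sticky backend.
route_from_session_id() {
    case "$1" in
        *.*) printf '%s\n' "${1##*.}" ;;  # suffix after the last dot
        *)   printf '\n' ;;               # no suffix: nginx falls back to round-robin
    esac
}

# route_from_session_id "AC7EF1CAA8C6B0FEB68E77D7D375E2AF.a"  -> a
```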
Start nginx.
hmux_session_sticky
Source: http://www.blogjava.net/gentoo1439/archive/2007/07/11/129527.html
Apache HTTP Server is used as the front-end load balancer, with two Tomcats behind it as a cluster. The scheme chosen is Session Sticky: requests from the same user are always forwarded to one particular Tomcat, which avoids replicating sessions across the cluster. The drawback is that each user only ever talks to that one server; if it goes down, the session is lost.
The module used is mod_proxy_ajp.so; the relevant settings are all documented by comments in Tomcat's configuration file, so only small edits are needed.
We use Apache HTTP Server 2.2.4 and Tomcat 5.5.16.
First install Apache HTTP Server, then edit its configuration file httpd.conf. Start by loading three modules:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
Then append the following at the end of the file:

ProxyPass / balancer://tomcatcluster/ lbmethod=byrequests stickysession=JSESSIONID nofailover=Off timeout=5 maxattempts=3
ProxyPassReverse / balancer://tomcatcluster/
<Proxy balancer://tomcatcluster>
BalancerMember ajp://localhost:8009 route=a
BalancerMember ajp://localhost:9009 route=b
</Proxy>
The directives above configure the proxy. The <Proxy> block defines the load-balancing members: both Tomcat servers run on the same machine, on ports 8009 and 9009, and each is given its own route, so Apache can forward a request to the specific Tomcat named by its route.
Next, edit Tomcat's server.xml as follows:
<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />
The port here is the one used in the <Proxy> block earlier. The route must be configured as well:
<!-- Define the top level container in our container hierarchy -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="a">
jvmRoute must also match the route configured earlier.
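Since a mismatch between jvmRoute and the BalancerMember route silently breaks stickiness, it is worth checking both files; a sed-based sketch (paths and helper name are ours):

```shell
# jvmroute_of: print the jvmRoute value declared in a Tomcat server.xml,
# so it can be compared with the route= value of the matching
# BalancerMember line in httpd.conf.
jvmroute_of() {
    sed -n 's/.*jvmRoute="\([^"]*\)".*/\1/p' "$1"
}

# e.g.: [ "$(jvmroute_of conf/server.xml)" = "a" ] || echo "route mismatch"
```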
Now test the load balancing with JMeter. Start the two Tomcat servers, then Apache. Create a test plan in JMeter, put a trivial test.jsp (a couple of lines is enough) under jsp-examples on both Tomcats, and run the test. Part of one sampler result:

HTTP response headers:

HTTP/1.1 200 OK
Date: Wed, 11 Jul 2007 02:17:55 GMT
Set-Cookie: JSESSIONID=AC7EF1CAA8C6B0FEB68E77D7D375E2AF.b; Path=/jsp-examples
Content-Type: text/html;charset=ISO-8859-1
Content-Length: 3
Keep-Alive: timeout=5, max=79
Connection: Keep-Alive
The Set-Cookie line shows that the JSESSIONID handed to the user now carries a route suffix: .b means this request was served by the Tomcat whose route is b. In other samples you will see a JSESSIONID ending in .a, i.e. requests forwarded to the Tomcat with route a.
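That route suffix can be pulled out of a Set-Cookie header mechanically; a sketch (the helper name is ours):

```shell
# route_from_set_cookie: given a Set-Cookie header whose JSESSIONID
# carries a route suffix, print the route (the part of the session id
# after its last '.').
route_from_set_cookie() {
    # isolate the cookie value, then strip everything up to the last dot
    sid=$(printf '%s\n' "$1" | sed -n 's/.*JSESSIONID=\([^;]*\).*/\1/p')
    printf '%s\n' "${sid##*.}"
}
```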
-------------------------------------
Source: http://blog.sina.com.cn/s/blog_5dc960cd0100ipgt.html
nginx + resin (tomcat): solving the session problem
(2010-05-19 09:34:07)
Reposted from: http://deidara.blog.51cto.com/400447/193887
Changes are needed in the web server configuration.
In resin:
shell $> vim resin.conf
## Find
<http address="*" port="8080"/>
## and comment it out: <!--http address="*" port="8080"/-->
## Find
<server id="" address="127.0.0.1" port="6800">
## and replace it with
<server id="a" address="192.168.6.121" port="6800">
    <!-- server2 address=192.168.6.162 -->
    <http id="" port="8080"/>
</server>
<server id="b" address="192.168.6.121" port="6801">
    <!-- server2 address=192.168.6.162 -->
    <http id="" port="8081"/>
</server>
In tomcat (verified by experiment: virtual hosts are supported as well; the change below only needs to be made once):
Set Tomcat's server.xml. In the Tomcat configuration file on each of the two servers, find:
<Engine name="Catalina" defaultHost="localhost" >
and change it to, respectively:
Tomcat01 (192.168.0.100):
<Engine name="Catalina" defaultHost="localhost" jvmRoute="a">
Tomcat02 (192.168.0.101):
<Engine name="Catalina" defaultHost="localhost" jvmRoute="b">
Changes on the nginx side: nginx_upstream_jvm_route is an nginx extension module that implements cookie-based session stickiness.
Installation:
1. Get the nginx_upstream_jvm_route module:
Address: http://sh0happly.blog.51cto.com/p_w_upload/201004/1036375_1271836572.zip; unpack it and put it under /root.
2. Enter the nginx source directory:
cd nginx-0.7.61
patch -p0 < ../nginx-upstream-jvm-route/jvm_route.patch
The patch reports:
patching file src/http/ngx_http_upstream.c
Hunk #1 succeeded at 3869 (offset 132 lines).
Hunk #3 succeeded at 4001 (offset 132 lines).
Hunk #5 succeeded at 4100 (offset 132 lines).
patching file src/http/ngx_http_upstream.h
3. Build nginx:
shell $> ./configure --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --add-module=/root/nginx-upstream-jvm-route/
shell $> make
shell $> make install
4. Edit the configuration, for example:
1. For resin:
upstream backend {
    server 192.168.0.100 srun_id=a; # srun_id=a matches server id="a" in server1's resin configuration
    server 192.168.0.101 srun_id=b;
    jvm_route $cookie_JSESSIONID|sessionid;
}
2. For tomcat:
upstream tomcat {
    server 192.168.0.100:8080 srun_id=a; # srun_id=a matches jvmRoute="a" in tomcat01's configuration
    server 192.168.0.101:8080 srun_id=b; # srun_id=b matches jvmRoute="b" in tomcat02's configuration
    jvm_route $cookie_JSESSIONID|sessionid reverse;
}
server {
    server_name test.com;
    charset utf-8,GB2312;
    index index.html;
    if (-d $request_filename) {
        rewrite ^/(.*)([^/])$ http://$host/$1$2/ permanent;
    }
    location / {
        proxy_pass http://tomcat/;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}
Add this configuration on both tomcats:
<Host name="test.com" debug="0" appBase="/usr/local/tomcat/apps/" unpackWARs="true">
    <Logger className="org.apache.catalina.logger.FileLogger" directory="logs" prefix="crm_log." suffix=".txt" timestamp="true"/>
    <Context path="" docBase="/usr/local/tomcat/apps/jsp" reloadable="true" debug="0" crossContext="false">
    </Context>
</Host>
Under /usr/local/tomcat/apps/jsp create index.jsp:
<HTML>
<HEAD><TITLE>JSP TESTPAGE</TITLE></HEAD>
<BODY>
<%
String name=request.getParameter("name");
out.println("<h1>this is 192.168.0.100:hello "+name+"!<br></h1>"); // or 192.168.0.101 on server2
%>
</BODY>
</HTML>
Visiting http://test.com, the page stays on 192.168.0.100; after clearing cookies and the session and refreshing again, it stays on 192.168.0.101.
A worked example: http://hi.baidu.com/scenkoy/blog/item/2cd89da9b57696f71e17a29e.html
Test environment:
server1 runs nginx + tomcat01; server2 runs only tomcat02.
server1 IP address: 192.168.2.88
server2 IP address: 192.168.2.89
Installation steps:
1. Install and configure nginx + nginx_upstream_jvm_route on server1:
shell $> wget -c http://sysoev.ru/nginx/nginx-0.7.61.tar.gz
shell $> svn checkout http://nginx-upstream-jvm-route.googlecode.com/svn/trunk/ nginx-upstream-jvm-route-read-only
shell $> tar zxvf nginx-0.7.61.tar.gz
shell $> cd nginx-0.7.61
shell $> patch -p0 < ../nginx-upstream-jvm-route-read-only/jvm_route.patch
shell $> useradd www
shell $> ./configure --user=www --group=www --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --add-module=/root/nginx-upstream-jvm-route-read-only
shell $> make
shell $> make install
2. Install tomcat and java on both machines (omitted).
Set tomcat's server.xml: in the tomcat configuration file on each of the two servers, find:
<Engine name="Catalina" defaultHost="localhost" >
and change it to, respectively:
Tomcat01: <Engine name="Catalina" defaultHost="localhost" jvmRoute="a">
Tomcat02: <Engine name="Catalina" defaultHost="localhost" jvmRoute="b">
Create an aa directory under webapps, and inside it the test page index.jsp:
<%@ page language="java" import="java.util.*" pageEncoding="UTF-8"%>
<html>
<head>
</head>
<body>
88 <!-- server1 prints 88 here; use 89 on server2 -->
<br />
<%out.print(request.getSession());%> <!-- print the session -->
<br />
<%out.println(request.getHeader("Cookie"));%> <!-- print the cookie -->
</body>
</html>
The two tomcats differ only in the jvmRoute value and the 88/89 marker. Start both tomcats.
3. Configure nginx:
shell $> cd /usr/local/nginx/conf
shell $> mv nginx.conf nginx.bak
shell $> vi nginx.conf
## The configuration:
user www www;
worker_processes 4;
error_log logs/nginx_error.log crit;
pid /usr/local/nginx/nginx.pid;
# Specifies the value for maximum file descriptors that can be opened by this process.
worker_rlimit_nofile 51200;
events {
    use epoll;
    worker_connections 2048;
}
http {
    upstream backend {
        server 192.168.2.88:8080 srun_id=a;
        server 192.168.2.89:8080 srun_id=b;
        jvm_route $cookie_JSESSIONID|sessionid reverse;
    }
    include mime.types;
    default_type application/octet-stream;
    #charset gb2312;
    charset UTF-8;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 20m;
    limit_rate 1024k;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;
    tcp_nodelay on;
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;
    gzip on;
    #gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;
    #limit_zone crawler $binary_remote_addr 10m;
    log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" $http_x_forwarded_for';
    server {
        listen 80;
        server_name 192.168.2.88;
        index index.html index.htm index.jsp;
        root /var/www;
        #location ~ .*\.jsp$
        location /aa/ {
            proxy_pass http://backend;
            proxy_redirect off;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
        }
        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
            expires 30d;
        }
        location ~ .*\.(js|css)?$ {
            expires 1h;
        }
        location /Nginxstatus {
            stub_status on;
            access_log off;
        }
        # access_log off;
    }
}
4. Test:
Open a browser and visit http://192.168.2.88/aa/. Even after many refreshes the page still shows 88, so the patch is working and the cookie was set. As a cross-check, open a different browser (a fresh one, to avoid the existing session and cookies) and visit http://192.168.2.88/aa/ again: it shows 89, and still shows 89 after many refreshes. If in doubt, remove srun_id=a and srun_id=b from the nginx configuration and visit again; you will then see the pages alternate round-robin.
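The behaviour observed above (sticky when the session id carries a known srun_id, plain round-robin once srun_id is removed) can be simulated; a sketch for the two backends of this example (the helper name is ours):

```shell
# pick_backend: mimic jvm_route for this two-server setup. A session id
# ending in ".a" pins 192.168.2.88, ".b" pins 192.168.2.89, and any
# other id leaves the choice to nginx's ordinary round-robin.
pick_backend() {
    case "$1" in
        *.a) echo "192.168.2.88:8080" ;;
        *.b) echo "192.168.2.89:8080" ;;
        *)   echo "round-robin" ;;
    esac
}
```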