User guide for Netty 4.x

Preface


The Problem


Nowadays we use general purpose applications or libraries to communicate with each other. For example, we often use an HTTP client library to retrieve information from a web server and to invoke a remote procedure call via web services.


However, a general purpose protocol or its implementation sometimes does not scale very well. It is like we don't use a general purpose HTTP server to exchange huge files, e-mail messages, and near-realtime messages such as financial information and multiplayer game data. What's required is a highly optimized protocol implementation which is dedicated to a special purpose. For example, you might want to implement an HTTP server which is optimized for AJAX-based chat applications, media streaming, or large file transfer. You could even want to design and implement a whole new protocol which is precisely tailored to your needs.


Another inevitable case is when you have to deal with a legacy proprietary protocol to ensure the interoperability with an old system. What matters in this case is how quickly we can implement that protocol while not sacrificing the stability and performance of the resulting application.


The Solution

The Netty project is an effort to provide an asynchronous event-driven network application framework and tooling for the rapid development of maintainable high-performance · high-scalability protocol servers and clients.


In other words, Netty is a NIO client server framework which enables quick and easy development of network applications such as protocol servers and clients. It greatly simplifies and streamlines network programming such as TCP and UDP socket server development.


'Quick and easy' does not mean that a resulting application will suffer from a maintainability or a performance issue. Netty has been designed carefully with the experiences earned from the implementation of a lot of protocols such as FTP, SMTP, HTTP, and various binary and text-based legacy protocols. As a result, Netty has succeeded in finding a way to achieve ease of development, performance, stability, and flexibility without a compromise.


Some users might already have found other network application frameworks that claim to have the same advantage, and you might want to ask what makes Netty so different from them. The answer is the philosophy it is built on. Netty is designed to give you the most comfortable experience both in terms of the API and the implementation from day one. It is not something tangible, but you will realize that this philosophy will make your life much easier as you read this guide and play with Netty.


Getting Started

This chapter tours around the core constructs of Netty with simple examples to let you get started quickly. You will be able to write a client and a server on top of Netty right away when you are at the end of this chapter.

If you prefer a top-down approach in learning something, you might want to start from Chapter 2, Architectural Overview and get back here.


Before Getting Started

The minimum requirements to run the examples which are introduced in this chapter are only two: the latest version of Netty and JDK 1.7 or above. The latest version of Netty is available in the project download page. To download the right version of JDK, please refer to your preferred JDK vendor's web site.


As you read, you might have more questions about the classes introduced in this chapter. Please refer to the API reference whenever you want to know more about them. All class names in this document are linked to the online API reference for your convenience. Also, please don't hesitate to contact the Netty project community and let us know if there's any incorrect information, errors in grammar or typos, and if you have a good idea to improve the documentation.


Writing a Discard Server

The most simplistic protocol in the world is not 'Hello, World!' but DISCARD. It's a protocol which discards any received data without any response.


To implement the DISCARD protocol, the only thing you need to do is to ignore all received data. Let us start straight from the handler implementation, which handles I/O events generated by Netty.


package io.netty.example.discard;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

/**
 * Handles a server-side channel.
 */
public class DiscardServerHandler extends ChannelInboundHandlerAdapter { // (1)

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) { // (2)
        // Discard the received data silently.
        ((ByteBuf) msg).release(); // (3)
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) { // (4)
        // Close the connection when an exception is raised.
        cause.printStackTrace();
        ctx.close();
    }
}

 

  1. DiscardServerHandler extends ChannelInboundHandlerAdapter, which is an implementation of ChannelInboundHandler. ChannelInboundHandler provides various event handler methods that you can override. For now, it is just enough to extend ChannelInboundHandlerAdapter rather than to implement the handler interface by yourself.
  2. We override the channelRead() event handler method here. This method is called with the received message, whenever new data is received from a client. In this example, the type of the received message is ByteBuf.
  3. To implement the DISCARD protocol, the handler has to ignore the received message. ByteBuf is a reference-counted object which has to be released explicitly via the release() method. Please keep in mind that it is the handler's responsibility to release any reference-counted object passed to the handler. Usually, the channelRead() handler method is implemented like the following (an alternative that lets Netty release the message for you is sketched after this list):

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        try {
            // Do something with msg
        } finally {
            ReferenceCountUtil.release(msg);
        }
    }

     

  4. The exceptionCaught() event handler method is called with a Throwable when an exception was raised by Netty due to an I/O error or by a handler implementation due to the exception thrown while processing events. In most cases, the caught exception should be logged and its associated channel should be closed here, although the implementation of this method can be different depending on what you want to do to deal with an exceptional situation. For example, you might want to send a response message with an error code before closing the connection.

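If you would rather not manage the reference count yourself, Netty also provides SimpleChannelInboundHandler, which releases the received message automatically once your handler method returns. The following is only a sketch of that alternative; the class name AutoReleaseDiscardServerHandler is made up for illustration and is not part of the original example:

package io.netty.example.discard;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

/**
 * A sketch of a discard handler that lets Netty release the message for you.
 */
public class AutoReleaseDiscardServerHandler extends SimpleChannelInboundHandler<ByteBuf> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) {
        // Intentionally empty: the message is discarded, and SimpleChannelInboundHandler
        // releases msg after this method returns.
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}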

So far so good. We have implemented the first half of the DISCARD server. What's left now is to write the main() method which starts the server with the DiscardServerHandler.

目前为止一切都很好. 咱们已经实现了DISCARD服务器的一半了. 剩下的就是写一个main()方法来启动服务器.

package io.netty.example.discard;
    
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
    
/**
 * Discards any incoming data.
 */
public class DiscardServer {
    
    private int port;
    
    public DiscardServer(int port) {
        this.port = port;
    }
    
    public void run() throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup(); // (1)
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap(); // (2)
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class) // (3)
             .childHandler(new ChannelInitializer<SocketChannel>() { // (4)
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     ch.pipeline().addLast(new DiscardServerHandler());
                 }
             })
             .option(ChannelOption.SO_BACKLOG, 128)          // (5)
             .childOption(ChannelOption.SO_KEEPALIVE, true); // (6)
    
            // Bind and start to accept incoming connections.
            ChannelFuture f = b.bind(port).sync(); // (7)
    
            // Wait until the server socket is closed.
            // In this example, this does not happen, but you can do that to gracefully
            // shut down your server.
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
            bossGroup.shutdownGracefully();
        }
    }
    
    public static void main(String[] args) throws Exception {
        int port;
        if (args.length > 0) {
            port = Integer.parseInt(args[0]);
        } else {
            port = 8080;
        }
        new DiscardServer(port).run();
    }
}

 

  1. NioEventLoopGroup is a multithreaded event loop that handles I/O operations. Netty provides various EventLoopGroup implementations for different kinds of transports. We are implementing a server-side application in this example, and therefore two NioEventLoopGroups will be used. The first one, often called 'boss', accepts an incoming connection. The second one, often called 'worker', handles the traffic of the accepted connection once the boss accepts the connection and registers the accepted connection to the worker. How many Threads are used and how they are mapped to the created Channels depends on the EventLoopGroup implementation and may even be configurable via a constructor.
  2. ServerBootstrap is a helper class that sets up a server. You can set up the server using a Channel directly. However, please note that this is a tedious process, and you do not need to do that in most cases.
  3. Here, we specify to use the NioServerSocketChannel class which is used to instantiate a new Channel to accept incoming connections.
  4. The handler specified here will always be evaluated by a newly accepted Channel. The ChannelInitializer is a special handler that is purposed to help a user configure a new Channel. It is most likely that you want to configure the ChannelPipeline of the new Channel by adding some handlers such as DiscardServerHandler to implement your network application. As the application gets complicated, it is likely that you will add more handlers to the pipeline and extract this anonymous class into a top level class eventually.
  5. You can also set the parameters which are specific to the Channel implementation. We are writing a TCP/IP server, so we are allowed to set the socket options such as tcpNoDelay and keepAlive. Please refer to the apidocs of ChannelOption and the specific ChannelConfig implementations to get an overview about the supported ChannelOptions.
  6. Did you notice option() and childOption()? option() is for the NioServerSocketChannel that accepts incoming connections. childOption() is for the Channels accepted by the parent ServerChannel, which is NioServerSocketChannel in this case.
  7. We are ready to go now. What's left is to bind to the port and to start the server. Here, we bind to the port 8080 of all NICs (network interface cards) in the machine. You can now call the bind() method as many times as you want (with different bind addresses); a minimal sketch follows this list.
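As item 7 notes, bind() can be invoked more than once. A minimal sketch, reusing the ServerBootstrap b from the example above (the second port number, 8081, is an arbitrary choice):

ChannelFuture f1 = b.bind(8080).sync();
ChannelFuture f2 = b.bind(8081).sync(); // a second, independent server channel on another port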

Congratulations! You've just finished your first server on top of Netty.


Looking into the Received Data

Now that we have written our first server, we need to test if it really works. The easiest way to test it is to use the telnet command. For example, you could enter telnet localhost 8080 in the command line and type something.


However, can we say that the server is working fine? We cannot really know that because it is a discard server. You will not get any response at all. To prove it is really working, let us modify the server to print what it has received.


We already know that the channelRead() method is invoked whenever data is received. Let us put some code into the channelRead() method of the DiscardServerHandler:


@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    ByteBuf in = (ByteBuf) msg;
    try {
        while (in.isReadable()) { // (1)
            System.out.print((char) in.readByte());
            System.out.flush();
        }
    } finally {
        ReferenceCountUtil.release(msg); // (2)
    }
}

 

  1. This inefficient loop can actually be simplified to: System.out.println(in.toString(io.netty.util.CharsetUtil.US_ASCII))
  2. Alternatively, you could do in.release() here.

If you run the telnet command again, you will see the server prints what it has received.


The full source code of the discard server is located in the io.netty.example.discard package of the distribution.


Writing an Echo Server

So far, we have been consuming data without responding at all. A server, however, is usually supposed to respond to a request. Let us learn how to write a response message to a client by implementing the ECHO protocol, where any received data is sent back.


The only difference from the discard server we have implemented in the previous sections is that it sends the received data back instead of printing the received data out to the console. Therefore, it is enough again to modify the channelRead() method:


  @Override
  public void channelRead(ChannelHandlerContext ctx, Object msg) {
      ctx.write(msg); // (1)
      ctx.flush(); // (2)
  }

 

  1. A ChannelHandlerContext object provides various operations that enable you to trigger various I/O events and operations. Here, we invoke write(Object) to write the received message verbatim. Please note that we did not release the received message unlike we did in the DISCARD example. It is because Netty releases it for you when it is written out to the wire.
  2. ctx.write(Object) does not make the message written out to the wire. It is buffered internally, and then flushed out to the wire by ctx.flush(). Alternatively, you could call ctx.writeAndFlush(msg) for brevity.

If you run the telnet command again, you will see the server sends back whatever you have sent to it.


The full source code of the echo server is located in the io.netty.example.echo package of the distribution.


Writing a Time Server

The protocol to implement in this section is the TIME protocol. It is different from the previous examples in that it sends a message, which contains a 32-bit integer, without receiving any requests and closes the connection once the message is sent. In this example, you will learn how to construct and send a message, and to close the connection on completion.


Because we are going to ignore any received data but to send a message as soon as a connection is established, we cannot use the channelRead() method this time. Instead, we should override the channelActive() method. The following is the implementation:


package io.netty.example.time;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class TimeServerHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelActive(final ChannelHandlerContext ctx) { // (1)
        final ByteBuf time = ctx.alloc().buffer(4); // (2)
        time.writeInt((int) (System.currentTimeMillis() / 1000L + 2208988800L));
        
        final ChannelFuture f = ctx.writeAndFlush(time); // (3)
        f.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                assert f == future;
                ctx.close();
            }
        }); // (4)
    }
    
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}

 

  1. As explained, the channelActive() method will be invoked when a connection is established and ready to generate traffic. Let's write a 32-bit integer that represents the current time in this method.
  2. To send a new message, we need to allocate a new buffer which will contain the message. We are going to write a 32-bit integer, and therefore we need a ByteBuf whose capacity is at least 4 bytes. Get the current ByteBufAllocator via ChannelHandlerContext.alloc() and allocate a new buffer.
  3. As usual, we write the constructed message.

    But wait, where's the flip? Didn't we used to call java.nio.ByteBuffer.flip() before sending a message in NIO? ByteBuf does not have such a method because it has two pointers; one for read operations and the other for write operations. The writer index increases when you write something to a ByteBuf while the reader index does not change. The reader index and the writer index represent where the message starts and ends respectively.

    In contrast, NIO buffer does not provide a clean way to figure out where the message content starts and ends without calling the flip method. You will be in trouble when you forget to flip the buffer because nothing or incorrect data will be sent. Such an error does not happen in Netty because we have different pointers for different operation types. You will find it makes your life much easier as you get used to it -- a life without flipping out!

    Another point to note is that the ChannelHandlerContext.write() (and writeAndFlush()) method returns a ChannelFuture. A ChannelFuture represents an I/O operation which has not yet occurred. It means, any requested operation might not have been performed yet because all operations are asynchronous in Netty. For example, the following code might close the connection even before a message is sent:

    Channel ch = ...;
    ch.writeAndFlush(message);
    ch.close();

    Therefore, you need to call the close() method after the ChannelFuture, which was returned by the write() method, is complete; it notifies its listeners when the write operation has been done. Please note that close() also might not close the connection immediately, and it returns a ChannelFuture.

  4. How do we get notified when a write request is finished then? This is as simple as adding a ChannelFutureListener to the returned ChannelFuture. Here, we created a new anonymous ChannelFutureListener which closes the Channel when the operation is done.

    Alternatively, you could simplify the code using a pre-defined listener:

    f.addListener(ChannelFutureListener.CLOSE);

To test if our time server works as expected, you can use the UNIX rdate command:


$ rdate -o <port> -p <host>
where <port> is the port number you specified in the main() method and <host> is usually localhost.

Writing a Time Client

Unlike DISCARD and ECHO servers, we need a client for the TIME protocol because a human cannot translate a 32-bit binary data into a date on a calendar. In this section, we discuss how to make sure the server works correctly and learn how to write a client with Netty.

The biggest and only difference between a server and a client in Netty is that different Bootstrap and Channel implementations are used. Please take a look at the following code:


package io.netty.example.time;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

public class TimeClient {
    public static void main(String[] args) throws Exception {
        String host = args[0];
        int port = Integer.parseInt(args[1]);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        
        try {
            Bootstrap b = new Bootstrap(); // (1)
            b.group(workerGroup); // (2)
            b.channel(NioSocketChannel.class); // (3)
            b.option(ChannelOption.SO_KEEPALIVE, true); // (4)
            b.handler(new ChannelInitializer<SocketChannel>() {
                @Override
                public void initChannel(SocketChannel ch) throws Exception {
                    ch.pipeline().addLast(new TimeClientHandler());
                }
            });
            
            // Start the client.
            ChannelFuture f = b.connect(host, port).sync(); // (5)

            // Wait until the connection is closed.
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
        }
    }
}

 

  1. Bootstrap is similar to ServerBootstrap except that it's for non-server channels such as a client-side or connectionless channel.
  2. If you specify only one EventLoopGroup, it will be used both as a boss group and as a worker group. The boss group is not used for the client side though.
  3. Instead of NioServerSocketChannel, NioSocketChannel is being used to create a client-side Channel.
  4. Note that we do not use childOption() here unlike we did with ServerBootstrap because the client-side SocketChannel does not have a parent.
  5. We should call the connect() method instead of the bind() method.

As you can see, it is not really different from the server-side code. What about the ChannelHandler implementation? It should receive a 32-bit integer from the server, translate it into a human readable format, print the translated time, and close the connection:


package io.netty.example.time;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

import java.util.Date;

public class TimeClientHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf m = (ByteBuf) msg; // (1)
        try {
            long currentTimeMillis = (m.readUnsignedInt() - 2208988800L) * 1000L;
            System.out.println(new Date(currentTimeMillis));
            ctx.close();
        } finally {
            m.release();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}

 

  1. In TCP/IP, Netty reads the data sent from a peer into a ByteBuf.

It looks very simple and does not look any different from the server side example. However, this handler sometimes will refuse to work, raising an IndexOutOfBoundsException. We discuss why this happens in the next section.


Dealing with a Stream-based Transport

One Small Caveat of Socket Buffer

In a stream-based transport such as TCP/IP, received data is stored into a socket receive buffer. Unfortunately, the buffer of a stream-based transport is not a queue of packets but a queue of bytes. It means, even if you sent two messages as two independent packets, an operating system will not treat them as two messages but as just a bunch of bytes. Therefore, there is no guarantee that what you read is exactly what your remote peer wrote. For example, let us assume that the TCP/IP stack of an operating system has received three packets:


Three packets received as they were sent

Because of this general property of a stream-based protocol, there's a high chance of reading them in the following fragmented form in your application:


Three packets split and merged into four buffers

Therefore, a receiving part, regardless of whether it is server-side or client-side, should defrag the received data into one or more meaningful frames that could be easily understood by the application logic. In case of the example above, the received data should be framed like the following:


Four buffers defragged into three

The First Solution

Now let us get back to the TIME client example. We have the same problem here. A 32-bit integer is a very small amount of data, and it is not likely to be fragmented often. However, the problem is that it can be fragmented, and the possibility of fragmentation will increase as the traffic increases.


The simplistic solution is to create an internal cumulative buffer and wait until all 4 bytes are received into the internal buffer. The following is the modified TimeClientHandler implementation that fixes the problem:


package io.netty.example.time;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

import java.util.Date;

public class TimeClientHandler extends ChannelInboundHandlerAdapter {
    private ByteBuf buf;
    
    @Override
    public void handlerAdded(ChannelHandlerContext ctx) {
        buf = ctx.alloc().buffer(4); // (1)
    }
    
    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) {
        buf.release(); // (1)
        buf = null;
    }
    
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf m = (ByteBuf) msg;
        buf.writeBytes(m); // (2)
        m.release();
        
        if (buf.readableBytes() >= 4) { // (3)
            long currentTimeMillis = (buf.readInt() - 2208988800L) * 1000L;
            System.out.println(new Date(currentTimeMillis));
            ctx.close();
        }
    }
    
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}

 

  1. ChannelHandler has two life cycle listener methods: handlerAdded() and handlerRemoved(). You can perform an arbitrary (de)initialization task as long as it does not block for a long time.
  2. First, all received data should be cumulated into buf.
  3. And then, the handler must check if buf has enough data, 4 bytes in this example, and proceed to the actual business logic. Otherwise, Netty will call the channelRead() method again when more data arrives, and eventually all 4 bytes will be cumulated.

The Second Solution

Although the first solution has resolved the problem with the TIME client, the modified handler does not look that clean. Imagine a more complicated protocol which is composed of multiple fields such as a variable length field. Your ChannelInboundHandler implementation will become unmaintainable very quickly.


As you may have noticed, you can add more than one ChannelHandler to a ChannelPipeline, and therefore, you can split one monolithic ChannelHandler into multiple modular ones to reduce the complexity of your application. For example, you could split TimeClientHandler into two handlers:


  • TimeDecoder which deals with the fragmentation issue, and
  • the initial simple version of TimeClientHandler.

Fortunately, Netty provides an extensible class which helps you write the first one out of the box:


package io.netty.example.time;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;

import java.util.List;

public class TimeDecoder extends ByteToMessageDecoder { // (1)
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) { // (2)
        if (in.readableBytes() < 4) {
            return; // (3)
        }
        
        out.add(in.readBytes(4)); // (4)
    }
}

 

  1. ByteToMessageDecoder is an implementation of ChannelInboundHandler which makes it easy to deal with the fragmentation issue.
  2. ByteToMessageDecoder calls the decode() method with an internally maintained cumulative buffer whenever new data is received.
  3. decode() can decide to add nothing to out where there is not enough data in the cumulative buffer. ByteToMessageDecoder will call decode() again when there is more data received.
  4. If decode() adds an object to out, it means the decoder decoded a message successfully. ByteToMessageDecoder will discard the read part of the cumulative buffer. Please remember that you don't need to decode multiple messages. ByteToMessageDecoder will keep calling the decode() method until it adds nothing to out.

Now that we have another handler to insert into the ChannelPipeline, we should modify the ChannelInitializer implementation in the TimeClient:


b.handler(new ChannelInitializer<SocketChannel>() {
    @Override
    public void initChannel(SocketChannel ch) throws Exception {
        ch.pipeline().addLast(new TimeDecoder(), new TimeClientHandler());
    }
});

 

If you are an adventurous person, you might want to try the ReplayingDecoder which simplifies the decoder even more. You will need to consult the API reference for more information though.


public class TimeDecoder extends ReplayingDecoder<Void> {
    @Override
    protected void decode(
            ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        out.add(in.readBytes(4));
    }
}

 

Additionally, Netty provides out-of-the-box decoders which enable you to implement most protocols very easily and help you avoid ending up with a monolithic unmaintainable handler implementation. Please refer to the following packages for more detailed examples:

  • io.netty.example.factorial for a binary protocol, and
  • io.netty.example.telnet for a text line-based protocol.
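As an illustration of the idea only (this initializer is not one of the shipped examples, and TextLineHandler is a hypothetical application handler), a line-based text pipeline could be assembled from the bundled decoders like this:

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LineBasedFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.util.CharsetUtil;

public class TextLineInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    public void initChannel(SocketChannel ch) throws Exception {
        ch.pipeline().addLast(
                new LineBasedFrameDecoder(8192),        // frames the byte stream on line endings
                new StringDecoder(CharsetUtil.UTF_8),   // turns each frame into a String
                new TextLineHandler());                 // hypothetical handler that receives Strings
    }
}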


Speaking in POJO instead of ByteBuf

All the examples we have reviewed so far used a ByteBuf as a primary data structure of a protocol message. In this section, we will improve the TIME protocol client and server example to use a POJO instead of a ByteBuf.


The advantage of using a POJO in your ChannelHandlers is obvious; your handler becomes more maintainable and reusable by separating the code which extracts information from ByteBuf out from the handler. In the TIME client and server examples, we read only one 32-bit integer and it is not a major issue to use ByteBuf directly. However, you will find it is necessary to make the separation as you implement a real world protocol.


First, let us define a new type called UnixTime.


package io.netty.example.time;

import java.util.Date;

public class UnixTime {

    private final int value;
    
    public UnixTime() {
        this((int) (System.currentTimeMillis() / 1000L + 2208988800L));
    }
    
    public UnixTime(int value) {
        this.value = value;
    }
        
    public int value() {
        return value;
    }
        
    @Override
    public String toString() {
        return new Date((value() - 2208988800L) * 1000L).toString();
    }
}

 

We can now revise the TimeDecoder to produce a UnixTime instead of a ByteBuf.


@Override
protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
    if (in.readableBytes() < 4) {
        return;
    }

    out.add(new UnixTime(in.readInt()));
}

 

With the updated decoder, the TimeClientHandler does not use ByteBuf anymore:


@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    UnixTime m = (UnixTime) msg;
    System.out.println(m);
    ctx.close();
}

 

Much simpler and elegant, right? The same technique can be applied on the server side. Let us update the TimeServerHandler first this time:


@Override
public void channelActive(ChannelHandlerContext ctx) {
    ChannelFuture f = ctx.writeAndFlush(new UnixTime());
    f.addListener(ChannelFutureListener.CLOSE);
}

 

Now, the only missing piece is an encoder, which is an implementation of ChannelOutboundHandler that translates a UnixTime back into a ByteBuf. It's much simpler than writing a decoder because there's no need to deal with packet fragmentation and assembly when encoding a message.


package io.netty.example.time;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.ChannelPromise;

public class TimeEncoder extends ChannelOutboundHandlerAdapter {
    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        UnixTime m = (UnixTime) msg;
        ByteBuf encoded = ctx.alloc().buffer(4);
        encoded.writeInt(m.value());
        ctx.write(encoded, promise); // (1)
    }
}

 

  1. There are quite a few important things to notice in this single line.

    First, we pass the original ChannelPromise as-is so that Netty marks it as success or failure when the encoded data is actually written out to the wire.

    Second, we did not call ctx.flush(). There is a separate handler method void flush(ChannelHandlerContext ctx) which is purposed to override the flush() operation.


To simplify even further, you can make use of MessageToByteEncoder:


public class TimeEncoder extends MessageToByteEncoder<UnixTime> {
    @Override
    protected void encode(ChannelHandlerContext ctx, UnixTime msg, ByteBuf out) {
        out.writeInt(msg.value());
    }
}

 

The last task left is to insert a TimeEncoder into the ChannelPipeline on the server side, and it is left as a trivial exercise.

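If you want to check your answer to that exercise, one possible solution is sketched below, assuming the server is bootstrapped with a ChannelInitializer in the same way as the earlier DiscardServer example:

b.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    public void initChannel(SocketChannel ch) throws Exception {
        ch.pipeline().addLast(new TimeEncoder(), new TimeServerHandler());
    }
});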

Shutting Down Your Application

Shutting down a Netty application is usually as simple as shutting down all EventLoopGroups you created via shutdownGracefully(). It returns a Future that notifies you when the EventLoopGroup has been terminated completely and all Channels that belong to the group have been closed.

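For example, a minimal shutdown sequence for the DiscardServer above might look like the following sketch, which blocks until both groups have fully terminated:

// shutdownGracefully() returns an io.netty.util.concurrent.Future that completes once the
// group has terminated and all of its Channels have been closed.
io.netty.util.concurrent.Future<?> bossFuture = bossGroup.shutdownGracefully();
io.netty.util.concurrent.Future<?> workerFuture = workerGroup.shutdownGracefully();

bossFuture.syncUninterruptibly();   // wait for the boss group to finish shutting down
workerFuture.syncUninterruptibly(); // wait for the worker group to finish shutting down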

Summary

In this chapter, we had a quick tour of Netty with a demonstration on how to write a fully working network application on top of Netty.

There is more detailed information about Netty in the upcoming chapters. We also encourage you to review the Netty examples in the io.netty.example package.

Please also note that the community is always waiting for your questions and ideas to help you and keep improving Netty and its documentation based on your feedback.

