Optimizing Java Screen Capture

Because I need to capture the screen in Java and then pass the image to OpenCV for processing, I take the screenshot with the following code:

Robot robot = new Robot();
BufferedImage screenCapture = robot.createScreenCapture(new Rectangle(0, 0, 1920, 1080));
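To see how the cost varies with the capture region, the call can be timed for a couple of region sizes. A minimal sketch (the `areaRatio` helper and the headless guard are my additions; capture time should scale roughly with the pixel ratio):

```java
import java.awt.GraphicsEnvironment;
import java.awt.Rectangle;
import java.awt.Robot;

public class CaptureTiming {
    // Ratio of pixel counts between two regions; capture time should
    // scale roughly with this ratio.
    static double areaRatio(Rectangle a, Rectangle b) {
        return (double) ((long) a.width * a.height)
                / ((long) b.width * b.height);
    }

    public static void main(String[] args) throws Exception {
        Rectangle full = new Rectangle(0, 0, 1920, 1080);
        Rectangle quarter = new Rectangle(0, 0, 960, 540);
        System.out.println("pixel ratio: " + areaRatio(full, quarter)); // prints: pixel ratio: 4.0

        if (GraphicsEnvironment.isHeadless()) {
            System.out.println("headless environment, skipping the actual capture");
            return;
        }
        Robot robot = new Robot();
        for (Rectangle r : new Rectangle[]{full, quarter}) {
            long start = System.nanoTime();
            robot.createScreenCapture(r);
            System.out.println(r.width + "x" + r.height + ": "
                    + (System.nanoTime() - start) / 1_000_000 + " ms");
        }
    }
}
```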

On my machine, however, a full-screen capture takes about 50 ms, and when I shrink the capture region the time drops roughly in proportion to the area, which made me want to find out why. Here is the JDK method that does the actual work (Robot.createCompatibleImage, called internally by createScreenCapture):

private synchronized BufferedImage[]
            createCompatibleImage(Rectangle screenRect, boolean isHiDPI) {

        checkScreenCaptureAllowed();

        checkValidRect(screenRect);

        BufferedImage lowResolutionImage;
        BufferedImage highResolutionImage;
        DataBufferInt buffer;
        WritableRaster raster;
        BufferedImage[] imageArray;
        // The capture API returns pixels as an int[], so this determines which bits of each int hold the R, G, and B values
        if (screenCapCM == null) {
            /*
             * Fix for 4285201
             * Create a DirectColorModel equivalent to the default RGB ColorModel,
             * except with no Alpha component.
             */

            screenCapCM = new DirectColorModel(24,
                    /* red mask */ 0x00FF0000,
                    /* green mask */ 0x0000FF00,
                    /* blue mask */ 0x000000FF);
        }

        int[] bandmasks = new int[3];
        bandmasks[0] = screenCapCM.getRedMask();
        bandmasks[1] = screenCapCM.getGreenMask();
        bandmasks[2] = screenCapCM.getBlueMask();

        // My guess: this flushes pending operations, waiting for the next frame before capturing
        Toolkit.getDefaultToolkit().sync();

        GraphicsConfiguration gc = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice()
                .getDefaultConfiguration();
        gc = SunGraphicsEnvironment.getGraphicsConfigurationAtPoint(
                gc, screenRect.getCenterX(), screenRect.getCenterY());

        AffineTransform tx = gc.getDefaultTransform();
        double uiScaleX = tx.getScaleX();
        double uiScaleY = tx.getScaleY();
        int[] pixels;
        // On my machine uiScaleX and uiScaleY are both 1.x, so the else branch is taken
        if (uiScaleX == 1 && uiScaleY == 1) {

            pixels = peer.getRGBPixels(screenRect);
            buffer = new DataBufferInt(pixels, pixels.length);

            bandmasks[0] = screenCapCM.getRedMask();
            bandmasks[1] = screenCapCM.getGreenMask();
            bandmasks[2] = screenCapCM.getBlueMask();

            raster = Raster.createPackedRaster(buffer, screenRect.width,
                    screenRect.height, screenRect.width, bandmasks, null);
            SunWritableRaster.makeTrackable(buffer);

            highResolutionImage = new BufferedImage(screenCapCM, raster,
                    false, null);
            imageArray = new BufferedImage[1];
            imageArray[0] = highResolutionImage;

        } else {
            Rectangle scaledRect;
            if (peer.useAbsoluteCoordinates()) {
                scaledRect = toDeviceSpaceAbs(gc, screenRect.x,
                        screenRect.y, screenRect.width, screenRect.height);
            } else {
                scaledRect = toDeviceSpace(gc, screenRect.x,
                        screenRect.y, screenRect.width, screenRect.height);
            }
            // The native screenshot call happens here; this is one of the expensive steps
            pixels = peer.getRGBPixels(scaledRect);
            // Build the objects that interpret the pixel data in pixels
            buffer = new DataBufferInt(pixels, pixels.length);
            raster = Raster.createPackedRaster(buffer, scaledRect.width,
                    scaledRect.height, scaledRect.width, bandmasks, null);
            SunWritableRaster.makeTrackable(buffer);

            highResolutionImage = new BufferedImage(screenCapCM, raster,
                    false, null);


            // Roughly: generate a low-resolution image from the high-resolution one; but drawImage is itself fairly expensive
            lowResolutionImage = new BufferedImage(screenRect.width,
                    screenRect.height, highResolutionImage.getType());
            Graphics2D g = lowResolutionImage.createGraphics();
            g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                    RenderingHints.VALUE_INTERPOLATION_BILINEAR);
            g.setRenderingHint(RenderingHints.KEY_RENDERING,
                    RenderingHints.VALUE_RENDER_QUALITY);
            g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                    RenderingHints.VALUE_ANTIALIAS_ON);
            g.drawImage(highResolutionImage, 0, 0,
                    screenRect.width, screenRect.height,
                    0, 0, scaledRect.width, scaledRect.height, null);
            g.dispose();

            if(!isHiDPI) {
                imageArray = new BufferedImage[1];
                imageArray[0] = lowResolutionImage;
            } else {
                imageArray = new BufferedImage[2];
                imageArray[0] = lowResolutionImage;
                imageArray[1] = highResolutionImage;
            }

        }

        return imageArray;
    }

I timed pixels = peer.getRGBPixels(scaledRect); on its own: it takes 26 ms, which means the remaining operations are where the optimization potential lies.
My first idea was to check whether OpenCV has something that stores a pixel's RGB values in a single int, but it apparently does not. So I tried parsing pixels by hand and extracting the R, G, B values myself:

// Capture
pixels = peer.getRGBPixels(screenRect);
// Parse the pixels
int length = screenRect.width * screenRect.height;
byte[] imgBytes = new byte[length * 3];
int byteIndex = 0;
for (int i = 0, pixel = 0; i < length; i++) {
    pixel = pixels[i];
    // Each pixel is packed in RGB order (0x00RRGGBB), but OpenCV defaults to BGR
    imgBytes[byteIndex++] = (byte) (pixel);
    pixel = pixel >> 8;
    imgBytes[byteIndex++] = (byte) (pixel);
    imgBytes[byteIndex++] = (byte) (pixel >> 8);
}

In my tests this parsing takes only 3–4 ms, and the resulting byte[] can then be handed straight to a Mat. End to end, capturing one frame and passing it to OpenCV takes about 30 ms.
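The byte-order logic above can be sanity-checked with a known pixel value. A minimal sketch (the `toBGR` helper is mine, extracted from the loop above):

```java
public class PixelUnpack {
    // Unpack one 0x00RRGGBB int into three bytes in OpenCV's BGR order.
    static byte[] toBGR(int pixel) {
        byte[] bgr = new byte[3];
        bgr[0] = (byte) pixel;          // blue  (lowest byte)
        pixel >>= 8;
        bgr[1] = (byte) pixel;          // green
        bgr[2] = (byte) (pixel >> 8);   // red   (highest of the three)
        return bgr;
    }

    public static void main(String[] args) {
        int pixel = 0x00123456;                 // R=0x12, G=0x34, B=0x56
        byte[] bgr = toBGR(pixel);
        System.out.printf("B=%02x G=%02x R=%02x%n", bgr[0], bgr[1], bgr[2]);
        // prints: B=56 G=34 R=12
    }
}
```

In the OpenCV Java bindings, a byte[] laid out this way can be copied into a 3-channel image with something like `mat.put(0, 0, imgBytes)` on a `CV_8UC3` Mat of the matching width and height.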

Finally, I tried it with the screen content changing continuously: one capture plus hand-off to OpenCV takes about 35 ms.
