Motion Detection
Using motion detection for close-range recognition is one of the most interesting approaches. The basic principle behind motion detection is to establish a baseline RGB image and then compare every frame captured by the camera against that baseline. If a difference is found, we can conclude that something has entered the camera's field of view.
It is not hard to see that this strategy has flaws. In real life, objects move. In a room, someone may nudge a piece of furniture. Outdoors, a car may pull away, or the wind may set small trees swaying. In these scenarios there is no continuous movement, yet the state of the scene has still changed, and under the strategy above the system would reach the wrong conclusion. In such cases we therefore need to refresh the baseline image from time to time.
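To make the idea concrete, here is a minimal sketch of this naive scheme, written against the Emgu Image<TColor, TDepth> type introduced below; the field and method names (_baseline, DetectMotion, UpdateBaseline) and both thresholds are hypothetical choices for illustration, not part of the sample that follows.

private Image<Gray, byte> _baseline;

private bool DetectMotion(Image<Bgr, byte> currentFrame)
{
    Image<Gray, byte> gray = currentFrame.Convert<Gray, byte>();
    if (_baseline == null)
    {
        _baseline = gray;   // the first frame becomes the reference image
        return false;
    }

    // mark pixels that differ from the reference by more than the pixel threshold
    Image<Gray, byte> diff = _baseline.AbsDiff(gray)
                                      .ThresholdBinary(new Gray(30), new Gray(255));

    int changedPixels = diff.CountNonzero()[0];
    return changedPixels > 500;   // area threshold: tune for scene and resolution
}

// Called occasionally (e.g., from a timer) so that moved furniture, parked cars,
// and similar changes become part of the new reference image.
private void UpdateBaseline(Image<Bgr, byte> currentFrame)
{
    _baseline = currentFrame.Convert<Gray, byte>();
}

Calling UpdateBaseline periodically is the "refresh the baseline from time to time" step; the Emgu background-model class used later in this article does essentially this bookkeeping for us.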
The official website of the EmguCV project is http://www.emgu.com/wiki/index.php/Main_Page; the actual source code and installers are hosted on SourceForge (http://sourceforge.net/projects/emgucv/files/). The Emgu version used in this article is 2.3.0. Installing Emgu is simple and straightforward: just run the downloaded executable. One thing to note is that EmguCV seems to run best on the x86 architecture. If you are developing on a 64-bit machine, it is best to set the target platform of the project that uses the Emgu libraries to x86, as shown in the figure below (alternatively, you can download the source from the official site and build it for x64 yourself).
To use the Emgu library, add references to the following three DLLs: Emgu.CV, Emgu.CV.UI, and Emgu.Util.
Because Emgu is a .NET wrapper around a C++ library, a number of additional unmanaged DLLs must be deployed alongside the managed DLLs so that Emgu can delegate work to them. Emgu looks for these DLLs in the application's execution directory: bin/Debug for a debug build and bin/Release for a release build. There are 11 unmanaged C++ DLLs to copy: opencv_calib3d231.dll, opencv_contrib231.dll, opencv_core231.dll, opencv_features2d231.dll, opencv_ffmpeg.dll, opencv_highgui231.dll, opencv_imgproc231.dll, opencv_legacy231.dll, opencv_ml231.dll, opencv_objdetect231.dll, and opencv_video231.dll. They can all be found in the Emgu installation directory; for convenience, you can simply copy every DLL whose name starts with opencv_.
Our extension-method library needs a few additional helpers. As discussed in the previous article, every imaging library has its own core image type that it understands. In Emgu, that core type is the generic Image<TColor, TDepth> class, which implements the Emgu.CV.IImage interface. The code below adds extension methods that convert between the image formats we are already familiar with and Emgu's own format. Create a new static class in a file named EmguExtensions.cs and put it in the ImageManipulationExtensionMethods namespace, the same namespace as our earlier ImageExtensions class, so that all of the extension methods live in a single namespace. The class handles conversions between three image types: from Microsoft.Kinect.ColorImageFrame to Emgu.CV.Image<TColor, TDepth>, from System.Drawing.Bitmap to Emgu.CV.Image<TColor, TDepth>, and from Emgu.CV.Image<TColor, TDepth> to System.Windows.Media.Imaging.BitmapSource.
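As a quick illustration of how these conversions are meant to fit together, here is a hedged sketch; ShowFrame is a hypothetical helper, while _kinectSensor and rgbImage are the members declared in the MainWindow class shown later in this article.

private void ShowFrame()
{
    using (ColorImageFrame frame = _kinectSensor.ColorStream.OpenNextFrame(200))
    {
        if (frame == null)
            return;

        // Kinect frame -> Emgu image, so that OpenCV-style processing can run on it
        using (Image<Bgr, byte> cvImage = frame.ToOpenCVImage<Bgr, byte>())
        {
            // ... Emgu image processing would go here ...

            // Emgu image -> WPF BitmapSource, for display in the Image control
            rgbImage.Source = cvImage.ToBitmapSource();
        }
    }
}

The Bitmap overload of ToOpenCVImage plays the same role when the starting point is a System.Drawing.Bitmap rather than a ColorImageFrame.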
To implement motion detection with the Emgu library, we use the polling model discussed in an earlier article rather than the event-based mechanism to acquire frames. Image processing is very demanding on CPU and memory, and we want to be able to control how often the processing runs, which only polling allows. Note that this example is only meant to demonstrate how motion detection works, so the code favors readability over performance; once you understand it, you can improve it yourself.
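To illustrate that point, here is a variation of the polling loop (not part of the sample below) in which a hypothetical Thread.Sleep delay caps how often the expensive processing runs; the actual sample simply restarts the worker as soon as a pass completes.

BackgroundWorker bw = new BackgroundWorker();
bw.DoWork += (s, e) =>
{
    Pulse();                              // pull one frame and run the expensive processing
    System.Threading.Thread.Sleep(100);   // hypothetical throttle: at most ~10 processed frames per second
};
bw.RunWorkerCompleted += (s, e) => bw.RunWorkerAsync();  // start the next pass as soon as this one ends
bw.RunWorkerAsync();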
The color image stream serves both to update the source of the Image control and to supply the frames for motion detection; as the heading above the full listing notes, only RGB data is used in this example. As discussed in earlier articles, the CompositionTarget.Rendering event is normally what drives polling of the color stream. Here, however, we create a BackgroundWorker to do the processing. The BackgroundWorker calls the Pulse method, which pulls a frame from the color stream and performs the computationally expensive work; when one pass completes, it pulls the next frame and processes it in turn. The code declares two Emgu member fields, a MotionHistory and an IBGFGDetector<Bgr>. Used together, they maintain a continuously updated baseline image and compare new frames against it to detect motion.
KinectSensor _kinectSensor;
private MotionHistory _motionHistory;
private IBGFGDetector<Bgr> _forgroundDetector;
bool _isTracking = false;

public MainWindow()
{
    InitializeComponent();

    this.Unloaded += delegate
    {
        _kinectSensor.ColorStream.Disable();
    };

    this.Loaded += delegate
    {
        _motionHistory = new MotionHistory(
            1.0,   //in seconds, the duration of motion history you want to keep
            0.05,  //in seconds, parameter for cvCalcMotionGradient
            0.5);  //in seconds, parameter for cvCalcMotionGradient

        _kinectSensor = KinectSensor.KinectSensors[0];
        _kinectSensor.ColorStream.Enable();
        _kinectSensor.Start();

        BackgroundWorker bw = new BackgroundWorker();
        bw.DoWork += (a, b) => Pulse();
        bw.RunWorkerCompleted += (c, d) => bw.RunWorkerAsync();
        bw.RunWorkerAsync();
    };
}
The code below is the heart of the image processing that performs the motion detection. It is adapted, with a few modifications, from Emgu's sample code. The first task in the Pulse method is to convert the ColorImageFrame produced by the color stream into an image type that Emgu can work with. The _forgroundDetector is used to update the _motionHistory object, which holds the continuously updated baseline image, and also to compare each new frame against that baseline to decide whether anything has changed. When the frame pulled from the color stream differs from the baseline, an image is created that captures the difference between the two, and that difference is broken down into a collection of smaller motion components. We iterate over these motion components to see whether any of them exceeds the motion threshold we have set. If the motion is significant, we display the video frame in the UI; otherwise we display nothing.
private void Pulse()
{
    using (ColorImageFrame imageFrame = _kinectSensor.ColorStream.OpenNextFrame(200))
    {
        if (imageFrame == null)
            return;

        using (Image<Bgr, byte> image = imageFrame.ToOpenCVImage<Bgr, byte>())
        using (MemStorage storage = new MemStorage()) //create storage for motion components
        {
            if (_forgroundDetector == null)
            {
                _forgroundDetector = new BGStatModel<Bgr>(image
                    , Emgu.CV.CvEnum.BG_STAT_TYPE.GAUSSIAN_BG_MODEL);
            }

            _forgroundDetector.Update(image);

            //update the motion history
            _motionHistory.Update(_forgroundDetector.ForgroundMask);

            //get a copy of the motion mask and enhance its color
            double[] minValues, maxValues;
            System.Drawing.Point[] minLoc, maxLoc;
            _motionHistory.Mask.MinMax(out minValues, out maxValues, out minLoc, out maxLoc);
            Image<Gray, Byte> motionMask = _motionHistory.Mask.Mul(255.0 / maxValues[0]);

            //create the motion image
            Image<Bgr, Byte> motionImage = new Image<Bgr, byte>(motionMask.Size);
            motionImage[0] = motionMask;

            //Threshold to define a motion area
            //reduce the value to detect smaller motion
            double minArea = 100;

            storage.Clear(); //clear the storage
            Seq<MCvConnectedComp> motionComponents = _motionHistory.GetMotionComponents(storage);
            bool isMotionDetected = false;

            //iterate through each of the motion component
            for (int c = 0; c < motionComponents.Count(); c++)
            {
                MCvConnectedComp comp = motionComponents[c];
                //reject the components that have small area;
                if (comp.area < minArea)
                    continue;

                OnDetection();
                isMotionDetected = true;
                break;
            }

            if (isMotionDetected == false)
            {
                OnDetectionStopped();
                this.Dispatcher.Invoke(new Action(() => rgbImage.Source = null));
                return;
            }

            this.Dispatcher.Invoke(
                new Action(() => rgbImage.Source = imageFrame.ToBitmapSource())
            );
        }
    }
}

private void OnDetection()
{
    if (!_isTracking)
        _isTracking = true;
}

private void OnDetectionStopped()
{
    _isTracking = false;
}
Motion template: motion detection (using RGB data only). The complete XAML and code for the sample follows.
<Window x:Class="KinectMovementDetection.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="400" Width="525">
    <Grid>
        <Image Name="rgbImage" Stretch="Fill"/>
    </Grid>
</Window>
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Drawing;
using Microsoft.Kinect;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
using Emgu.CV;
using Emgu.CV.Structure;
using System.Windows;
using System.IO;

namespace ImageManipulationExtensionMethods
{
    public static class EmguImageExtensions
    {
        public static Image<TColor, TDepth> ToOpenCVImage<TColor, TDepth>(this ColorImageFrame image)
            where TColor : struct, IColor
            where TDepth : new()
        {
            var bitmap = image.ToBitmap();
            return new Image<TColor, TDepth>(bitmap);
        }

        public static Image<TColor, TDepth> ToOpenCVImage<TColor, TDepth>(this Bitmap bitmap)
            where TColor : struct, IColor
            where TDepth : new()
        {
            return new Image<TColor, TDepth>(bitmap);
        }

        public static System.Windows.Media.Imaging.BitmapSource ToBitmapSource(this IImage image)
        {
            var source = image.Bitmap.ToBitmapSource();
            return source;
        }
    }
}
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using Emgu.CV;
using Emgu.CV.Structure;
using Emgu.CV.VideoSurveillance;
using Microsoft.Kinect;
using System.ComponentModel;
using ImageManipulationExtensionMethods;

namespace KinectMovementDetection
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        KinectSensor _kinectSensor;
        private MotionHistory _motionHistory;          // motion history (the motion template)
        private IBGFGDetector<Bgr> _forgroundDetector;
        bool _isTracking = false;

        public MainWindow()
        {
            InitializeComponent();

            this.Unloaded += delegate
            {
                _kinectSensor.ColorStream.Disable();
            };

            this.Loaded += delegate
            {
                _motionHistory = new MotionHistory(
                    1.0,   //in seconds, the duration of motion history you want to keep
                    0.05,  //in seconds, parameter for cvCalcMotionGradient
                    0.5);  //in seconds, parameter for cvCalcMotionGradient

                _kinectSensor = KinectSensor.KinectSensors[0];
                _kinectSensor.ColorStream.Enable();
                _kinectSensor.Start();

                BackgroundWorker bw = new BackgroundWorker(); // runs the processing on a separate thread
                bw.DoWork += (a, b) => Pulse();
                bw.RunWorkerCompleted += (c, d) => bw.RunWorkerAsync();
                bw.RunWorkerAsync();
            };
        }

        private void Pulse()
        {
            using (ColorImageFrame imageFrame = _kinectSensor.ColorStream.OpenNextFrame(200))
            {
                if (imageFrame == null)
                    return;

                using (Image<Bgr, byte> image = imageFrame.ToOpenCVImage<Bgr, byte>())
                using (MemStorage storage = new MemStorage()) //create storage for motion components
                {
                    if (_forgroundDetector == null)
                    {
                        _forgroundDetector = new BGStatModel<Bgr>(image
                            , Emgu.CV.CvEnum.BG_STAT_TYPE.GAUSSIAN_BG_MODEL);
                    }

                    _forgroundDetector.Update(image);

                    //update the motion history
                    _motionHistory.Update(_forgroundDetector.ForegroundMask);

                    //get a copy of the motion mask and enhance its color
                    double[] minValues, maxValues;
                    System.Drawing.Point[] minLoc, maxLoc;
                    _motionHistory.Mask.MinMax(out minValues, out maxValues, out minLoc, out maxLoc);
                    Image<Gray, Byte> motionMask = _motionHistory.Mask.Mul(255.0 / maxValues[0]);

                    //create the motion image
                    Image<Bgr, Byte> motionImage = new Image<Bgr, byte>(motionMask.Size);
                    motionImage[0] = motionMask;

                    //Threshold to define a motion area
                    //reduce the value to detect smaller motion
                    double minArea = 100;

                    storage.Clear(); //clear the storage
                    Seq<MCvConnectedComp> motionComponents = _motionHistory.GetMotionComponents(storage);
                    bool isMotionDetected = false;

                    //iterate through each of the motion component
                    for (int c = 0; c < motionComponents.Count(); c++)
                    {
                        MCvConnectedComp comp = motionComponents[c];
                        //reject the components that have small area;
                        if (comp.area < minArea)
                            continue;

                        OnDetection();
                        isMotionDetected = true;
                        break;
                    }

                    if (isMotionDetected == false)
                    {
                        OnDetectionStopped();
                        this.Dispatcher.Invoke(new Action(() => rgbImage.Source = null));
                        return;
                    }

                    this.Dispatcher.Invoke(
                        new Action(() => rgbImage.Source = imageFrame.ToBitmapSource())
                    );
                }
            }
        }

        private void OnDetection()
        {
            if (!_isTracking)
                _isTracking = true;
        }

        private void OnDetectionStopped()
        {
            _isTracking = false;
        }
    }
}
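The listing only records the detection state in the _isTracking field and never consumes it. As a small, hypothetical extension that is not part of the original sample, the two handlers could surface that state in the window title; since they are invoked from the background worker thread, the UI update has to be marshaled through the Dispatcher.

private void OnDetection()
{
    if (!_isTracking)
    {
        _isTracking = true;
        // marshal the UI change back to the dispatcher thread
        this.Dispatcher.Invoke(new Action(() => this.Title = "MainWindow - motion detected"));
    }
}

private void OnDetectionStopped()
{
    if (_isTracking)
    {
        _isTracking = false;
        this.Dispatcher.Invoke(new Action(() => this.Title = "MainWindow"));
    }
}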