"Stride" can be translated as 跨距 (pitch).
Stride is the amount of memory each row of pixels occupies. As the figure below shows, because of memory alignment (or some other reason), the space a row of pixels occupies in memory is not necessarily the image width.
"Plane" usually appears in the form of a luma plane and a chroma plane, which are simply the luma layer and the chroma layer; it is like RGB, which needs three planes for storage.
Recently I have been working on a Hi3521 project in which I hit a key technical problem. Our image-processing program needs RGB888 image files, but the video stream I get from the Hi3521 consists of YUV420SP frames. So the task is to convert one YUV420SP frame into an RGB888 image; what I really want is the RGB888 pixel data. In YUV420SP the Y component is stored by itself and the U and V components are stored interleaved. So far so good, but when I printed the YUV420SP frame info I found that the stride of this 720*576 frame was 768. I was puzzled; even now that I have successfully converted a YUV420SP frame into a BMP bitmap, I still do not know what the extra 768-720=48 bytes are. At first, ignoring the stride, I read the components straight from the YUV plane addresses, computed the RGB data, and saved it as a BMP, but the BMP came out completely garbled. Where did it go wrong? It had to be the stride. The stride is always greater than or equal to the frame width and is a multiple of 4; there are many multiples of 4 between 720 and 768, so why 768? Fine. Since rows are padded with zeros when the width does not meet the alignment, I simply assumed those 48 bytes sit at the end of each row, which means the addresses must be offset when reading the YUV components. I tried it, and the BMP was saved correctly, looking just like a captured snapshot. The remaining technical details are well known; I know no fewer than three formulas for converting YUV to RGB.
The point of writing this note is exactly this stride issue: when converting video frames such as YUV420P and YUV420SP, be sure to take the stride parameter into account.
When a video image is stored in memory, the memory buffer might contain extra padding bytes after each row of pixels. The padding bytes affect how the image is stored in memory, but do not affect how the image is displayed.
The stride is the number of bytes from one row of pixels in memory to the next row of pixels in memory. Stride is also called pitch. If padding bytes are present, the stride is wider than the width of the image, as shown in the following illustration.
Two buffers that contain video frames with equal dimensions can have two different strides. If you process a video image, you must take the stride into account.
In addition, there are two ways that an image can be arranged in memory. In a top-down image, the top row of pixels in the image appears first in memory. In a bottom-up image, the last row of pixels appears first in memory. The following illustration shows the difference between a top-down image and a bottom-up image.
A bottom-up image has a negative stride, because stride is defined as the number of bytes needed to move down a row of pixels, relative to the displayed image. YUV images should always be top-down, and any image that is contained in a Direct3D surface must be top-down. RGB images in system memory are usually bottom-up.
Video transforms in particular need to handle buffers with mismatched strides, because the input buffer might not match the output buffer. For example, suppose that you want to convert a source image and write the result to a destination image. Assume that both images have the same width and height, but might not have the same pixel format or the same image stride.
The following example code shows a generalized approach for writing this kind of function. This is not a complete working example, because it abstracts many of the specific details.
Below is a conversion example through which this can be understood well.
Recently I got an LCD panel that outputs images in the NTSC interlaced standard and takes the UYVY variant of YUV 4:2:2 as its color format, while a certain video decoder outputs the I420 variant of YUV 4:2:0, so a conversion between the two is required. I420 is stored in planar form, while UYVY is stored in packed form. The conversion is not complicated; the principle is shown in Figure 1.
Every color component in Figure 2 occupies one byte. A stored sequence such as U0Y0V0Y1 actually represents two pixels and takes 4 bytes in total, so each pixel occupies 2 bytes on average. The rationale behind the YUV color format is that the HVS (Human Visual System) is most sensitive to luminance and less sensitive to chrominance, so the chroma components of each row of pixels are subsampled to reduce the required storage. The packed YUV 4:2:2 format takes 2/3 of the storage that YUV 4:4:4 occupies. For comparison, with YUV 4:4:4 every pixel needs all three components, i.e. 3 bytes per pixel.
Code implementation
void rv_csp_i420_uyvy(uint8_t *y_plane,  // Y plane of I420
                      uint8_t *u_plane,  // U plane of I420
                      uint8_t *v_plane,  // V plane of I420
                      int y_stride,      // Y stride of I420, in pixels
                      int uv_stride,     // U and V stride of I420, in pixels
                      uint8_t *image,    // output UYVY image
                      int width,         // image width
                      int height)        // image height
{
    int row;
    int col;
    uint8_t *pImg = image;

    for (row = 0; row < height; row = row + 1)
    {
        for (col = 0; col < width; col = col + 2)
        {
            pImg[0] = u_plane[row/2 * uv_stride + col/2];
            pImg[1] = y_plane[row * y_stride + col];
            pImg[2] = v_plane[row/2 * uv_stride + col/2];
            pImg[3] = y_plane[row * y_stride + col + 1];
            pImg += 4;
        }
    }
}
The code seems to have a small problem: it does not take the stride of the YUV 4:2:2 output into account when writing, but it already explains the principle very clearly.