When encoding video on iOS, a project usually needs only one encoder, but special requirements sometimes call for two encoders working at the same time. This example implements an encoder class: specifying a different encoder enum value is enough to produce the encoder you need, and two encoders can work side by side.

On iOS, hardware video encoding is done with the VideoToolbox framework, which supports both H.264 and H.265 (HEVC) encoders.

Software encoding: encoding performed by the CPU.

Hardware encoding: encoding performed without the CPU, using the GPU or dedicated hardware such as DSP, FPGA, or ASIC chips.

This example writes the encoded stream to a .mov file to compare H.264 and H.265 encoding efficiency. With the same recording duration and essentially the same scene, H.265 needs only about half the file size of H.264 for the same picture quality. Note that the recorded file can only be played with ffmpeg-based tools (for example, ffplay).

The encoder class in this example is not a singleton, because we may create an H.264 encoder, an H.265 encoder, or two encoder objects of different types working at the same time. The width, height, and frame rate specified here must match the camera's settings. The bitrate is the average bitrate during playback (passed in kbps and converted to bps in the initializer). You also choose whether to enable real-time encoding; if real-time encoding is enabled, the bitrate cannot be controlled. Finally, the encoder type alone decides whether an H.264 or an H.265 encoder is created.

Checking whether the HEVC encoder is supported: not all devices support H.265 encoding (this is determined by the hardware), and there is no direct API to query it, so here we use the AVAssetExportPresetHEVCHighestQuality preset to infer H.265 support indirectly.

Note: the H.265 encoding API requires iOS 11 or later. All popular iPhones today support the H.264 encoder.
// You could select the H.264 / H.265 encoder here.
self.videoEncoder = [[XDXVideoEncoder alloc] initWithWidth:1280
                                                    height:720
                                                       fps:30
                                                   bitrate:2048
                                   isSupportRealTimeEncode:NO
                                               encoderType:XDXH265Encoder]; // or XDXH264Encoder
- (instancetype)initWithWidth:(int)width height:(int)height fps:(int)fps bitrate:(int)bitrate isSupportRealTimeEncode:(BOOL)isSupportRealTimeEncode encoderType:(XDXVideoEncoderType)encoderType {
    if (self = [super init]) {
        mSession    = NULL;
        mVideoFile  = NULL;
        _width      = width;
        _height     = height;
        _fps        = fps;
        _bitrate    = bitrate << 10; // convert to bps (input is in kbps)
        _errorCount = 0;
        _isSupportEncoder = NO;
        _encoderType      = encoderType;
        _lock             = [[NSLock alloc] init];
        _isSupportRealTimeEncode    = isSupportRealTimeEncode;
        _needResetKeyParamSetBuffer = YES;

        if (encoderType == XDXH265Encoder) {
            if (@available(iOS 11.0, *)) {
                if ([[AVAssetExportSession allExportPresets] containsObject:AVAssetExportPresetHEVCHighestQuality]) {
                    _isSupportEncoder = YES;
                }
            }
        } else if (encoderType == XDXH264Encoder) {
            _isSupportEncoder = YES;
        }

        log4cplus_info("Video Encoder:","Init encoder width:%d, height:%d, fps:%d, bitrate:%d, is support realtime encode:%d, encoder type:H%lu", width, height, fps, bitrate, isSupportRealTimeEncode, (unsigned long)encoderType);
    }
    return self;
}
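The AVAssetExportSession preset check above is indirect. As an alternative sketch (not used in this project), VideoToolbox's VTCopyVideoEncoderList can enumerate the encoders actually present on the device; finding an HEVC entry in the list indicates H.265 encode support. The helper name below is hypothetical:

#import <VideoToolbox/VideoToolbox.h>

// Hedged sketch: enumerate the available encoders and look for an HEVC entry.
// VTCopyVideoEncoderList and kVTVideoEncoderList_CodecType are public
// VideoToolbox APIs; the helper itself is ours.
static BOOL XDXDeviceHasHEVCEncoder(void) {
    CFArrayRef encoderList = NULL;
    if (VTCopyVideoEncoderList(NULL, &encoderList) != noErr || encoderList == NULL) {
        return NO;
    }
    BOOL found = NO;
    for (CFIndex i = 0; i < CFArrayGetCount(encoderList); i++) {
        CFDictionaryRef info = CFArrayGetValueAtIndex(encoderList, i);
        CFNumberRef typeRef  = CFDictionaryGetValue(info, kVTVideoEncoderList_CodecType);
        int codecType = 0;
        if (typeRef != NULL &&
            CFNumberGetValue(typeRef, kCFNumberSInt32Type, &codecType) &&
            codecType == kCMVideoCodecType_HEVC) {
            found = YES;
            break;
        }
    }
    CFRelease(encoderList);
    return found;
}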
Initializing an encoder takes three steps: first create a VTCompressionSessionRef, the reference object that manages the encoder; then assign all encoder properties to that session; finally, pre-allocate the resources needed for encoding (i.e. memory for the frames to be encoded) before the first buffer arrives.
- (void)configureEncoderWithWidth:(int)width height:(int)height {
    log4cplus_info("Video Encoder:", "configure encoder width and height for init, width = %d, height = %d", width, height);

    if (width == 0 || height == 0) {
        log4cplus_error("Video Encoder:", "encoder params can't be null. width:%d, height:%d", width, height);
        return;
    }

    self.width  = width;
    self.height = height;

    mSession = [self configureEncoderWithEncoderType:self.encoderType
                                            callback:EncodeCallBack
                                               width:self.width
                                              height:self.height
                                                 fps:self.fps
                                             bitrate:self.bitrate
                             isSupportRealtimeEncode:self.isSupportRealTimeEncode
                                      iFrameDuration:30
                                                lock:self.lock];
}

- (VTCompressionSessionRef)configureEncoderWithEncoderType:(XDXVideoEncoderType)encoderType callback:(VTCompressionOutputCallback)callback width:(int)width height:(int)height fps:(int)fps bitrate:(int)bitrate isSupportRealtimeEncode:(BOOL)isSupportRealtimeEncode iFrameDuration:(int)iFrameDuration lock:(NSLock *)lock {
    log4cplus_info("Video Encoder:","configure encoder width:%d, height:%d, fps:%d, bitrate:%d, is support realtime encode:%d, I frame duration:%d", width, height, fps, bitrate, isSupportRealtimeEncode, iFrameDuration);

    [lock lock];

    // Create compression session
    VTCompressionSessionRef session = [self createCompressionSessionWithEncoderType:encoderType
                                                                              width:width
                                                                             height:height
                                                                           callback:callback];

    // Set compression properties
    [self setCompressionSessionPropertyWithSession:session
                                               fps:fps
                                           bitrate:bitrate
                           isSupportRealtimeEncode:isSupportRealtimeEncode
                                    iFrameDuration:iFrameDuration
                                       EncoderType:encoderType];

    // Prepare to encode
    OSStatus status = VTCompressionSessionPrepareToEncodeFrames(session);
    [lock unlock];
    if (status != noErr) {
        log4cplus_error("Video Encoder:", "create encoder failed, status: %d", (int)status);
        return NULL;
    } else {
        log4cplus_info("Video Encoder:","create encoder success");
        return session;
    }
}
VTCompressionSessionCreate creates the video encoder session, i.e. the object that manages the encoder context. Parameters worth noting:

- sourceImageBufferAttributes: required attributes for the source pixel buffers; if supplied, the session creates a pixel buffer pool matching them.
- outputCallback: the callback that receives the compressed frames. With synchronous encoding it runs on the same thread as the VTCompressionSessionEncodeFrame call; with asynchronous encoding a new thread is created to receive the data. This parameter may also be NULL, but if and only if encoding is done with VTCompressionSessionEncodeFrameWithOutputHandler.

VT_EXPORT OSStatus
VTCompressionSessionCreate(
    CM_NULLABLE CFAllocatorRef allocator,
    int32_t width,
    int32_t height,
    CMVideoCodecType codecType,
    CM_NULLABLE CFDictionaryRef encoderSpecification,
    CM_NULLABLE CFDictionaryRef sourceImageBufferAttributes,
    CM_NULLABLE CFAllocatorRef compressedDataAllocator,
    CM_NULLABLE VTCompressionOutputCallback outputCallback,
    void * CM_NULLABLE outputCallbackRefCon,
    CM_RETURNS_RETAINED_PARAMETER CM_NULLABLE VTCompressionSessionRef * CM_NONNULL compressionSessionOut) API_AVAILABLE(macosx(10.8), ios(8.0), tvos(10.2));
Here is the concrete usage. Note that if the camera's capture resolution changes, the current encoder session must be destroyed and a new one created.
- (VTCompressionSessionRef)createCompressionSessionWithEncoderType:(XDXVideoEncoderType)encoderType width:(int)width height:(int)height callback:(VTCompressionOutputCallback)callback {
    CMVideoCodecType codecType;
    if (encoderType == XDXH264Encoder) {
        codecType = kCMVideoCodecType_H264;
    } else if (encoderType == XDXH265Encoder) {
        codecType = kCMVideoCodecType_HEVC;
    } else {
        return NULL;
    }

    VTCompressionSessionRef session;
    OSStatus status = VTCompressionSessionCreate(NULL,
                                                 width,
                                                 height,
                                                 codecType,
                                                 NULL,
                                                 NULL,
                                                 NULL,
                                                 callback,
                                                 (__bridge void *)self,
                                                 &session);
    if (status != noErr) {
        log4cplus_error("Video Encoder:", "%s: Create session failed:%d", __func__, (int)status);
        return NULL;
    } else {
        return session;
    }
}
After the session is created, calling VTSessionCopySupportedPropertyDictionary copies all properties supported by the current session into a dictionary; from then on, before setting a property, we simply look it up in that dictionary to check that it is supported.
- (BOOL)isSupportPropertyWithSession:(VTCompressionSessionRef)session key:(CFStringRef)key {
    OSStatus status;
    // Note: the dictionary is cached in a static variable, so it is fetched only
    // once; if two sessions of different codec types coexist, the cache reflects
    // whichever session queried first.
    static CFDictionaryRef supportedPropertyDictionary;
    if (!supportedPropertyDictionary) {
        status = VTSessionCopySupportedPropertyDictionary(session, &supportedPropertyDictionary);
        if (status != noErr) {
            return NO;
        }
    }
    return CFDictionaryContainsKey(supportedPropertyDictionary, key) ? YES : NO;
}
VTSessionSetProperty sets a property by specifying its key and value.
- (OSStatus)setSessionPropertyWithSession:(VTCompressionSessionRef)session key:(CFStringRef)key value:(CFTypeRef)value {
    if (value == NULL) {
        return noErr;
    }
    OSStatus status = VTSessionSetProperty(session, key, value);
    if (status != noErr) {
        log4cplus_error("Video Encoder:", "Set session of %s Failed, status = %d", CFStringGetCStringPtr(key, kCFStringEncodingUTF8), status);
    }
    return status;
}
kVTCompressionPropertyKey_MaxKeyFrameIntervalDuration can be set together with kVTCompressionPropertyKey_MaxKeyFrameInterval, in which case both limits are enforced: a key frame is required every X frames or every Y seconds, whichever comes first.

// Set compression properties
[self setCompressionSessionPropertyWithSession:session
                                           fps:fps
                                       bitrate:bitrate
                       isSupportRealtimeEncode:isSupportRealtimeEncode
                                iFrameDuration:iFrameDuration
                                   EncoderType:encoderType];
- (void)setCompressionSessionPropertyWithSession:(VTCompressionSessionRef)session fps:(int)fps bitrate:(int)bitrate isSupportRealtimeEncode:(BOOL)isSupportRealtimeEncode iFrameDuration:(int)iFrameDuration EncoderType:(XDXVideoEncoderType)encoderType {
    int maxCount = 3;
    if (!isSupportRealtimeEncode) {
        if ([self isSupportPropertyWithSession:session key:kVTCompressionPropertyKey_MaxFrameDelayCount]) {
            CFNumberRef ref = CFNumberCreate(NULL, kCFNumberSInt32Type, &maxCount);
            [self setSessionPropertyWithSession:session key:kVTCompressionPropertyKey_MaxFrameDelayCount value:ref];
            CFRelease(ref);
        }
    }

    if (fps) {
        if ([self isSupportPropertyWithSession:session key:kVTCompressionPropertyKey_ExpectedFrameRate]) {
            int value = fps;
            CFNumberRef ref = CFNumberCreate(NULL, kCFNumberSInt32Type, &value);
            [self setSessionPropertyWithSession:session key:kVTCompressionPropertyKey_ExpectedFrameRate value:ref];
            CFRelease(ref);
        }
    } else {
        log4cplus_error("Video Encoder:", "Current fps is 0");
        return;
    }

    if (bitrate) {
        if ([self isSupportPropertyWithSession:session key:kVTCompressionPropertyKey_AverageBitRate]) {
            int value = bitrate; // already converted to bps in the initializer; shifting again here was a bug
            CFNumberRef ref = CFNumberCreate(NULL, kCFNumberSInt32Type, &value);
            [self setSessionPropertyWithSession:session key:kVTCompressionPropertyKey_AverageBitRate value:ref];
            CFRelease(ref);
        }
    } else {
        log4cplus_error("Video Encoder:", "Current bitrate is 0");
        return;
    }

    if ([self isSupportPropertyWithSession:session key:kVTCompressionPropertyKey_RealTime]) {
        log4cplus_info("Video Encoder:", "use realTimeEncoder");
        [self setSessionPropertyWithSession:session key:kVTCompressionPropertyKey_RealTime value:isSupportRealtimeEncode ? kCFBooleanTrue : kCFBooleanFalse];
    }

    // Disable B-frames (no frame reordering).
    if ([self isSupportPropertyWithSession:session key:kVTCompressionPropertyKey_AllowFrameReordering]) {
        [self setSessionPropertyWithSession:session key:kVTCompressionPropertyKey_AllowFrameReordering value:kCFBooleanFalse];
    }

    if (encoderType == XDXH264Encoder) {
        if (isSupportRealtimeEncode) {
            if ([self isSupportPropertyWithSession:session key:kVTCompressionPropertyKey_ProfileLevel]) {
                [self setSessionPropertyWithSession:session key:kVTCompressionPropertyKey_ProfileLevel value:kVTProfileLevel_H264_Main_AutoLevel];
            }
        } else {
            if ([self isSupportPropertyWithSession:session key:kVTCompressionPropertyKey_ProfileLevel]) {
                [self setSessionPropertyWithSession:session key:kVTCompressionPropertyKey_ProfileLevel value:kVTProfileLevel_H264_Baseline_AutoLevel];
            }
            if ([self isSupportPropertyWithSession:session key:kVTCompressionPropertyKey_H264EntropyMode]) {
                [self setSessionPropertyWithSession:session key:kVTCompressionPropertyKey_H264EntropyMode value:kVTH264EntropyMode_CAVLC];
            }
        }
    } else if (encoderType == XDXH265Encoder) {
        if ([self isSupportPropertyWithSession:session key:kVTCompressionPropertyKey_ProfileLevel]) {
            [self setSessionPropertyWithSession:session key:kVTCompressionPropertyKey_ProfileLevel value:kVTProfileLevel_HEVC_Main_AutoLevel];
        }
    }

    if ([self isSupportPropertyWithSession:session key:kVTCompressionPropertyKey_MaxKeyFrameIntervalDuration]) {
        int value = iFrameDuration;
        CFNumberRef ref = CFNumberCreate(NULL, kCFNumberSInt32Type, &value);
        [self setSessionPropertyWithSession:session key:kVTCompressionPropertyKey_MaxKeyFrameIntervalDuration value:ref];
        CFRelease(ref);
    }

    log4cplus_info("Video Encoder:", "The compression session max frame delay count = %d, expected frame rate = %d, average bitrate = %d, is support realtime encode = %d, I frame duration = %d", maxCount, fps, bitrate, isSupportRealtimeEncode, iFrameDuration);
}
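The method above sets only kVTCompressionPropertyKey_MaxKeyFrameIntervalDuration. A minimal sketch of setting both GOP limits together, as described earlier (the 60-frame / 2-second values are illustrative):

// Hedged sketch: request a key frame at least every 60 frames AND at least
// every 2 seconds; the encoder honors whichever limit is reached first.
int maxInterval = 60;  // in frames
int maxDuration = 2;   // in seconds
CFNumberRef intervalRef = CFNumberCreate(NULL, kCFNumberSInt32Type, &maxInterval);
CFNumberRef durationRef = CFNumberCreate(NULL, kCFNumberSInt32Type, &maxDuration);
VTSessionSetProperty(session, kVTCompressionPropertyKey_MaxKeyFrameInterval, intervalRef);
VTSessionSetProperty(session, kVTCompressionPropertyKey_MaxKeyFrameIntervalDuration, durationRef);
CFRelease(intervalRef);
CFRelease(durationRef);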
You may optionally call VTCompressionSessionPrepareToEncodeFrames to give the encoder a chance to allocate any needed resources before it starts encoding frames. If it is not called, that allocation happens on the first VTCompressionSessionEncodeFrame call instead. Calling the function more than once has no effect.
// Prepare to encode
OSStatus status = VTCompressionSessionPrepareToEncodeFrames(session);
[lock unlock];
if (status != noErr) {
    log4cplus_error("Video Encoder:", "create encoder failed, status: %d", (int)status);
    return NULL;
} else {
    log4cplus_info("Video Encoder:","create encoder success");
    return session;
}
At this point the encoder is fully initialized, and the next task is to feed it video frames to encode. This example captures video frames with an AVCaptureSession and passes them to the encoder.

Note that encoding runs asynchronously with respect to creating and destroying the encoder, so a lock is required.

We take the first video frame as the reference point and record the current system time as the base time for encoding that first frame. This is mainly used later for audio/video synchronization, which this example does not cover in depth; real timestamp-generation schemes are also more elaborate than the simple one here, and you can define your own rules.

We then check that the timestamp of the frame being encoded is greater than the previous frame's. Video plays strictly in timestamp order, so timestamps must increase monotonically. However, the frames fed to the encoder may not all come from one source: for example, capture may start from the camera and later switch to raw video decoded from a network stream. In that case the timestamps will certainly not be in sync, and forcing such frames into the encoder produces visible stuttering.
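A minimal sketch of the baseline-plus-offset timestamp scheme described above; the function name and the drop convention are illustrative, not part of the original class:

#import <CoreMedia/CoreMedia.h>

// Hedged sketch of the baseline-plus-offset timestamp scheme. Returns the
// frame's pts in milliseconds relative to the first frame, or -1 when the
// timestamp goes backwards (e.g. after switching video sources) and the
// frame should be dropped rather than encoded.
static int64_t XDXRelativePtsMs(CMSampleBufferRef sampleBuffer) {
    static double  baseTimeSec = 0;   // set once, from the first frame
    static int64_t lastPtsMs   = 0;   // used to enforce monotonic timestamps

    double ptsSec = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer));
    if (baseTimeSec == 0) {
        baseTimeSec = ptsSec;         // the first frame defines time zero
    }
    int64_t ptsMs = (int64_t)((ptsSec - baseTimeSec) * 1000);
    if (ptsMs < lastPtsMs) {
        return -1;                    // non-monotonic: caller should drop the frame
    }
    lastPtsMs = ptsMs;
    return ptsMs;
}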
VT_EXPORT OSStatus
VTCompressionSessionEncodeFrame(
    CM_NONNULL VTCompressionSessionRef session,
    CM_NONNULL CVImageBufferRef imageBuffer,
    CMTime presentationTimeStamp,
    CMTime duration, // may be kCMTimeInvalid
    CM_NULLABLE CFDictionaryRef frameProperties,
    void * CM_NULLABLE sourceFrameRefcon,
    VTEncodeInfoFlags * CM_NULLABLE infoFlagsOut ) API_AVAILABLE(macosx(10.8), ios(8.0), tvos(10.2));
- (void)startEncodeWithBuffer:(CMSampleBufferRef)sampleBuffer session:(VTCompressionSessionRef)session isNeedFreeBuffer:(BOOL)isNeedFreeBuffer isDrop:(BOOL)isDrop needForceInsertKeyFrame:(BOOL)needForceInsertKeyFrame lock:(NSLock *)lock {
    [lock lock];

    if (session == NULL) {
        log4cplus_error("Video Encoder:", "%s,session is empty", __func__);
        // Note: handleEncodeFailedWithIsNeedFreeBuffer: is presumably responsible
        // for unlocking and freeing the buffer on this early-return path (its
        // implementation is not shown here).
        [self handleEncodeFailedWithIsNeedFreeBuffer:isNeedFreeBuffer sampleBuffer:sampleBuffer];
        return;
    }

    // The first frame must be an I-frame; use it to create the reference timestamp.
    static BOOL isFirstFrame = YES;
    if (isFirstFrame && g_capture_base_time == 0) {
        CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        g_capture_base_time = CMTimeGetSeconds(pts); // system absolute time (s)
        // g_capture_base_time = g_tvustartcaptureTime - (ntp_time_offset/1000);
        isFirstFrame = NO;
        log4cplus_error("Video Encoder:","start capture time = %u", g_capture_base_time);
    }

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CMTime presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

    // Switching between different source data shows mosaic because the timestamps are not in sync.
    static int64_t lastPts = 0;
    int64_t currentPts = (int64_t)(CMTimeGetSeconds(presentationTimeStamp) * 1000);
    if (currentPts - lastPts < 0) {
        log4cplus_error("Video Encoder:","Switch different source data the timestamp < last timestamp, currentPts = %lld, lastPts = %lld, duration = %lld", currentPts, lastPts, currentPts - lastPts);
        [self handleEncodeFailedWithIsNeedFreeBuffer:isNeedFreeBuffer sampleBuffer:sampleBuffer];
        return;
    }
    lastPts = currentPts;

    OSStatus status = noErr;
    NSDictionary *properties = @{(__bridge NSString *)kVTEncodeFrameOptionKey_ForceKeyFrame: @(needForceInsertKeyFrame)};
    status = VTCompressionSessionEncodeFrame(session,
                                             imageBuffer,
                                             presentationTimeStamp,
                                             kCMTimeInvalid,
                                             (__bridge CFDictionaryRef)properties,
                                             NULL,
                                             NULL);
    if (status != noErr) {
        log4cplus_error("Video Encoder:", "encode frame failed");
        [self handleEncodeFailedWithIsNeedFreeBuffer:isNeedFreeBuffer sampleBuffer:sampleBuffer];
    }

    [lock unlock];

    if (isNeedFreeBuffer) {
        if (sampleBuffer != NULL) {
            CFRelease(sampleBuffer);
            log4cplus_debug("Video Encoder:", "release the sample buffer");
        }
    }
}
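For context, here is a hedged sketch of how camera frames could be fed to the method above from an AVCaptureVideoDataOutput delegate. The delegate wiring and the session / lock accessors on the encoder are assumptions for illustration; the original article shows only the encoder side:

#import <AVFoundation/AVFoundation.h>

// Hedged sketch: sample-buffer delegate callback handing camera frames to the
// encoder. Assumes self.videoEncoder was created as shown earlier and exposes
// its compression session and lock (accessor names are hypothetical).
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Retain the buffer because it outlives this callback; the encoder
    // releases it when isNeedFreeBuffer is YES.
    CFRetain(sampleBuffer);
    [self.videoEncoder startEncodeWithBuffer:sampleBuffer
                                     session:self.videoEncoder.session
                            isNeedFreeBuffer:YES
                                      isDrop:NO
                     needForceInsertKeyFrame:NO
                                        lock:self.videoEncoder.lock];
}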
If the bitstream-handling code below is hard to follow, it is strongly recommended to first read the links recommended at the top of the article; they cover codec fundamentals and the data structures used by iOS's VideoToolbox framework.

If status carries an error, encoding failed, and you can handle it as needed.

We need to fill in timestamps for the encoded data. You can define your own generation rules; here we use the simplest offset scheme: the system time before the first frame is encoded serves as the base, and each encoded frame's timestamp is its capture timestamp minus that base.

Raw video is encoded into I-frames, B-frames, and P-frames. On iOS, B-frames are generally left disabled because they require reordering. When we receive encoded data, we first use the kCMSampleAttachmentKey_DependsOnOthers attachment to decide whether it is an I-frame. If it is, we read the key NALU header information from it: vps, sps, and pps (vps exists only with the H.265 encoder). Without these, the video can neither be played on the receiving end nor recorded to a file.

The concrete vps/sps/pps contents are read from the I-frame: call CMVideoFormatDescriptionGetH264ParameterSetAtIndex for the H.264 encoder, or CMVideoFormatDescriptionGetHEVCParameterSetAtIndex for the H.265 encoder, where the index argument 0, 1, or 2 selects the respective parameter set.

Once obtained, the parameter sets must be concatenated. Each is an independent NALU, delimited by the start code 0x00, 0x00, 0x00, 0x01 so that sps and pps can be told apart. So, following that rule, we join vps, sps, and pps with 00 00 00 01 between them into one continuous buffer. Since this example writes a file, the NALU header information must be written first, i.e. the I-frame goes in first: an I-frame is a complete image, while a P-frame needs an I-frame to produce a picture, so the file must begin with I-frame data.

After a picture passes through the H.264 encoder, it is coded into one or more slices, and the carrier of those slices is the NALU.

Note: a slice is not the same thing as a frame. A frame describes a picture (one frame corresponds to one picture), while a slice is a concept introduced by H.264: after encoding, a picture is partitioned into slices as an efficient way to organize the data, and a picture consists of at least one slice. Slices are always carried and transmitted in NALUs, but that does not mean a NALU necessarily contains a slice; a NALU may also carry other information describing the video.
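For reference, the Annex-B stream written by this example lays out as follows (one possible group of pictures; the VPS line applies to H.265 only):

00 00 00 01 | VPS   (H.265 only)
00 00 00 01 | SPS
00 00 00 01 | PPS
00 00 00 01 | IDR slice(s)
00 00 00 01 | P slice(s)
...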
First obtain the frame data with CMBlockBufferGetDataPointer. One such frame is a chunk of H.264/H.265 bitstream that may contain several NALUs, and we need to locate each NALU and delimit it with 00 00 00 01. That is what the while loop does: the stream coming out of the encoder does not contain start codes (each NALU is prefixed with its length instead), so we copy a start code over each length field.

CFSwapInt32BigToHost: converts the big-endian (byte order) NALU length field in the encoded data to the host byte order.
static void EncodeCallBack(void *outputCallbackRefCon, void *souceFrameRefCon, OSStatus status, VTEncodeInfoFlags infoFlags, CMSampleBufferRef sampleBuffer) {
    XDXVideoEncoder *encoder = (__bridge XDXVideoEncoder *)outputCallbackRefCon;

    if (status != noErr) {
        NSError *error = [NSError errorWithDomain:NSOSStatusErrorDomain code:status userInfo:nil];
        NSLog(@"H264: vtCallBack failed with %@", error);
        log4cplus_error("TVUEncoder", "encode frame failed! %s", error.debugDescription.UTF8String);
        return;
    }

    if (!encoder.isSupportEncoder) {
        return;
    }

    CMBlockBufferRef block = CMSampleBufferGetDataBuffer(sampleBuffer);
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    CMTime dts = CMSampleBufferGetDecodeTimeStamp(sampleBuffer);

    // Use our own timestamps (used to sync audio and video).
    int64_t ptsAfter = (int64_t)((CMTimeGetSeconds(pts) - g_capture_base_time) * 1000);
    int64_t dtsAfter = (int64_t)((CMTimeGetSeconds(dts) - g_capture_base_time) * 1000);
    dtsAfter = ptsAfter;

    /* Sometimes the relative dts is zero; provide a workaround to restore the dts. */
    static int64_t last_dts = 0;
    if (dtsAfter == 0) {
        dtsAfter = last_dts + 33;
    } else if (dtsAfter == last_dts) {
        dtsAfter = dtsAfter + 1;
    }

    BOOL isKeyFrame = NO;
    CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, false);
    if (attachments != NULL) {
        CFDictionaryRef attachment = (CFDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
        CFBooleanRef dependsOnOthers = (CFBooleanRef)CFDictionaryGetValue(attachment, kCMSampleAttachmentKey_DependsOnOthers);
        isKeyFrame = (dependsOnOthers == kCFBooleanFalse);
    }

    if (isKeyFrame) {
        static uint8_t *keyParameterSetBuffer    = NULL;
        static size_t  keyParameterSetBufferSize = 0;

        // Note: the NALU header will not change if the video resolution does not change.
        if (keyParameterSetBufferSize == 0 || YES == encoder.needResetKeyParamSetBuffer) {
            const uint8_t *vps, *sps, *pps;
            size_t vpsSize, spsSize, ppsSize;
            int NALUnitHeaderLengthOut;
            size_t parmCount;

            if (keyParameterSetBuffer != NULL) {
                free(keyParameterSetBuffer);
            }

            CMFormatDescriptionRef format = CMSampleBufferGetFormatDescription(sampleBuffer);
            if (encoder.encoderType == XDXH264Encoder) {
                CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 0, &sps, &spsSize, &parmCount, &NALUnitHeaderLengthOut);
                CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 1, &pps, &ppsSize, &parmCount, &NALUnitHeaderLengthOut);

                // Annex-B layout: 00 00 00 01 | SPS | 00 00 00 01 | PPS
                keyParameterSetBufferSize = spsSize + 4 + ppsSize + 4;
                keyParameterSetBuffer     = (uint8_t *)malloc(keyParameterSetBufferSize);
                memcpy(keyParameterSetBuffer, "\x00\x00\x00\x01", 4);
                memcpy(&keyParameterSetBuffer[4], sps, spsSize);
                memcpy(&keyParameterSetBuffer[4 + spsSize], "\x00\x00\x00\x01", 4);
                memcpy(&keyParameterSetBuffer[4 + spsSize + 4], pps, ppsSize);

                log4cplus_info("Video Encoder:", "H264 find IDR frame, spsSize : %zu, ppsSize : %zu", spsSize, ppsSize);
            } else if (encoder.encoderType == XDXH265Encoder) {
                CMVideoFormatDescriptionGetHEVCParameterSetAtIndex(format, 0, &vps, &vpsSize, &parmCount, &NALUnitHeaderLengthOut);
                CMVideoFormatDescriptionGetHEVCParameterSetAtIndex(format, 1, &sps, &spsSize, &parmCount, &NALUnitHeaderLengthOut);
                CMVideoFormatDescriptionGetHEVCParameterSetAtIndex(format, 2, &pps, &ppsSize, &parmCount, &NALUnitHeaderLengthOut);

                // Annex-B layout: 00 00 00 01 | VPS | 00 00 00 01 | SPS | 00 00 00 01 | PPS
                keyParameterSetBufferSize = vpsSize + 4 + spsSize + 4 + ppsSize + 4;
                keyParameterSetBuffer     = (uint8_t *)malloc(keyParameterSetBufferSize);
                memcpy(keyParameterSetBuffer, "\x00\x00\x00\x01", 4);
                memcpy(&keyParameterSetBuffer[4], vps, vpsSize);
                memcpy(&keyParameterSetBuffer[4 + vpsSize], "\x00\x00\x00\x01", 4);
                memcpy(&keyParameterSetBuffer[4 + vpsSize + 4], sps, spsSize);
                memcpy(&keyParameterSetBuffer[4 + vpsSize + 4 + spsSize], "\x00\x00\x00\x01", 4);
                memcpy(&keyParameterSetBuffer[4 + vpsSize + 4 + spsSize + 4], pps, ppsSize);

                log4cplus_info("Video Encoder:", "H265 find IDR frame, vpsSize : %zu, spsSize : %zu, ppsSize : %zu", vpsSize, spsSize, ppsSize);
            }
            encoder.needResetKeyParamSetBuffer = NO;
        }

        if (encoder.isNeedRecord) {
            if (encoder->mVideoFile == NULL) {
                [encoder initSaveVideoFile];
                log4cplus_info("Video Encoder:", "Start video record.");
            }
            // Write the parameter sets before the I-frame so the file starts with a decodable picture.
            fwrite(keyParameterSetBuffer, 1, keyParameterSetBufferSize, encoder->mVideoFile);
        }

        log4cplus_info("Video Encoder:", "Load a I frame.");
    }

    size_t blockBufferLength;
    uint8_t *bufferDataPointer = NULL;
    CMBlockBufferGetDataPointer(block, 0, NULL, &blockBufferLength, (char **)&bufferDataPointer);

    // Replace each NALU's 4-byte big-endian length field with a start code (length-prefixed -> Annex-B).
    size_t bufferOffset = 0;
    while (bufferOffset < blockBufferLength - kStartCodeLength) {
        uint32_t NALUnitLength = 0;
        memcpy(&NALUnitLength, bufferDataPointer + bufferOffset, kStartCodeLength);
        NALUnitLength = CFSwapInt32BigToHost(NALUnitLength);
        memcpy(bufferDataPointer + bufferOffset, kStartCode, kStartCodeLength);
        bufferOffset += kStartCodeLength + NALUnitLength;
    }

    if (encoder.isNeedRecord && encoder->mVideoFile != NULL) {
        fwrite(bufferDataPointer, 1, blockBufferLength, encoder->mVideoFile);
    } else {
        if (encoder->mVideoFile != NULL) {
            fclose(encoder->mVideoFile);
            encoder->mVideoFile = NULL;
            log4cplus_info("Video Encoder:", "Stop video record.");
        }
    }

    // log4cplus_debug("Video Encoder:","H265 encoded video:%lld, size:%lu, interval:%lld", dtsAfter, blockBufferLength, dtsAfter - last_dts);
    last_dts = dtsAfter;
}