On iOS, Audio Unit can be used to capture audio data. It captures uncompressed (lossless) PCM directly; Audio Unit cannot capture compressed data, and audio compression will be covered in a later article.
Audio Unit captures from the hardware input, such as the built-in microphone or other external devices with microphone capability (headsets with a mic, handheld microphones, and so on), provided they are compatible with Apple devices.
As shown above, the code is split into two classes: one responsible for capture and one responsible for recording audio to a file. You can start and stop the Audio Unit whenever your app requires, and record an audio file while the Audio Unit is running. For those needs, the following four APIs are all that is required.
// Start / Stop Audio Unit capture
[[XDXAudioCaptureManager getInstance] startAudioCapture];
[[XDXAudioCaptureManager getInstance] stopAudioCapture];
// Start / Stop Audio Record
[[XDXAudioQueueCaptureManager getInstance] startRecordFile];
[[XDXAudioQueueCaptureManager getInstance] stopRecordFile];
This example uses a singleton, so the audio unit setup lives in the initializer and runs only once. If the audio unit is destroyed, the initialization API must be called again from the outside. Repeatedly destroying and re-creating an audio unit is generally discouraged, so the best approach is to configure it once in the singleton initializer and afterwards only start and stop it.
iPhone devices support only mono capture by default; configuring two channels makes initialization fail. To simulate stereo, you can duplicate the mono data in code; a later article will cover how.
Note: the capture buffer size and the capture duration cannot be chosen independently. In other words, for a given capture duration, the buffer size we set cannot exceed the maximum amount of data produced in that time. The formula below relates capture duration to data size (a small helper sketch follows it).
Sample size formula:
Data rate (bytes/second) = (sample rate (Hz) × bits per sample × channel count) / 8
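As a quick check of that formula, a minimal sketch (a hypothetical helper, not part of the sample project):

// Maximum number of bytes the hardware can produce in durationSec.
static inline UInt32 XDXMaxCaptureBytes(Float64 sampleRate,
                                        UInt32  bitsPerChannel,
                                        UInt32  channelCount,
                                        Float64 durationSec) {
    // bytes/second = (sample rate * bits per sample * channel count) / 8
    Float64 bytesPerSecond = sampleRate * bitsPerChannel * channelCount / 8.0;
    return (UInt32)(bytesPerSecond * durationSec);
}

// XDXMaxCaptureBytes(44100, 16, 1, 0.01) == 882, matching the worked example
// later in this article; requesting a larger buffer than this gains nothing.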
- (instancetype)init {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        _instace = [super init];

        // Note: audioBufferSize cannot exceed the maximum size implied by durationSec.
        [_instace configureAudioInfoWithDataFormat:&m_audioDataFormat
                                          formatID:kAudioFormatLinearPCM
                                        sampleRate:44100
                                      channelCount:1
                                   audioBufferSize:2048
                                       durationSec:0.02
                                          callBack:AudioCaptureCallback];
    });
    return _instace;
}

- (void)configureAudioInfoWithDataFormat:(AudioStreamBasicDescription *)dataFormat formatID:(UInt32)formatID sampleRate:(Float64)sampleRate channelCount:(UInt32)channelCount audioBufferSize:(int)audioBufferSize durationSec:(float)durationSec callBack:(AURenderCallback)callBack {
    // Configure ASBD
    [self configureAudioToAudioFormat:dataFormat
                      byParamFormatID:formatID
                           sampleRate:sampleRate
                         channelCount:channelCount];

    // Set sample time
    [[AVAudioSession sharedInstance] setPreferredIOBufferDuration:durationSec error:NULL];

    // Configure Audio Unit
    m_audioUnit = [self configreAudioUnitWithDataFormat:*dataFormat
                                        audioBufferSize:audioBufferSize
                                               callBack:callBack];
}
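For completeness, a sketch of the getInstance accessor used throughout (an assumption inferred from the calls above; the actual project code may differ):

static XDXAudioCaptureManager *_instace = nil;

+ (instancetype)getInstance {
    // init is guarded by dispatch_once, so repeated alloc/init
    // calls always yield the same configured instance.
    return [[self alloc] init];
}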
Note that the audio data format is tied directly to the hardware. For the best performance, use the hardware's own sample rate, channel count, and other audio properties. If you change the sample rate manually, the Audio Unit performs an internal conversion; you won't see it in code, but it still costs some performance.
iOS does not support configuring two capture channels directly. To simulate stereo, fill in the audio data yourself; details will follow in a later article.
Understand the AudioSessionGetProperty function: it queries the current value of a specified hardware property. For example, kAudioSessionProperty_CurrentHardwareSampleRate queries the current hardware sample rate, and kAudioSessionProperty_CurrentHardwareInputNumberChannels queries the current number of input channels. Because assigning values manually is more flexible, this example does not use the queried values. (Note: this C-based Audio Session Services API is deprecated; AVAudioSession exposes the same information, as sketched below.)
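A sketch of the equivalent queries through AVAudioSession, the non-deprecated route:

AVAudioSession *session = [AVAudioSession sharedInstance];
// Current hardware sample rate, e.g. 44100.0 or 48000.0
double hardwareSampleRate = session.sampleRate;
// Current number of hardware input channels, typically 1 on iPhone
NSInteger inputChannels = session.inputNumberOfChannels;
NSLog(@"hardware sample rate: %f, input channels: %ld",
      hardwareSampleRate, (long)inputChannels);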
First, you must understand uncompressed formats (PCM, ...) versus compressed formats (AAC, ...). Capturing uncompressed data on iOS gives you the data exactly as the hardware produced it; since an audio unit cannot capture AAC-type data directly, this example captures only raw PCM.
For the PCM data format you must set the sample format flags mFormatFlags; the bit width of each channel's samples mBitsPerChannel (iOS uses a 16-bit width per channel); and the number of frames per packet mFramesPerPacket — because PCM is uncompressed, each packet holds exactly one frame. The number of bytes per packet (which equals the number of bytes per frame) then follows from the simple calculation shown in the code below.
Note: most other, compressed data formats leave these fields at their default of 0. For compressed data, the number of frames packed into each packet and the number of bytes each packet compresses to may vary, so they cannot be set in advance; a value like mFramesPerPacket is generally only known once compression has finished. For contrast, a sketch of an AAC ASBD follows.
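A sketch of what an AAC ASBD looks like (not usable in this capture path, since the audio unit cannot capture AAC directly). AAC is actually a case where the frame count per packet is a fixed, known constant, while the byte counts genuinely vary per packet:

AudioStreamBasicDescription aacFormat = {0};
aacFormat.mFormatID         = kAudioFormatMPEG4AAC;
aacFormat.mSampleRate       = 44100;
aacFormat.mChannelsPerFrame = 1;
aacFormat.mFramesPerPacket  = 1024; // AAC always encodes 1024 frames per packet
// mBitsPerChannel, mBytesPerFrame and mBytesPerPacket stay 0: the compressed
// sizes vary per packet and are reported by the encoder after the fact.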
#define kXDXAudioPCMFramesPerPacket 1
#define KXDXAudioBitsPerChannel     16

- (void)configureAudioToAudioFormat:(AudioStreamBasicDescription *)audioFormat byParamFormatID:(UInt32)formatID sampleRate:(Float64)sampleRate channelCount:(UInt32)channelCount {
    AudioStreamBasicDescription dataFormat = {0};
    UInt32 size = sizeof(dataFormat.mSampleRate);

    // Get the hardware's native sample rate. (Recommended)
    Float64 hardwareSampleRate = 0;
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate,
                            &size,
                            &hardwareSampleRate);
    // Set the sample rate manually instead
    dataFormat.mSampleRate = sampleRate;

    size = sizeof(dataFormat.mChannelsPerFrame);
    // Get the hardware's native channel count. (Must refer to it)
    UInt32 hardwareNumberChannels = 0;
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareInputNumberChannels,
                            &size,
                            &hardwareNumberChannels);
    dataFormat.mChannelsPerFrame = channelCount;

    dataFormat.mFormatID = formatID;

    if (formatID == kAudioFormatLinearPCM) {
        dataFormat.mFormatFlags     = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        dataFormat.mBitsPerChannel  = KXDXAudioBitsPerChannel;
        dataFormat.mBytesPerPacket  = dataFormat.mBytesPerFrame = (dataFormat.mBitsPerChannel / 8) * dataFormat.mChannelsPerFrame;
        dataFormat.mFramesPerPacket = kXDXAudioPCMFramesPerPacket;
    }

    memcpy(audioFormat, &dataFormat, sizeof(dataFormat));
    NSLog(@"%@: %s - sample rate:%f, channel count:%u", kModuleName, __func__, sampleRate, (unsigned int)channelCount);
}
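For reference, the values the method above produces for 44.1 kHz / 16-bit / mono PCM:

// mSampleRate       = 44100
// mFormatID         = kAudioFormatLinearPCM
// mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked
// mChannelsPerFrame = 1
// mBitsPerChannel   = 16
// mBytesPerFrame    = (16 / 8) * 1 = 2
// mFramesPerPacket  = 1
// mBytesPerPacket   = 2 * 1 = 2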
AVAudioSession is used to set the capture duration (the I/O buffer duration). Note again: for a given capture duration, the buffer size we set cannot exceed the corresponding maximum.
For example: with a 44.1 kHz sample rate, 16-bit samples, 1 channel, and a capture duration of 0.01 s, the maximum amount of data is 44100 × (16 / 8) × 1 × 0.01 = 882 bytes. Even if we request more, the system will deliver at most 882 bytes of audio data per callback. (A sketch of checking the granted duration follows the call below.)
[[AVAudioSession sharedInstance] setPreferredIOBufferDuration:durationSec error:NULL];
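The preferred duration is a request, not a guarantee; the system may round it. A sketch of verifying what was actually granted:

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setPreferredIOBufferDuration:0.01 error:&error];
// IOBufferDuration is the value the system actually applied.
NSLog(@"requested 0.01 s, granted %f s", session.IOBufferDuration);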
m_audioUnit = [self configreAudioUnitWithDataFormat:*dataFormat
                                    audioBufferSize:audioBufferSize
                                           callBack:callBack];
- (AudioUnit)configreAudioUnitWithDataFormat:(AudioStreamBasicDescription)dataFormat audioBufferSize:(int)audioBufferSize callBack:(AURenderCallback)callBack {
    AudioUnit audioUnit = [self createAudioUnitObject];
    if (!audioUnit) {
        return NULL;
    }

    [self initCaptureAudioBufferWithAudioUnit:audioUnit
                                 channelCount:dataFormat.mChannelsPerFrame
                                 dataByteSize:audioBufferSize];

    [self setAudioUnitPropertyWithAudioUnit:audioUnit
                                 dataFormat:dataFormat];

    [self initCaptureCallbackWithAudioUnit:audioUnit callBack:callBack];

    // Calls to AudioUnitInitialize() can fail if called back-to-back on
    // different ADM instances. A fall-back solution is to allow multiple
    // sequential calls with a small delay between each.
    OSStatus status = AudioUnitInitialize(audioUnit);
    if (status != noErr) {
        NSLog(@"%@: %s - couldn't init audio unit instance, status : %d \n", kModuleName, __func__, status);
    }

    return audioUnit;
}
Here you can specify which audio unit subtype to create. kAudioUnitSubType_VoiceProcessingIO, used here, performs echo cancellation and voice enhancement; if you only need raw, unprocessed audio data, switch to kAudioUnitSubType_RemoteIO. For more about audio unit types, see the links at the top of this article.
AudioComponentFindNext: passing NULL as the first parameter means the search starts, in system-defined order, at the first audio unit matching the description. Passing in the previously returned audio component instead makes the function continue looking for the next match (an enumeration sketch follows the code below).
- (AudioUnit)createAudioUnitObject {
    AudioUnit audioUnit;
    AudioComponentDescription audioDesc;
    audioDesc.componentType         = kAudioUnitType_Output;
    audioDesc.componentSubType      = kAudioUnitSubType_VoiceProcessingIO; // or kAudioUnitSubType_RemoteIO
    audioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
    audioDesc.componentFlags        = 0;
    audioDesc.componentFlagsMask    = 0;

    AudioComponent inputComponent = AudioComponentFindNext(NULL, &audioDesc);
    OSStatus status = AudioComponentInstanceNew(inputComponent, &audioUnit);
    if (status != noErr) {
        NSLog(@"%@: %s - create audio unit failed, status : %d \n", kModuleName, __func__, status);
        return NULL;
    } else {
        return audioUnit;
    }
}
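Because AudioComponentFindNext resumes from the component passed in, you can enumerate every matching component — a minimal sketch (zeroed description fields act as wildcards):

AudioComponentDescription desc = {0};
desc.componentType = kAudioUnitType_Output; // match all output-type units
AudioComponent comp = NULL;
while ((comp = AudioComponentFindNext(comp, &desc)) != NULL) {
    CFStringRef name = NULL;
    AudioComponentCopyName(comp, &name);
    NSLog(@"found output unit: %@", name);
    if (name) CFRelease(name);
}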
kAudioUnitProperty_ShouldAllocateBuffer: defaults to true, which makes the audio unit allocate a buffer to receive the data in the callback. Here we set it to false, since we define our own bufferList to receive the captured audio data.
- (void)initCaptureAudioBufferWithAudioUnit:(AudioUnit)audioUnit channelCount:(int)channelCount dataByteSize:(int)dataByteSize {
    // Disable AU buffer allocation for the recorder; we allocate our own.
    UInt32 flag = 0;
    OSStatus status = AudioUnitSetProperty(audioUnit,
                                           kAudioUnitProperty_ShouldAllocateBuffer,
                                           kAudioUnitScope_Output,
                                           INPUT_BUS,
                                           &flag,
                                           sizeof(flag));
    if (status != noErr) {
        NSLog(@"%@: %s - could not disable buffer allocation of callback, status : %d \n", kModuleName, __func__, status);
    }

    AudioBufferList *buffList = (AudioBufferList *)malloc(sizeof(AudioBufferList));
    buffList->mNumberBuffers = 1;
    buffList->mBuffers[0].mNumberChannels = channelCount;
    buffList->mBuffers[0].mDataByteSize   = dataByteSize;
    buffList->mBuffers[0].mData           = malloc(dataByteSize);
    m_buffList = buffList;
}
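The hand-allocated buffer list needs a matching teardown; a sketch (call it only after the audio unit has been stopped and disposed):

if (m_buffList != NULL) {
    if (m_buffList->mBuffers[0].mData) {
        free(m_buffList->mBuffers[0].mData);
        m_buffList->mBuffers[0].mData = NULL;
    }
    free(m_buffList);
    m_buffList = NULL;
}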
kAudioUnitProperty_StreamFormat: sets the format of the audio data stream using the ASBD created earlier.
kAudioOutputUnitProperty_EnableIO: enables or disables I/O on the input or output side.
input bus / input element: connects to the device's hardware input (e.g. the microphone).
output bus / output element: connects to the device's hardware output (e.g. the speaker).
input scope / output scope: each element has an input scope and an output scope. Taking capture as the example, audio flows into the audio unit on the input scope, but we can only read it from the output scope, because the input scope is the connection between the audio unit and the hardware. That is why the code pairs INPUT_BUS with kAudioUnitScope_Output; the combinations are summarized below.
The remote I/O audio unit enables the output side and disables the input side by default. Since this article uses the audio unit for capture, we enable the input side and disable the output side.
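The bus/scope combinations of the remote I/O unit, summarized as comments:

// Element 1 (INPUT_BUS,  the "input element")
//   input scope  -> fed by the microphone hardware
//   output scope -> where the app reads captured data   <-- used in this article
//
// Element 0 (OUTPUT_BUS, the "output element")
//   input scope  -> where the app supplies data to play
//   output scope -> feeds the speaker hardware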
- (void)setAudioUnitPropertyWithAudioUnit:(AudioUnit)audioUnit dataFormat:(AudioStreamBasicDescription)dataFormat {
    OSStatus status;

    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  INPUT_BUS,
                                  &dataFormat,
                                  sizeof(dataFormat));
    if (status != noErr) {
        NSLog(@"%@: %s - set audio unit stream format failed, status : %d \n", kModuleName, __func__, status);
    }

    /*
    // Bypass voice processing (echo cancellation); had no observable effect in testing.
    UInt32 echoCancellation = 0;
    AudioUnitSetProperty(m_audioUnit,
                         kAUVoiceIOProperty_BypassVoiceProcessing,
                         kAudioUnitScope_Global,
                         0,
                         &echoCancellation,
                         sizeof(echoCancellation));
    */

    UInt32 enableFlag = 1;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Input,
                                  INPUT_BUS,
                                  &enableFlag,
                                  sizeof(enableFlag));
    if (status != noErr) {
        NSLog(@"%@: %s - could not enable input on AURemoteIO, status : %d \n", kModuleName, __func__, status);
    }

    UInt32 disableFlag = 0;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output,
                                  OUTPUT_BUS,
                                  &disableFlag,
                                  sizeof(disableFlag));
    if (status != noErr) {
        NSLog(@"%@: %s - could not disable output on AURemoteIO, status : %d \n", kModuleName, __func__, status);
    }
}
- (void)initCaptureCallbackWithAudioUnit:(AudioUnit)audioUnit callBack:(AURenderCallback)callBack {
    AURenderCallbackStruct captureCallback;
    captureCallback.inputProc       = callBack;
    captureCallback.inputProcRefCon = (__bridge void *)self;
    OSStatus status = AudioUnitSetProperty(audioUnit,
                                           kAudioOutputUnitProperty_SetInputCallback,
                                           kAudioUnitScope_Global,
                                           INPUT_BUS,
                                           &captureCallback,
                                           sizeof(captureCallback));
    if (status != noErr) {
        NSLog(@"%@: %s - Audio Unit set capture callback failed, status : %d \n", kModuleName, __func__, status);
    }
}
Simply call AudioOutputUnitStart to start the audio unit. If everything above is configured correctly, the audio unit works right away.
- (void)startAudioCaptureWithAudioUnit:(AudioUnit)audioUnit isRunning:(BOOL *)isRunning {
    OSStatus status;

    if (*isRunning) {
        NSLog(@"%@: %s - start recorder repeat \n", kModuleName, __func__);
        return;
    }

    status = AudioOutputUnitStart(audioUnit);
    if (status == noErr) {
        *isRunning = YES;
        NSLog(@"%@: %s - start audio unit success \n", kModuleName, __func__);
    } else {
        *isRunning = NO;
        NSLog(@"%@: %s - start audio unit failed \n", kModuleName, __func__);
    }
}
inRefCon: any data the developer chooses to pass in, typically the instance of this class. The C callback cannot access Objective-C properties and methods directly, so this parameter serves as the bridge between the callback and the Objective-C object.
ioActionFlags: describes the context of this render call.
inTimeStamp: the timestamp of the captured samples.
inBusNumber: the bus on which this callback was invoked.
inNumberFrames: how many frames of data this call carries.
ioData: the audio data.
AudioUnitRender: this function fills our globally defined m_buffList with the captured audio data.
static OSStatus AudioCaptureCallback(void                       *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp       *inTimeStamp,
                                     UInt32                     inBusNumber,
                                     UInt32                     inNumberFrames,
                                     AudioBufferList            *ioData) {
    AudioUnitRender(m_audioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, m_buffList);

    XDXAudioCaptureManager *manager = (__bridge XDXAudioCaptureManager *)inRefCon;

    /* Test audio fps
    static Float64 lastTime = 0;
    Float64 currentTime = CMTimeGetSeconds(CMClockMakeHostTimeFromSystemUnits(inTimeStamp->mHostTime)) * 1000;
    NSLog(@"Test duration - %f", currentTime - lastTime);
    lastTime = currentTime;
    */

    void   *bufferData = m_buffList->mBuffers[0].mData;
    UInt32 bufferSize  = m_buffList->mBuffers[0].mDataByteSize;

    if (manager.isRecordVoice) {
        [[XDXAudioFileHandler getInstance] writeFileWithInNumBytes:bufferSize
                                                      ioNumPackets:inNumberFrames
                                                          inBuffer:bufferData
                                                      inPacketDesc:NULL];
    }

    return noErr;
}
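The capture callback runs on a realtime audio thread, so keep the work inside it minimal. A sketch of handing the bytes off for processing elsewhere (a hypothetical downstream step, not part of the sample project):

// Inside AudioCaptureCallback, after AudioUnitRender succeeds:
NSData *pcmData = [NSData dataWithBytes:m_buffList->mBuffers[0].mData
                                 length:m_buffList->mBuffers[0].mDataByteSize];
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    // e.g. feed an encoder or write to a ring buffer here.
    (void)pcmData;
});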
AudioOutputUnitStop: stops the audio unit.
- (void)stopAudioCaptureWithAudioUnit:(AudioUnit)audioUnit isRunning:(BOOL *)isRunning {
    if (*isRunning == NO) {
        NSLog(@"%@: %s - stop capture repeat \n", kModuleName, __func__);
        return;
    }

    *isRunning = NO;
    if (audioUnit != NULL) {
        OSStatus status = AudioOutputUnitStop(audioUnit);
        if (status != noErr) {
            NSLog(@"%@: %s - stop audio unit failed. \n", kModuleName, __func__);
        } else {
            NSLog(@"%@: %s - stop audio unit successful", kModuleName, __func__);
        }
    }
}
When the audio unit is no longer needed at all, release all of this class's audio-unit resources. Note the required teardown order: first stop the audio unit, then undo its initialized state, and finally dispose of the instance to free all related memory.
- (void)freeAudioUnit:(AudioUnit)audioUnit {
    if (!audioUnit) {
        NSLog(@"%@: %s - repeat call!", kModuleName, __func__);
        return;
    }

    OSStatus result = AudioOutputUnitStop(audioUnit);
    if (result != noErr) {
        NSLog(@"%@: %s - stop audio unit failed.", kModuleName, __func__);
    }

    result = AudioUnitUninitialize(audioUnit);
    if (result != noErr) {
        NSLog(@"%@: %s - uninitialize audio unit failed, status : %d", kModuleName, __func__, result);
    }

    // It will trigger audio route change repeatedly
    result = AudioComponentInstanceDispose(audioUnit);
    if (result != noErr) {
        NSLog(@"%@: %s - dispose audio unit failed. status : %d", kModuleName, __func__, result);
    } else {
        // Clear the global reference; assigning nil to the local parameter would have no effect.
        m_audioUnit = NULL;
    }
}
This part is covered in another article: Audio File Recording.