iOS Remote Push: Synthesizing and Playing a Custom Voice Announcement (Similar to Alipay's Payment-Received Alert)

This article draws on the post "iOS 模仿支付宝支付到帐推送,播报钱数" (imitating Alipay's payment-arrival push and announcing the amount). Parts of that write-up are not very detailed and I ran into quite a few problems following it, so I've put together my own summary here, listing the issues I encountered along with their solutions for your reference.

If you're not yet familiar with the Notification Service Extension introduced in iOS 10, you can look it up yourself or read an introduction such as "iOS10 推送extension之 Service Extension".

First, create a project:

Enable push notification registration and receiving:

Some demos I've seen check the first option under Background Modes. I left it unchecked here; notifications are still received, and playback still works in the background. So to avoid unnecessary complications, I don't enable it.

 

Next, create the notification extension. A Notification Service Extension can wake our app for roughly 30 seconds even when it has been killed, which gives the app time to perform the voice announcement.
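For reference, the extension target that Xcode generates is wired up through an `NSExtension` entry in the extension's Info.plist. It looks roughly like this (a sketch; the principal class name is whatever Xcode generated for you, `NotificationService` by default for an Objective-C target):

```xml
<!-- Info.plist of the Notification Service Extension target -->
<key>NSExtension</key>
<dict>
    <key>NSExtensionPointIdentifier</key>
    <!-- This identifier is what marks the target as a service extension -->
    <string>com.apple.usernotifications.service</string>
    <key>NSExtensionPrincipalClass</key>
    <!-- The class that receives didReceiveNotificationRequest:withContentHandler: -->
    <string>NotificationService</string>
</dict>
```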

Xcode's prompt here is essentially telling you to activate the new scheme so it can be selected in the toolbar; just click the Activate button.

You'll then see a new folder in your project: the push extension you just created.

Next, register for notifications in the AppDelegate.

The code is as follows:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Override point for customization after application launch.
    
     [self registerRemoteNotification];
    return YES;
}
// Register for push notifications
- (void)registerRemoteNotification{
    UIApplication *application = [UIApplication sharedApplication];
    application.applicationIconBadgeNumber = 0;
    
    if([application respondsToSelector:@selector(registerUserNotificationSettings:)])
    {
        UIUserNotificationType notificationTypes = UIUserNotificationTypeBadge | UIUserNotificationTypeSound | UIUserNotificationTypeAlert;
        UIUserNotificationSettings *settings = [UIUserNotificationSettings settingsForTypes:notificationTypes categories:nil];
        [application registerUserNotificationSettings:settings];
    }
    
#if !TARGET_IPHONE_SIMULATOR
    // iOS 8+: register with APNs
    if ([application respondsToSelector:@selector(registerForRemoteNotifications)]) {
        [application registerForRemoteNotifications];
    }else{
        // Pre-iOS 8 fallback
        UIRemoteNotificationType notificationTypes = UIRemoteNotificationTypeBadge |
        UIRemoteNotificationTypeSound |
        UIRemoteNotificationTypeAlert;
        [[UIApplication sharedApplication] registerForRemoteNotificationTypes:notificationTypes];
    }
#endif
}
- (void)application:(UIApplication *)application didRegisterForRemoteNotificationsWithDeviceToken:(NSData *)deviceToken
{
    // Note: parsing -[NSData description] breaks on iOS 13+, where the format
    // changed to "{length = 32, bytes = 0x... }". Convert the raw bytes instead.
    const unsigned char *bytes = deviceToken.bytes;
    NSMutableString *token = [NSMutableString stringWithCapacity:deviceToken.length * 2];
    for (NSUInteger i = 0; i < deviceToken.length; i++) {
        [token appendFormat:@"%02x", bytes[i]];
    }
    // Copy the token to the pasteboard so it's easy to paste into a push-testing tool
    [UIPasteboard generalPasteboard].string = token;
    
    NSLog(@"device token is %@", token);
}
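Note that `registerUserNotificationSettings:` is deprecated as of iOS 10. Since the service extension requires iOS 10 anyway, you can also register through the `UserNotifications` framework instead; a minimal sketch:

```objc
#import <UserNotifications/UserNotifications.h>

// iOS 10+ registration via UNUserNotificationCenter
UNUserNotificationCenter *center = [UNUserNotificationCenter currentNotificationCenter];
[center requestAuthorizationWithOptions:(UNAuthorizationOptionAlert |
                                         UNAuthorizationOptionSound |
                                         UNAuthorizationOptionBadge)
                      completionHandler:^(BOOL granted, NSError * _Nullable error) {
    if (granted) {
        // registerForRemoteNotifications must be called on the main thread
        dispatch_async(dispatch_get_main_queue(), ^{
            [[UIApplication sharedApplication] registerForRemoteNotifications];
        });
    }
}];
```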

Then incoming pushes are received in these AppDelegate callbacks:

- (void)application:(UIApplication *)application didReceiveRemoteNotification:(nonnull NSDictionary *)userInfo
{
    NSLog(@"userInfo ===== %@",userInfo);
}


- (void)application:(UIApplication *)application didReceiveRemoteNotification:(nonnull NSDictionary *)userInfo fetchCompletionHandler:(nonnull void (^)(UIBackgroundFetchResult))completionHandler
{
    NSLog(@"userInfo === %@",userInfo);
    // Always call the completion handler when you're done
    completionHandler(UIBackgroundFetchResultNewData);
}

Let me also mention two related issues that don't strictly belong to this topic. First: if you get a Safe Area build error, remember to uncheck the corresponding option.

Second: when dragging the sound files into the project, Xcode asks which targets to add them to. The usual approach is to check all targets (which also makes the files available below iOS 10), but for an iOS 10+ setup you can check only the extension target, since the files are used only inside the extension. Just a brief note here.

The sound files added to the project are shown below:

Next, in NotificationService.m, write the code that stitches the sound files together and plays the result.

The code is as follows (personally tested, works):

#import "NotificationService.h"
#import <AVFoundation/AVFoundation.h>

#define kFileManager [NSFileManager defaultManager]

typedef void(^PlayVoiceBlock)(void);

@interface NotificationService ()<AVAudioPlayerDelegate>

@property (nonatomic, strong) void (^contentHandler)(UNNotificationContent *contentToDeliver);
@property (nonatomic, strong) UNMutableNotificationContent *bestAttemptContent;
//Player for the stitched sound file
@property (nonatomic, strong)AVAudioPlayer *myPlayer;
//Path of the stitched sound file
@property (nonatomic, strong) NSString *filePath;

// Called once stitching is done and AVAudioPlayer playback has finished
@property (nonatomic, copy)PlayVoiceBlock aVAudioPlayerFinshBlock;

@end

@implementation NotificationService

- (void)didReceiveNotificationRequest:(UNNotificationRequest *)request withContentHandler:(void (^)(UNNotificationContent * _Nonnull))contentHandler {
    self.contentHandler = contentHandler;
    self.bestAttemptContent = [request.content mutableCopy];
    
    // Modify the notification content here...
    self.bestAttemptContent.title = [NSString stringWithFormat:@"%@ [modified]", self.bestAttemptContent.title];
    
    __weak __typeof(self)weakSelf = self;
    
    /*******************************Recommended approach*******************************************/
    
    // Approach 3: stitch the audio files together and play them with AVAudioPlayer -- works.
    // Set the session category before activating the session.
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setCategory:AVAudioSessionCategoryPlayback error:nil];
    [session setActive:YES error:nil];
    
    [self hechengVoiceAVAudioPlayerWithFinshBlock:^{
        weakSelf.contentHandler(weakSelf.bestAttemptContent);
    }];
//    self.contentHandler(self.bestAttemptContent);
}

#pragma mark - Stitch the audio and play it with AVAudioPlayer
- (void)hechengVoiceAVAudioPlayerWithFinshBlock:(PlayVoiceBlock )block
{
    if (block) {
        self.aVAudioPlayerFinshBlock = block;
    }
    
    /************************Stitch the audio files and play*****************************/
    
    AVMutableComposition *composition = [AVMutableComposition composition];
    
    NSArray *fileNameArray = @[@"daozhang",@"1",@"2",@"3",@"4",@"5",@"6",@"1",@"2",@"3",@"4",@"5",@"6",@"1",@"2",@"3",@"4",@"5",@"6"];
    
    CMTime allTime = kCMTimeZero;
    
    for (NSInteger i = 0; i < fileNameArray.count; i++) {
        NSString *audioPath = [[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"%@",fileNameArray[i]] ofType:@"m4a"];
        
        AVURLAsset *audioAsset = [AVURLAsset assetWithURL:[NSURL fileURLWithPath:audioPath]];
        
        // Destination audio track in the composition
        AVMutableCompositionTrack *audioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
        // Source audio track of this asset
        AVAssetTrack *audioAssetTrack = [[audioAsset tracksWithMediaType:AVMediaTypeAudio] firstObject];
        
        // Append this clip's track at the current insertion point
        [audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, audioAsset.duration) ofTrack:audioAssetTrack atTime:allTime error:nil];
        
        // Advance the insertion point
        allTime = CMTimeAdd(allTime, audioAsset.duration);
        
    }
    
    // Export the merged file; `presetName` must correspond to `session.outputFileType` below.
    AVAssetExportSession *session = [[AVAssetExportSession alloc] initWithAsset:composition presetName:AVAssetExportPresetAppleM4A];
    NSString *outPutFilePath = [[self.filePath stringByDeletingLastPathComponent] stringByAppendingPathComponent:@"xindong.m4a"];
    
    if ([[NSFileManager defaultManager] fileExistsAtPath:outPutFilePath]) {
        [[NSFileManager defaultManager] removeItemAtPath:outPutFilePath error:nil];
    }
    
    // Log the file types this session supports
    NSLog(@"---%@",[session supportedFileTypes]);
    session.outputURL = [NSURL fileURLWithPath:outPutFilePath];
    session.outputFileType = AVFileTypeAppleM4A; // must match the preset above
    session.shouldOptimizeForNetworkUse = YES;   // optimize for network playback
    
    [session exportAsynchronouslyWithCompletionHandler:^{
        if (session.status == AVAssetExportSessionStatusCompleted) {
            NSLog(@"Merge succeeded----%@", outPutFilePath);
            
            NSURL *url = [NSURL fileURLWithPath:outPutFilePath];
            
            self.myPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];
            
            self.myPlayer.delegate = self;
            [self.myPlayer play];
            
        } else {
            // Any other status -- see `AVAssetExportSessionStatus` for details.
            // Export failed, so deliver the notification right away.
            self.aVAudioPlayerFinshBlock();
        }
    }];
    
    /************************Stitch the audio files and play*****************************/
}
#pragma mark- AVAudioPlayerDelegate
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag
{
    if (self.aVAudioPlayerFinshBlock) {
        self.aVAudioPlayerFinshBlock();
    }
}

- (void)audioPlayerDecodeErrorDidOccur:(AVAudioPlayer*)player error:(NSError *)error{
    // Handle decode errors here
}
- (void)audioPlayerBeginInterruption:(AVAudioPlayer*)player{
    // Handle the start of an interruption here
}
- (void)audioPlayerEndInterruption:(AVAudioPlayer*)player{
    // Handle the end of an interruption here
}


- (NSString *)filePath {
    if (!_filePath) {
        _filePath = [NSSearchPathForDirectoriesInDomains(NSLibraryDirectory, NSUserDomainMask, YES) firstObject];
        NSString *folderName = [_filePath stringByAppendingPathComponent:@"MergeAudio"];
        BOOL isCreateSuccess = [kFileManager createDirectoryAtPath:folderName withIntermediateDirectories:YES attributes:nil error:nil];
        if (isCreateSuccess) _filePath = [folderName stringByAppendingPathComponent:@"xindong.m4a"];
    }
    return _filePath;
}
- (void)serviceExtensionTimeWillExpire {
    // Called just before the extension will be terminated by the system.
    // Use this as an opportunity to deliver your "best attempt" at modified content, otherwise the original push payload will be used.
    self.contentHandler(self.bestAttemptContent);
}
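The demo above hardcodes `fileNameArray`; in a real app you would derive it from the push payload. Here is a sketch of one way to do that, assuming the server sends the amount as a string and that the bundle contains clips named `daozhang`, `0`–`9`, and `dian` for the decimal point (the helper, field, and clip names are my assumptions, not part of the demo):

```objc
// Hypothetical helper: turn an amount string such as @"128.5" into the clip list
// @[@"daozhang", @"1", @"2", @"8", @"dian", @"5"] to feed into the merge loop above.
- (NSArray<NSString *> *)fileNamesForAmount:(NSString *)amount {
    NSMutableArray<NSString *> *names = [NSMutableArray arrayWithObject:@"daozhang"];
    for (NSUInteger i = 0; i < amount.length; i++) {
        unichar c = [amount characterAtIndex:i];
        if (c == '.') {
            [names addObject:@"dian"];  // decimal-point clip (assumed name)
        } else if (c >= '0' && c <= '9') {
            [names addObject:[NSString stringWithFormat:@"%C", c]];
        }
    }
    return names;
}
```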

Of course, you can also play the announcement with Apple's built-in speech API, AVSpeechSynthesisVoice / AVSpeechSynthesizer.

Create a voice with AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"zh-CN"]; and then:

 

// Create the speech synthesizer
    synthesizer = [[AVSpeechSynthesizer alloc] init];
    synthesizer.delegate = self;

    // Create the utterance to be spoken
    AVSpeechUtterance *utterance = [AVSpeechUtterance speechUtteranceWithString:content];
    utterance.voice = voice;
    utterance.rate = 0.5; // speech rate

    // Speak the content
    [synthesizer speakUtterance:utterance];

This speaks the content aloud, as an alternative to playing pre-recorded clips.

 

Now let's run the code and see the result:

1. Select the extension scheme,

then choose your app as the host app to run it against.

Next it's time to test by actually sending a push.

For this I recommend SmartPush, a push-notification testing tool (link: SmartPush).

Run it, then select the matching push certificate and the device token.

To test pushes with the app in the background (and to have the service extension invoked at all), the payload must include the extra field "mutable-content":1.

For example:

{
  "aps":{
    "alert":{
      "title":"iOS 10 title",
      "subtitle":"iOS 10 subtitle",
      "body":"iOS 10 body"
    },
    "mutable-content":1,
    "category":"saySomethingCategory",
    "sound":"default",
    "badge":3
  }
}

Click send, and you can test the voice announcement with the app in the foreground, in the background, or killed.

That's my collection of notes and hands-on experience with voice announcements for incoming payments.

As for which scheme to select when archiving for App Store release, my personal feeling is to archive with the extension's scheme. I'm still testing this and haven't released yet; I'll update this post when I have time.

The demo for this article is available here: 艾鑫文学社

 

Update, April 23, 2019: (I've noticed many readers are still upvoting this post, so I want to share a problem I've since discovered, to save those following along some wasted effort.)

Testing shows that starting with iOS 12.0.1, the announcement no longer plays while the app is in the background. Stepping through with breakpoints and logs showed that the audio stitching itself still works; the problem is that AVAudioPlayer can no longer play in that state. As a temporary workaround I tried iOS VoIP push, which does make the system allow AVAudioPlayer playback again. However, VoIP push usage is reviewed quite strictly, so if you go that route, look carefully into the App Store review implications first. (Whether a later iOS version will re-enable background AVAudioPlayer playback remains to be seen.)
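For completeness, the VoIP workaround mentioned above is set up through the PushKit framework; a minimal registration sketch (note that Apple expects VoIP pushes to be used for actual incoming calls, which is why review is strict):

```objc
#import <PushKit/PushKit.h>

// Register for VoIP pushes; the delegate receives the VoIP token and incoming pushes.
PKPushRegistry *registry = [[PKPushRegistry alloc] initWithQueue:dispatch_get_main_queue()];
registry.delegate = self; // must conform to PKPushRegistryDelegate
registry.desiredPushTypes = [NSSet setWithObject:PKPushTypeVoIP];
```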

 

Finally, here is a related write-up on WeChat's payment-received voice alert, "微信 iOS 收款到账语音提醒开发总结", for further reference when you have time.