Create your own animated emojis with Unity!

With the release of the iPhone X, Apple popularized the use of “Animojis” for sending animated messages, and everyone was showing off their newfound karaoke animation skills. Less well known was the fact that Apple had also released a face tracking API for the iPhone X which allows you to create your own animated emojis. Unity already supports ARKit Face Tracking, but in this blog we’ll show you how to use this API to create your own version of these animated messages, or even facial animations within your games or homemade videos.

As mentioned in the previous blog, ARKit face tracking returns coefficients for the expressions on your face. In the example included previously, we simply printed those coefficients on the screen. In the new example we’re describing here, we use a virtual face set up in our scene to mimic our expressions and thus create our animated emoji.
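To give a feel for what face tracking hands us each frame, here is a minimal sketch of reading those coefficients through the Unity ARKit Plugin’s face anchor event. Each coefficient is a value from 0.0 (neutral) to 1.0 (fully expressed); the class name `FaceCoefficientLogger` is our own, and the plugin API names are from the plugin version current at the time of writing:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.iOS; // Unity ARKit Plugin namespace

// Sketch: subscribe to face-anchor updates and log each
// blendshape coefficient (e.g. "jawOpen" -> 0.73).
public class FaceCoefficientLogger : MonoBehaviour
{
    void Start()
    {
        UnityARSessionNativeInterface.ARFaceAnchorUpdatedEvent += FaceUpdated;
    }

    void FaceUpdated(ARFaceAnchor anchor)
    {
        foreach (KeyValuePair<string, float> shape in anchor.blendShapes)
        {
            Debug.LogFormat("{0}: {1:F2}", shape.Key, shape.Value);
        }
    }

    void OnDestroy()
    {
        UnityARSessionNativeInterface.ARFaceAnchorUpdatedEvent -= FaceUpdated;
    }
}
```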

Create blendshapes

The content needs to be created with the intention of using the blendshape coefficients returned from face tracking as parameters into the animation of the virtual face. In our case, we created the head of a stylized sloth whose face was manipulated to conform to the shape of each of the different coefficients we have. Then we set up all the different shapes to be blended together in our content creation software (e.g. Maya or Blender). We named each of the blendshapes such that they could be easily identified and matched to the coefficients returned from the SDK.

In the case of Mr. Sloth, we used all 51 blendshape locations that face tracking gives us. We could have opted to use fewer blendshapes and still convey the characteristics of our virtual face. For example, we could have used a more stylized face that reacts only to coarse shape locations like eyeLook, jawOpen, or mouthClose, and not to more subtle ones.

For each blendshape coefficient that we use, we would create a blendshape that would convey the expression of that particular part of the face based on the reference shape given in the ARKit SDK.

For example, ARKit SDK gives us this reference image for jawOpen. For our Sloth face, we create a blendshape called jawOpen that looks like this (left image is base mesh, right image is mesh with fully open jaw):

Continue to do this for all the shapes you want to support. Next, create the mesh with the blendshapes using the guide for your content creation software (e.g., follow these steps for Maya). Finally, export the whole sloth head mesh as an FBX file so that it can be imported into Unity.

Set it up in Unity

In Unity, we drop the FBX file described above into an Assets folder, where it is imported and turned into a Unity Mesh with a SkinnedMeshRenderer containing the list of blendshapes. We then use this mesh in a scene like FaceBlendshapeSloth, which is a new example scene in the Unity ARKit Plugin code.
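A quick way to confirm the import worked is to list the blendshapes Unity found on the mesh and check that their names match the ARKit coefficient names (jawOpen, eyeBlinkLeft, and so on). This is a small diagnostic sketch of our own, not part of the plugin:

```csharp
using UnityEngine;

// Sketch: attach to the GameObject with the SkinnedMeshRenderer
// and log every blendshape name imported from the FBX.
public class BlendshapeLister : MonoBehaviour
{
    void Start()
    {
        var smr = GetComponent<SkinnedMeshRenderer>();
        Mesh mesh = smr.sharedMesh;
        for (int i = 0; i < mesh.blendShapeCount; i++)
        {
            Debug.Log(mesh.GetBlendShapeName(i));
        }
    }
}
```

If a name printed here differs from the corresponding ARKit coefficient name, rename the blendshape in your content creation software and re-export the FBX.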

We need to set a reference to the sloth mesh in the scene on the ARFaceAnchorManager GameObject, which keeps track of your face, placing and rotating the sloth face as you move your head around.
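The anchor-tracking piece can be sketched roughly as follows: the face anchor carries a transform matrix, and we copy its position and rotation onto the sloth head each time the anchor updates. The class name `FaceAnchorFollower` is our own; `UnityARMatrixOps` is the plugin’s matrix helper:

```csharp
using UnityEngine;
using UnityEngine.XR.iOS;

// Sketch: make a target transform (the sloth head) follow
// the position and rotation of the ARKit face anchor.
public class FaceAnchorFollower : MonoBehaviour
{
    public Transform slothHead; // assign the sloth mesh in the Inspector

    void Start()
    {
        UnityARSessionNativeInterface.ARFaceAnchorUpdatedEvent += FaceUpdated;
    }

    void FaceUpdated(ARFaceAnchor anchor)
    {
        slothHead.position = UnityARMatrixOps.GetPosition(anchor.transform);
        slothHead.rotation = UnityARMatrixOps.GetRotation(anchor.transform);
    }

    void OnDestroy()
    {
        UnityARSessionNativeInterface.ARFaceAnchorUpdatedEvent -= FaceUpdated;
    }
}
```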

Then we put the script BlendshapeDriver.cs on the GameObject that has the SkinnedMeshRenderer component. This script takes the blendshape coefficients from face tracking and plugs each value (multiplied by 100 to convert ARKit’s 0–1 coefficients to Unity’s 0–100 percentages) into the blendshape weight with the same name in the SkinnedMeshRenderer’s list of blendshapes.
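The core of that driver looks roughly like the sketch below (a simplified reconstruction, not the plugin file verbatim): look up each coefficient’s name in the mesh, and if a matching blendshape exists, set its weight.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.iOS;

// Sketch of the driver logic: copy each ARKit coefficient (0..1)
// into the identically named blendshape weight (0..100).
public class BlendshapeDriverSketch : MonoBehaviour
{
    SkinnedMeshRenderer smr;

    void Start()
    {
        smr = GetComponent<SkinnedMeshRenderer>();
        UnityARSessionNativeInterface.ARFaceAnchorUpdatedEvent += FaceUpdated;
    }

    void FaceUpdated(ARFaceAnchor anchor)
    {
        foreach (KeyValuePair<string, float> shape in anchor.blendShapes)
        {
            int index = smr.sharedMesh.GetBlendShapeIndex(shape.Key);
            if (index >= 0)
            {
                smr.SetBlendShapeWeight(index, shape.Value * 100.0f);
            }
        }
    }

    void OnDestroy()
    {
        UnityARSessionNativeInterface.ARFaceAnchorUpdatedEvent -= FaceUpdated;
    }
}
```

Matching by name is what makes the naming convention from the content-creation step matter: any blendshape whose name does not exactly match an ARKit coefficient is silently skipped.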

Now if you build this scene to your iPhone X, you should see Mr. Sloth’s head move with your head and make the same expressions you do. You can use iOS video recording to send an animated Slothoji to your friends, or use the sloth face to talk trash (slowly) at your rivals in your Unity game.

Have fun!

As you can see, setting up a virtual character whose facial animation is controlled by your face is pretty easy on iPhone X using Unity. You can have a lot of fun recording animated messages and movies for your friends and loved ones. Please tweet us your creations and slowjam karaokes to @jimmy_jam_jam, and send us any questions or suggestions on the forums.

Translated from: https://blogs.unity3d.com/2017/12/03/create-your-own-animated-emojis-with-unity/