Yessssssss!!!!! However, we need something that doesn't rely on an iPhone. We need a webcam. You can use your iPhone as a webcam, and you can also use more powerful video devices as webcams. I would love a DIY "mudface" map: a black-and-white displacement map that captures the wrinkles of the face, which you could then map with Blender trackers. Seriously though, this is a huge leap towards that future.
This repo doesn’t provide any computer vision algorithms. It’s taking the values the phone is providing for facial activations.
You’re asking for a different project altogether.
Here you go:
https://3d.kalidoface.com/
100% webcam-based skeletal body and facial blendshape tracking. The models are from Google and are open source.
As others have said, it's using the iOS facial detection API, which relies on the front TrueDepth camera (i.e. the camera used for Face ID).
Is it using structured light / lidar of the iPhone, or just the camera? I don’t know how the project works, but calling out iPhone specifically makes me think it’s using a hardware feature that isn’t in a generic webcam.
It's specifically using ARKit facial tracking, which gives you FACS-style blend shape values:
https://developer.apple.com/documentation/ARKit/tracking-and...
This Blender plugin is basically just receiving those values from the OS API and applying them. It's a fairly common integration, and almost all alternatives likewise depend on ARKit on an iPhone rather than implementing any algorithms themselves.
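To make the "just forwarding values" point concrete, here's a minimal sketch of what the Blender-side receiver boils down to. This is not the actual plugin's code; the JSON packet format, the mapping table, and the shape key names are all hypothetical, and a real add-on would write into `bpy` shape key blocks instead of a plain dict:

```python
import json

# Hypothetical mapping from ARKit blendshape names to a rig's shape key
# names. Real ARKit exposes around 52 such coefficients.
ARKIT_TO_SHAPE_KEY = {
    "jawOpen": "jawOpen",
    "eyeBlinkLeft": "eyeBlink_L",
    "eyeBlinkRight": "eyeBlink_R",
}

def apply_packet(packet_bytes, shape_keys):
    """Decode one JSON packet from the companion app and copy the
    0.0-1.0 coefficients onto the matching shape keys."""
    coeffs = json.loads(packet_bytes)
    for arkit_name, value in coeffs.items():
        key = ARKIT_TO_SHAPE_KEY.get(arkit_name)
        if key is not None:
            # Clamp, since streamed values can occasionally overshoot.
            shape_keys[key] = max(0.0, min(1.0, value))
    return shape_keys

# Stand-in for one frame arriving over the network.
packet = json.dumps({"jawOpen": 0.42, "eyeBlinkLeft": 1.3}).encode()
print(apply_packet(packet, {}))
```

The point is that all the hard computer-vision work happens on the phone; the desktop side is a thin mapping layer.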
Variations of this plugin's functionality have existed since the introduction of the iPhone X in 2017.
The face tracking trick (generating a 3D vertex mesh from the video) should also be doable with a homelab setup. I assume LiDAR would improve the signal a lot by adding ground-truth depth values, though.
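The reason depth helps is that a plain webcam only gives you 2D landmark positions; with a per-pixel depth measurement you can lift each landmark into 3D directly via the standard pinhole camera model. A rough sketch, with made-up camera intrinsics:

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole model: lift a 2D landmark (u, v) in pixels, with a measured
    depth in metres, to a 3D point in camera space."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Example intrinsics (assumed): a 640x480 camera with focal length 500 px
# and principal point at the image centre.
p = backproject(u=400, v=300, depth=0.5, fx=500, fy=500, cx=320, cy=240)
print(p)  # ~8 cm right and ~6 cm down of centre, 50 cm from the camera
```

Without measured depth, monocular trackers have to infer the depth coordinate from a learned face model, which is where most of the wobble comes from.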
Would it be possible to integrate this directly into Blender instead of as an add-on? If so, note that Blender is GPLv2 and this is GPLv3; if merging is something you see in the future, you may want to change that.
This is something that is unlikely to be merged into Blender.
It’s not usable standalone as it requires a companion app and a companion device.
If Blender did want to integrate it, there’s nothing novel here that would prevent them writing their own. There’s plenty of similar plugins, and it’s just forwarding events from the companion device.
The place where it would make the most sense to add would be for Blender on the iPad where it would require no companion device at all.
Being able to record and manage takes directly in Blender would be an awesome feature and first thing that pops into mind :)
I have an add-on that does the same thing as OP's, with free (open source) and paid versions. The paid version lets you record directly to Blender.
https://nickfisher.gumroad.com/l/tvzndw
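Recording takes essentially means buffering timestamped frames of coefficients as they stream in, then baking them to keyframes at the scene's frame rate. A minimal sketch (all names hypothetical, and a real add-on would insert keyframes on `bpy` shape keys rather than return tuples):

```python
import time

class Take:
    """Hypothetical take recorder: buffers (elapsed_seconds, coefficients)
    frames so they can later be baked to Blender keyframes."""

    def __init__(self):
        self.frames = []
        self.start = None

    def record(self, coeffs, t=None):
        t = time.monotonic() if t is None else t
        if self.start is None:
            self.start = t
        self.frames.append((t - self.start, dict(coeffs)))

    def to_keyframes(self, fps=24):
        # Convert elapsed seconds to 1-based frame numbers at the given rate.
        return [(round(elapsed * fps) + 1, coeffs)
                for elapsed, coeffs in self.frames]

take = Take()
take.record({"jawOpen": 0.1}, t=10.0)
take.record({"jawOpen": 0.6}, t=10.5)
print(take.to_keyframes())  # [(1, {'jawOpen': 0.1}), (13, {'jawOpen': 0.6})]
```

Keeping the raw timestamped frames around (rather than only keyframes) also makes retiming or re-baking at a different frame rate trivial.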
Is there a GitHub repo for the add-on? I can't find it from a quick scroll through the GitHub profile linked from your HN profile.
Edit: nvm, found it https://github.com/nmfisher/blender_livelinkface
I would love to play around with this, but I don't own an iPhone :/ Using a webcam + a local model for detection as input would be awesome.
There's a nice Blender extension called "FaceIt" that I used a few years ago for rigging and producing ARKit-compatible facial animations and characters. It worked quite well (for what it was designed), and I recommend it!
https://superhivemarket.com/products/faceit
>Faceit is a Blender Add-on that assists you in creating complex facial expressions for arbitrary 3D characters.
>An intuitive, semi-automatic and non-destructive workflow guides you through the creation of facial shape keys that are perfectly adapted to your 3D model's topology and morphology, whether it’s a photorealistic human model or a cartoonish character. You maintain full artistic control while saving a ton of time and energy.
https://faceit-doc.readthedocs.io/en/latest/FAQ/
This is a great explanation of how FaceIt works, facial animation, shape keys, face rigs, ARKit, etc:
This addon automates Facial Animation (FACEIT Tut 1)
https://www.youtube.com/watch?v=KQ32KRYq6RA&list=PLdcL5aF8Zc...