DiipuSurotu

Banned
Oct 25, 2017
53,148
Square Enix spent over a decade working on Final Fantasy 7 Remake, but the passion project was not the only thing eating away at precious resources. The developer was also building an advanced piece of technology alongside it for one specific purpose: to make the in-game characters look as lifelike as possible.

Speaking with Edge for the latest issue, co-director Naoki Hamaguchi explained how artificial intelligence (AI) decides which expressions each character should make during conversations. The program detects the emotion in every line of dialogue delivered in Final Fantasy 7 Remake and manipulates the faces accordingly.

"You can generally tell the emotional content of any piece of dialogue from the intonation and patterns, if you look at it on a graph and at how the levels of tension go up and down.
So we took a number of samples from various different voice data, downloaded them into a database, and then looked for the patterns to create a system where we could get a very high level of recognition on the actual emotional content of any piece of dialogue."
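
To give a rough idea of what that kind of pattern-matching could look like in practice, here's a minimal sketch of prosody-based emotion tagging: pull the pitch and energy contours out of a recorded line and feed summary statistics to a small classifier. Everything in it (the feature set, the emotion labels, the file names, librosa and scikit-learn as the tooling) is my own assumption for illustration, not Square Enix's actual system.

```python
# Minimal sketch of prosody-based emotion tagging, loosely inspired by the
# quoted description above. Features, labels, and files are all assumptions.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def prosody_features(wav_path: str) -> np.ndarray:
    """Summarise a voice line by its pitch and energy contours
    (the "intonation" and "levels of tension" going up and down)."""
    y, sr = librosa.load(wav_path, sr=None)
    f0, voiced, _ = librosa.pyin(y, fmin=65.0, fmax=1000.0, sr=sr)
    f0 = f0[voiced & np.isfinite(f0)]        # keep only voiced pitch frames
    rms = librosa.feature.rms(y=y)[0]        # frame-wise energy

    def stats(x):
        return [np.mean(x), np.std(x), np.ptp(x)] if len(x) else [0.0, 0.0, 0.0]

    return np.array(stats(f0) + stats(rms) + [len(y) / sr])

# Train on a labelled database of sample lines (paths and labels are placeholders).
train_paths = ["lines/cloud_001.wav", "lines/tifa_004.wav"]
train_labels = ["angry", "happy"]
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(np.stack([prosody_features(p) for p in train_paths]), train_labels)

# At runtime, every delivered line gets an emotion tag that could drive the face rig.
print(clf.predict([prosody_features("lines/aerith_017.wav")]))
```

The predicted tag for a new line would then presumably be mapped onto a bank of facial expression presets for whichever character is speaking.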

Hamaguchi confirmed that the same AI also handles lip-syncing and can even calculate the optimal angle (and distance) from the characters speaking in a scene, automatically moving the camera to that spot. The technology was being built "pretty much throughout the entire development cycle" and is hence as old as Final Fantasy 7 Remake itself.
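
The camera part is easier to picture with a bit of vector math. Below is a toy example, not the actual implementation: given where the speaking character's head is and which way they're facing, it picks a camera position at a set distance and horizontal angle and aims it back at the face. All names, parameters, and framing rules here are made up for the sketch.

```python
# Toy illustration of auto-framing the speaking character. Vector math only;
# the distance/angle defaults and framing rules are assumptions.
import numpy as np

def frame_speaker(head_pos, facing_dir, distance=2.0, angle_deg=30.0, height=0.2):
    """Return (camera_position, look_at) for a simple front-facing shot.

    head_pos   -- 3D position of the speaker's head
    facing_dir -- unit vector the speaker is facing (on the ground plane)
    distance   -- how far the camera sits from the head
    angle_deg  -- horizontal offset from straight-on, so the face stays visible
    height     -- small vertical offset so the shot isn't perfectly level
    """
    facing_dir = facing_dir / np.linalg.norm(facing_dir)
    theta = np.radians(angle_deg)
    # Rotate the facing direction around the vertical (y) axis by angle_deg.
    rot = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                    [0.0,           1.0, 0.0          ],
                    [-np.sin(theta), 0.0, np.cos(theta)]])
    offset = (rot @ facing_dir) * distance + np.array([0.0, height, 0.0])
    camera_pos = head_pos + offset       # in front of and slightly to one side
    return camera_pos, head_pos          # aim the camera back at the head

cam, target = frame_speaker(np.array([0.0, 1.6, 0.0]), np.array([0.0, 0.0, 1.0]))
print(cam, target)
```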

More at: https://segmentnext.com/2020/04/02/final-fantasy-7-remake-ai-facial-expressions/

Tell me words aren't the only way to tell someone how you feel if old
 

CielTynave

Member
Oct 25, 2017
2,345
Huh

I remember when one of the first few trailers dropped, some people noticed the animations were different between the English and JP versions. Guess that explains it?
 

Jakenbakin

"This guy are sick" and Corrupted by Vengeance
Member
Jun 17, 2018
13,003
Didn't people say in the spoiler thread that lip syncing isn't great? Does this technology only apply to the Japanese track, then, or was my limited scanning of that thread off base?
 

Dio

Member
Oct 25, 2017
8,215
Didn't people say in the spoiler thread that lip syncing isn't great? Does this technology only apply to the Japanese track, then, or was my limited scanning of that thread off base?
i mean, the demo looked fine, and id imagine it uses the same technology for the lipsync. only time will tell, i guess.
 

Kuro

Member
Oct 25, 2017
22,836
That sounds like instead of having to manually link up emotions with face animations for random NPCs, devs can save some time with this. It will undoubtedly still end up with stupid glitches though.
 

dodo

Member
Oct 27, 2017
4,123
Didn't people say in the spoiler thread that lip syncing isn't great? Does this technology only apply to the Japanese track, then, or was my limited scanning of that thread off base?

iirc people are saying a lot of minor characters/scenes feel a bit weird because the models' mouths move a lot and not much else, which actually sounds exactly like the kind of problem you'd run into with tech like this. if it can approximate the facial expression and mouth movements that's one thing, but i imagine it's much harder to match body language to that with AI