Deleted member 44129

User requested account closure
Banned
May 29, 2018
7,690
Remember scanning your head into Rainbow 6 Vegas? That was awesome. I had assumed at the time that this tech would eventually be standard, but it never happened. I want to be able to scan my head/face and build an accurate model that can then be imported into different games.

Surely this should be a thing by now.
 

Tempy

Member
Oct 25, 2017
3,336
Remember scanning your head into Rainbow 6 Vegas? That was awesome. I had assumed at the time that this tech would eventually be standard, but it never happened. I want to be able to scan my head/face and build an accurate model that can then be imported into different games.

Surely this should be a thing by now.
Imagine going online and facing a horde of people with dicks or anuses on their face.

That's why.
 

adumb

Banned
Aug 17, 2019
548
Probably the yikes factor of shooting representations of actual players. I think Perfect Dark was supposed to have something like this, but it was pulled. I'm surprised to hear another game actually did it, considering the risk of blowback from 'concerned parties'.
 

Chivalry

Chicken Chaser
Banned
Nov 22, 2018
3,894
High-res meshes need expensive setups, cleaning, rigging, etc. to look even remotely good and to work in-game.
 

MrConbon210

Member
Oct 31, 2017
7,779
What types of games would you realistically want to have your exact head modeled in? Most people who create a character in games don't necessarily model it after themselves. It's like the option on the 3DS to have your Mii created from a photo. People would rather do it themselves.
 
Oct 29, 2017
14,486
The increasing complexity of 3D models, I assume. I imagine it isn't easy to pass them off as decent now that graphics fidelity is considerably higher.
 

Booya_base

Member
Oct 31, 2017
780
Jersey
Thank you OP, we talk about this all the time IRL. R6 Vegas was amazing; it was hilarious to see how it messed up each other's faces, but it was still recognisable. It's insane we haven't gotten that much more often since then. That insane face-tracking tech in Star Citizen gives me some hope for the future, though.
 
OP

Deleted member 44129

User requested account closure
Banned
May 29, 2018
7,690
Probably the yikes factor of shooting representations of actual players. I think Perfect Dark was supposed to have something like this, but it was pulled. I'm surprised to hear another game actually did it, considering the risk of blowback from 'concerned parties'.
This is a really good point, but surely the answer would be to put some rules in place? Certain genres can't do head models, or it has to be passed via Sony/Microsoft/Nintendo on a per game basis?
 
Feb 10, 2018
17,534
If it could be done with a Game Boy Camera and an N64 with Perfect Dark, I'm sure it could be done with a smartphone and a modern game console.
 

Deleted member 12790

User requested account closure
Banned
Oct 27, 2017
24,537
Modern graphics don't work like you're assuming, where it's just throwing a bitmap image over some polygons and stretching it over everything. Modern shader pipelines do things in stages, and the result of what we see is many textures -- really, more like arrays of data -- composited into a final image. Here's an example of the "textures" that make up a small model:

[image: basic guide to texture map types]


A normal camera could only really capture the diffuse texture. Maaaaybe a 3D scan camera could create the normals texture as well if the tool was easy enough to use. But without a lot of work by hand, it'll look very, very odd compared to the rest of the game. The faces will stand out, they'll almost literally be glowing.
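To make that concrete, here's a toy sketch (plain NumPy, with made-up 2x2 "textures" and a trivially simplified shading model) of how several of those maps get composited into a final pixel. A photo only gives you the diffuse map; the normal and roughness terms below would simply be missing:

```python
import numpy as np

# Toy compositing of the maps above (2x2 "textures", values in [0, 1]).
diffuse = np.full((2, 2, 3), 0.6)        # base colour -- what a photo captures
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0                    # flat surface, every normal facing +Z
rough = np.full((2, 2), 0.5)             # microsurface roughness

light_dir = np.array([0.0, 0.0, 1.0])    # light shining straight on

# Lambert term driven by the normal map
ndotl = np.clip(np.einsum('ijk,k->ij', normals, light_dir), 0.0, 1.0)

# Crude specular term damped by roughness; without a roughness map this
# maxes out everywhere and the face looks waxy -- the "glowing" effect.
spec = (1.0 - rough) * ndotl * 0.2

final = diffuse * ndotl[..., None] + spec[..., None]
```

With `rough` and `normals` missing (or guessed badly), `final` shifts visibly relative to every hand-authored asset around it, which is why scanned faces stand out.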
 
OP

Deleted member 44129

User requested account closure
Banned
May 29, 2018
7,690
Modern graphics don't work like you're assuming, where it's just throwing a bitmap image over some polygons and stretching it over everything. Modern shader pipelines do things in stages, and the result of what we see is many textures -- really, more like arrays of data -- composited into a final image. Here's an example of the "textures" that make up a small model:

[image: basic guide to texture map types]


A normal camera could only really capture the diffuse texture. Maaaaybe a 3D scan camera could create the normals texture as well if the tool was easy enough to use. But without a lot of work by hand, it'll look very, very odd compared to the rest of the game. The faces will stand out, they'll almost literally be glowing.
Well, I'd assume that if it were console level, they'd do something so that the model of your head can easily convert into the major, commonly used engines?
 

Deleted member 12790

User requested account closure
Banned
Oct 27, 2017
24,537
Well, I'd assume that if it were console level, they'd do something so that the model of your head can easily convert into the major, commonly used engines?

There is no easy conversion process, it's not a model problem, it's a problem of textures. It's like trying to take a black and white image and use it in a color scene. It'll look odd, because it's lacking data that needs to be there to look right. You have to create that data by hand, using more than just a camera.
 

DieH@rd

Member
Oct 26, 2017
11,358
Do 2K NBA games still have face scanning support?

I'm not against the idea. It would be neat to play through Mass Effect or C2077 [or any number of other games with char-generator] as myself.
 
OP

Deleted member 44129

User requested account closure
Banned
May 29, 2018
7,690
There is no easy conversion process, it's not a model problem, it's a problem of textures. It's like trying to take a black and white image and use it in a color scene. It'll look odd, because it's lacking data that needs to be there to look right. You have to create that data by hand, using more than just a camera.
Dammit, why does cutting edge modern technology have to be so COMPLICATED!?!?
 
Oct 29, 2017
2,103
NL
Probably the yikes factor of shooting representations of actual players. I think Perfect Dark was supposed to have something like this, but it was pulled. I'm surprised to hear another game actually did it, considering the risk of blowback from 'concerned parties'.

Man, if only. The new faces in Perfect Dark HD are worse than the original.
 

Foltzie

One Winged Slayer
The Fallen
Oct 26, 2017
6,981
In addition to the complications of making it work well...

No one cared in the games that have done it. R6V didn't become a GOAT with that feature. The Wii U had a version of it and, well, it didn't move any more systems (even though it was featured on the Tonight Show).
 

caff!!!

Member
Oct 29, 2017
3,084
I'd assume that such features take away dev time from more important things in most games, and were mostly infamous for creating awful masterpieces.
 

Bartis

Member
Dec 30, 2017
260
Gears 5's droids have a screen for a face that projects a video of a face onto it. That would work for me.
 

Akela

Member
Oct 28, 2017
1,915
If there's any server based processing then that probably opens a huge can of worms in terms of privacy that I doubt many devs want to deal with.

If it's done entirely on console then that limits the accuracy of the result. If the game isn't using any fancy machine learning algorithms (that stuff is really new btw, barely any games even use basic style transfer stuff) then the options are pretty much to do what they did in the PS2 era and project a photo onto a pre-made 3D mesh, which now has much higher visual fidelity, thereby making the whole thing look far creepier, or to do an actual 3D scan of someone's face, which is... difficult to do on a console. At that point you're talking about building a companion app to the game using the iOS Face ID API or something, since that's the only quick way I can think of to build a depth map of someone's face. Once you have that data you then need to build an algorithm to process it, including displacing the polygons of the face and reconstructing the standard PBR texture set from that photo (which of course has to be done automatically; you're not going to have your players use some Substance B2M equivalent to do it manually, right?). And the final result absolutely falls apart the moment the camera comes in close to the character, so it would work best in a game like Bloodborne where the camera is far enough away for the seams to disappear, in which case this whole endeavour has been pointless.

In the end, we're talking about a substantial amount of work for a feature that, let's be honest here, not many people care about. For the games that actually have a feature like this, how many players actually take advantage of it? I can't imagine it's a whole lot.

Really, I imagine most people don't want to see their actual faces in games when they can build an idealised version of themselves in the standard character creator. The idea of my player character in Skyrim actually being me, and not some vague facsimile of me, is a bit weird to think about. Would make for some fun clips on YouTube, no doubt, though.
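For what it's worth, the "displacing the polygons" step on its own is the easy part. A toy NumPy sketch (pinhole-style projection; the function name, scale factor, and numbers are all invented) of turning a face depth map into a vertex grid:

```python
import numpy as np

def displace_grid(depth, fov_scale=1.0):
    """Turn an (H, W) depth map (e.g. from a phone's depth sensor) into a
    grid of 3D vertex positions -- a crude version of 'displacing the
    polygons of the face'. Everything after this (textures, seams,
    rigging) is the hard part."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Normalised image-plane coordinates, scaled by depth (pinhole model)
    x = (xs / (w - 1) - 0.5) * depth * fov_scale
    y = (ys / (h - 1) - 0.5) * depth * fov_scale
    return np.stack([x, y, depth], axis=-1)

verts = displace_grid(np.full((3, 3), 2.0))  # flat "face" 2 units away
```

The resulting grid is just raw geometry; none of the PBR texture reconstruction or seam problems described above go away.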
 

Harlequin

Banned
Oct 27, 2017
1,614
There is no easy conversion process, it's not a model problem, it's a problem of textures. It's like trying to take a black and white image and use it in a color scene. It'll look odd, because it's lacking data that needs to be there to look right. You have to create that data by hand, using more than just a camera.
I imagine if you wanted to, you could use machine learning to create a solution for that sort of thing. Train an algorithm with a bunch of professional-grade, high quality face scans that include things like roughness and normal data until it can guesstimate what that data should look like when it's not supplied. The results wouldn't be as good as a proper scan, of course, but I don't think it'd be impossible to do. It's just nobody really thinks the effort is worth it, apparently.
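For context, the non-ML baseline such a model would be competing against is something like the classic "normals from brightness gradients" trick. A rough NumPy sketch (function name and scale factor invented):

```python
import numpy as np

def normals_from_height(height, strength=1.0):
    """Guesstimate a tangent-space normal map from a single channel by
    treating brightness as height -- the crude classical baseline that a
    trained model, as suggested above, would try to improve on."""
    gy, gx = np.gradient(height.astype(np.float64))  # slope of the "height field"
    n = np.stack([-gx * strength,
                  -gy * strength,
                  np.ones_like(gx)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)   # normalise to unit length
    return n * 0.5 + 0.5                             # pack into [0, 1] like a normal map

# A featureless input yields the flat, all-"blue" normal map
flat = normals_from_height(np.zeros((4, 4)))
```

The failure mode is obvious: dark skin detail, shadows, and facial hair all read as "bumps", which is exactly the gap a model trained on professional scans would be meant to close.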
 
Nov 14, 2017
4,929
I imagine if you wanted to, you could use machine learning to create a solution for that sort of thing. Train an algorithm with a bunch of professional-grade, high quality face scans that include things like roughness and normal data until it can guesstimate what that data should look like when it's not supplied. The results wouldn't be as good as a proper scan, of course, but I don't think it'd be impossible to do. It's just nobody really thinks the effort is worth it, apparently.
The output of machine learning is kind of like a magic trick though. When it works it looks amazing, but when the trick fails it looks really really dumb.
 

Arebours

Member
Oct 27, 2017
2,656
It's difficult to get it right without proper lighting and a static subject if you use something like photogrammetry. Machine learning might be a great help here, though.
 

blonded

Member
Oct 30, 2017
1,130
There is no easy conversion process, it's not a model problem, it's a problem of textures. It's like trying to take a black and white image and use it in a color scene. It'll look odd, because it's lacking data that needs to be there to look right. You have to create that data by hand, using more than just a camera.
What about a photogrammetry-type approach?
 

Deleted member 12790

User requested account closure
Banned
Oct 27, 2017
24,537
What about a photogrammetry-type approach?

A better approach would be, like the poster above said, using AI. And actually, that'll probably happen eventually. Like that poster said, someone has to actually create the model, however, and more importantly, start collecting the training data. It'll take time to develop something like that, though.
 

Harlequin

Banned
Oct 27, 2017
1,614
The output of machine learning is kind of like a magic trick though. When it works it looks amazing, but when the trick fails it looks really really dumb.
This would be a highly specific use case, though. We know the kind of input device that will be used (Kinect or PS Camera), we know the kind of data to expect (human face) and the output data isn't all that highly variable. Where machine learning algorithms tend to fail is when you introduce unexpected factors and use cases. But our use case is so specific that that shouldn't be too much of a problem. The biggest potential for issues would be the capturing of user faces. Variable lighting conditions, varying distances between camera and subject, varying degrees of motion, angles, people trying to capture things other than human faces, etc. And I'd imagine that you could check against many of those parameters and set thresholds beyond which users would get an error message (i.e. "Your room is too dark to perform facial capture. Please try again in a brighter environment.") It would undoubtedly be a lot of work to get it functioning properly but I do think it'd be possible. (But if anyone more knowledgeable with this kind of tech disagrees, please do correct me. I'm mostly just making a halfway educated guess here.)
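The threshold idea at the end is simple enough to sketch. A hypothetical pre-capture check (function name, cutoff value, and message all invented for illustration) could be as small as:

```python
import numpy as np

def capture_ok(frame, min_brightness=0.25):
    """Hypothetical pre-check before running facial capture: reject frames
    that are too dark to scan (pixel values assumed normalised to [0, 1])."""
    if float(frame.mean()) < min_brightness:
        return False, "Your room is too dark to perform facial capture."
    return True, ""

ok, msg = capture_ok(np.full((8, 8), 0.5))  # well-lit frame passes
```

Similar one-line checks could gate on motion blur (frame-to-frame difference) or subject distance before the expensive capture step ever runs.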
 

Deleted member 12790

User requested account closure
Banned
Oct 27, 2017
24,537
This would be a highly specific use case, though. We know the kind of input device that will be used (Kinect or PS Camera), we know the kind of data to expect (human face) and the output data isn't all that highly variable. Where machine learning algorithms tend to fail is when you introduce unexpected factors and use cases. But our use case is so specific that that shouldn't be too much of a problem. The biggest potential for issues would be the capturing of user faces. Variable lighting conditions, varying distances between camera and subject, varying degrees of motion, angles, people trying to capture things other than human faces, etc. And I'd imagine that you could check against many of those parameters and set thresholds beyond which users would get an error message (i.e. "Your room is too dark to perform facial capture. Please try again in a brighter environment.") It would undoubtedly be a lot of work to get it functioning properly but I do think it'd be possible. (But if anyone more knowledgeable with this kind of tech disagrees, please do correct me. I'm mostly just making a halfway educated guess here.)

One way to overcome the limitation you talk about is to use convolution, have the person not take a single picture but instead do a series of head movements that can be matched up against a mountain of training data. I.e. not just a picture head on, but make them look side to side, then up and down, then smile, then frown, and so forth. It helps give many frames of references.
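The matching itself is beyond a forum post, but the statistical payoff of many frames is easy to show. A toy sketch (invented noise level, fixed seed) of aggregating per-frame estimates of a single facial landmark:

```python
import numpy as np

# Each pose/frame gives a noisy estimate of the same facial landmark;
# aggregating across many frames beats trusting any single picture.
# (Toy numbers; a real pipeline would also solve for head pose per frame.)
rng = np.random.default_rng(0)                        # deterministic toy noise
true_depth = 2.0                                      # "true" landmark depth, arbitrary units
frames = true_depth + rng.normal(0.0, 0.2, size=200)  # 200 noisy per-frame estimates
estimate = frames.mean()                              # error shrinks roughly as 1/sqrt(N)
```

This is why "look side to side, then up and down" style capture flows exist: every extra pose is another sample constraining the same underlying face.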
 

I KILL PXLS

Member
Oct 25, 2017
11,936
I wonder how feasible it is to do a photogrammetry type scan of your face and then use that as reference for the character creation instead of trying to use the actual scan as the final. Wasn't it Fight Night that tried to do something similar years ago?
 

Deleted member 12790

User requested account closure
Banned
Oct 27, 2017
24,537
I wonder how feasible it is to do a photogrammetry type scan of your face and then use that as reference for the character creation instead of trying to use the actual scan as the final. Wasn't it Fight Night that tried to do something similar years ago?

Photogrammetry is most appropriate for static models, like terrain or buildings, but not so much for things that move, especially faces where our brains are hardwired to detect "the uncanny valley." The disconnect between looking physically correct but with static emotion comes off looking very strange. You can use it as a tool to create faces, but it needs a lot of work by hand with rigging and such.
 

I KILL PXLS

Member
Oct 25, 2017
11,936
Photogrammetry is most appropriate for static models, like terrain or buildings, but not so much for things that move, especially faces where our brains are hardwired to detect "the uncanny valley." The disconnect between looking physically correct but with static emotion comes off looking very strange. You can use it as a tool to create faces, but it needs a lot of work by hand with rigging and such.
Yeah, I mean taking that data and using it as reference to generate a face from a more traditional character creator (preferably a robust one like Black Desert's) that could otherwise be used manually. Basically use the scan to automatically set the sliders, match the closest hair, find the closest colors, etc.
 

TacoSupreme

Member
Jul 26, 2019
1,882
Ahh, I remember back in the day importing my face into the EA NHL hockey games and playing as a monster version of myself.

These kinds of things are exceptionally fucking hilarious if you have a full beard.
 
OP

Deleted member 44129

User requested account closure
Banned
May 29, 2018
7,690
Technical difficulties in implementing this aside.....

I'm reading this and quite surprised it's a feature people don't really have an interest in. I loved it in that Tony Hawk's game. Loved it in Rainbow 6. There was some EyeToy game on PS2 where you could put your head on a jack-in-the-box spring, which I recall being REALLY accurate. At the time I thought some kind of party game with your heads in the game would have great comedy potential.

Seems I'm kind of alone in this. I was convinced it was something people would think was a good idea.
 

Suedemaker

Linked the Fire
Member
Jun 4, 2019
1,777
This was basically the one thing I wanted from Kinect. Not necessarily a completely perfect image, but something like what Kinect Sports did on the X1.

Scan the face, use the in-game modelling to make it accurate but context appropriate.

It would save me a ton of time in things like Mass Effect, Fallout, Elder Scrolls etc
 

TheBaldwin

Member
Feb 25, 2018
8,531
It could be a cool feature, especially in sports games.

But for me, the fun in character creators is creating my own character. I make completely unique characters each time, and just having me would be kinda eh.