Why would you need to rotate the image for facial recognition? That might've been necessary 10-20 years ago but scanners I've used from China can do a face match with live video on a moving target no problem, and it ran on a cheap tablet.
not for the actual recognition I imagine, but for training a model you'd want your training data to be somewhat consistent (faces in the middle, no rotation, neutral background, similar lighting etc)
Not really, if you want a robust recognition system. In fact it's common practice to create part of your training data by transforming images already in your training set, for example by rotating, translating, or scaling them.
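To illustrate what that kind of augmentation looks like, here's a minimal sketch in plain Python, treating an image as a 2D list of pixel values. This is purely illustrative; a real pipeline would use a library like Pillow or torchvision transforms, and the function names here are made up for the example.

```python
def rotate_90(image):
    """Rotate a row-major 2D image 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

def translate_right(image, dx, fill=0):
    """Shift each row right by dx pixels, padding the left with `fill`."""
    return [[fill] * dx + row[:len(row) - dx] for row in image]

def augment(image):
    """Yield the original image plus simple transformed copies."""
    yield image
    yield rotate_90(image)
    yield translate_right(image, 1)

original = [[1, 2],
            [3, 4]]
variants = list(augment(original))
print(len(variants))  # 3 training samples generated from 1 original
```

Each original photo fans out into several training samples, which is how augmentation makes a model more tolerant of rotated or off-center faces without collecting new data.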
really depends on what exactly you're doing, I'd say. Sure, variety is generally good, but only insofar as it's actually required for the particular use case.
All I'm saying is this doesn't seem like an entirely implausible explanation. You can even see the old image under the rotated one, which doesn't exactly look like a typical AI-generated media glitch, from what I've seen so far.
So, my question, then, would be to ask why they're using their target pop in a training set when the system is live.
If I had to guess, I'd say refinement: further training in an effort to push accuracy up. As good as these systems are, they're never fully accurate, never truly 'finished'.
All that being said, this is just me taking some stabs in the dark.
I don't see how. Modern systems don't need photo editing. You literally just stare into it and it draws lines on your face in real time live video. They also have cameras that can ID you by your walking gait and even just based on your eyes if you have a mask on.