Whoa holy shit, this is kinda nutso. I was on the "this is real" train before but I'm now leaning toward the "this is deep fake" train.
So a real person doesn't make nearly the same face twice, but DNNs do, because their training dataset is finite and they don't learn in real time, so they can't recognize that they've already used a pose.
(also NN wizards please don't pick me apart here; I know that's not exactly how it works, but the point I'm trying to get across is that it's not like an LSTM: the network has no feedback over time to prevent over-reliance on a particular learned concept)
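To make the stateless-vs-stateful point concrete, here's a toy sketch (purely hypothetical, not how an actual face-generation DNN is built): a "generator" drawing from a finite pool of learned poses with no memory will eventually emit the exact same pose twice, while one that keeps feedback about what it has already produced won't.

```python
import random

# Hypothetical finite "training set" of learned poses.
POSES = [f"pose_{i}" for i in range(10)]

def stateless_generate(rng):
    # No memory of past outputs: the same pose can come out twice, identically.
    return rng.choice(POSES)

class StatefulGenerator:
    # Toy stand-in for a network with feedback over time (LSTM-like):
    # it remembers what it has emitted and avoids exact repeats.
    def __init__(self, rng):
        self.rng = rng
        self.used = set()

    def generate(self):
        fresh = [p for p in POSES if p not in self.used]
        pose = self.rng.choice(fresh)
        self.used.add(pose)
        return pose

rng = random.Random(0)
stateless = [stateless_generate(rng) for _ in range(20)]
stateful_gen = StatefulGenerator(random.Random(0))
stateful = [stateful_gen.generate() for _ in range(10)]

# 20 draws from a pool of 10 must collide (pigeonhole), so repeats > 0.
print("stateless repeats:", len(stateless) - len(set(stateless)))
# The stateful generator never repeats within its pool.
print("stateful repeats:", len(stateful) - len(set(stateful)))
```

A real person is closer to the stateful case: even unconsciously, you never reproduce an expression pixel-for-pixel, whereas a finite, memoryless generator can.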