Voice "cloning", in which a computer reads text aloud in a voice meant to sound like a specific individual, was once a niche product for those who were in danger of losing their speech to disease. Now the technology has become so advanced and easy to implement that voice-ID security systems are being jeopardized [bold added]
Any voice, including that of a stranger, can be cloned if decent recordings are available on YouTube or elsewhere. Researchers at the University of Alabama at Birmingham, led by Nitesh Saxena, were able to use [Carnegie Mellon's] Festvox to clone voices based on only five minutes of speech retrieved online. When tested against voice-biometrics software like that used by many banks to block unauthorised access to accounts, more than 80% of the fake voices tricked the computer. Alan Black, one of Festvox’s developers, reckons systems that rely on voice-ID software are now “deeply, fundamentally insecure”.
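To see why a cloned voice can slip past this kind of check, here is a minimal sketch of how a threshold-based voice-ID system typically works: it compares a speaker embedding ("voiceprint") of the incoming audio against one enrolled for the account holder and accepts the caller if the similarity clears a threshold. The `embed_voice` function, the 256-dimensional embedding, and the 0.75 threshold below are illustrative assumptions, not details of the researchers' setup or of any bank's actual system.

```python
import zlib
import numpy as np


def embed_voice(audio: np.ndarray) -> np.ndarray:
    """Hypothetical speaker encoder: maps audio samples to a fixed-size voiceprint.

    A real system would run a trained speaker-embedding model here; this placeholder
    just derives a deterministic pseudo-random vector from the audio bytes.
    """
    rng = np.random.default_rng(zlib.crc32(audio.tobytes()))
    return rng.standard_normal(256)


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two voiceprints, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify(enrolled_print: np.ndarray, attempt_audio: np.ndarray,
           threshold: float = 0.75) -> bool:
    """Accept the caller if their voiceprint is close enough to the enrolled one."""
    return cosine_similarity(enrolled_print, embed_voice(attempt_audio)) >= threshold


if __name__ == "__main__":
    enrolled_audio = np.sin(np.linspace(0, 1000, 16000))  # stand-in for the enrollment recording
    attempt_audio = np.sin(np.linspace(0, 1000, 16000))   # stand-in for the caller's audio
    enrolled_print = embed_voice(enrolled_audio)
    print("accepted:", verify(enrolled_print, attempt_audio))
```

In this framing, the researchers' greater-than-80% success rate means that, for most targeted speakers, the synthetic audio produced a voiceprint close enough to the enrolled one to cross the acceptance threshold.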
Public figures who are caught "on tape" saying something embarrassing can now claim, with increasing plausibility, that the recording is "fake audio." And why not? As everything has gone digital (with enormous benefits, to be sure), everything can be faked.