A novel speech synthesis technique can clone your voice and impersonate you.
A Canadian startup called Lyrebird has invented an artificial intelligence system that can synthesize a speaker's voice from a recording only about a minute long. The system uses voice-imitation algorithms to mimic a person's voice and read arbitrary text aloud in it. While this voice-copying technology may sound intriguing, it could also have serious consequences, as users could employ it to impersonate others.
The startup grew out of a deep learning model developed by PhD students at the University of Montreal. The company is named after the lyrebird, a bird native to Australia that can mimic the calls of some 20 different species.
The company's artificial intelligence system compresses the personal characteristics of a speech sample into a unique code. According to the developers, feeding that code into the algorithm can generate 1,000 sentences in less than half a second. The system not only synthesizes speech but also controls the voice, coloring it with emotions such as anger, sympathy, or tension. The official website demonstrates the technology's accuracy using the voices of Trump, Obama, and Hillary Clinton.
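Lyrebird has not published its model, but the two-stage idea the article describes (compress a recording into a small per-speaker code, then condition a synthesizer on that code) can be illustrated with a deliberately simplified sketch. Everything below is hypothetical: the feature extraction, the function names, and the toy "synthesizer" are stand-ins for the neural networks a real system would use.

```python
import numpy as np

def speaker_code(recording, n_bands=8):
    """Compress a recording into a small fixed-size vector -- a toy
    stand-in for the per-speaker 'unique code' the article mentions."""
    # Summarize the magnitude spectrum in a few bands (mean + variance),
    # giving a crude fingerprint of the voice. Real systems learn this
    # embedding with a neural encoder instead.
    bands = np.array_split(np.abs(np.fft.rfft(recording)), n_bands)
    return np.array([[b.mean(), b.var()] for b in bands]).ravel()

def synthesize(code, text, sample_rate=16000):
    """Toy synthesizer: conditions a carrier tone on the speaker code.
    A real system would feed the code into a neural vocoder."""
    duration = 0.05 * len(text)                 # 50 ms per character
    t = np.linspace(0.0, duration, int(sample_rate * duration),
                    endpoint=False)
    pitch = 100.0 + 10.0 * code[0]              # code shifts the pitch
    return np.sin(2 * np.pi * pitch * t) * (0.5 + 0.5 * np.tanh(code[1]))

# Usage: one short 'recording' suffices to derive the speaker code,
# which can then voice any text.
rng = np.random.default_rng(0)
recording = rng.standard_normal(16000)          # 1 s of fake audio
code = speaker_code(recording)                  # 16-dim speaker code
audio = synthesize(code, "Hello")               # 0.25 s of toy audio
```

The point of the structure is that the expensive step (deriving the code) happens once per speaker, after which generating new sentences is cheap, which is consistent with the claimed sub-half-second generation speed.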
The developers say the technology could be used in a wide range of ways: as a personal assistant, reading audiobooks in a celebrity's voice, composing "speeches" for people with speech disabilities, or in animated films and video games. "Lyrebird is the first company to accurately replicate someone's voice from such a small recording. Such technology can cause serious social problems," the researchers wrote on the official website.
"Recordings are often seen as powerful evidence and are particularly valued in the judicial systems of many countries. Unscrupulous individuals could use the technology we have invented to easily manipulate recordings, undermining their credibility as evidence." The developers acknowledged that the technology could lead to dangerous consequences, such as "misleading diplomats by stealing someone's identity or committing fraud," and argued that once the technology is made available to the public, recordings should no longer be considered proof of identity.
The company said the technology is still in the development stage and did not mention a specific release date or cost.