Deceptive new tech has people voicing words they never said

(Photo by Joe Kovacs)

Now it appears that artificial intelligence can make anyone say anything. Literally anyone, and literally anything. And the proof is in the Mona Lisa … RAPPING?

All that’s required is a still image of a face, an audio recording – speech, singing, anything – and the new software.

Microsoft’s product exhibits the possibilities.

A report at CNN detailed one of the recent “advances” in computer tech, with Microsoft’s software able to “take a still image of a face and an audio clip of someone speaking and automatically create a realistic looking video of that person speaking.”


That software is called VASA-1, and the report calls the results “a bit jarring.”

“Microsoft said the technology could be used for education or ‘improving accessibility for individuals with communication challenges,’ or potentially to create virtual companions for humans. But it’s also easy to see how the tool could be abused and used to impersonate real people,” CNN documented.

“Wow. Creating videos realistically depicting people saying words they never said? What could possibly go wrong with that?” commented author and WND Managing Editor David Kupelian. “Today’s ruling elites, from the Deep State to Big Tech, are so dependent on lies and deception – while censoring and attacking unwelcome truth as ‘disinformation,’ ‘misinformation’ and ‘malinformation’ – it’s easy to imagine that before long they’ll be using technology like this to enhance their daily practice of portraying the innocent as guilty and the guilty as innocent.”

CNN noted that experts now worry the tech could “disrupt” existing industries of film and advertising, and elevate the level of “misinformation” to which consumers are subjected.

The report said Microsoft isn’t going to release the software … yet.

“The move is similar to how Microsoft partner OpenAI is handling concerns around its AI-generated video tool, Sora: OpenAI teased Sora in February, but has so far only made it available to some professional users and cybersecurity professors for testing purposes,” the report said.

In an online post, Microsoft researchers said they are “opposed” to anything that creates “misleading” content.

However, they’ve designed the code to take into account face and head movements, lip motion, expression, eye gaze, blinking and much more.
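To illustrate the kind of control signals the report describes, here is a minimal sketch in Python. The parameter names are hypothetical, invented for illustration only; this is not Microsoft’s actual VASA-1 interface, just a rough picture of the inputs (one still image, one audio clip) and the motion factors such a system reportedly models:

```python
from dataclasses import dataclass

# Hypothetical sketch only: field names are illustrative, NOT Microsoft's API.
@dataclass
class TalkingHeadRequest:
    face_image_path: str               # a single still image of a face
    audio_clip_path: str               # speech or singing audio
    # Signals the report says the model accounts for:
    head_motion_scale: float = 1.0     # amount of head movement
    lip_sync_strength: float = 1.0     # how tightly lips track the audio
    expression_intensity: float = 1.0  # facial expressiveness
    eye_gaze_target: tuple = (0.0, 0.0)  # normalized (x, y) look direction
    blink_rate_hz: float = 0.25        # blinks per second

def validate(req: TalkingHeadRequest) -> list:
    """Return a list of problems with the request (empty list = OK)."""
    issues = []
    if not req.face_image_path.lower().endswith((".png", ".jpg", ".jpeg")):
        issues.append("face_image_path should be a still image")
    if not 0.0 <= req.lip_sync_strength <= 2.0:
        issues.append("lip_sync_strength out of range")
    return issues

req = TalkingHeadRequest("mona_lisa.jpg", "rap_verse.wav")
print(validate(req))  # → []
```

The point of the sketch is how little the described workflow demands of the user: one photo, one audio file, and a few dials for motion and expression.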
