Recent advancements in artificial intelligence (AI) and cloud computing technologies have led to rapid development in the sophistication of audio, video, and image manipulation techniques. This synthetic media content is commonly referred to as “deepfakes” [1]. AI-based tools can manipulate media in increasingly believable ways, for example by cloning a public figure’s voice or superimposing one person’s face onto another person’s body.

Legislation, policy, media literacy, and technology must work in tandem for an effective remedy for malicious use of deepfakes.

Technical countermeasures used to mitigate the impact of deepfakes fall into three categories: media authentication, media provenance, and deepfake detection.

Media authentication includes solutions that help prove integrity across the media lifecycle by using watermarking, media verification markers, signatures, and chain-of-custody logging. Authentication is the most effective way to prevent the deceptive manipulation of trusted media because it verifies and tracks integrity throughout the content lifecycle, or verifies it at the distribution endpoint.
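As a minimal sketch of the verification-marker idea, the snippet below hashes a media file’s bytes and signs the hash so a distribution endpoint can detect any modification. This is illustrative only: the key, function names, and marker format are assumptions, and a production system would use asymmetric signatures and a standardized provenance manifest (e.g. C2PA) rather than a shared HMAC secret.

```python
# Sketch: media verification marker via content hash + HMAC signature.
# Uses only the standard library; key and formats are illustrative assumptions.
import hashlib
import hmac

SECRET_KEY = b"demo-signing-key"  # assumption: shared secret, for the sketch only


def sign_media(media_bytes: bytes) -> dict:
    """Produce a verification marker: SHA-256 content hash plus an HMAC signature."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    signature = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": signature}


def verify_media(media_bytes: bytes, marker: dict) -> bool:
    """At the distribution endpoint, recompute the hash and check the signature."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    expected = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == marker["sha256"] and hmac.compare_digest(
        expected, marker["signature"]
    )


original = b"...raw video bytes..."
marker = sign_media(original)
print(verify_media(original, marker))              # unmodified media passes
print(verify_media(original + b"edit", marker))    # any tampering fails
```

The same hash-then-sign pattern underlies chain-of-custody logging: each processing step appends a new signed entry that includes the previous entry’s hash, so the history cannot be rewritten undetected.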


Deepfake detection, by contrast, remains a difficult and largely unsolved problem.