With the rapid development of digital technologies, deepfakes are becoming increasingly common, raising concerns about disinformation and the manipulation of public opinion. A deepfake is content produced with artificial intelligence: hyper-realistic but fake video or audio in which people appear to say or do things they never said or did.
Psychological effects of deepfakes
Dr. Jakub Kuś of SWPS University points to the phenomenon of "truth fatigue" caused by the flood of fake content. Bombarded with information whose authenticity they cannot verify, people may slide into skepticism and cynicism toward the media and eventually give up on distinguishing truth from falsehood altogether, which in turn fuels conspiracy theories and social polarization.
NASK experts point out that although deepfake technology keeps improving, several signals can still indicate manipulation (a simple illustrative check follows the list):
Unnatural mouth movements and facial expressions: deepfake models often struggle to reproduce them accurately.
Sound and synchronization problems: the audio may not match the lip movements.
Artifacts in the image: blurred edges or inconsistent lighting suggest possible editing.
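As a rough illustration of the last point, the short Python sketch below flags frames whose sharpness drops far below the rest of the video, a crude proxy for blurred, possibly edited regions. It assumes the OpenCV (cv2) and NumPy libraries are installed; the file name and threshold ratio are placeholders, and the check is no substitute for the dedicated tools listed below.

    import cv2
    import numpy as np

    def frame_sharpness(frame_bgr):
        # Variance of the Laplacian: low values mean little fine detail, i.e. a blurry frame.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def flag_suspicious_frames(video_path, min_ratio=0.5):
        # Collect a sharpness score for every frame of the video.
        cap = cv2.VideoCapture(video_path)
        scores = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            scores.append(frame_sharpness(frame))
        cap.release()
        if not scores:
            return []
        # Flag frames that are far blurrier than the video's median sharpness.
        median = np.median(scores)
        return [i for i, s in enumerate(scores) if s < min_ratio * median]

    if __name__ == "__main__":
        # "sample.mp4" is a placeholder path for the video to be checked.
        print(flag_suspicious_frames("sample.mp4"))

Sudden drops in sharpness can just as easily come from compression or camera motion, so heuristics like this only point to frames worth inspecting more closely.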
Deepfake detection tools
In response to the growing threat, tools are being developed to identify fake content:
Sensity AI: A platform that offers video and image analysis to identify deepfakes.
Deepware Scanner: A tool that uses artificial intelligence to analyze faces and detect signs of video tampering.
Microsoft Video Authenticator: A program that analyzes photos and videos for signs of manipulation, estimating the likelihood that the material has been modified.
Reality Defender: A tool developed by the AI Foundation that analyzes audio and video content for signs of manipulation.
Sources: Nauka w Polsce, Radio DTR, Instytut Cyberbezpieczeństwa