Deepfake technology has been developed by researchers at academic institutions beginning in the 1990s, and later by amateurs in online communities. More recently, the methods have been adopted by industry.
Age and a lack of deepfake literacy are further factors that drive engagement. Older users with limited technological literacy may not recognize deepfakes as falsified content and may share them because they believe them to be true. Conversely, younger users accustomed to the entertainment value of deepfakes are more likely to share them while aware that the content is falsified. Although cognitive ability is a factor in successfully detecting deepfakes, individuals who are aware that a video is a deepfake may be just as likely to share it on social media as someone who does not know it is a deepfake. Within scholarship focused on detecting deepfakes, deep-learning methods that identify software-induced artifacts have been found to be the most effective at separating deepfakes from authentic material. The capabilities of deepfakes have also raised concerns about regulation of, and literacy around, the technology. These concerns are driven primarily by the potential malicious applications of deepfakes: their capacity to damage public figures and reputations or to promote misleading narratives. Such applications have led some experts to label deepfakes a potential danger to democratic societies and to argue that a regulatory framework would help mitigate the risks.
In cinema studies, deepfakes illustrate how "the human face is emerging as a central object of ambivalence in the digital age". Video artists have used deepfakes to "playfully rewrite film history by retrofitting canonical cinema with new star performers". Film scholar Christopher Holliday analyses how altering the gender and race of performers in familiar movie scenes destabilizes gender classifications and categories. The idea of "queering" deepfakes also appears in Oliver M. Gingrich's discussion of media artworks that use deepfakes to reframe gender, including British artist Jake Elwes' Zizi: Queering the Dataset, an artwork that uses deepfakes of drag queens to intentionally play with gender. The aesthetic potentials of deepfakes are also beginning to be explored. Theatre historian John Fletcher notes that early demonstrations of deepfakes are presented as performances, and situates these in the context of theater, discussing "some of the more troubling paradigm shifts" that deepfakes represent as a performance genre.
Philosophers and media scholars have discussed the ethical implications of deepfakes in the dissemination of disinformation. Amina Vatreš of the Department of Communication Studies at the University of Sarajevo identifies three factors contributing to the widespread acceptance of deepfakes, and to where their greatest danger lies: 1) convincing visual and auditory support, 2) widespread accessibility, and 3) the inability to draw a clear line between truth and falsehood. Another area of discussion concerns deepfake pornography. Media scholar Emily van der Nagel draws on research in photography studies on manipulated images to discuss verification systems that allow women to consent to uses of their images.
Beyond pornography, deepfakes have been framed by philosophers as an "epistemic threat" to knowledge and thus to society. There are several suggestions for how to deal with the risks that deepfakes give rise to, not only in pornography but also for corporations, politicians and others, of "exploitation, intimidation, and personal sabotage", and there are several scholarly discussions of potential legal and regulatory responses in both legal studies and media studies. In psychology and media studies, scholars discuss the effects of disinformation that uses deepfakes and the social impact of deepfakes.
While most English-language academic studies of deepfakes focus on Western anxieties about disinformation and pornography, digital anthropologist Gabriele de Seta has analyzed the Chinese reception of deepfakes, known as huanlian, which translates to "changing faces". The Chinese term does not contain the "fake" of the English deepfake, and de Seta argues that this cultural context may explain why the Chinese response has centered on practical regulatory measures addressing "fraud risks, image rights, economic profit, and ethical imbalances".
A landmark early project was the "Video Rewrite" program, published in 1997. The program modified existing video footage of a person speaking to depict that person mouthing the words from a different audio track. It was the first system to fully automate this kind of facial reanimation, and it did so using machine learning techniques to make connections between the sounds produced by a video's subject and the shape of the subject's face.
Contemporary academic projects have focused on creating more realistic videos and improving deepfake techniques. The "Synthesizing Obama" program, published in 2017, modifies video footage of former president Barack Obama to depict him mouthing the words contained in a separate audio track. The project lists as its main research contribution a photorealistic technique for synthesizing mouth shapes from audio. The "Face2Face" program, published in 2016, modifies video footage of a person's face to depict them mimicking another person's facial expressions. The project highlights its primary research contribution as the first method for re-enacting facial expressions in real time using a camera that does not capture depth, enabling the technique to work with common consumer cameras.
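The general shape of such audio-driven reanimation can be sketched in a few lines. The example below is purely illustrative and is not the published method of either project: it assumes the audio has already been converted into MFCC frames, represents the mouth as a small set of 2D landmarks, and uses a recurrent network to predict landmark positions from the audio features.

```python
# Illustrative sketch only: learn a mapping from per-frame audio features
# (assumed MFCCs) to 2D mouth-landmark coordinates. The architecture, feature
# choices, and landmark count are assumptions, not the cited systems.
import torch
import torch.nn as nn

class AudioToMouth(nn.Module):
    def __init__(self, n_mfcc: int = 13, n_landmarks: int = 20):
        super().__init__()
        self.rnn = nn.LSTM(n_mfcc, 64, batch_first=True)  # temporal context over audio frames
        self.head = nn.Linear(64, n_landmarks * 2)         # (x, y) per mouth landmark

    def forward(self, mfcc_frames: torch.Tensor) -> torch.Tensor:
        # mfcc_frames: (batch, time, n_mfcc) -> (batch, time, n_landmarks * 2)
        h, _ = self.rnn(mfcc_frames)
        return self.head(h)

# Training would regress predicted landmarks against landmarks tracked in real
# footage of the speaker; the predicted mouth shapes then drive the rendered video.
```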
Researchers have also shown that deepfakes are expanding into other domains such as medical imagery. In this work, it was shown how an attacker can automatically inject or remove lung cancer in a patient's 3D CT scan. The result was so convincing that it fooled three radiologists and a state-of-the-art lung cancer detection AI. To demonstrate the threat, the authors successfully performed the attack on a hospital in a white-hat penetration test.
A survey of deepfakes, published in May 2020, provides a timeline of how the creation and detection of deepfakes have advanced over the preceding years. The survey identifies several challenges of deepfake creation that researchers have been focusing on resolving.
Overall, deepfakes are expected to have implications for media and society, media production, media representations, media audiences, gender, law and regulation, and politics.
Other online communities remain, including Reddit communities that do not share pornography, such as "r/SFWdeepfakes" (short for "safe for work deepfakes"), in which community members share deepfakes depicting celebrities, politicians, and others in non-pornographic scenarios. Other online communities continue to share pornography on platforms that have not banned deepfake pornography.
In January 2018, a proprietary desktop application called "FakeApp" was launched. This app allows users to easily create and share videos with their faces swapped with each other. As of 2019, "FakeApp" had been largely replaced by open-source alternatives such as "Faceswap", the command line-based "DeepFaceLab", and web-based apps such as DeepfakesWeb.com.
Larger companies have also begun using deepfakes. Corporate training videos can be created using deepfaked avatars and their voices; for example, Synthesia uses deepfake technology with avatars to create personalized videos. The mobile app company Momo created the application Zao, which allows users to superimpose their face on television and movie clips using a single picture. In 2019, the Japanese AI company DataGrid made a full-body deepfake that could create a person from scratch.
Deepfake technology's ability to fabricate messages and actions extends to deceased individuals. On 29 October 2020, Kim Kardashian posted a video featuring a hologram of her late father Robert Kardashian created by the company Kaleida, which used a combination of performance, motion tracking, SFX, VFX, and deepfake technologies to create the illusion.
Companies which have used digital clones of professional actors in advertisements include Puma, Nike and Procter & Gamble.
Deepfakes allowed David Beckham to appear in a campaign delivered in nine languages to raise awareness of the fight against malaria.
Deepfakes are also being used in education and media to create realistic videos and interactive content, which offer new ways to engage audiences.
Deepfakes can be used to generate blackmail materials that falsely incriminate a victim. A report by the American Congressional Research Service warned that deepfakes could be used to blackmail elected officials or those with access to classified information for espionage or influence purposes.
Alternatively, since the fakes cannot reliably be distinguished from genuine materials, victims of actual blackmail can now claim that the true artifacts are fakes, granting them plausible deniability. The effect is to void the credibility of existing blackmail materials, which erases loyalty to blackmailers and destroys the blackmailer's control. This phenomenon can be termed "blackmail inflation", since it "devalues" real blackmail, rendering it worthless. Commodity GPU hardware and a small software program can generate such blackmail content for any number of subjects in huge quantities, driving up the supply of fake blackmail content limitlessly and in a highly scalable fashion.
Fraudsters and scammers make use of deepfakes to trick people into fake investment schemes, financial fraud, cryptocurrency scams, sending money, and following endorsements. The likenesses of celebrities and politicians have been used for large-scale scams, as have those of private individuals, which are used in spearphishing attacks. According to the Better Business Bureau, deepfake scams are becoming more prevalent. These scams are responsible for an estimated $12 billion in fraud losses globally, and according to a recent report that figure is expected to reach $40 billion over the next three years.
Celebrities have warned people about these fake endorsements and urged vigilance against them. Celebrities are unlikely to file lawsuits against every person operating deepfake scams, as "finding and suing anonymous social media users is resource intensive", though cease and desist letters to social media companies do work in getting videos and ads taken down.
As of 2023, the combination of advances in deepfake technology, which could clone an individual's voice from a recording of a few seconds to a minute, and new text generation tools enabled automated impersonation scams that target victims using a convincing digital clone of a friend or relative.
Deepfakes have been used to misrepresent well-known politicians in videos.
In 2017, deepfake pornography prominently surfaced on the Internet, particularly on Reddit. As of 2019, many deepfakes on the internet featured pornography of female celebrities whose likeness is typically used without their consent. A report published in October 2019 by Dutch cybersecurity startup Deeptrace estimated that 96% of all deepfakes online were pornographic.
A Daisy Ridley deepfake was among the first to capture widespread attention in 2018. As of October 2019, most of the deepfake subjects on the internet were British and American actors; however, around a quarter of the subjects were South Korean, the majority of them K-pop stars.
Female celebrities are often a main target of deepfake pornography. In 2023, deepfake porn videos of Emma Watson and Scarlett Johansson appeared online in a face-swapping app. In 2024, deepfake porn images of Taylor Swift circulated online.
Academic studies have reported that women, LGBT people and people of colour (particularly activists, politicians and those questioning power) are at higher risk of being targeted by deepfake pornography.
Deepfakes have begun to see use on popular social media platforms, notably through Zao, a Chinese deepfake app that allows users to substitute their own faces onto those of characters in scenes from films and television shows such as Romeo + Juliet and Game of Thrones. The app originally faced scrutiny over its invasive user data and privacy policy, after which the company put out a statement claiming it would revise the policy. In January 2020, Facebook announced that it was introducing new measures to counter this on its platforms.
Though fake photos have long been plentiful, faking motion pictures has been more difficult, and the presence of deepfakes increases the difficulty of classifying videos as genuine or not. AI researcher Alex Champandard has said people should know how fast things can be corrupted with deepfake technology, and that the problem is not a technical one, but rather one to be solved by trust in information and journalism. Computer science associate professor Hao Li of the University of Southern California states that deepfakes created for malicious use, such as fake news, will be even more harmful if nothing is done to spread awareness of deepfake technology. In October 2019, Li predicted that genuine videos and deepfakes would become indistinguishable in as little as half a year, owing to rapid advances in artificial intelligence and computer graphics. Former Google fraud czar Shuman Ghosemajumder has called deepfakes an area of "societal concern" and said that they will inevitably evolve to a point at which they can be generated automatically, and an individual could use that technology to produce millions of deepfake videos.
A primary pitfall is that humanity could fall into an age in which it can no longer be determined whether a medium's content corresponds to the truth. Deepfakes are one of a number of tools for disinformation attack, creating doubt, and undermining trust. They have the potential to interfere with democratic functions in societies, such as identifying collective agendas, debating issues, informing decisions, and solving problems through the exercise of political will. People may also start to dismiss real events as fake.
Deepfakes possess the ability to damage individual entities tremendously. This is because deepfakes are often targeted at one individual and/or their relations to others, in the hope of creating a narrative powerful enough to influence public opinion or beliefs. This can be done through deepfake voice phishing, which manipulates audio to create fake phone calls or conversations. Another method is fabricating private remarks, manipulating media to show individuals voicing damaging comments. The quality of a negative video or audio does not need to be high; as long as someone's likeness and actions are recognizable, a deepfake can hurt their reputation.
In September 2020, Microsoft announced that it was developing a deepfake detection software tool.
Detecting fake audio is a highly complex task that requires careful attention to the audio signal in order to achieve good performance. In deep-learning approaches, careful feature design during preprocessing and masking augmentation have proven effective in improving performance.
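As a rough sketch of what such a pipeline can look like (an illustrative assumption, not the method of any particular study): the audio is first converted into a spectrogram-like feature, random time and frequency bands are masked during training as augmentation, and a small network classifies the clip as genuine or fake.

```python
# Illustrative sketch of masking augmentation plus a small classifier over
# spectrogram features. Feature choice, mask widths, and architecture are
# assumptions for demonstration only.
import torch
import torch.nn as nn

def mask_augment(spec: torch.Tensor, max_t: int = 20, max_f: int = 8) -> torch.Tensor:
    """Zero out one randomly placed time span and one frequency band (fixed widths)."""
    spec = spec.clone()
    f_bins, t_frames = spec.shape[-2], spec.shape[-1]
    t0 = int(torch.randint(0, max(1, t_frames - max_t), (1,)))
    f0 = int(torch.randint(0, max(1, f_bins - max_f), (1,)))
    spec[..., :, t0:t0 + max_t] = 0.0   # time mask
    spec[..., f0:f0 + max_f, :] = 0.0   # frequency mask
    return spec

class AudioFakeClassifier(nn.Module):
    """Small CNN over (batch, 1, mel_bins, frames) spectrograms -> real/fake logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        return self.net(spec)

# Usage sketch: spec = some_spectrogram(waveform)        # hypothetical feature step
# logit = AudioFakeClassifier()(mask_augment(spec).unsqueeze(0).unsqueeze(0))
```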
Most of the academic research surrounding deepfakes focuses on the detection of deepfake videos. One approach to deepfake detection is to use algorithms to recognize patterns and pick up subtle inconsistencies that arise in deepfake videos. For example, researchers have developed automatic systems that examine videos for errors such as irregular blinking patterns or inconsistencies in lighting. This approach has been criticized because deepfake detection is characterized by a "moving goal post" in which the production of deepfakes continues to change and improve as detection algorithms improve. In order to assess the most effective algorithms for detecting deepfakes, a coalition of leading technology companies hosted the Deepfake Detection Challenge to accelerate the technology for identifying manipulated content. The winning model of the Deepfake Detection Challenge was 65% accurate on the holdout set of 4,000 videos. A team at the Massachusetts Institute of Technology published a paper in December 2021 demonstrating that ordinary humans are 69–72% accurate at identifying a random sample of 50 of these videos.
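The blinking cue mentioned above can be illustrated with a toy heuristic. The sketch below is a simplified assumption, not a published detector: given a per-frame eye-openness signal produced by some upstream landmark tracker (a hypothetical component), it counts blinks and flags clips whose blink rate is implausibly low.

```python
# Toy illustration of the "inconsistency" approach using blink frequency.
# eye_openness is a hypothetical per-frame signal from a landmark tracker.
def blink_rate(eye_openness: list[float], closed_thresh: float = 0.2) -> float:
    """Count dips below the closed-eye threshold and return blinks per frame."""
    blinks, closed = 0, False
    for v in eye_openness:
        if v < closed_thresh and not closed:
            blinks, closed = blinks + 1, True
        elif v >= closed_thresh:
            closed = False
    return blinks / max(len(eye_openness), 1)

def looks_suspicious(eye_openness: list[float], fps: float = 30.0) -> bool:
    # People typically blink roughly 10-20 times per minute; far fewer is a red flag.
    blinks_per_min = blink_rate(eye_openness) * fps * 60
    return blinks_per_min < 3
```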
A team at the University of Buffalo published a paper in October 2020 outlining their technique of using reflections of light in the eyes of those depicted to spot deepfakes with a high rate of success, even without the use of an AI detection tool, at least for the time being.
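A simplified way to picture the corneal-reflection idea (not the Buffalo team's actual algorithm): in an authentic photograph both eyes see the same light sources, so their specular highlights should largely agree, whereas generated faces often show mismatched highlights. The sketch below assumes the two eyes have already been cropped and resized to identical grayscale arrays.

```python
# Conceptual sketch: compare specular-highlight patterns between the two eyes.
# Assumes pre-aligned, same-size grayscale eye crops as numpy arrays.
import numpy as np

def highlight_mask(eye: np.ndarray, thresh: float = 0.9) -> np.ndarray:
    """Binary mask of the brightest pixels, treated as specular highlights."""
    return (eye >= thresh * eye.max()).astype(np.uint8)

def highlight_similarity(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """Intersection-over-union of the two highlight masks; low values are suspicious."""
    a, b = highlight_mask(left_eye), highlight_mask(right_eye)
    inter, union = np.logical_and(a, b).sum(), np.logical_or(a, b).sum()
    return float(inter) / union if union else 0.0
```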
In the case of well-documented individuals such as political leaders, algorithms have been developed to distinguish identity-based features such as patterns of facial, gestural, and vocal mannerisms and detect deepfake impersonators.
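Conceptually, such identity-based checks compare a suspect clip against mannerism profiles built from verified footage of the person. The sketch below is a hypothetical illustration: `clip_emb` and `reference_embs` stand in for embeddings produced by some unspecified feature extractor over facial, gestural, and vocal cues.

```python
# Conceptual sketch of identity-based verification: a low similarity between a
# suspect clip's mannerism embedding and reference embeddings from authentic
# footage suggests an impersonation. The embeddings themselves are assumed to
# come from a hypothetical upstream feature extractor.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def is_likely_impersonation(clip_emb: np.ndarray,
                            reference_embs: list[np.ndarray],
                            thresh: float = 0.7) -> bool:
    best = max(cosine(clip_emb, ref) for ref in reference_embs)
    return best < thresh
```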
Another team, led by Wael AbdAlmageed at the Visual Intelligence and Multimedia Analytics Laboratory (VIMAL) of the Information Sciences Institute at the University of Southern California, developed two generations of deepfake detectors based on convolutional neural networks. The first generation used recurrent neural networks to spot spatio-temporal inconsistencies and identify visual artifacts left by the deepfake generation process. The algorithm achieved 96% accuracy on FaceForensics++, the only large-scale deepfake benchmark available at the time. The second generation used end-to-end deep networks to differentiate between artifacts and high-level semantic facial information using two-branch networks. The first branch propagates colour information while the other branch suppresses facial content and amplifies low-level frequencies using a Laplacian of Gaussian (LoG) filter. Further, they included a new loss function that learns a compact representation of bona fide faces while dispersing the representations (i.e. features) of deepfakes. VIMAL's approach showed state-of-the-art performance on the FaceForensics++ and Celeb-DF benchmarks and, on March 16, 2022 (the day it was released), was used to identify the deepfake of Volodymyr Zelensky out of the box, without any retraining or knowledge of the algorithm with which the deepfake was created.
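The loss described above, which pulls bona fide faces into a compact cluster while dispersing deepfake features, can be loosely illustrated as a center-based margin loss. This is only a sketch inspired by that description, not VIMAL's published implementation; the embedding dimension and margin are arbitrary assumptions.

```python
# Loosely inspired sketch of a "compact reals, dispersed fakes" loss: real-face
# embeddings are pulled toward a learned center, fake embeddings are pushed
# beyond a margin. Not the cited team's actual code.
import torch
import torch.nn as nn

class CompactnessLoss(nn.Module):
    def __init__(self, dim: int = 128, margin: float = 5.0):
        super().__init__()
        self.center = nn.Parameter(torch.zeros(dim))  # learned center of bona fide faces
        self.margin = margin

    def forward(self, emb: torch.Tensor, is_real: torch.Tensor) -> torch.Tensor:
        # emb: (batch, dim) embeddings; is_real: (batch,) with 1.0 for genuine faces
        d = torch.norm(emb - self.center, dim=1)
        real_term = is_real * d.pow(2)                                   # compact real cluster
        fake_term = (1 - is_real) * torch.clamp(self.margin - d, min=0).pow(2)  # disperse fakes
        return (real_term + fake_term).mean()
```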
Digitally signing all video and imagery at capture by cameras, including smartphone cameras, has been suggested as a way to fight deepfakes, since it allows every photograph or video to be traced back to its original owner; however, the same traceability could be used to pursue dissidents.
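A minimal sketch of the signing idea, assuming the widely used Python `cryptography` package and an Ed25519 key held by the capture device (this illustrates the concept, not any deployed camera standard): the device signs a hash of each recording, and the matching public key later verifies that the footage has not been altered.

```python
# Conceptual sketch: device-side signing of a recording and later verification.
# Uses the third-party "cryptography" package; key handling is simplified.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # would live inside the camera hardware
public_key = device_key.public_key()        # published so anyone can verify footage

def sign_recording(video_bytes: bytes) -> bytes:
    digest = hashlib.sha256(video_bytes).digest()
    return device_key.sign(digest)

def verify_recording(video_bytes: bytes, signature: bytes) -> bool:
    digest = hashlib.sha256(video_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

# Usage: sig = sign_recording(footage); verify_recording(footage, sig) is True,
# while any modification to the footage makes verification fail.
```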
One easy way to uncover deepfake video calls is to ask the caller to turn sideways.
Henry Ajder, who works for Deeptrace, a company that detects deepfakes, says there are several ways to protect against deepfakes in the workplace: semantic passwords or secret questions can be used when holding important conversations; voice authentication and other biometric security features should be kept up to date; and employees should be educated about deepfakes.
Due to the capability of deepfakes to fool viewers and believably mimic a person, research has indicated that the concept of truth through observation cannot be fully relied on. Additionally, because convincing deepfakes have only recently become feasible, public literacy about the technology remains in question. Combined with increasingly easy access to the technology, this has led some experts to worry that societies are not prepared to encounter deepfakes organically without the risk of spreading misinformation and disinformation. Media literacy has been considered as a potential countermeasure, "priming" viewers through critical thinking to identify a deepfake when they encounter one organically. While media literacy education has produced mixed results in overall deepfake detection, research has indicated that critical thinking and a skeptical outlook toward a presented piece of media are effective in helping an individual determine whether it is a deepfake. Media literacy frameworks promote critical analysis of media and of the motivations behind the presentation of the associated content, and they show promise as a cognitive countermeasure when interacting with malicious deepfakes.
The use of deepfakes has recently inspired research on their capabilities and effects when used in disinformation campaigns. This capability has raised concerns, partly due to the potential of deepfakes to circumvent a person's skepticism and influence their views on an issue. Because continued advances in technology improve the deceptive capabilities of deepfakes, some scholars believe that deepfakes could pose a significant threat to democratic societies. Studies have investigated the effects of political deepfakes. Two separate studies focusing on Dutch participants found that deepfakes have varying effects on an audience. As a tool of disinformation, deepfakes did not necessarily produce stronger reactions or shifts in viewpoints than traditional textual disinformation. However, deepfakes did produce a reassuring effect on individuals whose preconceived notions aligned with the viewpoint promoted by the deepfake disinformation in the study. Additionally, deepfakes are effective when designed to target a specific demographic segment on a particular issue: "microtargeting" involves understanding the nuanced political concerns of a specific demographic and creating a deepfake tailored to connect with, and influence the viewpoint of, that demographic. The researchers found such targeted deepfakes to be notably effective. Research has also found that the political effects of deepfakes are not necessarily straightforward or assured. Researchers in the United Kingdom found that deepfake political disinformation has no guaranteed effect on populations, beyond indications that it may sow distrust or uncertainty toward the source that provides the deepfake. The implications of distrust in sources led the researchers to conclude that deepfakes may have an outsized effect in a "low-trust" information environment where public institutions are not trusted by the public.
Across the world, there are key instances where deepfakes have been used to misrepresent well-known politicians and other public figures.
In the United States, there have been some responses to the problems posed by deepfakes. In 2018, the Malicious Deep Fake Prohibition Act was introduced to the US Senate; in 2019, the Deepfakes Accountability Act was introduced in the 116th United States Congress by U.S. representative for New York's 9th congressional district Yvette Clarke. Several states have also introduced legislation regarding deepfakes, including Virginia, Texas, California, and New York; charges as varied as identity theft, cyberstalking, and revenge porn have been pursued, while more comprehensive statutes are urged.
In November 2019, China announced that deepfakes and other synthetically faked footage should bear a clear notice of their fakeness starting in 2020. Failure to comply could be considered a crime, the Cyberspace Administration of China stated on its website. The Chinese government appears to be reserving the right to prosecute both users and online video platforms that fail to abide by the rules. The Cyberspace Administration of China, the Ministry of Industry and Information Technology, and the Ministry of Public Security jointly issued the Administrative Provisions on Deep Synthesis in Internet-Based Information Services (the Deep Synthesis Provisions) in November 2022, and the updated provisions went into effect in January 2023.
In the United Kingdom, producers of deepfake material could be prosecuted for harassment, but deepfake production was not a specific crime until 2023, when the Online Safety Act was passed, making the sharing of deepfake pornography illegal; in 2024 the UK planned to expand the Act's scope to criminalize deepfakes created with the "intention to cause distress".
In India, there are no direct laws or regulations on AI or deepfakes, but provisions under the Indian Penal Code and the Information Technology Act 2000/2008 can be drawn on for legal remedies, and the proposed Digital India Act will have a chapter on AI and deepfakes in particular, according to Minister of State Rajeev Chandrasekhar.
In 2019, DARPA hosted a "proposers day" for the Semantic Forensics (SemaFor) program, in which researchers were tasked with preventing the viral spread of AI-manipulated media. The Semantic Forensics program also worked to detect AI-manipulated media by training computers to use common sense and logical reasoning. Built on MediFor's technologies, SemaFor's attribution algorithms infer whether digital media originates from a particular organization or individual, while characterization algorithms determine whether media was generated or manipulated for malicious purposes. In March 2024, SemaFor published an analytic catalog that offers the public access to open-source resources developed under the program.
Brandon, John (16 February 2018). "Terrifying high-tech porn: Creepy 'deepfake' videos are on the rise". Fox News. Archived from the original on 15 June 2018. Retrieved 20 February 2018. https://www.foxnews.com/tech/terrifying-high-tech-porn-creepy-deepfake-videos-are-on-the-rise/
Kalpokas, Ignas; Kalpokiene, Julija (2022). Deepfakes. Springer Cham. pp. 1–2. doi:10.1007/978-3-030-93802-4. ISBN 978-3-030-93801-7.
Berry, David M. (19 March 2025). "Synthetic media and computational capitalism: towards a critical theory of artificial intelligence". AI & Society. arXiv:2503.18976. doi:10.1007/s00146-025-02265-2. ISSN 1435-5655. https://doi.org/10.1007%2Fs00146-025-02265-2
Juefei-Xu, Felix; Wang, Run; Huang, Yihao; Guo, Qing; Ma, Lei; Liu, Yang (1 July 2022). "Countering Malicious DeepFakes: Survey, Battleground, and Horizon". International Journal of Computer Vision. 130 (7): 1678–1734. doi:10.1007/s11263-022-01606-8. ISSN 1573-1405. PMC 9066404. PMID 35528632. Archived from the original on 10 June 2024. Retrieved 15 July 2023. https://doi.org/10.1007/s11263-022-01606-8
Kietzmann, J.; Lee, L. W.; McCarthy, I. P.; Kietzmann, T. C. (2020). "Deepfakes: Trick or treat?" (PDF). Business Horizons. 63 (2): 135–146. doi:10.1016/j.bushor.2019.11.006. S2CID 213818098. Archived (PDF) from the original on 29 December 2022. Retrieved 30 December 2022. https://irep.ntu.ac.uk/id/eprint/38737/1/1247050_Lee.pdf
Waldrop, M. Mitchell (16 March 2020). "Synthetic media: The real trouble with deepfakes". Knowable Magazine. Annual Reviews. doi:10.1146/knowable-031320-1. Archived from the original on 19 November 2022. Retrieved 19 December 2022. https://knowablemagazine.org/article/technology/2020/synthetic-media-real-trouble-deepfakes
Schwartz, Oscar (12 November 2018). "You thought fake news was bad? Deep fakes are where truth goes to die". The Guardian. Archived from the original on 16 June 2019. Retrieved 14 November 2018. https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth
Farid, Hany (15 September 2019). "Image Forensics". Annual Review of Vision Science. 5 (1): 549–573. doi:10.1146/annurev-vision-091718-014827. ISSN 2374-4642. PMID 31525144. S2CID 263558880. Archived from the original on 10 June 2024. Retrieved 20 September 2023. https://www.annualreviews.org/doi/full/10.1146/annurev-vision-091718-014827
Banks, Alec (20 February 2018). "What Are Deepfakes & Why the Future of Porn is Terrifying". Highsnobiety. Archived from the original on 14 July 2021. Retrieved 20 February 2018. https://www.highsnobiety.com/p/what-are-deepfakes-ai-porn/
Christian, Jon. "Experts fear face swapping tech could start an international showdown". The Outline. Archived from the original on 16 January 2020. Retrieved 28 February 2018. https://theoutline.com/post/3179/deepfake-videos-are-freaking-experts-out
Roose, Kevin (4 March 2018). "Here Come the Fake Videos, Too". The New York Times. ISSN 0362-4331. Archived from the original on 18 June 2019. Retrieved 24 March 2018. https://www.nytimes.com/2018/03/04/technology/fake-videos-deepfakes.html
Schreyer, Marco; Sattarov, Timur; Reimer, Bernd; Borth, Damian (October 2019). "Adversarial Learning of Deepfakes in Accounting". arXiv:1910.03810 [cs.LG].
Caramancion, Kevin Matthe (21 April 2021). "The Demographic Profile Most at Risk of being Disinformed". 2021 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS). IEEE. pp. 1–7. doi:10.1109/iemtronics52119.2021.9422597. ISBN 978-1-6654-4067-7. S2CID 234499888. Archived from the original on 10 June 2024. Retrieved 9 June 2023.
Lalla, Vejay; Mitrani, Adine; Harned, Zach. "Artificial Intelligence: Deepfakes in the Entertainment Industry". World Intellectual Property Organization. Archived from the original on 8 November 2022. Retrieved 8 November 2022. https://www.wipo.int/wipo_magazine/en/2022/02/article_0003.html
Harwell, Drew (12 June 2019). "Top AI researchers race to detect 'deepfake' videos: 'We are outgunned'". The Washington Post. Archived from the original on 31 October 2019. Retrieved 8 November 2019. https://www.washingtonpost.com/technology/2019/06/12/top-ai-researchers-race-detect-deepfake-videos-we-are-outgunned/
Sanchez, Julian (8 February 2018). "Thanks to AI, the future of 'fake news' is being pioneered in homemade porn". NBC News. Archived from the original on 9 November 2019. Retrieved 8 November 2019. https://www.nbcnews.com/think/opinion/thanks-ai-future-fake-news-may-be-easily-faked-video-ncna845726
Porter, Jon (2 September 2019). "Another convincing deepfake app goes viral prompting immediate privacy backlash". The Verge. Archived from the original on 3 September 2019. Retrieved 8 November 2019. https://www.theverge.com/2019/9/2/20844338/zao-deepfake-app-movie-tv-show-face-replace-privacy-policy-concerns
Vatreš, Amina (2021). "Deepfake Phenomenon: An advanced form of fake news and its implications on reliable journalism". Društvene i humanističke studije. 16 (3): 561–576. doi:10.51558/2490-3647.2021.6.3.561. https://doi.org/10.51558%2F2490-3647.2021.6.3.561
Rana, Md Shohel; Nobi, Mohammad Nur; Murali, Beddhu; Sung, Andrew H. (2022). "Deepfake Detection: A Systematic Literature Review". IEEE Access. 10: 25494–25513. Bibcode:2022IEEEA..1025494R. doi:10.1109/ACCESS.2022.3154404. ISSN 2169-3536. https://doi.org/10.1109%2FACCESS.2022.3154404
Sudarsan, Ananya; Chua, Hui Na; Jasser, Muhammed Basheer; Wong, Richard T.K. (1 March 2024). "Deepfake Characterization, Propagation, and Detection in Social Media - A Synthesis Review". 2024 20th IEEE International Colloquium on Signal Processing & Its Applications (CSPA). IEEE. pp. 219–224. doi:10.1109/CSPA60979.2024.10525373. ISBN 979-8-3503-8231-0.
Alanazi, Sami; Asif, Seemal; Caird-daley, Antoinette; Moulitsas, Irene (20 February 2025). "Unmasking deepfakes: a multidisciplinary examination of social impacts and regulatory responses". Human-Intelligent Systems Integration. doi:10.1007/s42454-025-00060-4. ISSN 2524-4876. https://doi.org/10.1007%2Fs42454-025-00060-4
Bode, Lisa; Lees, Dominic; Golding, Dan (29 July 2021). "The Digital Face and Deepfakes on Screen". Convergence: The International Journal of Research into New Media Technologies. 27 (4): 849–854. doi:10.1177/13548565211034044. ISSN 1354-8565. S2CID 237402465. https://doi.org/10.1177%2F13548565211034044
Holliday, Christopher (26 July 2021). "Rewriting the stars: Surface tensions and gender troubles in the online media production of digital deepfakes". Convergence: The International Journal of Research into New Media Technologies. 27 (4): 899–918. doi:10.1177/13548565211029412. ISSN 1354-8565. S2CID 237402548. https://doi.org/10.1177%2F13548565211029412
Gingrich, Oliver M. (5 July 2021). "GENDER*UCK: Reframing gender & media art". Proceedings of EVA London 2021 (EVA 2021). Electronic Workshops in Computing. doi:10.14236/ewic/EVA2021.25. S2CID 236918199. https://doi.org/10.14236%2Fewic%2FEVA2021.25
Fletcher, John (2018). "Deepfakes, Artificial Intelligence, and Some Kind of Dystopia: The New Faces of Online Post-Fact Performance". Theatre Journal. 70 (4): 455–471. doi:10.1353/tj.2018.0097. ISSN 1086-332X. S2CID 191988083. https://muse.jhu.edu/article/715916
Öhman, Carl (1 June 2020). "Introducing the pervert's dilemma: a contribution to the critique of Deepfake Pornography". Ethics and Information Technology. 22 (2): 133–140. doi:10.1007/s10676-019-09522-1. ISSN 1572-8439. S2CID 208145457. https://doi.org/10.1007%2Fs10676-019-09522-1
van der Nagel, Emily (1 October 2020). "Verifying images: deepfakes, control, and consent". Porn Studies. 7 (4): 424–429. doi:10.1080/23268743.2020.1741434. ISSN 2326-8743. S2CID 242891792. Archived from the original on 10 June 2024. Retrieved 9 February 2022. https://www.tandfonline.com/doi/full/10.1080/23268743.2020.1741434
Fallis, Don (1 December 2021). "The Epistemic Threat of Deepfakes". Philosophy & Technology. 34 (4): 623–643. doi:10.1007/s13347-020-00419-2. ISSN 2210-5433. PMC 7406872. PMID 32837868. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7406872
Chesney, Robert; Citron, Danielle Keats (2018). "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security". SSRN Electronic Journal. doi:10.2139/ssrn.3213954. ISSN 1556-5068. Archived from the original on 21 December 2019. Retrieved 9 February 2022. https://www.ssrn.com/abstract=3213954
Yadlin-Segal, Aya; Oppenheim, Yael (February 2021). "Whose dystopia is it anyway? Deepfakes and social media regulation". Convergence: The International Journal of Research into New Media Technologies. 27 (1): 36–51. doi:10.1177/1354856520923963. ISSN 1354-8565. S2CID 219438536. Archived from the original on 9 February 2022. Retrieved 9 February 2022. http://journals.sagepub.com/doi/10.1177/1354856520923963
Hwang, Yoori; Ryu, Ji Youn; Jeong, Se-Hoon (1 March 2021). "Effects of Disinformation Using Deepfake: The Protective Effect of Media Literacy Education". Cyberpsychology, Behavior, and Social Networking. 24 (3): 188–193. doi:10.1089/cyber.2020.0174. ISSN 2152-2715. PMID 33646021. S2CID 232078561. Archived from the original on 10 June 2024. Retrieved 9 February 2022. https://www.liebertpub.com/doi/10.1089/cyber.2020.0174
Hight, Craig (12 November 2021). "Deepfakes and documentary practice in an age of misinformation". Continuum. 36 (3): 393–410. doi:10.1080/10304312.2021.2003756. ISSN 1030-4312. S2CID 244092288. Archived from the original on 9 February 2022. Retrieved 9 February 2022. https://www.tandfonline.com/doi/full/10.1080/10304312.2021.2003756
Hancock, Jeffrey T.; Bailenson, Jeremy N. (1 March 2021). "The Social Impact of Deepfakes". Cyberpsychology, Behavior, and Social Networking. 24 (3): 149–152. doi:10.1089/cyber.2021.29208.jth. ISSN 2152-2715. PMID 33760669. S2CID 232356146. Archived from the original on 10 June 2024. Retrieved 9 February 2022. https://www.liebertpub.com/doi/10.1089/cyber.2021.29208.jth
de Seta, Gabriele (30 July 2021). "Huanlian, or changing faces: Deepfakes on Chinese digital media platforms". Convergence: The International Journal of Research into New Media Technologies. 27 (4): 935–953. doi:10.1177/13548565211030185. hdl:11250/2833613. ISSN 1354-8565. S2CID 237402447. Archived from the original on 10 June 2024. Retrieved 9 February 2022. http://journals.sagepub.com/doi/10.1177/13548565211030185
Bregler, Christoph; Covell, Michele; Slaney, Malcolm (1997). "Video Rewrite: Driving visual speech with audio". Proceedings of the 24th annual conference on Computer graphics and interactive techniques – SIGGRAPH '97. Vol. 24. pp. 353–360. doi:10.1145/258734.258880. ISBN 0897918967. S2CID 2341707. Archived from the original on 10 June 2024. Retrieved 10 July 2023.
Suwajanakorn, Supasorn; Seitz, Steven M.; Kemelmacher-Shlizerman, Ira (July 2017). "Synthesizing Obama: Learning Lip Sync from Audio". ACM Trans. Graph. 36 (4): 95:1–95:13. doi:10.1145/3072959.3073640. S2CID 207586187. Archived from the original on 19 May 2020. Retrieved 10 July 2023. https://dl.acm.org/doi/10.1145/3072959.3073640
Thies, Justus; Zollhöfer, Michael; Stamminger, Marc; Theobalt, Christian; Nießner, Matthias (June 2016). "Face2Face: Real-Time Face Capture and Reenactment of RGB Videos". 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 2387–2395. arXiv:2007.14808. doi:10.1109/CVPR.2016.262. ISBN 9781467388511. S2CID 206593693.
"Deepfakes for dancing: you can now use AI to fake those dance moves you always wanted". The Verge. Archived from the original on 17 May 2019. Retrieved 27 August 2018. https://www.theverge.com/2018/8/26/17778792/deepfakes-video-dancing-ai-synthesis
Farquhar, Peter (27 August 2018). "An AI program will soon be here to help your deepface dancing – just don't call it deepfake". Business Insider Australia. Archived from the original on 10 April 2019. Retrieved 27 August 2018. https://web.archive.org/web/20190410050633/https://www.businessinsider.com.au/artificial-intelligence-ai-deepfake-dancing-2018-8
Mirsky, Yisroel; Mahler, Tom; Shelef, Ilan; Elovici, Yuval (2019). CT-GAN: Malicious Tampering of 3D Medical Imagery using Deep Learning. pp. 461–478. arXiv:1901.03597. ISBN 978-1-939133-06-9. Archived from the original on 20 June 2020. Retrieved 18 June 2020.
O'Neill, Patrick Howell (3 April 2019). "Researchers Demonstrate Malware That Can Trick Doctors Into Misdiagnosing Cancer". Gizmodo. Archived from the original on 10 June 2024. Retrieved 3 June 2022. https://gizmodo.com/researchers-demonstrate-malware-that-can-trick-doctors-1833786672
Mirsky, Yisroel; Lee, Wenke (12 May 2020). "The Creation and Detection of Deepfakes: A Survey". ACM Computing Surveys. arXiv:2004.11138. doi:10.1145/3425780. S2CID 216080410.
Karnouskos, Stamatis (2020). "Artificial Intelligence in Digital Media: The Era of Deepfakes" (PDF). IEEE Transactions on Technology and Society. 1 (3): 1. doi:10.1109/TTS.2020.3001312. S2CID 221716206. Archived (PDF) from the original on 14 July 2021. Retrieved 9 July 2020. https://papers.duckdns.org/files/2020_Deepfakes.pdf
Cole, Samantha (24 January 2018). "We Are Truly Fucked: Everyone Is Making AI-Generated Fake Porn Now". Vice. Archived from the original on 7 September 2019. Retrieved 4 May 2019. https://www.vice.com/en/article/reddit-fake-porn-app-daisy-ridley/
Haysom, Sam (31 January 2018). "People Are Using Face-Swapping Tech to Add Nicolas Cage to Random Movies and What Is 2018". Mashable. Archived from the original on 24 July 2019. Retrieved 4 April 2019. https://mashable.com/2018/01/31/nicolas-cage-face-swapping-deepfakes/
"r/SFWdeepfakes". Reddit. Archived from the original on 9 August 2019. Retrieved 12 December 2018. https://www.reddit.com/r/SFWdeepfakes/
Hathaway, Jay (8 February 2018). "Here's where 'deepfakes,' the new fake celebrity porn, went after the Reddit ban". The Daily Dot. Archived from the original on 6 July 2019. Retrieved 22 December 2018. https://www.dailydot.com/unclick/deepfake-sites-reddit-ban/
"What is a Deepfake and How Are They Made?". Online Tech Tips. 23 May 2019. Archived from the original on 8 November 2019. Retrieved 8 November 2019. https://www.online-tech-tips.com/computer-tips/what-is-a-deepfake-and-how-are-they-made/
Robertson, Adi (11 February 2018). "I'm using AI to face-swap Elon Musk and Jeff Bezos, and I'm really bad at it". The Verge. Archived from the original on 24 March 2018. Retrieved 8 November 2019. https://www.theverge.com/2018/2/11/16992986/fakeapp-deepfakes-ai-face-swapping
"Deepfakes web | The best online faceswap app". Deepfakes web. Archived from the original on 14 July 2021. Retrieved 21 February 2021. https://deepfakesweb.com/
"Faceswap is the leading free and Open Source multi-platform Deepfakes software". 15 October 2019. Archived from the original on 31 May 2021. Retrieved 14 July 2021 – via WordPress. https://faceswap.dev
"DeepFaceLab is a tool that utilizes machine learning to replace faces in videos. Includes prebuilt ready to work standalone Windows 7,8,10 binary (look readme.md).: iperov/DeepFaceLab". 19 June 2019. Archived from the original on 9 May 2019. Retrieved 6 March 2019 – via GitHub. https://github.com/iperov/DeepFaceLab
Chandler, Simon. "Why Deepfakes Are A Net Positive For Humanity". Forbes. Archived from the original on 16 November 2020. Retrieved 3 November 2020. https://www.forbes.com/sites/simonchandler/2020/03/09/why-deepfakes-are-a-net-positive-for-humanity/
Pangburn, D. J. (21 September 2019). "You've been warned: Full body deepfakes are the next step in AI-based human mimicry". Fast Company. Archived from the original on 8 November 2019. Retrieved 8 November 2019. https://www.fastcompany.com/90407145/youve-been-warned-full-body-deepfakes-are-the-next-step-in-ai-based-human-mimicry
Lyons, Kim (29 January 2020). "FTC says the tech behind audio deepfakes is getting better". The Verge. Archived from the original on 30 January 2020. Retrieved 8 February 2020. https://www.theverge.com/2020/1/29/21080553/ftc-deepfakes-audio-cloning-joe-rogan-phone-scams
"Audio samples from "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis"". google.github.io. Archived from the original on 14 November 2019. Retrieved 8 February 2020. https://google.github.io/tacotron/publications/speaker_adaptation/
Jia, Ye; Zhang, Yu; Weiss, Ron J.; Wang, Quan; Shen, Jonathan; Ren, Fei; Chen, Zhifeng; Nguyen, Patrick; Pang, Ruoming; Moreno, Ignacio Lopez; Wu, Yonghui (2 January 2019). "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis". arXiv:1806.04558 [cs.CL].
"TUM Visual Computing: Prof. Matthias Nießner". www.niessnerlab.org. Archived from the original on 21 February 2020. Retrieved 8 February 2020. http://www.niessnerlab.org/projects/roessler2019faceforensicspp.html
"Full Page Reload". IEEE Spectrum: Technology, Engineering, and Science News. 11 December 2019. Archived from the original on 26 June 2020. Retrieved 8 February 2020. https://spectrum.ieee.org/facebook-ai-launches-its-deepfake-detection-challenge
"Contributing Data to Deepfake Detection Research". 24 September 2019. Archived from the original on 5 February 2020. Retrieved 8 February 2020. http://ai.googleblog.com/2019/09/contributing-data-to-deepfake-detection.html
Thalen, Mikael. "You can now deepfake yourself into a celebrity with just a few clicks". Daily Dot. Archived from the original on 6 April 2020. Retrieved 3 April 2020. https://www.dailydot.com/debug/impressions-deepfake-app/
Matthews, Zane (6 March 2020). "Fun or Fear: Deepfake App Puts Celebrity Faces In Your Selfies". Kool1079. Archived from the original on 24 March 2020. Retrieved 6 March 2020. https://kool1079.com/fun-or-fear-deepfake-app-puts-celebrity-faces-in-your-selfies/
"Kanye West, Kim Kardashian and her dad: Should we make holograms of the dead?". BBC News. 31 October 2020. Archived from the original on 15 November 2020. Retrieved 11 November 2020. https://www.bbc.com/news/entertainment-arts-54753214
"Kanye West Gave Kim Kardashian a Hologram of Her Father for Her Birthday". themodems. 30 October 2020. Archived from the original on 11 November 2020. Retrieved 11 November 2020. https://www.themodems.com/post/kanye-west-gave-kim-kardashian-a-hologram-of-her-father-for-her-birthday
"Parkland victim Joaquin Oliver comes back to life in heartbreaking plea to voters". adage.com. 2 October 2020. Archived from the original on 11 November 2020. Retrieved 11 November 2020. https://adage.com/article/advertising/parkland-victim-joaquin-oliver-comes-back-life-heartbreaking-plea-voters/2285166
Bowenbank, Starr (14 September 2022). "Simon Cowell Duets With Elvis in Metaphysic's Latest Deepfake 'AGT' Performance: Watch". Billboard. Archived from the original on 15 September 2022. Retrieved 8 November 2022. https://www.billboard.com/culture/tv-film/simon-cowell-duet-elvis-deepfake-agt-performance-1235138799/
"John Lennon 'One Laptop per Child' Commecial". YouTube. 26 December 2008. Archived from the original on 9 March 2023. Retrieved 9 March 2023. https://www.youtube.com/watch?v=Oz9R82vWw08
Zucconi, Alan (14 March 2018). "Understanding the Technology Behind DeepFakes". Alan Zucconi. Archived from the original on 1 November 2019. Retrieved 8 November 2019. https://www.alanzucconi.com/2018/03/14/understanding-the-technology-behind-deepfakes/
"What is a Deepfake?". Blog - Synthesys. 3 May 2022. Archived from the original on 26 June 2022. Retrieved 17 May 2022. https://web.archive.org/web/20220626181456/https://blog.synthesys.io/what-is-deepfake/
"These New Tricks Can Outsmart Deepfake Videos—for Now". Wired. ISSN 1059-1028. Archived from the original on 3 October 2019. Retrieved 9 November 2019. https://www.wired.com/story/these-new-tricks-can-outsmart-deepfake-videosfor-now/
"These New Tricks Can Outsmart Deepfake Videos—for Now". Wired. ISSN 1059-1028. Archived from the original on 3 October 2019. Retrieved 9 November 2019. https://www.wired.com/story/these-new-tricks-can-outsmart-deepfake-videosfor-now/
Kemp, Luke (8 July 2019). "In the age of deepfakes, could virtual actors put humans out of business?". The Guardian. ISSN 0261-3077. Archived from the original on 20 October 2019. Retrieved 20 October 2019. https://www.theguardian.com/film/2019/jul/03/in-the-age-of-deepfakes-could-virtual-actors-put-humans-out-of-business
Verma, Pranshu (21 July 2023). "Digital clones made by AI tech could make Hollywood extras obsolete". Washington Post. Archived from the original on 20 July 2023. Retrieved 4 January 2024. https://www.washingtonpost.com/technology/2023/07/19/ai-actors-fear-sag-strike-hollywood/
"High-Resolution Neural Face Swapping for Visual Effects | Disney Research Studios". Archived from the original on 27 November 2020. Retrieved 7 October 2020. https://studios.disneyresearch.com/2020/06/29/high-resolution-neural-face-swapping-for-visual-effects/
"High-Resolution Neural Face Swapping for Visual Effects | Disney Research Studios". Archived from the original on 27 November 2020. Retrieved 7 October 2020. https://studios.disneyresearch.com/2020/06/29/high-resolution-neural-face-swapping-for-visual-effects/
"Disney's deepfake technology could be used in film and TV". Blooloop. 21 July 2020. Archived from the original on 12 November 2020. Retrieved 7 October 2020. https://blooloop.com/news/disney-deepfake-face-swap-technology/
"Disney's deepfake technology could be used in film and TV". Blooloop. 21 July 2020. Archived from the original on 12 November 2020. Retrieved 7 October 2020. https://blooloop.com/news/disney-deepfake-face-swap-technology/
Lindley, Jon A. (2 July 2020). "Disney Ventures Into Bringing Back 'Dead Actors' Through Facial Recognition". Tech Times. Archived from the original on 14 July 2021. Retrieved 7 October 2020. https://www.techtimes.com/articles/250776/20200702/disney-is-using-deepfakes-and-facial-recognition-to-bring-back-dead-actors.htm
Radulovic, Petrana (17 October 2018). "Harrison Ford is the star of Solo: A Star Wars Story thanks to deepfake technology". Polygon. Archived from the original on 20 October 2019. Retrieved 20 October 2019. https://www.polygon.com/2018/10/17/17989214/harrison-ford-solo-movie-deepfake-technology
Winick, Erin. "How acting as Carrie Fisher's puppet made a career for Rogue One's Princess Leia". MIT Technology Review. Archived from the original on 23 October 2019. Retrieved 20 October 2019. https://www.technologyreview.com/s/612241/how-acting-as-carrie-fishers-puppet-made-a-career-for-rogue-ones-princess-leia/
"Deepfake Luke Skywalker is another step down a ghoulish CGI path". British GQ. 10 February 2022. Archived from the original on 22 May 2022. Retrieved 3 June 2022. https://www.gq-magazine.co.uk/culture/article/boba-fett-luke-skywalker
Dazed (10 February 2022). "Will deepfakes rewrite history as we know it?". Dazed. Archived from the original on 8 June 2022. Retrieved 3 June 2022. https://www.dazeddigital.com/science-tech/article/55429/1/deepfake-museum-of-moving-image-media-unstable-evidence-on-screen
Schwartzel, Erich (21 December 2023). "Behind the Making of My AI Digital Double". Wall Street Journal. Archived from the original on 6 January 2024. Retrieved 4 January 2024. https://www.wsj.com/tech/ai/behind-the-making-of-my-ai-digital-double-0ff22ac8
Coffee, Patrick (18 June 2023). "Celebrities Use AI to Take Control of Their Own Images". Wall Street Journal. Archived from the original on 10 June 2024. Retrieved 4 January 2024. https://www.wsj.com/amp/articles/ai-deepfakes-celebrity-marketing-brands-81381aa6
Prescott, Katie (23 August 2024). "The man who creates fake people – like David Beckham speaking nine languages". The Times. Retrieved 22 October 2024. https://www.thetimes.com/business-money/companies/article/the-man-who-creates-fake-people-like-david-beckham-speaking-nine-languages-flvbxmpw3
"Not Vijay, Here's Who Played the Younger Version of Him in The GOAT". english.tupaki.com/. 8 September 2024. Retrieved 8 November 2024. https://english.tupaki.com/entertainment/vijay-younger-version-in-goat-1383681
Cizek, Katerina; Uricchio, William; Wolozin, Sarah. Collective Wisdom. Massachusetts Institute of Technology. Archived 4 March 2020 at the Wayback Machine. https://wip.mitpress.mit.edu/pub/collective-wisdom-part-6
"ANSA | Ornella Muti in cortometraggio a Firenze". 3 November 2017. Archived from the original on 27 February 2020. Retrieved 27 February 2020. http://www.ansa.it/toscana/notizie/2017/11/03/ornella-muti-in-cortometraggio-a-firenze_36349008-ce7b-4c7e-8742-43e28f7225f4.html
"'South Park' creators launch new deepfake satire series 'Sassy Justice'". NME. 27 October 2020. Archived from the original on 10 June 2024. Retrieved 7 June 2022. https://www.nme.com/news/tv/south-park-creators-launch-new-deepfake-satire-series-sassy-justice-2800657
Tayler, Kelley M.; Harris, Laurie A. (8 June 2021). Deep Fakes and National Security (Report). Congressional Research Service. p. 1. Archived from the original on 14 June 2022. Retrieved 19 July 2021. https://crsreports.congress.gov/product/pdf/IF/IF11333
Limberg, Peter (24 May 2020). "Blackmail Inflation". CultState. Archived from the original on 24 January 2021. Retrieved 18 January 2021. https://cultstate.com/2020/05/24/Podcast-18--Blackmail-Inflation/
"For Kappy". Telegraph. 24 May 2020. Archived from the original on 24 January 2021. Retrieved 18 January 2021. https://t.me/forKappy
"The AGT Judges Had Priceless Reactions to That Simon Cowell Singing Audition". NBC Insider Official Site. 8 June 2022. Archived from the original on 29 August 2022. Retrieved 29 August 2022. https://www.nbc.com/nbc-insider/agt-2022-see-the-judges-reactions-to-simon-cowell-singing
Marr, Bernard. "Can A Metaverse AI Win America's Got Talent? (And What That Means For The Industry)". Forbes. Archived from the original on 30 August 2022. Retrieved 30 August 2022. https://www.forbes.com/sites/bernardmarr/2022/08/30/can-a-metaverse-ai-win-americas-got-talent-and-what-that-means-for-the-industry/
Morales, Jowi (10 June 2022). "Deepfakes Go Mainstream: How Metaphysic's AGT Entry Will Impact Entertainment". MUO. Archived from the original on 10 June 2024. Retrieved 29 August 2022. https://www.makeuseof.com/deepfakes-mainstream-agt-entry/
Carter, Rebecca (1 June 2019). "BGT viewers slam Simon Cowell for 'rude' and 'nasty' remark to contestant". Entertainment Daily. Archived from the original on 31 August 2022. Retrieved 31 August 2022. https://www.entertainmentdaily.co.uk/tv/bgt-viewers-slam-simon-cowell-for-rude-and-nasty-remark-to-contestant/
"Simon Cowell Sings on Stage?! Metaphysic Will Leave You Speechless | AGT 2022". America's Got Talent via YouTube. Archived from the original on 29 August 2022. Retrieved 29 August 2022. https://www.youtube.com/watch?v=mPU0WNUzsBo&ab_channel=America%27sGotTalent
Segarra, Edward. "'AGT' judges Simon Cowell, Howie Mandel get 'deepfake' treatment by AI act Metaphysic: Watch here". USA TODAY. Archived from the original on 31 August 2022. Retrieved 31 August 2022. https://www.usatoday.com/story/entertainment/tv/2022/08/30/agt-simon-cowell-calls-ai-opera-best-act-metaphysic/7947094001/
Bowenbank, Starr (14 September 2022). "Simon Cowell Duets With Elvis in Metaphysic's Latest Deepfake 'AGT' Performance: Watch". Billboard. Archived from the original on 10 June 2024. Retrieved 15 September 2022. https://www.billboard.com/culture/tv-film/simon-cowell-duet-elvis-deepfake-agt-performance-1235138799/
Zwiezen, Zack (18 January 2021). "Website Lets You Make GLaDOS Say Whatever You Want". Kotaku. Archived from the original on 17 January 2021. Retrieved 18 January 2021. https://kotaku.com/this-website-lets-you-make-glados-say-whatever-you-want-1846062835
Ruppert, Liana (18 January 2021). "Make Portal's GLaDOS And Other Beloved Characters Say The Weirdest Things With This App". Game Informer. Archived from the original on 18 January 2021. Retrieved 18 January 2021. https://www.gameinformer.com/gamer-culture/2021/01/18/make-portals-glados-and-other-beloved-characters-say-the-weirdest-things
Clayton, Natalie (19 January 2021). "Make the cast of TF2 recite old memes with this AI text-to-speech tool". PC Gamer. Archived from the original on 19 January 2021. Retrieved 19 January 2021. https://www.pcgamer.com/make-the-cast-of-tf2-recite-old-memes-with-this-ai-text-to-speech-tool
Sherman, Maria (3 December 2023). "Kiss say farewell to live touring, become first US band to go virtual and become digital avatars". AP News. Associated Press. Archived from the original on 1 January 2024. Retrieved 4 January 2024. https://apnews.com/article/kiss-digital-avatars-end-of-road-finale-37a8ae9905099343c7b41654b2344d0c
Cerullo, Megan (9 January 2024). "AI-generated ads using Taylor Swift's likeness dupe fans with fake Le Creuset giveaway". CBS News. Archived from the original on 10 January 2024. Retrieved 10 January 2024. https://www.cbsnews.com/news/taylor-swift-le-creuset-ai-generated-ads/
Westfall, Chris. "AI Deepfakes On The Rise Causing Billions In Fraud Losses". Forbes. Retrieved 1 December 2024. https://www.forbes.com/sites/chriswestfall/2024/11/29/ai-deepfakes-of-elon-musk-on-the-rise-causing-billions-in-fraud-losses/
Hsu, Tiffany; Lu, Yiwen (9 January 2024). "No, That's Not Taylor Swift Peddling Le Creuset Cookware". The New York Times. p. B1. Retrieved 10 January 2024. https://www.nytimes.com/2024/01/09/technology/taylor-swift-le-creuset-ai-deepfake.html
Taylor, Derrick Bryson (2 October 2023). "Tom Hanks Warns of Dental Ad Using A.I. Version of Him". The New York Times. ISSN 0362-4331. Archived from the original on 10 June 2024. Retrieved 12 October 2023. https://www.nytimes.com/2023/10/02/technology/tom-hanks-ai-dental-video.html
Johnson, Kirsten (11 December 2023). "Arizona woman falls victim to deepfake scam using celebrities on social media". ABC 15 Arizona. Archived from the original on 10 January 2024. Retrieved 10 January 2024. https://www.abc15.com/news/let-joe-know/arizona-woman-falls-victim-to-deep-fake-scam-using-celebrities-on-social-media
Kulundu, Mary (4 January 2024). "Deepfake videos of Elon Musk used in get-rich-quick scam". Agence France-Presse. Archived from the original on 10 June 2024. Retrieved 10 January 2024. https://factcheck.afp.com/doc.afp.com.349D4AG
Esmael, Lisbet (3 January 2024). "PH needs multifaceted approach vs 'deepfake' videos used to scam Pinoys". CNN Philippines. Archived from the original on 10 January 2024. Retrieved 10 January 2024. https://web.archive.org/web/20240110171010/https://www.cnnphilippines.com/news/2024/1/3/cybersecurity-deepfake-technology.html
Gerken, Tom (4 October 2023). "MrBeast and BBC stars used in deepfake scam videos". BBC News. Archived from the original on 10 June 2024. Retrieved 10 January 2024. https://www.bbc.com/news/technology-66993651
Lim, Kimberly (29 December 2023). "Singapore PM Lee warns of 'very convincing' deepfakes 'spreading disinformation' after fake video of him emerges". South China Morning Post. Archived from the original on 9 January 2024. Retrieved 10 January 2024. https://www.scmp.com/week-asia/politics/article/3246701/singapore-pm-lee-warns-very-convincing-deepfakes-spreading-disinformation-after-fake-video-him
Taylor, Josh (30 November 2023). "Scammer paid Facebook 7c per view to circulate video of deepfake Jim Chalmers and Gina Rinehart". The Guardian. Archived from the original on 10 June 2024. Retrieved 10 January 2024. https://www.theguardian.com/technology/2023/dec/01/scammer-paid-facebook-7c-per-view-to-circulate-video-of-deepfake-jim-chalmers-and-gina-rinehart
Palmer, Joseph Olbrycht (14 December 2023). "Deepfake of Australian treasury, central bank officials used to promote investment scam". Agence France-Presse. Archived from the original on 10 January 2024. Retrieved 10 January 2024. https://factcheck.afp.com/doc.afp.com.34766ZF
Koebler, Jason (9 January 2024). "Deepfaked Celebrity Ads Promoting Medicare Scams Run Rampant on YouTube". 404 Media. Archived from the original on 10 January 2024. Retrieved 10 January 2024. https://www.404media.co/joe-rogan-taylor-swift-andrew-tate-ai-deepfake-youtube-medicare-ads/
Rosenblatt, Kalhan (3 October 2023). "MrBeast calls TikTok ad showing an AI version of him a 'scam'". NBC News. Archived from the original on 10 January 2024. Retrieved 10 January 2024. https://www.nbcnews.com/tech/mrbeast-ai-tiktok-ad-deepfake-rcna118596
Koebler, Jason (25 January 2024). "YouTube Deletes 1,000 Videos of Celebrity AI Scam Ads". 404 Media. Archived from the original on 10 June 2024. Retrieved 2 February 2024. https://www.404media.co/youtube-deletes-1-000-videos-of-celebrity-ai-scam-ads/
Bucci, Nino (27 November 2023). "Dick Smith criticises Facebook after scammers circulate deepfake video ad". The Guardian. Archived from the original on 10 June 2024. Retrieved 10 January 2024. https://www.theguardian.com/australia-news/2023/nov/27/dick-smith-criticises-facebook-after-scammers-circulate-deepfake-video-ad
Lomas, Natasha (7 July 2023). "Martin Lewis warns over 'first' deepfake video scam ad circulating on Facebook". TechCrunch. Archived from the original on 10 January 2024. Retrieved 10 January 2024. https://techcrunch.com/2023/07/07/martin-lewis-deepfake-scam-ad-facebook/
Lopatto, Elizabeth (3 January 2024). "Fun new deepfake consequence: more convincing crypto scams". The Verge. Archived from the original on 10 January 2024. Retrieved 10 January 2024. https://www.theverge.com/2024/1/3/24024262/youtube-twitter-x-crypto-solana-deepfake-scam
Spoto, Maia; Poritz, Isaiah (11 October 2023). "MrBeast, Tom Hanks Stung by AI Scams as Law Rushes to Keep Pace". Bloomberg Law. Archived from the original on 10 January 2024. Retrieved 10 January 2024. https://news.bloomberglaw.com/litigation/mrbeast-tom-hanks-stung-by-ai-scams-as-law-rushes-to-keep-pace
Statt, Nick (5 September 2019). "Thieves are now using AI deepfakes to trick companies into sending them money". Archived from the original on 15 September 2019. Retrieved 13 September 2019. https://www.theverge.com/2019/9/5/20851248/deepfakes-ai-fake-audio-phone-calls-thieves-trick-companies-stealing-money
Damiani, Jesse. "A Voice Deepfake Was Used To Scam A CEO Out Of $243,000". Forbes. Archived from the original on 14 September 2019. Retrieved 9 November 2019. https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/
"Deepfakes, explained". MIT Sloan. 5 March 2024. Archived from the original on 5 March 2024. Retrieved 6 March 2024. https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained
Schwartz, Christopher; Wright, Matthew (17 March 2023). "Voice deepfakes are calling – here's what they are and how to avoid getting scammed". The Conversation. Archived from the original on 4 January 2024. Retrieved 4 January 2024. https://theconversation.com/voice-deepfakes-are-calling-heres-what-they-are-and-how-to-avoid-getting-scammed-201449
Somers, Meredith (21 July 2020). "Deepfakes, explained". MIT Sloan. Archived from the original on 5 March 2024. Retrieved 6 March 2024. https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained
C, Kim (22 August 2020). "Coffin Dance and More: The Music Memes of 2020 So Far". Music Times. Archived from the original on 26 June 2021. Retrieved 26 August 2020. https://www.musictimes.com/articles/82157/20200822/coffin-dance-and-more-the-music-memes-of-2020-so-far.htm
Sholihyn, Ilyas (7 August 2020). "Someone deepfaked Singapore's politicians to lip-sync that Japanese meme song". AsiaOne. Archived from the original on 3 September 2020. Retrieved 26 August 2020. https://www.asiaone.com/digital/someone-deepfaked-singapores-politicians-lip-sync-japanese-meme-song
"Wenn Merkel plötzlich Trumps Gesicht trägt: die gefährliche Manipulation von Bildern und Videos". az Aargauer Zeitung. 3 February 2018. Archived from the original on 13 April 2019. Retrieved 9 April 2018. https://www.aargauerzeitung.ch/leben/digital/wenn-merkel-ploetzlich-trumps-gesicht-traegt-die-gefaehrliche-manipulation-von-bildern-und-videos-132155720
Gensing, Patrick. "Deepfakes: Auf dem Weg in eine alternative Realität?". Archived from the original on 11 October 2018. Retrieved 9 April 2018. http://faktenfinder.tagesschau.de/hintergrund/deep-fakes-101.html
Romano, Aja (18 April 2018). "Jordan Peele's simulated Obama PSA is a double-edged warning against fake news". Vox. Archived from the original on 11 June 2019. Retrieved 10 September 2018. https://www.vox.com/2018/4/18/17252410/jordan-peele-obama-deepfake-buzzfeed
Swenson, Kyle (11 January 2019). "A Seattle TV station aired doctored footage of Trump's Oval Office speech. The employee has been fired". The Washington Post. Archived from the original on 15 April 2019. Retrieved 11 January 2019. https://www.washingtonpost.com/nation/2019/01/11/seattle-tv-station-aired-doctored-footage-trumps-oval-office-speech-employee-has-been-fired/
O'Sullivan, Donie (4 June 2019). "Congress to investigate deepfakes as doctored Pelosi video causes stir". CNN. Archived from the original on 29 June 2019. Retrieved 9 November 2019. https://www.cnn.com/2019/06/04/politics/house-intelligence-committee-deepfakes-threats-hearing/index.html
"#TellTheTruthBelgium". Extinction Rebellion Belgium. Archived from the original on 25 April 2020. Retrieved 21 April 2020. https://www.extinctionrebellion.be/en/
Holubowicz, Gerald (15 April 2020). "Extinction Rebellion s'empare des deepfakes" [Extinction Rebellion seizes on deepfakes]. Journalism.design (in French). Archived from the original on 29 July 2020. Retrieved 21 April 2020. https://journalism.design/les-deepfakes/extinction-rebellion-sempare-des-deepfakes/
Carnahan, Dustin (16 September 2020). "Faked videos shore up false beliefs about Biden's mental health". The Conversation. Archived from the original on 9 April 2022. Retrieved 9 April 2022. https://theconversation.com/faked-videos-shore-up-false-beliefs-about-bidens-mental-health-145975
Parker, Ashley (7 September 2020). "Trump and allies ramp up efforts to spread disinformation and fake news". The Independent. Retrieved 9 April 2022. https://www.independent.co.uk/news/world/americas/us-election-2020/trump-us-election-fake-news-biden-twitter-deep-fake-videos-b404815.html
Christopher, Nilesh (18 February 2020). "We've Just Seen the First Use of Deepfakes in an Indian Election Campaign". Vice. Archived from the original on 19 February 2020. Retrieved 19 February 2020. https://www.vice.com/en/article/the-first-use-of-deepfakes-in-indian-election-by-bjp/
"Amabie: the mythical creature making a coronavirus comeback". The Economist. 28 April 2020. ISSN 0013-0613. Archived from the original on 20 May 2021. Retrieved 3 June 2021. https://www.economist.com/1843/2020/04/28/amabie-the-mythical-creature-making-a-coronavirus-comeback
Roth, Andrew (22 April 2021). "European MPs targeted by deepfake video calls imitating Russian opposition". The Guardian. Archived from the original on 29 March 2022. Retrieved 29 March 2022. https://www.theguardian.com/world/2021/apr/22/european-mps-targeted-by-deepfake-video-calls-imitating-russian-opposition
Ivanov, Maxim; Rothrock, Kevin (22 April 2021). "Hello, this is Leonid Volkov* Using deepfake video and posing as Navalny's right-hand man, Russian pranksters fool Latvian politicians and journalists into invitation and TV interview". Meduza. Archived from the original on 29 March 2022. Retrieved 29 March 2022. https://meduza.io/en/feature/2021/04/22/hello-this-is-leonid-volkov
"Dutch MPs in video conference with deep fake imitation of Navalny's Chief of Staff". nltimes.nl. 24 April 2021. Archived from the original on 10 June 2024. Retrieved 29 March 2022. https://nltimes.nl/2021/04/24/dutch-mps-video-conference-deep-fake-imitation-navalnys-chief-staff
"'Deepfake' Navalny Aide Targets European Lawmakers". The Moscow Times. 23 April 2021. Archived from the original on 29 March 2022. Retrieved 29 March 2022. https://www.themoscowtimes.com/2021/04/23/deepfake-navalny-aide-targets-european-lawmakers-a73717
Vincent, James (30 April 2021). "'Deepfake' that supposedly fooled European politicians was just a look-alike, say pranksters". The Verge. Archived from the original on 29 March 2022. Retrieved 29 March 2022. https://www.theverge.com/2021/4/30/22407264/deepfake-european-polticians-leonid-volkov-vovan-lexus
Novak, Matt (8 May 2023). "Viral Video Of Kamala Harris Speaking Gibberish Is Actually A Deepfake". Forbes. Archived from the original on 18 July 2023. Retrieved 18 July 2023. https://www.forbes.com/sites/mattnovak/2023/05/08/viral-video-of-kamala-harris-speaking-gibberish-is-deepfake/?sh=723384a270f7
"PolitiFact - Kamala Harris wasn't slurring about today, yesterday or tomorrow. This video is altered". Politifact. Archived from the original on 10 June 2024. Retrieved 18 July 2023. https://www.politifact.com/factchecks/2023/may/05/facebook-posts/kamala-harris-wasnt-slurring-about-today-yesterday/
Shuham, Matt (8 June 2023). "DeSantis Campaign Ad Shows Fake AI Images Of Trump Hugging Fauci". HuffPost. Archived from the original on 10 June 2024. Retrieved 8 June 2023. https://www.huffpost.com/entry/desantis-trump-fauci-fake-ai-ad_n_64822436e4b025003edc3c8b
"AI Deepfakes Pose Major Threat to Elections in US and India". The Washington Post. ISSN 0190-8286. Archived from the original on 20 May 2024. Retrieved 22 October 2024. https://www.washingtonpost.com/technology/2024/04/23/ai-deepfake-election-2024-us-india/
Christopher, Nilesh (March 2024). "Indian Voters Are Being Bombarded With Millions of Deepfakes. Political Candidates Approve". Wired. Archived from the original on 12 March 2024. Retrieved 20 October 2024. https://www.wired.com/story/indian-elections-ai-deepfakes/
"What an Indian Deepfaker Tells Us About Global Election Security". Bloomberg. Archived from the original on 1 April 2024. Retrieved 20 October 2024. https://www.bloomberg.com/features/2024-ai-election-security-deepfakes/
Politi, Daniel; Alcoba, Natalie (2 July 2025). "Argentina's President Joins A.I.-Fueled Smear Campaign Against Journalist". The New York Times. Retrieved 3 July 2025. https://www.nytimes.com/2025/07/02/world/americas/argentina-president-milei-press-attacks.html
"Julia Mengolini se quebró por una violenta campaña de libertarios a la que se sumó Milei" [Julia Mengolini broke down over a violent campaign by libertarians that Milei joined]. La Nación (in Spanish). Retrieved 28 June 2025. https://www.lanacion.com.ar/politica/julia-mengolini-se-quebro-por-una-campana-de-libertarios-a-la-que-se-sumo-milei-nid28062025/
Roettgers, Janko (21 February 2018). "Porn Producers Offer to Help Hollywood Take Down Deepfake Videos". Variety. Archived from the original on 10 June 2019. Retrieved 28 February 2018. https://variety.com/2018/digital/news/deepfakes-porn-adult-industry-1202705749/
Dickson, E. J. (7 October 2019). "Deepfake Porn Is Still a Threat, Particularly for K-Pop Stars". Rolling Stone. Archived from the original on 30 October 2019. Retrieved 9 November 2019. https://www.rollingstone.com/culture/culture-news/deepfakes-nonconsensual-porn-study-kpop-895605/
"The State of Deepfake - Landscape, Threats, and Impact" (PDF). Deeptrace. 1 October 2019. Archived (PDF) from the original on 9 August 2020. Retrieved 7 July 2020. https://regmedia.co.uk/2019/10/08/deepfake_report.pdf
Goggin, Benjamin (7 June 2019). "From porn to 'Game of Thrones': How deepfakes and realistic-looking fake videos hit it big". Business Insider. Archived from the original on 8 November 2019. Retrieved 9 November 2019. https://www.businessinsider.com/deepfakes-explained-the-rise-of-fake-realistic-videos-online-2019-6
Lee, Dave (3 February 2018). "'Fake porn' has serious consequences". Archived from the original on 1 December 2019. Retrieved 9 November 2019. https://www.bbc.com/news/technology-42912529
Cole, Samantha (19 June 2018). "Gfycat's AI Solution for Fighting Deepfakes Isn't Working". Vice. Archived from the original on 8 November 2019. Retrieved 9 November 2019. https://www.vice.com/en/article/gfycat-spotting-deepfakes-fake-ai-porn/
Zoe, Freni (24 November 2019). "Deepfake Porn Is Here To Stay". Medium. Archived from the original on 10 December 2019. Retrieved 10 December 2019. https://medium.com/@frenizoe/deepfake-porn-efb80f39bae3
Cole, Samantha; Maiberg, Emanuel; Koebler, Jason (26 June 2019). "This Horrifying App Undresses a Photo of Any Woman with a Single Click". Vice. Archived from the original on 2 July 2019. Retrieved 2 July 2019. https://www.vice.com/en/article/deepnude-app-creates-fake-nudes-of-any-woman/
Cox, Joseph (9 July 2019). "GitHub Removed Open Source Versions of DeepNude". Vice Media. Archived from the original on 24 September 2020. Retrieved 14 July 2019. https://www.vice.com/en/article/github-removed-open-source-versions-of-deepnude-app-deepfakes/
"pic.twitter.com/8uJKBQTZ0o". 27 June 2019. Archived from the original on 6 April 2021. Retrieved 3 August 2019. https://twitter.com/deepnudeapp/status/1144307316231200768
"Hundreds of sexual deepfake ads using Emma Watson's face ran on Facebook and Instagram in the last two days". NBC News. 7 March 2023. Archived from the original on 29 February 2024. Retrieved 8 March 2024. https://www.nbcnews.com/tech/social-media/emma-watson-deep-fake-scarlett-johansson-face-swap-app-rcna73624
Filipovic, Jill (31 January 2024). "Anyone could be a victim of 'deepfakes'. But there's a reason Taylor Swift is a target". The Guardian. ISSN 0261-3077. Archived from the original on 10 June 2024. Retrieved 8 March 2024. https://www.theguardian.com/commentisfree/2024/jan/31/taylor-swift-ai-pictures-far-right
Paris, Britt (October 2021). "Configuring Fakes: Digitized Bodies, the Politics of Evidence, and Agency". Social Media + Society. 7 (4). doi:10.1177/20563051211062919. ISSN 2056-3051. https://doi.org/10.1177%2F20563051211062919
Damiani, Jesse. "Chinese Deepfake App Zao Goes Viral, Faces Immediate Criticism Over User Data And Security Policy". Forbes. Archived from the original on 14 September 2019. Retrieved 18 November 2019. https://www.forbes.com/sites/jessedamiani/2019/09/03/chinese-deepfake-app-zao-goes-viral-faces-immediate-criticism-over-user-data-and-security-policy/
Porter, Jon (2 September 2019). "Another convincing deepfake app goes viral prompting immediate privacy backlash". The Verge. Archived from the original on 3 September 2019. Retrieved 8 November 2019. https://www.theverge.com/2019/9/2/20844338/zao-deepfake-app-movie-tv-show-face-replace-privacy-policy-concerns
"Ahead of Irish and US elections, Facebook announces new measures against 'deepfake' videos". Independent.ie. 7 January 2020. Archived from the original on 8 January 2020. Retrieved 7 January 2020. https://www.independent.ie/business/technology/ahead-of-irish-and-us-elections-facebook-announces-new-measures-against-deepfake-videos-38840513.html
"How Belgian visual expert Chris Ume masterminded Tom Cruise's deepfakes". The Statesman. 6 March 2021. Archived from the original on 24 August 2022. Retrieved 24 August 2022. https://www.thestatesman.com/technology/science/belgian-visual-expert-chris-ume-masterminded-tom-cruises-deepfakes-1502955882.html
Metz, Rachel. "How a deepfake Tom Cruise on TikTok turned into a very real AI company". CNN. Archived from the original on 10 June 2024. Retrieved 17 March 2022. https://edition.cnn.com/2021/08/06/tech/tom-cruise-deepfake-tiktok-company/index.html
Corcoran, Mark; Henry, Matt (23 June 2021). "This is not Tom Cruise. That's what has security experts so worried". ABC News. Archived from the original on 28 March 2022. Retrieved 28 March 2022. https://www.abc.net.au/news/2021-06-24/tom-cruise-deepfake-chris-ume-security-washington-dc/100234772
"Deepfake Used to Attack Activist Couple Shows New Disinformation Frontier". Reuters. 15 July 2020. Archived 26 September 2020 at the Wayback Machine. https://www.reuters.com/article/us-cyber-deepfake-activist-idUSKCN24G15E
"'Leftists for Bibi'? Deepfake Pro-Netanyahu Propaganda Exposed: According to a Series of Facebook Posts, the Israeli Prime Minister is Winning over Left-Wing Followers – Except that None of the People in Question Exist". +972 Magazine. 12 August 2020. Archived 14 August 2020 at the Wayback Machine. https://www.972mag.com/leftists-for-bibi-deepfake-pro-netanyahu-propaganda-exposed/
"הפורנוגרפיה של ההסתה" [The pornography of incitement: Netanyahu supporters continue to spread fake posts in social media groups; alongside ridiculous trolling, false images are circulated to deepen hatred and division in Israeli society]. The Seventh Eye (in Hebrew). 9 June 2020. Archived 18 August 2020 at the Wayback Machine. https://www.the7eye.org.il/375768
"Wenn Merkel plötzlich Trumps Gesicht trägt: die gefährliche Manipulation von Bildern und Videos". az Aargauer Zeitung. 3 February 2018. Archived from the original on 13 April 2019. Retrieved 9 April 2018. https://www.aargauerzeitung.ch/leben/digital/wenn-merkel-ploetzlich-trumps-gesicht-traegt-die-gefaehrliche-manipulation-von-bildern-und-videos-132155720
"Wenn Merkel plötzlich Trumps Gesicht trägt: die gefährliche Manipulation von Bildern und Videos". az Aargauer Zeitung. 3 February 2018. Archived from the original on 13 April 2019. Retrieved 9 April 2018. https://www.aargauerzeitung.ch/leben/digital/wenn-merkel-ploetzlich-trumps-gesicht-traegt-die-gefaehrliche-manipulation-von-bildern-und-videos-132155720
"Perfect Deepfake Tech Could Arrive Sooner Than Expected". www.wbur.org. 2 October 2019. Archived from the original on 30 October 2019. Retrieved 9 November 2019. https://www.wbur.org/hereandnow/2019/10/02/deepfake-technology
"Perfect Deepfake Tech Could Arrive Sooner Than Expected". www.wbur.org. 2 October 2019. Archived from the original on 30 October 2019. Retrieved 9 November 2019. https://www.wbur.org/hereandnow/2019/10/02/deepfake-technology
Sonnemaker, Tyler. "As social media platforms brace for the incoming wave of deepfakes, Google's former 'fraud czar' predicts the biggest danger is that deepfakes will eventually become boring". Business Insider. Archived from the original on 14 April 2021. Retrieved 14 April 2021. https://www.businessinsider.com/google-ex-fraud-czar-danger-of-deepfakes-is-becoming-boring-2020-1
"Wenn Merkel plötzlich Trumps Gesicht trägt: die gefährliche Manipulation von Bildern und Videos". az Aargauer Zeitung. 3 February 2018. Archived from the original on 13 April 2019. Retrieved 9 April 2018. https://www.aargauerzeitung.ch/leben/digital/wenn-merkel-ploetzlich-trumps-gesicht-traegt-die-gefaehrliche-manipulation-von-bildern-und-videos-132155720
Vaccari, Cristian; Chadwick, Andrew (January 2020). "Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News". Social Media + Society. 6 (1): 205630512090340. doi:10.1177/2056305120903408. ISSN 2056-3051. S2CID 214265502. https://doi.org/10.1177%2F2056305120903408
Pawelec, M (2022). "Deepfakes and Democracy (Theory): How Synthetic Audio-Visual Media for Disinformation and Hate Speech Threaten Core Democratic Functions". Digital Society: Ethics, Socio-legal and Governance of Digital Technology. 1 (2) 19. doi:10.1007/s44206-022-00010-6. PMC 9453721. PMID 36097613. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9453721
Bateman, Jon (2020). "Summary". Deepfakes and Synthetic Media in the Financial System: 1–2. Archived from the original on 20 April 2021. Retrieved 28 October 2020. https://www.jstor.org/stable/resrep25783.6
Kelion, Leo (September 2020). "Deepfake detection tool unveiled by Microsoft". BBC News. Archived from the original on 14 April 2021. Retrieved 15 April 2021. https://www.bbc.com/news/technology-53984114
Cohen, Ariel; Rimon, Inbal; Aflalo, Eran; Permuter, Haim H. (June 2022). "A study on data augmentation in voice anti-spoofing". Speech Communication. 141: 56–67. arXiv:2110.10491. doi:10.1016/j.specom.2022.04.005. S2CID 239050551. https://arxiv.org/abs/2110.10491
Manke, Kara (18 June 2019). "Researchers use facial quirks to unmask 'deepfakes'". Berkeley News. Archived from the original on 9 November 2019. Retrieved 9 November 2019. https://news.berkeley.edu/2019/06/18/researchers-use-facial-quirks-to-unmask-deepfakes/
Farid, Hany (1 December 2006). "Digital Doctoring: How to Tell the Real from the Fake". Significance. 3 (4): 162–166. doi:10.1111/j.1740-9713.2006.00197.x. S2CID 13861938. https://doi.org/10.1111%2Fj.1740-9713.2006.00197.x
Harwell, Drew (12 June 2019). "Top AI researchers race to detect 'deepfake' videos: 'We are outgunned'". The Washington Post. Archived from the original on 31 October 2019. Retrieved 8 November 2019. https://www.washingtonpost.com/technology/2019/06/12/top-ai-researchers-race-detect-deepfake-videos-we-are-outgunned/
"Join the Deepfake Detection Challenge (DFDC)". deepfakedetectionchallenge.ai. Archived from the original on 12 January 2020. Retrieved 8 November 2019. https://deepfakedetectionchallenge.ai/
"Deepfake Detection Challenge Results: An open initiative to advance AI". ai.facebook.com. Archived from the original on 29 October 2020. Retrieved 30 September 2022. https://ai.facebook.com/blog/deepfake-detection-challenge-results-an-open-initiative-to-advance-ai/
Groh, Matthew; Epstein, Ziv; Firestone, Chaz; Picard, Rosalind (2022). "Deepfake detection by human crowds, machines, and machine-informed crowds". Proceedings of the National Academy of Sciences. 119 (1). arXiv:2105.06496. Bibcode:2022PNAS..11910013G. doi:10.1073/pnas.2110013119. PMC 8740705. PMID 34969837. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8740705
Hu, Shu; Li, Yuezun; Lyu, Siwei (12 October 2020). "Exposing GAN-Generated Faces Using Inconsistent Corneal Specular Highlights". arXiv:2009.11924 [cs.CV]. https://arxiv.org/abs/2009.11924
Boháček, M; Farid, H (29 November 2022). "Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms". Proceedings of the National Academy of Sciences of the United States of America. 119 (48): e2216035119. Bibcode:2022PNAS..11916035B. doi:10.1073/pnas.2216035119. PMC 9860138. PMID 36417442. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9860138
"Google Scholar". scholar.google.com. Retrieved 30 April 2022. https://scholar.google.com/scholar?hl=en&as_sdt=0,39&q=Recurrent+Convolutional+Strategies+for+Face+Manipulation+Detection+in+Videos&btnG=
Masi, Iacopo; Killekar, Aditya; Mascarenhas, Royston Marian; Gurudatt, Shenoy Pratik; Abdalmageed, Wael (2020). Two-branch recurrent network for isolating deepfakes in videos. Lecture Notes in Computer Science. Vol. 12352. pp. 667–684. arXiv:2008.03412. doi:10.1007/978-3-030-58571-6_39. ISBN 978-3-030-58570-9. Archived from the original on 10 June 2024. Retrieved 30 April 2022. https://arxiv.org/abs/2008.03412
"Google Scholar". scholar.google.com. Retrieved 30 April 2022. https://scholar.google.com/scholar?hl=en&as_sdt=0,39&q=Recurrent+Convolutional+Strategies+for+Face+Manipulation+Detection+in+Videos&btnG=
Masi, Iacopo; Killekar, Aditya; Mascarenhas, Royston Marian; Gurudatt, Shenoy Pratik; Abdalmageed, Wael (2020). Two-branch recurrent network for isolating deepfakes in videos. Lecture Notes in Computer Science. Vol. 12352. pp. 667–684. arXiv:2008.03412. doi:10.1007/978-3-030-58571-6_39. ISBN 978-3-030-58570-9. Archived from the original on 10 June 2024. Retrieved 30 April 2022. 978-3-030-58570-9
"The Blockchain Solution to Our Deepfake Problems". Wired. ISSN 1059-1028. Archived from the original on 7 November 2019. Retrieved 9 November 2019. https://www.wired.com/story/the-blockchain-solution-to-our-deepfake-problems/
"The Blockchain Solution to Our Deepfake Problems". Wired. ISSN 1059-1028. Archived from the original on 7 November 2019. Retrieved 9 November 2019. https://www.wired.com/story/the-blockchain-solution-to-our-deepfake-problems/
"The Blockchain Solution to Our Deepfake Problems". Wired. ISSN 1059-1028. Archived from the original on 7 November 2019. Retrieved 9 November 2019. https://www.wired.com/story/the-blockchain-solution-to-our-deepfake-problems/
Leetaru, Kalev. "Why Digital Signatures Won't Prevent Deep Fakes But Will Help Repressive Governments". Forbes. Archived from the original on 14 April 2021. Retrieved 17 February 2021. https://www.forbes.com/sites/kalevleetaru/2018/09/09/why-digital-signatures-wont-prevent-deep-fakes-but-will-help-repressive-governments/
"To Uncover a Deepfake Video Call, Ask the Caller to Turn Sideways". Metaphysic. 8 August 2022. Archived from the original on 26 August 2022. Retrieved 24 August 2022. https://web.archive.org/web/20220826235234/https://metaphysic.ai/to-uncover-a-deepfake-video-call-ask-the-caller-to-turn-sideways/
Karnouskos, Stamatis (September 2020). "Artificial Intelligence in Digital Media: The Era of Deepfakes". IEEE Transactions on Technology and Society. 1 (3): 138–147. doi:10.1109/TTS.2020.3001312. ISSN 2637-6415. https://ieeexplore.ieee.org/document/9123958
Turós, Mátyás; Kenyeres, Attila Zoltán; Szűts, Zoltán (September 2024). "Fake video detection among secondary school students: The impact of sociocultural, media literacy and media use factors". Telematics and Informatics Reports. 15 100160. doi:10.1016/j.teler.2024.100160. https://doi.org/10.1016%2Fj.teler.2024.100160
Ahmed, Saifuddin (May 2023). "Navigating the maze: Deepfakes, cognitive ability, and social media news skepticism". New Media & Society. 25 (5): 1108–1129. doi:10.1177/14614448211019198. ISSN 1461-4448. https://journals.sagepub.com/doi/10.1177/14614448211019198
"Kate Middleton's ring mysteriously vanishes, raises more AI concerns". MSN. 25 March 2024. Archived from the original on 10 June 2024. Retrieved 19 May 2024. https://www.msn.com/en-ae/news/featured/kate-middleton-s-ring-mysteriously-vanishes-raises-more-ai-concerns/ar-BB1ktPZJ
Hindustan Times (5 April 2024). "'Kate's cancer admission is fake', Meghan Markle's fan and UCLA director, Johnathan Perkins, floats conspiracy theory". The Hindustan Times. Archived from the original on 10 June 2024. Retrieved 19 May 2024. https://www.hindustantimes.com/world-news/us-news/kates-cancer-admission-is-fake-meghan-markles-fan-and-ucla-director-johnathan-perkins-floats-conspiracy-theory-101712301940262.html
Hameleers, Michael; van der Meer, Toni G. L. A.; Dobber, Tom (February 2024). "They Would Never Say Anything Like This! Reasons To Doubt Political Deepfakes". European Journal of Communication. 39 (1): 56–70. doi:10.1177/02673231231184703. ISSN 0267-3231. https://doi.org/10.1177%2F02673231231184703
Dobber, Tom; Metoui, Nadia; Trilling, Damian; Helberger, Natali; de Vreese, Claes (January 2021). "Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes?". The International Journal of Press/Politics. 26 (1): 69–91. doi:10.1177/1940161220944364. ISSN 1940-1612. https://doi.org/10.1177%2F1940161220944364
Fagan, Kaylee. "A viral video that appeared to show Obama calling Trump a 'dips---' shows a disturbing new trend called 'deepfakes'". Business Insider. Archived from the original on 22 September 2020. Retrieved 3 November 2020. https://www.businessinsider.com/obama-deepfake-video-insulting-trump-2018-4
"The rise of the deepfake and the threat to democracy". The Guardian. Archived from the original on 1 November 2020. Retrieved 3 November 2020. https://www.theguardian.com/technology/ng-interactive/2019/jun/22/the-rise-of-the-deepfake-and-the-threat-to-democracy
"The rise of the deepfake and the threat to democracy". The Guardian. Archived from the original on 1 November 2020. Retrieved 3 November 2020. https://www.theguardian.com/technology/ng-interactive/2019/jun/22/the-rise-of-the-deepfake-and-the-threat-to-democracy
"Trump shares deepfake photo of himself praying as AI images of arrest spread online". The Independent. 24 March 2023. Archived from the original on 28 May 2023. Retrieved 16 June 2023. https://www.independent.co.uk/news/world/americas/us-politics/donald-trump-ai-praying-photo-b2307178.html
"AI-generated images of Trump being arrested circulate on social media". AP News. 21 March 2023. Archived from the original on 10 June 2024. Retrieved 10 October 2023. https://apnews.com/article/fact-check-trump-nypd-stormy-daniels-539393517762
Towers-Clark, Charles. "Mona Lisa And Nancy Pelosi: The Implications Of Deepfakes". Forbes. Archived from the original on 23 November 2020. Retrieved 7 October 2020. https://www.forbes.com/sites/charlestowersclark/2019/05/31/mona-lisa-and-nancy-pelosi-the-implications-of-deepfakes/
"What Is The Difference Between A Deepfake And Shallowfake?". 21 April 2020. Archived from the original on 26 June 2022. Retrieved 5 December 2021. https://web.archive.org/web/20220626115937/https://deepfakenow.com/what-is-the-difference-between-a-deepfake-and-shallowfake/
"Gallery: 'Spectre' Launches ( Press Release)". Bill Posters. 29 May 2019. Archived from the original on 10 June 2024. Retrieved 15 May 2024. https://billposters.ch/spectre-launch/
Cole, Samantha (11 June 2019). "This Deepfake of Mark Zuckerberg Tests Facebook's Fake Video Policies". Vice. Archived from the original on 10 June 2024. Retrieved 15 May 2024. https://www.vice.com/en/article/deepfake-of-mark-zuckerberg-facebook-fake-video-policy/
"Deepfake Putin is here to warn Americans about their self-inflicted doom". MIT Technology Review. Archived from the original on 30 October 2020. Retrieved 7 October 2020. https://www.technologyreview.com/2020/09/29/1009098/ai-deepfake-putin-kim-jong-un-us-election/
"Deepfake Putin is here to warn Americans about their self-inflicted doom". MIT Technology Review. Archived from the original on 30 October 2020. Retrieved 7 October 2020. https://www.technologyreview.com/2020/09/29/1009098/ai-deepfake-putin-kim-jong-un-us-election/
"Deepfake Putin is here to warn Americans about their self-inflicted doom". MIT Technology Review. Archived from the original on 30 October 2020. Retrieved 7 October 2020. https://www.technologyreview.com/2020/09/29/1009098/ai-deepfake-putin-kim-jong-un-us-election/
Sonne, Paul (5 June 2023). "Fake Putin Speech Calling for Martial Law Aired in Russia". The New York Times. Archived from the original on 10 June 2024. Retrieved 6 June 2023. https://www.nytimes.com/2023/06/05/world/europe/putin-deep-fake-speech-hackers.html
Allyn, Bobby (16 March 2022). "Deepfake video of Zelenskyy could be 'tip of the iceberg' in info war, experts warn". NPR. Archived from the original on 29 March 2022. Retrieved 17 March 2022. https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia
Satariano, Adam; Mozur, Paul (7 February 2023). "The People Onscreen Are Fake. The Disinformation Is Real". The New York Times. Archived from the original on 10 June 2024. Retrieved 10 February 2023. https://www.nytimes.com/2023/02/07/technology/artificial-intelligence-training-deepfake.html
"Pope Francis in Balenciaga deepfake fools millions: 'Definitely scary'". New York Post. 28 March 2023. Archived from the original on 10 June 2024. Retrieved 16 June 2023. https://nypost.com/2023/03/27/pope-francis-in-balenciaga-deepfake-fools-millions-definitely-scary/
Lu, Donna (31 March 2023). "Misinformation, mistakes and the Pope in a puffer: what rapidly evolving AI can – and can't – do". The Guardian. Archived from the original on 10 June 2024. Retrieved 16 June 2023. https://www.theguardian.com/technology/2023/apr/01/misinformation-mistakes-and-the-pope-in-a-puffer-what-rapidly-evolving-ai-can-and-cant-do
Murphy, Heather Tal (29 March 2023). "The Pope in a Coat Is Not From a Holy Place". Slate. Archived from the original on 10 June 2024. Retrieved 16 June 2023. https://slate.com/technology/2023/03/pope-coat-midjourney-puffer-jacket-balenciaga-explained.html
"Deepfake audio of Sir Keir Starmer released on first day of Labour conference". Sky News. Retrieved 29 May 2024. https://news.sky.com/story/labour-faces-political-attack-after-deepfake-audio-is-posted-of-sir-keir-starmer-12980181
"Woman in deepfake video with Rashmika Mandanna's face breaks silence: I'm deeply disturbed and upset by what is happening". The Times of India. 9 November 2023. ISSN 0971-8257. Archived from the original on 23 November 2023. Retrieved 23 November 2023. https://timesofindia.indiatimes.com/entertainment/hindi/bollywood/news/woman-in-deepfake-video-with-rashmika-mandannas-face-breaks-silence-im-deeply-disturbed-and-upset-by-what-is-happening/articleshow/105047285.cms?from=mdr
Cupin, Bea (24 April 2024). "Malacañang flags deepfake audio of Marcos ordering military attack". Rappler. Archived from the original on 10 June 2024. Retrieved 14 May 2024. https://www.rappler.com/philippines/malacanang-flags-deepfake-audio-marcos-ordering-military-attack-april-2024/
Flores, Helen (27 April 2024). "'Foreign actor' seen behind President Marcos audio deepfake". The Philippines Star. Archived from the original on 10 June 2024. Retrieved 14 May 2024. https://www.philstar.com/headlines/2024/04/27/2350826/foreign-actor-seen-behind-president-marcos-audio-deepfake
Argosino, Faith (14 May 2024). "Raps filed vs social media pages for libelous content, Marcos deepfake". Philippine Daily Inquirer. Archived from the original on 10 June 2024. Retrieved 14 May 2024. https://newsinfo.inquirer.net/1940406/raps-filed-vs-social-media-pages-for-libelous-content-marcos-deepfake
"Face-swapped? Deepfake detector flags alleged Marcos video as 'suspicious'". Rappler. 23 July 2024. Retrieved 25 July 2024. https://www.rappler.com/philippines/face-swapped-deepfake-detector-flags-alleged-marcos-video-suspicious/
"NBI, PNP findings show 'polvoron' video fake". The Philippine Star. 24 July 2024. Retrieved 25 July 2024. https://www.philstar.com/nation/2024/07/24/2372498/nbi-pnp-findings-show-polvoron-video-fake
Shepardson, David (23 May 2024). "US political consultant indicted over AI-generated Biden robocalls". Reuters. https://www.reuters.com/world/us/us-political-consultant-indicted-over-ai-generated-biden-robocalls-2024-05-23/
"US political consultant indicted over AI-generated Biden robocalls". AP News. 23 May 2024. Archived from the original on 10 June 2024. Retrieved 8 June 2024. https://apnews.com/article/biden-robocalls-ai-new-hampshire-charges-fines-9e9cc63a71eb9c78b9bb0d1ec2aa6e9c
Magramo, Kathleen (17 May 2024). "British engineering giant Arup revealed as $25 million deepfake scam victim". CNN. Retrieved 18 May 2025. https://edition.cnn.com/2024/05/16/tech/arup-deepfake-scam-loss-hong-kong-intl-hnk/index.html
Dela Cruz, Ailla (16 June 2025). "Fact Check: Video of students opposing Sara Duterte impeachment is AI-generated". Rappler. Retrieved 17 June 2025. https://www.rappler.com/newsbreak/fact-check/video-students-oppose-sara-duterte-impeachment-ai-generated/
Cabato, Luisa (16 June 2025). "Dela Rosa ridiculed over AI video on Sara Duterte impeachment". Philippine Daily Inquirer. Retrieved 17 June 2025. https://newsinfo.inquirer.net/2070901/dela-rosa-ridiculed-over-ai-video-on-sara-duterte-impeachment
Magsambol, Bonz (16 June 2025). "Sara Duterte: Nothing wrong with sharing AI video opposing my impeachment". Rappler. Retrieved 17 June 2025. https://www.rappler.com/philippines/sara-duterte-nothing-wrong-sharing-ai-video-opposing-impeachment/
"Help us shape our approach to synthetic and manipulated media". blog.twitter.com. Archived from the original on 28 October 2020. Retrieved 7 October 2020. https://blog.twitter.com/en_us/topics/company/2019/synthetic_manipulated_media_policy_feedback.html
"Help us shape our approach to synthetic and manipulated media". blog.twitter.com. Archived from the original on 28 October 2020. Retrieved 7 October 2020. https://blog.twitter.com/en_us/topics/company/2019/synthetic_manipulated_media_policy_feedback.html
"Help us shape our approach to synthetic and manipulated media". blog.twitter.com. Archived from the original on 28 October 2020. Retrieved 7 October 2020. https://blog.twitter.com/en_us/topics/company/2019/synthetic_manipulated_media_policy_feedback.html
"Help us shape our approach to synthetic and manipulated media". blog.twitter.com. Archived from the original on 28 October 2020. Retrieved 7 October 2020. https://blog.twitter.com/en_us/topics/company/2019/synthetic_manipulated_media_policy_feedback.html
"TechCrunch". TechCrunch. 11 November 2019. Archived from the original on 14 July 2021. Retrieved 7 October 2020. https://techcrunch.com/2019/11/11/twitter-drafts-a-deepfake-policy-that-would-label-and-warn-but-not-remove-manipulated-media/
"Five US states push Musk to fix AI chatbot over election misinformation". Reuters. Retrieved 19 August 2024. https://www.reuters.com/technology/artificial-intelligence/five-us-states-push-musk-fix-ai-chatbot-over-election-misinformation-2024-08-05/
"Noxious images spread after Elon Musk launches AI tool with few guardrails". The Washington Post. Retrieved 19 August 2024. https://www.washingtonpost.com/technology/2024/08/16/elon-musk-grok-ai/
"How Elon Musk and X Became the Biggest Purveyors of Online Misinformation". Rolling Stone. Retrieved 19 August 2024. https://www.rollingstone.com/culture/culture-features/elon-musk-twitter-misinformation-timeline-1235076786/
"Deepfake Detection Challenge Results: An open initiative to advance AI". ai.facebook.com. Archived from the original on 29 October 2020. Retrieved 7 October 2020. https://ai.facebook.com/blog/deepfake-detection-challenge-results-an-open-initiative-to-advance-ai/
"Deepfake Detection Challenge Results: An open initiative to advance AI". ai.facebook.com. Archived from the original on 29 October 2020. Retrieved 7 October 2020. https://ai.facebook.com/blog/deepfake-detection-challenge-results-an-open-initiative-to-advance-ai/
Paul, Katie (4 February 2020). "Twitter to label deepfakes and other deceptive media". Reuters. Archived from the original on 10 October 2020. Retrieved 7 October 2020. https://www.reuters.com/article/us-twitter-security-idUSKBN1ZY2OV
Cole, Samantha (31 January 2018). "AI-Generated Fake Porn Makers Have Been Kicked Off Their Favorite Host". Vice. Archived from the original on 1 November 2019. Retrieved 18 November 2019. https://www.vice.com/en/article/deepfakes-ai-porn-removed-from-gfycat/
Ghoshal, Abhimanyu (7 February 2018). "Twitter, Pornhub and other platforms ban AI-generated celebrity porn". The Next Web. Archived from the original on 20 December 2019. Retrieved 9 November 2019. https://thenextweb.com/insider/2018/02/07/twitter-pornhub-and-other-platforms-ban-ai-generated-celebrity-porn/
Böhm, Markus (7 February 2018). ""Deepfakes": Firmen gehen gegen gefälschte Promi-Pornos vor". Spiegel Online. Archived from the original on 23 September 2019. Retrieved 9 November 2019. https://www.spiegel.de/netzwelt/web/deepfakes-online-plattformen-wollen-fake-promi-pornos-loeschen-a-1192170.html
barbara.wimmer (8 February 2018). "Deepfakes: Reddit löscht Forum für künstlich generierte Fake-Pornos". futurezone.at (in German). Archived from the original on 8 February 2018. Retrieved 9 November 2019. https://futurezone.at/digital-life/deepfakes-reddit-loescht-forum-fuer-kuenstlich-generierte-fake-pornos/400003061
"Deepfakes: Auch Reddit verbannt Fake-Porn". heise online (in German). 8 February 2018. Archived from the original on 10 April 2019. Retrieved 9 November 2019. https://www.heise.de/newsticker/meldung/Deepfakes-Auch-Reddit-verbannt-Fake-Porn-3962987.html
"Reddit verbannt Deepfake-Pornos - derStandard.de". DER STANDARD (in Austrian German). Archived from the original on 9 November 2019. Retrieved 9 November 2019. https://www.derstandard.at/story/2000073855676/reddit-verbannt-deepfake-pornos
Robertson, Adi (7 February 2018). "Reddit bans 'deepfakes' AI porn communities". The Verge. Archived from the original on 24 September 2019. Retrieved 9 November 2019. https://www.theverge.com/2018/2/7/16982046/reddit-deepfakes-ai-celebrity-face-swap-porn-community-ban
Cole, Samantha (6 February 2018). "Twitter Is the Latest Platform to Ban AI-Generated Porn". Vice. Archived from the original on 1 November 2019. Retrieved 8 November 2019. https://www.vice.com/en/article/twitter-bans-deepfakes/
Price, Rob (27 January 2018). "Discord just shut down a chat group dedicated to sharing porn videos edited with AI to include celebrities". Business Insider Australia. Archived from the original on 15 December 2019. Retrieved 28 November 2019. https://www.businessinsider.com/discord-closes-down-deepfakes-server-ai-celebrity-porn-2018-1
"Twitter bans 'deepfake' AI-generated porn". Engadget. 20 July 2019. Archived from the original on 15 December 2019. Retrieved 28 November 2019. https://www.engadget.com/2018/02/07/twitter-joins-those-banning-deepfake-ai-porn/
Harrell, Drew. "Fake-porn videos are being weaponized to harass and humiliate women: 'Everybody is a potential target'". The Washington Post. Archived from the original on 2 January 2019. Retrieved 1 January 2019. https://www.washingtonpost.com/technology/2018/12/30/fake-porn-videos-are-being-weaponized-harass-humiliate-women-everybody-is-potential-target
Cole, Samantha (6 February 2018). "Pornhub Is Banning AI-Generated Fake Porn Videos, Says They're Nonconsensual". Vice. Archived from the original on 1 November 2019. Retrieved 9 November 2019. https://www.vice.com/en/article/pornhub-bans-deepfakes/
Beres, Damon; Gilmer, Marcus (2 February 2018). "A guide to 'deepfakes,' the internet's latest moral crisis". Mashable. Archived from the original on 9 December 2019. Retrieved 9 November 2019. https://mashable.com/2018/02/02/what-are-deepfakes/
"Facebook has promised to leave up a deepfake video of Mark Zuckerberg". MIT Technology Review. Archived from the original on 16 October 2019. Retrieved 9 November 2019. https://www.technologyreview.com/f/613690/facebook-deepfake-zuckerberg-instagram-social-media-election-video/
Cole, Samantha (11 June 2019). "This Deepfake of Mark Zuckerberg Tests Facebook's Fake Video Policies". Vice. Archived from the original on 12 October 2019. Retrieved 9 November 2019. https://www.vice.com/en/article/deepfake-of-mark-zuckerberg-facebook-fake-video-policy/
"Facebook has promised to leave up a deepfake video of Mark Zuckerberg". MIT Technology Review. Archived from the original on 16 October 2019. Retrieved 9 November 2019. https://www.technologyreview.com/f/613690/facebook-deepfake-zuckerberg-instagram-social-media-election-video/
Anderson, Martin (2022). Google Has Banned the Training of Deepfakes in Colab Archived 30 May 2022 at the Wayback Machine, Unite.ai, May 28, 2022 https://www.unite.ai/google-has-banned-the-training-of-deepfakes-in-colab/
Maiberg, Emanuel (2022). It Takes 2 Clicks to Get From 'Deep Tom Cruise' to Vile Deepfake Porn, VICE, May 17, 2022 https://www.vice.com/en/article/ethical-deepfakes-deep-tom-cruise-ai-generated-porn/
Sasse, Ben (21 December 2018). "S. 3805–115th Congress (2017-2018): Malicious Deep Fake Prohibition Act of 2018". www.congress.gov. Archived from the original on 16 October 2019. Retrieved 16 October 2019. https://www.congress.gov/bill/115th-congress/senate-bill/3805
Clarke, Yvette D. (28 June 2019). "H.R.3230 - 116th Congress (2019-2020): Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019". www.congress.gov. Archived from the original on 17 December 2019. Retrieved 16 October 2019. https://www.congress.gov/bill/116th-congress/house-bill/3230
"'Deepfake' revenge porn is now illegal in Virginia". TechCrunch. July 2019. Archived from the original on 14 July 2021. Retrieved 16 October 2019. https://techcrunch.com/2019/07/01/deepfake-revenge-porn-is-now-illegal-in-virginia/
Iacono Brown, Nina (15 July 2019). "Congress Wants to Solve Deepfakes by 2020. That Should Worry Us". Slate Magazine. Archived from the original on 16 October 2019. Retrieved 16 October 2019. https://slate.com/technology/2019/07/congress-deepfake-regulation-230-2020.html
"Bill Text - AB-602 Depiction of individual using digital or electronic technology: sexually explicit material: cause of action". leginfo.legislature.ca.gov. Archived from the original on 17 November 2019. Retrieved 9 November 2019. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB602
"Bill Text - AB-730 Elections: deceptive audio or visual media". leginfo.legislature.ca.gov. Archived from the original on 31 October 2019. Retrieved 9 November 2019. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB730
"Bill Text - AB-602 Depiction of individual using digital or electronic technology: sexually explicit material: cause of action". leginfo.legislature.ca.gov. Archived from the original on 17 November 2019. Retrieved 9 November 2019. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB602
"Bill Text - AB-730 Elections: deceptive audio or visual media". leginfo.legislature.ca.gov. Archived from the original on 31 October 2019. Retrieved 9 November 2019. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB730
"H.R. 5586: DEEPFAKES Accountability Act". GovTrack.us. Retrieved 15 August 2024. https://www.govtrack.us/congress/bills/118/hr5586
"H.R. 6943: No AI FRAUD Act". GovTrack.us. Retrieved 15 August 2024. https://www.govtrack.us/congress/bills/118/hr6943
"China seeks to root out fake news and deepfakes with new online content rules". Reuters.com. Reuters. 29 November 2019. Archived from the original on 17 December 2019. Retrieved 17 December 2019. https://www.reuters.com/article/us-china-technology/china-seeks-to-root-out-fake-news-and-deepfakes-with-new-online-content-rules-idUSKBN1Y30VU
Statt, Nick (29 November 2019). "China makes it a criminal offense to publish deepfakes or fake news without disclosure". The Verge. Archived from the original on 22 December 2019. Retrieved 17 December 2019. https://www.theverge.com/2019/11/29/20988363/china-deepfakes-ban-internet-rules-fake-news-disclosure-virtual-reality
"China to Regulate Deep Synthesis (Deepfake) Technology Starting 2023". China Briefing. Retrieved 15 August 2024. https://www.china-briefing.com/news/china-to-regulate-deep-synthesis-deep-fake-technology-starting-january-2023/
"China: Provisions on Deep Synthesis Technology Enter into Effect". Library of Congress. Retrieved 15 August 2024. https://www.loc.gov/item/global-legal-monitor/2023-04-25/china-provisions-on-deep-synthesis-technology-enter-into-effect/
"Call for upskirting bill to include 'deepfake' pornography ban". The Guardian. Archived 21 June 2018 at the Wayback Machine. https://www.theguardian.com/world/2018/jun/21/call-for-upskirting-bill-to-include-deepfake-pornography-ban
"Creating sexually explicit deepfakes to become a criminal offence". BBC News. Retrieved 15 August 2024. https://www.bbc.com/news/uk-68823042
"Creating sexually explicit deepfake images to be made offence in UK". The Guardian. Retrieved 15 August 2024. https://www.theguardian.com/technology/2024/apr/16/creating-sexually-explicit-deepfake-images-to-be-made-offence-in-uk
Canadian Centre for Cyber Security report (PDF), see page 18. Archived 22 November 2019 at the Wayback Machine. https://cyber.gc.ca/sites/default/files/publications/tdp-2019-report_e.pdf
Bogart, Nicole (10 September 2019). "How deepfakes could impact the 2019 Canadian election". Federal Election 2019. Archived from the original on 27 January 2020. Retrieved 28 January 2020. https://election.ctvnews.ca/how-deepfakes-could-impact-the-2019-canadian-election-1.4586847
"What Can The Law Do About Deepfake". mcmillan.ca. Archived from the original on 7 December 2019. Retrieved 28 January 2020. https://mcmillan.ca/What-Can-The-Law-Do-About-Deepfake
"An Overview of Canada's Online Harms Act". TechPolicy.Press. Retrieved 15 August 2024. https://www.techpolicy.press/an-overview-of-canadas-online-harms-act/
"Bill C-63" (PDF). House of Commons of Canada. Retrieved 15 August 2024. https://www.parl.ca/Content/Bills/441/Government/C-63/C-63_1/C-63_1.PDF
Venkatasubbbu, Satish (27 June 2023). "How deepfakes are used to scam You & Me? Current trends on detection using AI & legal regulations worldwide". cybermithra.in. Archived from the original on 10 June 2024. Retrieved 3 July 2023. https://cybermithra.in/2023/06/27/deepfakes-part-2
Romero-Moreno, Felipe (29 March 2024). "Generative AI and deepfakes: a human rights approach to tackling harmful content". International Review of Law, Computers & Technology. 39 (2): 297–326. doi:10.1080/13600869.2024.2324540. hdl:2299/20431. ISSN 1360-0869. https://doi.org/10.1080%2F13600869.2024.2324540
"Elon Musk's X targeted with nine privacy complaints after grabbing EU users' data for training Grok". TechCrunch. Retrieved 19 August 2024. https://techcrunch.com/2024/08/11/elon-musks-x-targeted-with-eight-privacy-complaints-after-grabbing-eu-users-data-for-training-grok/
Hatmaker, Taylor (1 May 2018). "DARPA is funding new tech that can identify manipulated videos and 'deepfakes'". TechCrunch. Archived from the original on 8 December 2023. Retrieved 14 April 2024. https://techcrunch.com/2018/04/30/deepfakes-fake-videos-darpa-sri-international-media-forensics/
Hsu, Jeremy (22 June 2018). "Experts Bet on First Deepfakes Political Scandal - IEEE Spectrum". IEEE. Archived from the original on 22 February 2024. Retrieved 14 April 2024. https://spectrum.ieee.org/experts-bet-on-first-deepfakes-political-scandal
"Media Forensics". www.darpa.mil. Archived from the original on 29 October 2020. Retrieved 7 October 2020. https://www.darpa.mil/program/media-forensics
"The US military is funding an effort to catch deepfakes and other AI trickery". MIT Technology Review. Archived from the original on 1 November 2020. Retrieved 7 October 2020. https://www.technologyreview.com/2018/05/23/142770/the-us-military-is-funding-an-effort-to-catch-deepfakes-and-other-ai-trickery/
Collins, Connor (11 March 2019). "DARPA Tackles Deepfakes With AI". GovCIO Media & Research. Archived from the original on 2 March 2024. Retrieved 14 April 2024. https://govciomedia.com/darpa-tackles-deepfakes-with-ai/
"DARPA Is Taking On the Deepfake Problem". Nextgov.com. 6 August 2019. Archived from the original on 28 October 2020. Retrieved 7 October 2020. https://www.nextgov.com/emerging-tech/2019/08/darpa-taking-deepfake-problem/158980/
"DARPA Is Taking On the Deepfake Problem". Nextgov.com. 6 August 2019. Archived from the original on 28 October 2020. Retrieved 7 October 2020. https://www.nextgov.com/emerging-tech/2019/08/darpa-taking-deepfake-problem/158980/
Sybert, Sarah (16 September 2021). "DARPA Launches New Programs to Detect Falsified Media". GovCIO Media & Research. Archived from the original on 10 June 2024. Retrieved 14 April 2024. https://govciomedia.com/darpa-launches-new-programs-to-detect-falsified-media/
Cooper, Naomi (15 March 2024). "DARPA Launches 2 New Efforts to Boost Defenses Against Manipulated Media". Archived from the original on 15 March 2024. Retrieved 14 April 2024. https://executivegov.com/2024/03/darpa-launches-2-new-efforts-to-boost-defenses-against-manipulated-media/
"Semantic Forensics - Analytic Catalog". semanticforensics.com. Archived from the original on 18 April 2024. Retrieved 14 April 2024. https://semanticforensics.com/analytic-catalog
National Academies of Sciences, Engineering, and Medicine (22 June 2023). "Nobel Prize Summit Fuels Initiatives to Combat Misinformation and Disinformation and Build Trust in Science". Wikidata Q124711722. https://www.nationalacademies.org/news/2023/06/nobel-prize-summit-fuels-initiatives-to-combat-misinformation-and-disinformation-and-build-trust-in-science
"SFE: Wodhams, Jack". sf-encyclopedia.com. Retrieved 6 January 2025. https://sf-encyclopedia.com/entry/wodhams_jack
"Picaper". Internet Speculative Fiction Database. Archived from the original on 29 July 2020. Retrieved 9 July 2019. http://www.isfdb.org/cgi-bin/title.cgi?48679
Kerr, Philip (2010). A Philosophical Investigation. National Geographic Books. ISBN 978-0143117537.
Bernal, Natasha (8 October 2019). "The disturbing truth behind The Capture and real life deepfakes". The Telegraph. Archived from the original on 14 October 2019. Retrieved 24 October 2019. https://www.telegraph.co.uk/technology/2019/10/08/truth-behind-deepfake-video-bbc-ones-thriller-capture/
Crawley, Peter (5 September 2019). "The Capture: A BBC thriller of surveillance, distortion and duplicity". The Irish Times. Archived from the original on 9 September 2019. Retrieved 24 October 2019. https://www.irishtimes.com/culture/tv-radio-web/the-capture-a-bbc-thriller-of-surveillance-distortion-and-duplicity-1.4008823
John Travolta is Forrest Gump [DeepFake]. Archived from the original on 20 April 2024. Retrieved 20 April 2024 – via www.youtube.com. https://www.youtube.com/watch?v=6sUO2pgWAGc
Novak, Lauren (24 November 2023). "John Travolta Turned Down 'Forrest Gump' & Other Stars Who Chose Not to Play Iconic Characters". Remind. Archived from the original on 10 June 2024. Retrieved 20 April 2024. https://www.remindmagazine.com/article/8243/stars-who-turned-down-iconic-roles/
"ESPN Films Latest 30 for 30 Documentary Al Davis vs. The NFL to Premiere February 4" (Press release). ESPN. 15 January 2021. Archived from the original on 6 February 2021. Retrieved 5 February 2021. https://espnpressroom.com/us/press-releases/2021/01/espn-films-latest-30-for-30-documentary-al-davis-vs-the-nfl-to-premiere-february-4/
Sprung, Shlomo (1 February 2021). "ESPN Documentary 'Al Davis Vs The NFL' Uses Deepfake Technology To Bring Late Raiders Owner Back To Life". Forbes. Archived from the original on 14 April 2021. Retrieved 4 February 2021. https://www.forbes.com/sites/shlomosprung/2021/02/02/al-davis-vs-the-nfl-uses-deepfake-technology-to-bring-late-raiders-owner-pete-rozelle-back-to-life/
"Hudson and Rex". Archived from the original on 13 January 2022. Retrieved 13 January 2022. https://www.citytv.com/show/hudson-rex/
Wood, Mikael (9 May 2022). "Watch Kendrick Lamar morph into O.J., Kanye, Kobe, Nipsey Hussle in new video". Los Angeles Times. Archived from the original on 30 October 2022. Retrieved 10 May 2022. https://www.latimes.com/entertainment-arts/music/story/2022-05-09/kendrick-lamar-new-video-the-heart-part-5-deepfake
"Aloe Blacc - Wake Me Up (Universal Language Mix)". 20 April 2022. Archived from the original on 24 August 2022. Retrieved 24 August 2022 – via www.youtube.com. https://www.youtube.com/watch?v=TzofRTcoPsU
"Watch Aloe Blacc Perform "Wake Me Up" in 3 Languages to Honor Avicii Using Respeecher AI Translation". Voicebot.ai. 5 May 2022. Archived from the original on 24 August 2022. Retrieved 24 August 2022. https://voicebot.ai/2022/05/05/watch-aloe-blacc-perform-wake-me-up-in-3-languages-to-honor-avicii-using-respeecher-ai-translation/
Lees, Dominic (27 January 2023). "Deep Fake Neighbour Wars: ITV's comedy shows how AI can transform popular culture". The Conversation. Archived from the original on 24 June 2023. Retrieved 3 July 2023. https://theconversation.com/deep-fake-neighbour-wars-itvs-comedy-shows-how-ai-can-transform-popular-culture-198569
Taylor, Derrick Bryson (2 October 2023). "Tom Hanks Warns of Dental Ad Using A.I. Version of Him". The New York Times. ISSN 0362-4331. Archived from the original on 10 June 2024. Retrieved 12 October 2023. https://www.nytimes.com/2023/10/02/technology/tom-hanks-ai-dental-video.html