How To Spot a Deepfake

After terror attacks, deepfakes spread fast. Here’s how to spot and report them

In the immediate aftermath of violent attacks, social media often fills the information vacuum before verified facts are available. That space is increasingly occupied by AI-generated deepfakes and manipulated “breaking news” content designed to mislead, inflame, and spread fear.

Following the recent terror attack in Australia, AI-generated images and videos purporting to show the Bondi gunmen, those who intervened, or victims have circulated widely online. This content distorts public understanding, retraumatises affected communities, and creates opportunities for extremist narratives to take hold.

Understanding how to identify and report deepfakes has become a core element of digital literacy.


How to spot deepfakes and manipulated content

AI-generated or altered videos frequently contain small inconsistencies that are easy to miss when content is viewed quickly or under emotional strain. Common warning signs include:

  • faces that appear unnaturally smooth or blurred

  • mouth movements that do not align with speech

  • unusual or exaggerated body movements (e.g. the subject of the video reacts to something while background figures do not)

  • lighting inconsistencies, such as a person’s face that does not match the surrounding environment

  • audio that sounds clipped, flat, or distorted (e.g. constant static in the background)

“Breaking news” clips require particular caution. Videos that fail to identify a reporter or location, rely heavily on urgent or alarmist language, or circulate exclusively on social media without confirmation from police or reputable news outlets warrant scepticism. Content framed as “leaked CCTV”, “police bodycam footage”, or “exclusive video” is especially prone to manipulation in the chaotic hours following an attack.


Stop. Think before you share

Slowing the spread of harmful content often begins with a pause. Before sharing material related to violent events, it is worth asking: Who posted this first? Is that source trustworthy? Has the information been confirmed by official channels, such as police? Does sharing this contribute to public understanding, or does it spread fear and speculation?

Where uncertainty remains, choosing not to share helps limit harm.


How to report deepfakes in New Zealand

Most social media platforms provide options to report manipulated or harmful content. While these systems have limitations, mass reporting can reduce a post’s visibility and circulation. Relevant categories to choose when reporting usually include “false information”, “false news”, “misinformation”, “manipulated media”, or “violence and terrorism”.

In New Zealand, additional support is available. Where deepfake content targets someone in New Zealand or causes serious emotional distress, reports can be made to Netsafe. You can contact them via their website, text “Netsafe” to 4282, or call them on 0508 638 723.

Netsafe offers free advice and can engage directly with platforms when necessary.


Why bother worrying about deepfakes?

Deepfakes play a significant role in shaping how mass violence is understood and discussed online. In the aftermath of terror attacks, manipulated content can foster fear, divert attention from verified information, and exploit grief for attention or ideological purposes. These dynamics contribute to misidentification, racialised narratives, and erosion of trust in legitimate reporting.

Choosing not to share unverified content and taking steps to report it contributes to a safer information environment. In moments of collective shock, restraint and care have real impact.


Dr Cassandra Mudgway is a senior law lecturer who studies online harm. You can find more of her analysis and advice on Instagram.

Kyle Church