Concerns are growing that generative artificial intelligence can now be used to create highly convincing images of property damage, making it easier for dishonest claims to slip through undetected. What once required advanced editing skills can now be achieved in seconds using simple AI tools, raising fresh risks for landlords and insurers who depend on photographic evidence.
A recent study by SAS has highlighted how AI can fabricate realistic scenes of crashes, damaged buildings and other incidents that look entirely genuine at first glance. These tools are already being exploited by fraudsters and organised crime groups, and their increasing availability means the threat is no longer limited to sophisticated criminals.
For landlords, the issue is particularly worrying. Many rely on images sent by tenants to assess damage, especially when managing properties remotely or through letting agents. If those images have been altered or generated by AI, it becomes far harder to establish whether a claim is genuine or exaggerated. This creates uncertainty not only for landlords but also for insurers handling the resulting claims.
The research warns that small and subtle changes, sometimes referred to as “vanilla synthetics”, are among the most dangerous forms of manipulation. These edits may involve adding a crack to a wall, darkening a stain, or creating the appearance of impact damage. Because the alterations are minor, they are difficult for the human eye to detect and often pass unnoticed during routine checks.
According to the Insurance Fraud Register, insurance fraud already adds an average of £50 a year to consumer premiums. With AI making image manipulation easier and faster, there are concerns that fraudulent claims could rise further, placing additional pressure on the insurance system and driving up costs for honest policyholders.
To test how convincing these fake images can be, SAS showed doctored photographs to members of the public. The results were striking, with around 40% of participants unable to identify which images had been manipulated. This suggests that even when people are warned to be cautious, many still struggle to spot AI-generated content.
A spokesperson for SAS explained that fraudsters are using generative AI to make fabricated damage appear completely plausible. With just a few written prompts, they can create, enhance or remove visual evidence to support false claims. This could include making a minor mark look like serious damage or inventing an entire scene of destruction where none exists.
They advised that those reviewing claims should look closely for small inconsistencies. These may include shadows that fall in unnatural directions, damage that does not match the supposed cause, blurred number plates, or backgrounds that appear strangely empty or overly tidy. Such tiny visual mismatches are often the first clues that an image has been altered by AI.
Despite these risks, experts stress that artificial intelligence is not only a tool for fraudsters. The same technology can be used by insurers and investigators to analyse claims data more effectively. AI systems are capable of spotting unusual patterns and anomalies that human reviewers might miss, helping to flag suspicious cases for further investigation.
This dual role of AI means the challenge is not simply to stop its use, but to stay ahead of those who misuse it. Insurers are already investing in detection systems that can assess image authenticity and cross-check claims against wider data sets. Over time, these tools are expected to become a standard part of fraud prevention strategies.
For landlords, the message is clear: relying solely on photographs may no longer be enough. Where possible, in-person inspections, time-stamped images, and independent verification could become more important. Letting agents may also need to update their processes to account for the growing sophistication of digital manipulation.
Tenants, meanwhile, should be aware that submitting altered images could carry serious legal consequences. While AI makes it easier to fake damage, it also leaves digital traces that can be uncovered with the right tools. Attempting to exploit this technology for personal gain could result in rejected claims or legal action.
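One of the simplest of those digital traces is camera metadata. As a rough illustration only (not a description of how any particular insurer or tool works), a reviewer could check whether a submitted JPEG contains an EXIF segment at all: photos taken on real cameras and phones almost always embed one, while many AI generators and editing pipelines produce files without it. The sketch below scans a JPEG's marker segments for EXIF using only the standard library; the function name and the interpretation of a missing segment are this example's own assumptions, and absence of EXIF is a weak signal at best, since legitimate apps can strip metadata and fraudsters can add fake metadata.

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Scan a JPEG byte stream for an EXIF APP1 segment.

    A missing segment is only one weak signal that an image may have
    been generated or re-saved by editing software, never proof.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:  # lost sync with marker structure
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # start-of-scan: metadata segments are over
            break
        # Each segment stores its own length in the two bytes after the marker
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 8] == b"Exif":
            return True  # found an EXIF APP1 segment
        i += 2 + length  # skip to the next marker
    return False
```

In practice a check like this would sit alongside many other signals, such as reverse image search, timestamp cross-checks, and the visual-inconsistency cues described earlier, rather than being used on its own.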
The rise of AI-generated damage imagery represents another example of how technology is reshaping the property and insurance sectors. What was once a niche concern is quickly becoming a mainstream issue, affecting landlords, tenants and insurers alike.
As generative AI continues to improve, vigilance will be essential. Training staff to recognise warning signs, adopting stronger verification systems and using AI to fight AI may become standard practice across the industry.
Ultimately, while artificial intelligence has the potential to make fraud more convincing, it also offers powerful tools to detect and prevent it. The challenge for landlords and insurers is to adapt quickly, ensuring that trust in photographic evidence is supported by robust checks and modern technology.