NIST issues guidelines to detect fake face photos and prevent identity fraud
NIST has released practical guidelines to help organisations detect manipulated face photos, a growing tool of identity fraud. The advice emphasises combining technology, human review, and prevention measures to strengthen trust in biometric identification systems.

The US National Institute of Standards and Technology (NIST) has published new guidelines to help organisations detect morphed face photos—digitally manipulated images that combine features of two people. Such morphs are increasingly used in identity fraud and can trick facial recognition systems used in passports, border control, and other security checks.
What is face morphing, and why is it a problem?
Face morphing software blends photos of two individuals into a single, realistic-looking image. These images can sometimes fool biometric systems into recognising the photo as belonging to both original people. For example, someone applying for a passport with a morphed photo could later allow another person to use the same document, bypassing identity checks.
Creating morphs has become easier in recent years thanks to smartphone apps, AI tools, and graphics software. While some morphs leave visible flaws, like odd textures around the eyes or lips, others are so convincing that they can evade human reviewers and automated systems.
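To make the underlying technique concrete, the sketch below shows the simplest possible form of blending: a pixel-wise average of two aligned portraits in Python. It is a deliberately naive illustration, not a method from the NIST report; real morphing tools also warp facial landmarks before blending, and the file names here are placeholders. Crude blends like this tend to produce exactly the ghosting and texture flaws around the eyes and lips that reviewers look for.

```python
# Deliberately naive illustration of the blending idea behind face morphing:
# a pixel-wise average of two pre-aligned portrait photos.
# Real morphing tools also warp facial landmarks before blending; without
# that step, the result usually shows visible ghosting and texture flaws.
# File names are placeholders, not examples from the NIST report.
from PIL import Image

def naive_blend(path_a: str, path_b: str, alpha: float = 0.5) -> Image.Image:
    """Return a pixel-wise blend of two portraits (alpha=0.5 weights both equally)."""
    img_a = Image.open(path_a).convert("RGB")
    img_b = Image.open(path_b).convert("RGB").resize(img_a.size)
    return Image.blend(img_a, img_b, alpha)

if __name__ == "__main__":
    naive_blend("portrait_a.jpg", "portrait_b.jpg").save("naive_blend.jpg")
```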
What the guidelines provide
NIST’s new report, Face Analysis Technology Evaluation (FATE) MORPH 4B, offers practical advice for using morph detection software and for investigating suspicious photos. The document explains two main scenarios:
- Single-image detection: When only one photo is available, such as during a passport application. Detection accuracy can be high when the detection algorithm has already encountered morphs made with the same software, but it drops sharply against unfamiliar morphing methods.
- Differential detection: When a suspicious photo can be compared to a trusted reference photo, such as one taken at a border checkpoint. This method is more consistent, with an accuracy between 72% and 90%. Both scenarios are sketched in the snippet after this list.
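In implementation terms, the two scenarios correspond to two different kinds of detector. The Python sketch below shows how they might be exposed in an application pipeline; both functions are hypothetical placeholders, not an API defined in the NIST report.

```python
# Hypothetical placeholders for the two detection scenarios described above.
# Neither function comes from the NIST report; an agency would plug in an
# evaluated single-image or differential morph-detection algorithm here.

def single_image_morph_score(submitted_photo: bytes) -> float:
    """Single-image detection: only the submitted photo is available
    (e.g. during a passport application). Returns a morph likelihood in [0, 1]."""
    raise NotImplementedError("integrate a single-image morph detector")

def differential_morph_score(submitted_photo: bytes, trusted_capture: bytes) -> float:
    """Differential detection: the submitted photo is compared against a
    trusted reference, such as a live capture at a border checkpoint."""
    raise NotImplementedError("integrate a differential morph detector")
```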
The guidelines stress that software alone is not enough. Effective detection requires a combination of automated tools, human review, and clear workflows for handling flagged photos. Importantly, they recommend preventing manipulated photos from entering systems in the first place, for example, by requiring live photo capture during ID applications.
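As a rough illustration of such a workflow, the sketch below routes each photo based on an automated score: low scores pass, borderline scores go to a trained reviewer, and high scores are rejected with a requirement for live photo capture. The thresholds and the score scale are illustrative assumptions, not values from the guidelines.

```python
# Minimal sketch of a triage workflow in the spirit of the guidelines:
# automated scoring first, with ambiguous photos routed to a trained human
# reviewer rather than decided by software alone. Thresholds are illustrative
# assumptions, not values taken from the NIST report.
from enum import Enum

class Decision(Enum):
    ACCEPT = "accept"
    HUMAN_REVIEW = "route to trained reviewer"
    REJECT_REQUIRE_LIVE_CAPTURE = "reject; require live photo capture"

def triage(score: float, review_threshold: float = 0.3,
           reject_threshold: float = 0.8) -> Decision:
    """Route a photo based on an automated morph-detection score in [0, 1]."""
    if score >= reject_threshold:
        # Strong evidence of manipulation: do not accept the uploaded photo;
        # ask the applicant to appear for a live capture instead.
        return Decision.REJECT_REQUIRE_LIVE_CAPTURE
    if score >= review_threshold:
        # Ambiguous cases go to a human examiner, since software alone is
        # not considered sufficient.
        return Decision.HUMAN_REVIEW
    return Decision.ACCEPT

if __name__ == "__main__":
    print(triage(0.65))  # -> Decision.HUMAN_REVIEW
```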
Why this matters
Identity fraud enabled by morphing attacks threatens public safety, border security, and trust in biometric identification systems. By offering tailored guidance, NIST aims to help governments, passport offices, and border agencies deploy detection systems more effectively.
For civil society, the issue matters because secure identity verification underpins many aspects of daily life, from safe travel and protected borders to preventing criminals from misusing false identities. At the same time, transparency in how morph detection is used helps protect individuals from errors or misuse of biometric data.
Looking ahead
NIST has been testing morph detection tools since 2018 and reports that the technology is steadily improving. The new guidelines are intended to raise awareness of morphing threats and to help organisations configure their systems responsibly. As NIST’s Mei Ngan put it: ‘It’s important to know that morphing attacks are happening, and there are ways to mitigate them. The most effective way is to not allow users the opportunity to submit a manipulated photo for an ID credential in the first place.’