Microsoft has partnered with StopNCII to help remove non-consensual intimate images, including deepfakes, from its Bing search engine.
When a victim opens a "case" with StopNCII, the service creates a digital fingerprint, also called a "hash," of an intimate image or video stored on that person's device, without requiring them to upload the file. The hash is then sent to participating industry partners, who can seek out matches for the original and remove them from their platform if it breaks their content policies. The process also applies to AI-generated deepfakes of a real person.
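The hash-and-match flow described above can be sketched in a few lines. Note the hedge: StopNCII's production system uses perceptual hashing (technology such as Meta's PDQ), which also matches resized or re-encoded copies; the cryptographic hash below is a simplified stand-in that only matches byte-identical files, and the function names are illustrative, not StopNCII's API.

```python
import hashlib


def fingerprint_file(path: str, chunk_size: int = 65536) -> str:
    """Compute a digest of a local file without uploading it anywhere.

    Illustrative only: the real pipeline uses a perceptual hash, not
    SHA-256, so that altered copies of an image still match.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files don't have to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def matches(candidate_hash: str, case_database: set[str]) -> bool:
    """A partner platform checks an uploaded image's hash against the
    shared case list; only hashes, never the images, are exchanged."""
    return candidate_hash in case_database
```

The key privacy property is visible in the sketch: only the hexadecimal digest leaves the device, so partners can detect matches without ever receiving the intimate image itself.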
Several other tech companies have agreed to work with StopNCII to scrub intimate images shared without permission. Meta helped build the tool and uses it on its Facebook, Instagram and Threads platforms; other services that have partnered with the effort include Reddit, Snap, Niantic, OnlyFans, PornHub, Playhouse and Redgifs.
Absent from that list, unusually, is Google. The tech giant has its own set of tools for reporting non-consensual images. Still, declining to participate in one of the few centralized clearinghouses for scrubbing revenge porn and other private images arguably places an additional burden on victims, who must take a piecemeal approach to recovering their privacy.
Alongside efforts like StopNCII, the US government has taken some steps this year to specifically address the harms done by the deepfake side of non-consensual imagery. The White House has called for new legislation on the subject, and a group of senators moved to protect victims with a bill introduced in July.
If you believe you've been the victim of non-consensual intimate image-sharing, you can open a case with StopNCII and file a report with Google; if you're under the age of 18, you can file a report with NCMEC.