See3
We built software for proving that images, videos and audio are real. We defend the truth against deepfake disinformation using cryptography.
| | See3 | AI Detection | C2PA | Watermarking |
| --- | --- | --- | --- | --- |
| Privacy-Preserving | Yes | Yes | No | No |
| Regulatory Compliant | Yes | Under Some Conditions | Not When It Tracks Users | Doesn't Support Moderation |
| Invisible to Users | Yes | Requires User Judgement | Lacks Central Trust System | A Little Better Than Detection Models |
| Easy to Implement | Yes | Yes | Bad Tools | So-So |
| Can't Be Faked | Yes | No | Yes | Harder to Fake |
| Survives Harmless Edits | Mostly | Yes | No | Mostly |
| Supports Image Editors | Yes | No | Yes | No |
| Records Provenance Data | Yes | No | Yes | Possible |
How does See3 work?
Each See3-enabled device goes through an authentication process that proves it is running See3-approved software which will not mark deepfakes as authentic. If authentication succeeds, the See3 Network issues a unique cryptographic secret to the approved device. This secret lets the device anonymously mark images as authentic at the moment they are captured by its camera sensor. It is bound to the device hardware and cannot be extracted. Because the secret cannot be extracted from the hardware and is only usable by honest capture software, only real images will be marked as authentic.
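The flow above can be sketched roughly as follows. This is an illustrative stand-in only: the function names are hypothetical, the secret is modeled as plain bytes rather than hardware-sealed key material, and a bare HMAC stands in for See3's actual zero-knowledge proofs.

```python
import hashlib
import hmac
import os

# Stand-in for the hardware-bound secret: in a real device it lives in
# secure hardware and never leaves it. Here it is just random bytes,
# notionally issued by the network after successful attestation.
DEVICE_SECRET = os.urandom(32)

def mark_authentic(image_bytes: bytes) -> dict:
    """Produce a capture-time authenticity record for an image.

    Sketch only: See3's real protocol uses zero-knowledge proofs rather
    than a MAC, so its proofs are anonymous and unlinkable.
    """
    digest = hashlib.sha256(image_bytes).digest()
    tag = hmac.new(DEVICE_SECRET, digest, hashlib.sha256).hexdigest()
    return {"image_sha256": digest.hex(), "proof": tag}

def verify(image_bytes: bytes, record: dict, secret: bytes) -> bool:
    """Check that the record was made over exactly these image bytes."""
    digest = hashlib.sha256(image_bytes).digest()
    expected = hmac.new(secret, digest, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["proof"])
```

The key property the sketch preserves: a proof is only valid for the exact bytes that came off the sensor, so tampered media fails verification.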
How does See3 ensure privacy?
See3 uses zero-knowledge cryptography to guarantee that the capture device and software are approved by See3 (or our clients) while keeping device-specific information confidential. See3 Media Proofs are indistinguishable from one another, even given multiple images from the same device.
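The unlinkability idea can be shown with a toy randomized commitment. This is only an illustration of the principle, not See3's construction: a real zero-knowledge proof also convinces a verifier the device is approved without ever revealing `device_id`.

```python
import hashlib
import os

def proof_token(device_id: bytes) -> str:
    """Toy unlinkable token: fresh randomness per capture means two
    tokens from the same device share no visible structure."""
    nonce = os.urandom(16)
    return hashlib.sha256(device_id + nonce).hexdigest()

# Two captures from the same device yield unrelated-looking tokens.
t1 = proof_token(b"device-A")
t2 = proof_token(b"device-A")
```

Without the per-capture nonce, an observer cannot tell whether `t1` and `t2` came from the same device or from two different ones.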
Can I attach my own ID to an image with See3?
Although See3's protocol was built to authenticate devices rather than humans, yes! We'll be introducing a variant of See3 that lets an author verifiably mark content with their public identity, or with their membership in an organization. In that case, the author chooses how much to reveal about themselves at time of capture.
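Choosing how much to reveal can be sketched with per-attribute salted hash commitments. All names here are hypothetical, and real identity-credential systems typically use zero-knowledge proofs instead of plain salted hashes; this just shows the selective-disclosure idea.

```python
import hashlib
import os

def commit(attrs: dict) -> tuple:
    """Commit to each attribute separately so they can be revealed
    independently later."""
    salts = {k: os.urandom(16) for k in attrs}
    comm = {k: hashlib.sha256(salts[k] + attrs[k].encode()).hexdigest()
            for k in attrs}
    return comm, salts

def reveal(attrs: dict, salts: dict, keys: list) -> dict:
    """Disclose only the chosen attributes, with their salts."""
    return {k: (attrs[k], salts[k]) for k in keys}

def check(comm: dict, disclosed: dict) -> bool:
    """Verifier re-hashes each disclosed value against the commitment."""
    return all(hashlib.sha256(salt + value.encode()).hexdigest() == comm[k]
               for k, (value, salt) in disclosed.items())

attrs = {"author": "A. Photographer", "org": "Example Newsroom"}
comm, salts = commit(attrs)
# Reveal only organization membership; the author's name stays private.
disclosed = reveal(attrs, salts, ["org"])
```

The verifier learns the organization and nothing about the undisclosed attributes, since the unopened commitments are salted hashes.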
What happens if I edit an image which features a See3 Media Proof?
See3 supports perceptual hashes, which allow for minor, non-malicious changes to images without invalidating the proof. This means you can crop or compress the image without issue. However, for more extensive edits, you'll need to use a See3-enabled editor. This editor will mark all changes made to the image, ensuring transparency and maintaining the integrity of the See3 Media Proof system.
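To see why harmless edits survive, here is a toy "average hash" over a grid of brightness values. This is not See3's actual perceptual hash; it only illustrates the idea that mild, global changes (like light compression) leave the hash intact while heavy edits flip bits.

```python
def average_hash(pixels):
    """1 bit per pixel, set when the pixel is brighter than the mean.
    Real perceptual hashes (e.g. pHash) are far more robust."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

image = [[10, 10, 200, 200]] * 4                      # a tiny 4x4 "image"
brighter = [[p + 5 for p in row] for row in image]    # mild global shift
edited = [[255, 255, 0, 0]] * 4                       # heavy structural edit

mild_distance = hamming(average_hash(image), average_hash(brighter))
heavy_distance = hamming(average_hash(image), average_hash(edited))
```

A verifier can accept hashes within a small Hamming distance of the original, so the mild edit still validates while the heavy one does not.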
Which camera devices does See3 support?
See3 supports many Android (10+) and iOS devices, with cross-platform SDKs including first-party support for React Native. If you are a manufacturer, it is also straightforward to integrate See3 into other forms of hardware, including embedded systems. To integrate See3 into your own hardware, contact us at hardware@see3.xyz.
Who is using See3?
We've introduced See3 on our own media applications, such as Realcaster (like Twitter, but featuring provably real images). This demonstrates the practical application and effectiveness of See3 in real-world scenarios. As we continue to expand, we're also working with various partners across different industries to implement See3 in their platforms and devices. Stay tuned for updates!
What if a See3-enabled device gets hacked, or otherwise modified?
See3 is designed to be tolerant of hardware hacking and integration slip-ups. If a See3-enabled device is compromised, the impact is limited: with just one example of a mistakenly-tagged image, the See3 TrustCouncil can flag all images from the same fraudulent origin. This makes See3 a highly moderation-friendly solution. Additionally, all identifiable information remains cryptographically sealed through MPC (Multi-Party Computation), maintaining privacy and regulatory compliance even in the event of a breach.
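The moderation flow above can be sketched as a simple origin denylist. Everything here is hypothetical, including the `origin_tag` field: it stands in for whatever opaque identifier links proofs from one device without identifying a user.

```python
# Denylist of origins the TrustCouncil has flagged as fraudulent.
revoked_origins = set()

def flag_fraudulent(proof: dict) -> None:
    """One confirmed bad image is enough to revoke its whole origin."""
    revoked_origins.add(proof["origin_tag"])

def accept(proof: dict) -> bool:
    """Verifiers reject any proof from a revoked origin."""
    return proof["origin_tag"] not in revoked_origins

fake = {"origin_tag": "origin-7f3a", "image": "deepfake.jpg"}
later = {"origin_tag": "origin-7f3a", "image": "another.jpg"}
other = {"origin_tag": "origin-91c2", "image": "genuine.jpg"}
flag_fraudulent(fake)
```

After the flag, every proof from the compromised origin is rejected, while proofs from unaffected devices continue to validate.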
How does See3 handle deepfakes?
See3 doesn't rely on AI or machine learning to detect deepfakes. Instead, it uses cryptographic proofs embedded at the point of capture to verify the authenticity of media. This approach ensures that even as deepfake technology improves, See3's verification remains reliable.
Is See3 open-source?
Yes, See3's SDK and Open Standard are entirely open-source, with open-source implementations that are interoperable with the latest standards such as C2PA.
Join our community to learn more.