
An Open Letter to CISOs: Who Should Own the Proof That You're Human?
The short version:
- Zoom is integrating World (formerly Worldcoin) iris-scanning verification to display "Verified Human" badges in video meetings.
- A World badge confirms a user passed biometric enrollment at a kiosk at some past date. It does not detect deepfakes on the live stream, and it does not survive account takeover.
- World is banned or under regulatory enforcement in seven jurisdictions: Spain, Germany, Brazil, Hong Kong, Portugal, Kenya, and South Korea, with active scrutiny in five more.
- Enterprise deepfake defense requires continuous authentication across face and voice, not a one-time badge tied to a consumer crypto identity network.
- GetReal Protect delivers real-time deepfake detection and continuous identity protection for Zoom, Microsoft Teams, Cisco Webex, and every other major enterprise collaboration platform — no proprietary hardware, no crypto wallet, no vendor lock-in.
Last week, Zoom — alongside DocuSign, Tinder, Shopify, and others — announced that it is partnering with World — the iris-scanning biometric ID network co-founded by Sam Altman and operated by Tools for Humanity — to verify that the humans on a video call are, in fact, humans. Participants who enroll will get a badge on their video tile. Think of it as a blue checkmark for your face, issued after you stare into a device the company calls the Orb.
I want to be clear up front: Zoom is right about the problem. Generative AI has made real-time face and voice impersonation cheap, scalable, and convincing. In our most recent research, 41% of enterprises reported they had already hired and onboarded fraudulent candidates. Gartner predicts that by 2028, one in four candidate profiles worldwide will be fake. The deepfake threat is operational, it is here, and it is targeting the enterprise every single day.
At GetReal Security, we have been saying for the better part of two years that the human layer — the person on the other end of the camera, the voice on the phone, the face in the Teams window — is the new security perimeter. We're glad the market is catching up.
The approach, though, deserves a hard look from every CISO, CIO, and General Counsel whose organization may be asked to adopt it. A crypto-native, consumer-focused, hardware-dependent, single-modality, point-in-time verification system is not a neutral technical choice when the problem you are trying to solve is enterprise security. It is a specific bet with specific consequences.
Here's what those consequences look like.
World Was Built for Consumers. The Enterprise Is Not a Consumer.
Sam Altman’s World, formerly known as Worldcoin, was not built for the enterprise. It was built to issue digital IDs to consumers in exchange for cryptocurrency. The company has signed up roughly 17 million adults globally — but how many of them are enterprise users? How many work in regulated industries? How many are cleared to handle sensitive information? The answer, largely, is we don't know, and neither does Zoom.
World's signup incentive is crypto — roughly $40 in WLD tokens for getting your iris scanned. The Orbs — the company's proprietary iris-scanning hardware — have been rolled out heavily in low-income areas and emerging markets, where that payment is a meaningful incentive. The Electronic Frontier Foundation has criticized the company for exactly this pattern: collecting sensitive biometric data from populations least able to evaluate the long-term privacy trade-off. Amnesty International has flagged the broader category of biometric profiling for discrimination risk.
Enterprises and governments do not have the same threat model as a 19-year-old in Nairobi signing up for $40 in tokens. And they should not be authenticating their boardrooms on a network built on that foundation.
The Regulatory Track Record Is Not Debatable
Before a CISO bets their organization on a World-backed identity layer, they should look at the ledger. This is the current global regulatory posture against World and Tools for Humanity:
- Spain — banned. In March 2024, the Spanish data protection authority (AEPD) issued a precautionary order requiring World to stop collecting biometric data and stop using what it had already gathered, citing insufficient information provided to users, collection of data from minors, and no ability to withdraw consent. World sued the AEPD in Spain's High Court; the ban was upheld.
- Germany — GDPR violations, deletion order. In December 2024, the Bavarian State Office for Data Protection Supervision (BayLDA) concluded that World's handling of iris-derived biometric data does not comply with the EU's General Data Protection Regulation. The authority ordered deletion of iris codes collected without a sufficient legal basis and compelled World to stand up a GDPR-compliant erasure procedure. World appealed the decision and has since paused iris scanning in Germany.
- Hong Kong — cease-operations order. In May 2024, the Office of the Privacy Commissioner for Personal Data (PCPD) served an enforcement notice directing World to cease all operations in Hong Kong; 8,302 Hong Kong residents had already been scanned. The PCPD found that World had contravened several Data Protection Principles under the Personal Data (Privacy) Ordinance.
- Brazil — banned. On 24 January 2025, Brazil's ANPD ordered Tools for Humanity to stop offering any financial compensation tied to biometric data collection, finding that crypto rewards compromised genuine consent and that World's structure did not allow deletion or revocation of data.
- Portugal — temporary suspension. Portugal's CNPD imposed a 90-day suspension on 26 March 2024 over concerns about minors' data, the lack of age verification, the absence of deletion or consent-withdrawal mechanisms, and the classification of biometrics as sensitive data under the GDPR.
- Kenya — suspended. In August 2023, the Kenyan government suspended World, with the Ministry of the Interior citing "authenticity and legality" concerns around security, financial services, and data protection. A subsequent High Court ruling (5 May 2025) ordered permanent deletion of unlawfully collected biometric data within seven days, under ODPC supervision.
- South Korea — fined. In September 2024, South Korea's Personal Information Protection Commission (PIPC) fined Worldcoin and Tools for Humanity for violations of the Personal Information Protection Act. According to the Commission, Worldcoin did not properly notify data subjects of the purpose of collection or the retention period for their scanned iris data.
- Active investigations / regulatory action. Several other countries are scrutinizing World and its iris-scanning enrollment, including Italy, Colombia, Argentina, Indonesia, and Singapore.
That is outright bans in Spain and Brazil, a cease-operations order in Hong Kong, deletion mandates in Germany and Kenya, suspensions in Portugal and Kenya, regulatory fines in South Korea, and active investigations in at least five more countries. "Controversial" is the polite summary. "Structurally adversarial to privacy regulators" is the honest one.
Sit with that list for a moment. Minors enrolled without proper consent. No ability to withdraw. Biometric collection in low-income populations. This is the point where a principle needs to be said out loud: an individual should own their own digital identity. Not their employer. Not a video conferencing vendor. Not any other company. And certainly not a for-profit network operated on behalf of a billionaire. The entire reason this conversation is happening is that AI is collapsing the distinction between a real person and a synthetic one. The answer to that cannot be to hand the canonical record of who is real to a private company and hope they do not misuse it.
An enterprise that integrates a World-backed verification into its workflow is inheriting that regulatory surface. If your Frankfurt office is on a Zoom call with a counterparty verified by World, under which European data protection framework are the inferences drawn from that verification being handled? Zoom and World owe every Global 2000 CISO and General Counsel a concrete answer before the first iris-scanned meeting happens.
And it is worth asking a different question too. Which governments, the ones most publicly focused on protecting their citizens and institutions from disinformation, deepfakes, and AI-driven fraud, have already made a choice about who to trust on this problem? In at least one Southeast Asian jurisdiction, the answer from the government has been to open a criminal investigation into the trade of World IDs, and to stand up their defense against deepfakes with a different stack entirely. That choice did not go to an Orb.
Point-in-Time Authentication Does Not Survive Account Takeover
This is the deepest problem with the Zoom–World model, and it is the one the tech press has largely missed.
A verified badge confirms that the World ID holder passed biometric authentication at some point in the past — the day they visited the Orb. It tells you nothing reliable about the person currently sitting in front of the webcam.
Credential theft remains the leading cause of enterprise breaches. If an attacker takes over a verified user's Zoom account — through phishing, session hijacking, a compromised endpoint, SIM swap, stolen OAuth token — the deepfake on the call inherits the blue check. At that moment the badge works against the defender: it tells every other participant that the synthetic face they are looking at is trusted.
The 2024 Arup case — where the engineering firm lost $25 million after an employee in Hong Kong authorized wire transfers during a video call in which every participant except the victim was an AI-generated deepfake — is the canonical example the entire industry cites. It is exactly the scenario a verified badge fails to prevent. A World badge on the CFO's tile would not have helped the victim, because the attack never depended on the victim's own account being unverified; it depended on the attackers being convincing enough to impersonate people the victim trusted. Badges on verified accounts do not solve that. Continuous deepfake detection on the live stream does.
This is why GetReal is built around continuous authentication. We do not check a credential once. We continuously verify the face and the voice throughout the interaction. If the person on screen changes, if the audio is synthetic, if the face is a replay or a morph, we detect it in seconds and surface it to the security team with forensic-grade evidence of why.
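The difference between checking a credential once and verifying continuously can be made concrete. The sketch below is a toy illustration of the sliding-window idea only, not GetReal's implementation; the class name, score fields, and thresholds are all invented for this example. Each frame carries a face-match score and a synthetic-content score, and an alert fires only after several consecutive anomalous frames, so a single noisy frame does not trip a false alarm.

```python
from dataclasses import dataclass


@dataclass
class FrameScores:
    face_similarity: float  # 0..1, similarity of this frame to the enrolled participant
    synthetic_prob: float   # 0..1, a detector's confidence the frame is AI-generated


class ContinuousVerifier:
    """Toy continuous-verification loop: flags a session once several
    consecutive frames look anomalous, rather than trusting a one-time check."""

    def __init__(self, sim_floor: float = 0.75,
                 synth_ceiling: float = 0.5, window: int = 5):
        self.sim_floor = sim_floor          # minimum acceptable face match
        self.synth_ceiling = synth_ceiling  # maximum tolerable synthetic score
        self.window = window                # consecutive bad frames before alerting
        self._streak = 0

    def observe(self, frame: FrameScores) -> bool:
        """Process one frame; return True when the alert threshold is reached."""
        anomalous = (frame.face_similarity < self.sim_floor
                     or frame.synthetic_prob > self.synth_ceiling)
        self._streak = self._streak + 1 if anomalous else 0
        return self._streak >= self.window
```

The consecutive-frame window is the key design trade-off: a larger window lowers the false-alarm rate at the cost of a few extra frames of detection latency, which is why a credential issued once at enrollment cannot substitute for this kind of per-frame decision.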
The Zoom–World partnership does not do deepfake detection. It does enrollment and badging. Those are two different products, and only one of them is a security control.
Hardware Dependency Is a Non-Starter
World verification requires an Orb. Not a laptop camera. Not a phone. A physical iris-scanning station, deployed in selected cities, operated by trained staff. To onboard an employee, you send them to a mall.
That is not an authentication workflow. That is a logistical tax on every employee, contractor, candidate, auditor, regulator, and counterparty your business interacts with. It assumes the future of enterprise trust passes through a signup booth.
Our approach is the opposite. GetReal Protect runs as software inside the meeting tools your organization already uses — Zoom, Microsoft Teams, Cisco Webex, and every other major enterprise collaboration platform — and analyzes the face and voice of each participant in real time using the camera and microphone they already have. No kiosk. No new hardware. No special enrollment. That is the only model that scales to enterprise reality.
Iris Alone Is Not Enough — and Iris Can Be Deepfaked
World's entire verification signal is the iris code. One biometric modality, captured once, at the Orb. Every serious biometric security researcher I have spoken with in the last five years has said the same thing: single-modality biometrics are brittle. Iris scanning is not new, it is not magic, and it is not immune to spoofing or generative attack. And unlike a password, you cannot rotate your iris if it is compromised.
Systems like Clear pair iris with face, fingerprint, and body scans precisely because one modality is not enough for high-assurance identity. World does not. World is an iris code and a badge.
GetReal's platform is multi-modal by design. We verify face and voice together, continuously, with forensic-grade analysis. We look for the artifacts that generative models leave behind on the actual live stream. We do not issue a badge and walk away.
Individuals Do Not Own Their Identity. The Company and the Advertisers Do.
One more point that has to be said plainly. In the World model, the user does not own their identity. Tools for Humanity does. The company holds the iris code. The company decides what the verification means, who can query it, and on what terms it can be revoked.
Look closely at the economics. World pays each user roughly $40 in cryptocurrency, one time, to enroll. In exchange it receives a verified, unique, lifetime-persistent human identity — exactly the kind of asset that, across the digital advertising ecosystem, generates hundreds of dollars of value per user per year in developed markets, for the rest of that person's life. That is not a fair trade. That is an acquisition.
Your employees, your customers, and your users are effectively renting identity from a private company co-founded by one of the most powerful AI executives in the world — the CEO of a company that is, simultaneously, one of the leading producers of the generative AI capabilities being used to manufacture the deepfakes this system is meant to defend against, and one that has been publicly reported to be building out an advertising business of its own.
GetReal's model is the inverse, and it is a matter of principle, not positioning. We are a security company, purpose-built to protect your identity — not to profit from it. The information we use exists for one reason only: to detect and respond to abuse of that identity. Nothing more. The individual owns their identity outright, including the right to be forgotten. The enterprise owns the security posture around it. No identity is minted into a crypto wallet. No token is emitted. No ad network or consumer data business sits behind the authentication decision. No vendor lock-in. This is a new kind of collaborative partnership — between the enterprise and the employees who are also consumers, and whose identity belongs to them in both contexts. Just security.
The Choice Enterprise Actually Has
The deepfake problem is real. The executive teams and boards asking how to trust a video call are asking the right question. We have been building the answer to that question for years.
We do not send your employees to a mall kiosk. We do not ask your counterparties to trust a crypto project. We do not ask your regulators to accept a vendor that is litigating against European data protection authorities. We authenticate the human on the screen — continuously, across face and voice, without proprietary hardware, with forensic-grade evidence, inside the meeting tools your organization already uses.
That is what enterprise and government actually need. That is the approach we have built.
Zoom picked a partner, and that alone shows how important this space has become. Protecting the human layer is the right thing to do, and the urgency is real. But we picked a side — the side of the enterprise, the government, the security teams defending them, and the users whose privacy is on the line. Every one of those teams deserves an authentication stack that does not compromise on privacy, on continuous assurance, on hardware independence, or on who owns the identity.
No compromise. That's what GetReal means.
Frequently Asked Questions
Does Zoom's World ID partnership detect deepfakes?
No. The Zoom–World partnership issues a "Verified Human" badge to participants who previously completed iris-scan enrollment at a World Orb kiosk. It confirms past enrollment; it does not analyze the live video stream for deepfake artifacts or re-verify the person on camera during the call. Deepfake detection and biometric enrollment are two different product categories, and the partnership provides only the second.
What is the difference between World ID verification and continuous authentication?
World ID is point-in-time enrollment: a user visits an Orb, gets scanned once, and receives a badge that persists across future sessions. Continuous authentication, as provided by GetReal Protect, verifies face and voice on every frame of a live video call, looking for the forensic signatures generative AI models leave behind. If a deepfake appears mid-meeting or an account is taken over, continuous authentication detects it. A point-in-time badge does not.
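To make that distinction concrete, here is a deliberately simplified toy model of the two trust decisions. Nothing here reflects any vendor's actual code; the dictionary shape, function names, and scores are invented for illustration. The point-in-time check consults a property of the account record, while the continuous check consults every frame of the live session.

```python
def badge_check(session: dict) -> bool:
    """Point-in-time model: trust is a property of the account record.
    A hijacked account still carries its enrollment flag."""
    return session["account"]["enrolled_at_orb"]


def continuous_check(session: dict, floor: float = 0.75) -> bool:
    """Continuous model: every frame's match score against the enrolled
    participant must clear the floor for the session to stay trusted."""
    return all(score >= floor for score in session["frame_match_scores"])


# A hijacked session: the account is verified, but a deepfake joins mid-call
# and the live face stops matching the enrolled participant.
hijacked = {
    "account": {"enrolled_at_orb": True},
    "frame_match_scores": [0.92, 0.90, 0.31, 0.28],
}
```

Run against the hijacked session, `badge_check` still reports the participant as verified while `continuous_check` fails on the mismatched frames, which is the account-takeover gap the article describes.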
Is World ID safe for enterprise use?
That is a question every CISO and General Counsel should answer for themselves with their compliance team. The facts on the record: World is banned or under regulatory enforcement in Spain, Germany, Hong Kong, Brazil, Portugal, Kenya, and South Korea. The record includes GDPR violations and a deletion order in Germany, court-ordered data deletion in Kenya, regulatory fines in South Korea, and active investigations in Italy, Colombia, Argentina, Indonesia, and Singapore. Any enterprise integrating World-backed verification inherits that regulatory surface area.
Can a World ID badge be spoofed in an account takeover attack?
The badge itself is cryptographically anchored to the verified account. The problem is the account. If an attacker compromises a verified user's credentials through phishing, session hijacking, SIM swap, or stolen OAuth token, the attacker inherits the badge. A deepfake appearing on that call is now displayed as "Verified Human" to every other participant. The badge does not detect the takeover; it amplifies trust in the imposter. This is exactly the attack pattern that produced the 2024 Arup $25 million deepfake loss.
Which countries have banned or investigated World (formerly Worldcoin)?
As of April 2026: Spain (AEPD ban, March 2024), Germany (BayLDA GDPR violations and deletion order, December 2024), Hong Kong (PCPD cease-operations order, May 2024), Brazil (ANPD ban on crypto-for-biometrics exchanges, January 2025), Portugal (CNPD 90-day suspension, March 2024), Kenya (government suspension August 2023, High Court deletion order May 2025), and South Korea (PIPC fines, September 2024). Active investigations are ongoing in Italy, Colombia, Argentina, Indonesia, and Singapore.
What are alternatives to World ID for Zoom deepfake protection?
GetReal Protect is a continuous deepfake detection platform built specifically for enterprise video conferencing. It runs as software inside Zoom, Microsoft Teams, Cisco Webex, and other major platforms, analyzing face and voice in real time with forensic-grade analysis. There is no proprietary hardware requirement, no kiosk enrollment, no crypto wallet, no biometric data handed to a third-party identity network, and no vendor lock-in. Learn more at getrealsecurity.com.
Matt Moynahan is the CEO of GetReal Security and has spent nearly 30 years leading cybersecurity companies through periods of transformation and growth. GetReal Protect delivers real-time deepfake detection and continuous identity protection for Zoom, Microsoft Teams, Cisco Webex, and every other major enterprise collaboration platform. Learn more at getrealsecurity.com.
See what "no compromise" looks like on a live call.