
Auditing the Black Box: Proving Compliance to Regulators with View Keys
There is a fundamental misunderstanding in how we build privacy-preserving architecture.
Teams often assume that building a secure, encrypted system means locking everyone out—including the auditors.
But regulators don’t need your entire dataset to do their jobs. They need verifiable evidence that your controls exist, that they operate as designed, and that outcomes can be reconstructed for specific, targeted cases.
If your system cannot produce this evidence, it is a liability.
View keys are the mechanism that enables selective transparency. They provide auditable access for authorized parties without exposing your entire database to the public or your internal engineering team.
This guide focuses on auditing the black box: proving compliance to regulators with view keys. We will explain how to transition from opaque architectures to inspectable systems, how to handle the rising complexities of machine learning, and how to build a practical audit workflow that satisfies strict regulatory scrutiny.
Why a Black Box Fails an Audit
In plain language, a black box is a system that produces decisions without inspectable, reproducible evidence.
You feed data in, and an answer comes out. The internal mechanics—the algorithm, the weights, the logic—are completely opaque to the operator and the reviewer.
Regulators care deeply about this opacity. Their concern is not mere curiosity; it is about accountability and enforcing a robust risk management framework.
If an architecture cannot be inspected, it is difficult to defend under audit. When machine learning models or complex cryptographic circuits operate as black boxes, they will not survive audits unless you design for traceability from the start. Without proof of how a specific conclusion was reached, a system cannot demonstrate that it followed the law.
What Regulators Actually Ask For
When an examiner reviews your architecture, they are looking for specific, concrete artifacts. They want to see your control environment in action.
Make it concrete. Regulators will ask:
- "Show me the audit trail for decision X."
- "What input data was used to reach this conclusion?"
- "Which specific AI model version or smart contract version ran?"
- "What internal controls prevented misuse or unauthorized access?"
- "Who approved the updates (change management)?"
If your answer to any of these questions is "we don't know, the system is encrypted," you will fail the audit.
Black Box vs Glass Box
The goal is not to open your database to the world. The goal is to build a glass box.
A glass box is an architecture that is not "open to everything," but is "inspectable enough to verify outcomes."
Transparency isn’t a data dump. It’s provable evidence.
In a glass box architecture, you design for selective visibility.
- You don’t put PII on-chain or in plaintext logs.
- You don’t give developers broad visibility into customer data.
- You do produce auditable evidence under strict access control.
By doing this, you ensure security while retaining the ability to verify the system's behavior.

View Keys Explained
View keys are cryptographic keys that grant read-only access to specific, encrypted data payloads.
Unlike a master decryption key that unlocks an entire database, a view key is highly scoped. It allows an authorized entity to decrypt only the data they are explicitly permitted to see, for a specific period, without granting them the ability to alter or spend assets.
View keys are purposefully built for:
- Targeted investigations into suspicious activity.
- Sampling audits by external regulators.
- Incident response and forensics.
- Dispute resolution when a user challenges a system decision.
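To make the scoping concrete, here is a minimal sketch of a time-bound, field-scoped view key. All names are hypothetical, the master key would live in an HSM in practice, and the toy keystream cipher stands in for a proper AEAD scheme; the point is only to show how per-field key derivation lets a view key decrypt some fields of a record and nothing else.

```python
import hashlib
import hmac
import time
from dataclasses import dataclass

MASTER_KEY = b"demo-master-key"  # hypothetical; in production this lives in an HSM

def field_key(record_id: str, field_name: str) -> bytes:
    # Derive an independent key per record/field so a view key can be narrowly scoped.
    return hmac.new(MASTER_KEY, f"{record_id}:{field_name}".encode(), hashlib.sha256).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy keystream cipher for illustration only; use an AEAD cipher in practice.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

@dataclass
class ViewKey:
    record_id: str
    allowed_fields: frozenset
    expires_at: float  # unix timestamp; the key is time-bound

def decrypt_field(vk: ViewKey, record_id: str, name: str, ciphertext: bytes) -> bytes:
    if time.time() > vk.expires_at:
        raise PermissionError("view key expired")
    if record_id != vk.record_id or name not in vk.allowed_fields:
        raise PermissionError(f"view key not scoped to {record_id}:{name}")
    return xor_stream(field_key(record_id, name), ciphertext)

# Encrypt two fields; the view key below can read only one of them.
ct_country = xor_stream(field_key("case-8472", "originator_country"), b"DE")
ct_name = xor_stream(field_key("case-8472", "customer_name"), b"Alice Example")

vk = ViewKey("case-8472", frozenset({"originator_country"}), time.time() + 3600)
print(decrypt_field(vk, "case-8472", "originator_country", ct_country))  # b'DE'
```

Because each field is encrypted under its own derived key, issuing a view key never requires handing over the master key: the authority simply releases the derived keys for the permitted fields.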

The Audit Problem View Keys Solve
Historically, builders faced a painful dilemma when designing secure systems.
Either you store everything in plaintext (creating a massive privacy and breach risk), or you encrypt everything blindly (creating an auditability risk).
If everything is blindly encrypted, your compliance function cannot investigate fraud. If everything is in plaintext, you violate data minimization principles.
View keys create a third path: controlled visibility. They allow the core data to remain encrypted at rest and in transit, but provide a secure, mathematical mechanism to "break the glass" and view the data when legally or operationally required.
A Practical Audit Workflow for Compliance Teams
How does this work in reality? Here is how an audit actually happens using a view key architecture.
- Trigger: A case is selected. This could be triggered by a user complaint, an automated anomaly alert, or a regulator requesting a random sampling audit.
- Data Pull: The system pulls the encrypted payload and the case summary metadata. General application logs contain only identifiers and timestamps, never raw PII.
- Key Access: The compliance officer uses their authorized view key access to decrypt the specific payload related to the case.
- Evidence Generation: The system produces a secure evidence pack. This pack includes the decision record, the model version, references to the data sources, and timestamps.
- Human Review: The compliance officer logs their findings, demonstrating human oversight.
- Regulator-Ready Artifact: The system exports a tamper-proof audit artifact (such as a digitally signed log or a cryptographic hash) proving the review occurred.
Concrete example (Case ID 8472): A transaction is flagged for review. Compliance requests a time-bound view key. The key decrypts only {originator_country, sanction_check_result, model_version, evidence_hash}. The officer reviews the inputs, confirms the block was justified, and exports a signed evidence pack to the regulator.
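The evidence-pack step above can be sketched as follows. This is a minimal illustration under assumed names: a real deployment would use asymmetric signatures rather than the HMAC stand-in shown here, but the structure is the same, with the pack committing to the decision record, model version, data sources, and timestamp via a hash, plus a signature a regulator can verify.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"compliance-signing-key"  # hypothetical; use an asymmetric key pair in production

def build_evidence_pack(case_id: str, decision: str, model_version: str, sources: list) -> dict:
    pack = {
        "case_id": case_id,
        "decision": decision,
        "model_version": model_version,
        "data_sources": sources,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }
    # Canonical serialization so the hash and signature are reproducible.
    canonical = json.dumps(pack, sort_keys=True).encode()
    pack["evidence_hash"] = hashlib.sha256(canonical).hexdigest()
    pack["signature"] = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return pack

def verify_pack(pack: dict) -> bool:
    # Recompute the signature over everything except the hash and signature fields.
    body = {k: v for k, v in pack.items() if k not in ("evidence_hash", "signature")}
    canonical = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(pack["signature"], expected)

pack = build_evidence_pack("case-8472", "block_upheld", "risk-model-v3.2",
                           ["sanctions_list_2024_06"])
print(verify_pack(pack))  # True
```

Any modification to the pack after export, such as changing the decision field, invalidates the signature, which is what makes the artifact defensible as tamper-proof evidence.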

Internal Controls for Financial Institutions
For financial institutions, simply having view keys is not enough. The controls surrounding the view keys are what make them defensible to regulators.
A rigorous regulatory compliance program treats view keys like highly sensitive infrastructure.
- Role-Based Access Control (RBAC): Only designated personnel on the compliance team can request view key access.
- Dual Control: Decrypting sensitive records often requires approval from two separate officers to prevent unilateral abuse.
- Separation of Duties: The developers who build the system and manage the infrastructure cannot access the view keys.
- Access Logging: Every time a view key is used, an immutable record is generated stating who used it, when, and for what case.
- Incident Playbooks: Document response procedures and enforce key rotation policies so that a compromised view key can be revoked immediately.
In practice, if a system denies a user during onboarding, the compliance team must be able to securely reconstruct the logic using these controls without exposing the rest of the customer base.
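The first three controls above can be sketched together. This is an illustrative skeleton with hypothetical officer names and an in-memory log standing in for an append-only store; it shows RBAC, dual control, separation of duties (the requester cannot self-approve), and access logging on every issuance.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

COMPLIANCE_OFFICERS = {"officer_a", "officer_b", "officer_c"}  # RBAC: only these may approve
ACCESS_LOG: list = []  # in production: an append-only, immutable store

@dataclass
class ViewKeyRequest:
    case_id: str
    requester: str
    approvals: set = field(default_factory=set)

def approve(req: ViewKeyRequest, officer: str) -> None:
    if officer not in COMPLIANCE_OFFICERS:
        raise PermissionError("not a compliance officer")
    if officer == req.requester:
        raise PermissionError("requester cannot approve their own request")  # separation of duties
    req.approvals.add(officer)

def issue_view_key(req: ViewKeyRequest) -> str:
    if len(req.approvals) < 2:  # dual control: two independent approvers required
        raise PermissionError("two independent approvals required")
    ACCESS_LOG.append({
        "case_id": req.case_id,
        "requester": req.requester,
        "approvers": sorted(req.approvals),
        "issued_at": datetime.now(timezone.utc).isoformat(),
    })
    return f"viewkey:{req.case_id}"  # placeholder for real key material

req = ViewKeyRequest("case-8472", requester="officer_a")
approve(req, "officer_b")
approve(req, "officer_c")
print(issue_view_key(req))  # viewkey:case-8472
```

The access log entry, not the key itself, is what an examiner will ask for first: it proves who looked at what, when, and under whose approval.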
Continuous Compliance, Monitoring, and Drift Detection
Compliance is not a "one-and-done" deployment. Systems degrade, models change, and the regulatory landscape shifts.
To maintain compliance, architectures must support continuous monitoring. This is where drift detection becomes critical. If you are using algorithms or AI models to assess risk, the behavior of those models will inevitably change as new data is introduced—a phenomenon known as model drift.
You must actively monitor your systems for drift, setting strict alerting thresholds. Periodic internal review processes should utilize view keys to sample recent decisions, ensuring the system is still operating within its original parameters. As new rules are introduced, your policies and monitoring thresholds must be updated to reflect continuous compliance.
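A drift check can be as simple as comparing a recent decision rate against a baseline. The sketch below uses an assumed alerting policy (a fixed absolute-deviation threshold); production systems typically use statistical tests or population-stability metrics, but the shape of the control is the same.

```python
def drift_alert(baseline_rate: float, recent_decisions: list, threshold: float = 0.10) -> bool:
    """Flag drift when the recent positive-decision rate deviates from the
    baseline by more than `threshold` (an assumed policy, not a standard)."""
    if not recent_decisions:
        return False
    recent_rate = sum(recent_decisions) / len(recent_decisions)
    return abs(recent_rate - baseline_rate) > threshold

# Baseline: 5% of transactions flagged. Recent window: 18% flagged -> alert fires.
recent = [1] * 18 + [0] * 82
print(drift_alert(0.05, recent))  # True
```

When the alert fires, the periodic review described above kicks in: compliance requests view keys for a sample of the recent decisions and checks whether the flags were justified or the model has drifted.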
EU AI Act: Expectations for High-Risk AI Systems
When systems utilize machine learning, the regulatory burden increases significantly.
The EU AI Act sets a global benchmark for how these systems must be governed, classifying systems into risk categories.
If your architecture handles biometric identification, credit scoring, or access to essential services, it may fall under the high-risk category of the AI Act depending on the specific use case. That classification triggers stronger governance expectations: strict oversight, comprehensive documentation, and guaranteed human review.
You cannot simply deploy an AI model and hope for the best. You must provide traceability. View keys allow human auditors to trace the exact inputs and outputs of AI systems, satisfying demands for transparency without compromising user privacy.
Agentic AI: Auditing Multi-Step Actions
Auditing becomes exponentially harder when dealing with agentic AI.
Unlike traditional models that simply classify data, agentic AI refers to systems that take multi-step actions autonomously. These agents can invoke external tools, query databases, and execute transactions based on dynamic reasoning.
When large language models (LLMs) and generative AI are given agency, the audit challenge multiplies. There are more decisions, more tool invocations, and more potential failure points.
To audit agentic AI, builders must implement per-step logging. Every action the agent takes, and every authorization gate it passes through, must be recorded and accessible via view keys, allowing an auditor to reconstruct the exact chain of logic the agent followed.
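One way to implement per-step logging is a hash-chained trace, sketched below with hypothetical tool names. Each entry commits to the previous one, so an auditor holding a view key over the trace can both reconstruct the agent's chain of actions and detect any retroactive edits.

```python
import hashlib
import json

class AgentAuditLog:
    """Hash-chained per-step log: each entry commits to the previous entry's
    hash, so any retroactive edit to the trace is detectable."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value for the chain

    def record(self, step: int, tool: str, args: dict, result: str) -> None:
        entry = {"step": step, "tool": tool, "args": args,
                 "result": result, "prev_hash": self._prev_hash}
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._prev_hash
        self.entries.append(entry)

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            prev = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["hash"] != prev:
                return False
        return True

log = AgentAuditLog()
log.record(1, "query_sanctions_db", {"name": "ACME Ltd"}, "no_match")
log.record(2, "execute_transfer", {"amount": 5000}, "completed")
print(log.verify())  # True
```

The same structure extends naturally to authorization gates: record the gate check as its own step, and the chain proves the agent passed through it before acting.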
Common Failure Modes: AI Governance and Change Management
Even with view keys, systems fail audits if the surrounding AI governance and change management processes are weak. Here are the most common failure modes.
False positives and dispute paths
When an automated system generates false positives (e.g., incorrectly flagging a legitimate transaction), there must be a clear path for dispute resolution. If the dispute path lacks the necessary data to overturn the automated decision, the system fails the user and the regulator.
Bias signals and testing
Models can inadvertently learn bias from their training data. Teams must actively test for bias and establish red flags in their monitoring systems. If a regulator asks for your bias testing results and you have none, you are in violation of responsible AI principles.
Missing lineage for input data
An output is only as trustworthy as its input. If an auditor uses a view key to inspect a decision but cannot determine the provenance of the data sources used, the audit trail is broken. Data lineage must be explicitly tracked.
Undocumented changes in Change Management
One of the fastest ways to fail an audit is poor change management. If an engineer pushes a silent update to the model weights or the smart contract logic without going through a formal, documented approval process, the integrity of the entire system is compromised.
Weak access controls
If view keys are not tightly guarded, they become "god keys." If anyone in the engineering department can generate a view key and look at decrypted payloads, you no longer have a privacy-preserving system; you have a surveillance system.

Implementation Checklist (Builder-Ready)
To transition from a black box to an auditable glass box, builders should follow this implementation checklist:
- [ ] Audit Trail Schema: Define a strict schema for logging that separates PII from general application events.
- [ ] Model Versioning: Implement immutable version control for all machine learning models, rules engines, and LLM prompts.
- [ ] Evidence Pack Template: Automate the generation of evidence packs so compliance officers receive consistent, formatted data when using view keys.
- [ ] Key Management: Enforce RBAC, dual-control approvals, and automated key rotation for all view keys.
- [ ] Drift Detection: Deploy monitoring technology to alert teams when model outputs deviate from expected baselines.
- [ ] Incident Response: Create a documented workflow for handling unauthorized view key access or data exposure.
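The first checklist item, an audit trail schema that separates PII from general events, might look like the sketch below. Field names are illustrative assumptions; the design point is that the application log record carries only identifiers, versions, and timestamps, while PII lives behind a payload reference that only a scoped view key can open.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AppEvent:
    """General application log entry: identifiers and timestamps only, never raw PII."""
    case_id: str
    event: str
    model_version: str
    timestamp: str

@dataclass(frozen=True)
class EncryptedPayloadRef:
    """Pointer to the encrypted PII payload, readable only via a scoped view key."""
    case_id: str
    payload_id: str
    key_scope: tuple  # field names a view key may be issued for

event = AppEvent("case-8472", "transaction_flagged", "risk-model-v3.2",
                 "2024-06-01T12:00:00Z")
ref = EncryptedPayloadRef("case-8472", "blob-9f3c",
                          ("originator_country", "sanction_check_result"))
print(asdict(event)["event"], ref.payload_id)
```

Keeping the two record types in separate stores, with the event log widely queryable and the payload store gated behind view keys, is what makes "audit without mass exposure" achievable in practice.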
Conclusion
Regulators do not want to stifle innovation; they want to ensure safety and accountability.
By utilizing view keys, you implement a system of selective transparency that satisfies regulatory demands for proof.
You can remain fiercely private while still being completely auditable. If you treat audits as the targeted production of evidence rather than a demand for mass surveillance, your architecture will survive the scrutiny of the future.
You have now established how to screen users, transfer compliance data, and prove your actions to regulators. But what happens when the user interacts with your systems from a wallet you do not control?
Next, we tackle the compliance realities of self-hosted infrastructure.