Your data, your perimeter.
Always.
xAQUA is a zero-data-storage platform. We sit on top of your existing data stack — Snowflake, Databricks, MotherDuck, DuckDB — and your data never moves to us. The result: a security posture that fits inside your existing controls, not next to them.
Zero data stored. Architecturally.
xAQUA queries data in place. Your warehouse, your lakehouse, your DuckDB file — that's where it lives. The platform's job is to translate plain English into governed queries against your systems and return answers under your access controls. No data exfiltration. No shadow store. No second copy.
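In spirit, the query-in-place pattern looks like this. A minimal sketch only, using Python's stdlib sqlite3 as a stand-in for a DuckDB file or warehouse connection; the function name is illustrative, not xAQUA's API:

```python
import sqlite3

# Illustrative only: the data file is opened read-only, so the query
# layer can answer questions without ever copying or mutating the source.
def run_governed_query(db_path: str, sql: str) -> list:
    # mode=ro: the engine cannot write to or alter the underlying file
    con = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return con.execute(sql).fetchall()
    finally:
        con.close()
```

Any attempt to write through this connection fails at the engine level, which is the point: the analysis layer holds a read-only view, never a second copy.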
That's a fundamentally different security posture from "AI on your data" tools that ingest your warehouse into their backend.
How xAQUA stays secure.
Five layers of defense, each one independently auditable.
Encryption at rest. Encryption in flight. Always.
All metadata, configuration, and audit data are encrypted with AES-256 at rest and TLS 1.3 in transit. Customer-managed keys (CMK) supported for enterprise deployments via AWS KMS, Azure Key Vault, or HashiCorp Vault. Key rotation, separation of duties, and key access audits are first-class.
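The "TLS 1.3 in transit" half of that guarantee is easy to express in code. A minimal client-side sketch using Python's stdlib ssl module (not xAQUA's implementation):

```python
import ssl

# Sketch: a client TLS context that refuses anything below TLS 1.3.
def strict_tls_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()  # certificate verification stays on
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    return ctx
```

A handshake against a server that only offers TLS 1.2 or lower fails outright rather than silently downgrading.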
SSO. RBAC. Least privilege. Pass-through.
SAML and OIDC SSO with all major identity providers (Okta, Microsoft Entra, Auth0, Google Workspace, Ping). Granular role-based access control at the workspace, project, agent, and data asset level. Where the underlying warehouse supports it — Snowflake, Databricks, BigQuery — xAQUA passes through user identity so warehouse-level row and column policies still apply.
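The least-privilege model reduces to a simple rule: deny unless an explicit grant exists at the requested scope. A toy sketch of that resolution logic; the role names, scopes, and wildcard convention are assumptions for illustration, not xAQUA's actual RBAC schema:

```python
# Deny-by-default grant resolution. Each grant is (role, scope, action);
# "workspace:*" stands in for a workspace-wide wildcard grant.
ROLE_GRANTS = {
    ("analyst", "project:finance", "read"),
    ("admin",   "workspace:*",     "read"),
    ("admin",   "workspace:*",     "write"),
}

def is_allowed(role: str, scope: str, action: str) -> bool:
    # Access exists only via an exact grant or a workspace-level wildcard
    return (role, scope, action) in ROLE_GRANTS or \
           (role, "workspace:*", action) in ROLE_GRANTS
```

Identity pass-through then layers on top: because the warehouse sees the real user, its own row- and column-level policies evaluate after this check, not instead of it.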
Every prompt. Every query. Every answer.
xAQUA logs the full chain — who asked what, which agent received it, which assets were resolved, which queries ran, which rows were returned. Full conversation audit, queryable lineage, and exportable bundles for SOC reviews and regulatory audits. Audit logs are tamper-evident and retained per customer policy.
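Tamper evidence in an audit log is usually achieved by hash-chaining: each entry commits to the one before it, so rewriting history invalidates everything after the edit. A self-contained sketch of the idea (the real log format is not public; field names here are assumptions):

```python
import hashlib
import json

# Each entry stores the previous entry's hash, forming a chain.
def append_record(chain: list, record: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev_hash, "hash": digest})

# Recompute every link; any edited record breaks the chain from there on.
def verify_chain(chain: list) -> bool:
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

An auditor can re-verify an exported bundle offline with nothing but the records themselves.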
SenseMask runs before the LLM does.
The xAQUA SenseMask agent classifies and redacts PII, PHI, and PCI before any data ever reaches an LLM. Policy-as-prompt configuration. Tokenized substitution preserves analytical utility while protecting identity. For regulated workloads, masking is enforced at the routing layer — the LLM Gateway will not accept payloads that haven't passed the masking gate.
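Tokenized substitution is what keeps masked data analytically useful: the same value always maps to the same token, so joins and group-bys still line up. A toy sketch of that behavior for email addresses only (SenseMask's real classifiers and patterns are not shown here):

```python
import re

# Illustrative email pattern; production PII detection is far broader.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask(text: str, token_map: dict) -> str:
    # Stable substitution: a value seen before reuses its existing token.
    def substitute(match: re.Match) -> str:
        value = match.group(0)
        if value not in token_map:
            token_map[value] = f"<EMAIL_{len(token_map) + 1}>"
        return token_map[value]
    return EMAIL.sub(substitute, text)
```

Because the token map never leaves the masking layer, the LLM sees consistent placeholders while real identities stay inside the perimeter.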
The LLM Gateway is the trust boundary.
Every model call passes through the xAQUA LLM Gateway with tagged customer ID, user ID, and use-case context. Per-customer rate limits, model allowlists, and policy enforcement. Private-deployment customers can route the Gateway to their own LLM instances — Llama, GPT-OSS, Claude, GPT — keeping every token under their perimeter.
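The trust-boundary role of a gateway comes down to an admission check before any token is forwarded. A deliberately small sketch (field names and the allowlist shape are assumptions, not xAQUA's wire format):

```python
# A call is forwarded only if its model is allowlisted for the tenant
# AND the payload has already passed the masking gate.
def admit(call: dict, allowlist: dict) -> bool:
    allowed_models = allowlist.get(call["customer_id"], set())
    return call["model"] in allowed_models and call.get("masked") is True
```

Rate limits and use-case policy checks slot in at the same choke point; the key property is that there is exactly one door, and everything is tagged on the way through.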
Three shapes. Same engine.
Choose the perimeter that fits your risk profile. The product surface is identical across all three.
xAQUA Essentials and Standard editions, hosted in our SOC 2 Type II environment. Tenant isolation, encryption, and identity controls are built in.
- SOC 2 Type II hosting
- Per-tenant encryption keys
- Standard SLAs
- Auto-updates
xAQUA deployed inside your AWS, Azure, or GCP account. Your VPC, your network controls, your monitoring. Connects to your warehouse over private link. No outbound traffic without explicit approval.
- Customer-controlled VPC
- Customer-managed keys
- Private link to warehouse
- Customer SIEM integration
xAQUA running in a fully disconnected environment with on-premises LLMs. No outbound internet. No shared services. Reference deployment: a $300B+ public pension fund.
- No internet egress
- On-prem LLM (Llama / GPT-OSS)
- FedRAMP / IL5-class postures
- Private support channels
Proof, not promises.
Independent attestations, in production today, with refresh cycles in motion.
Found something? We want to know.
Responsible disclosure is welcome. We respond to verified reports within 24 hours and credit reporters by default.
Have a security review to run?
Send the questionnaire. Our security team responds with the standard package within two business days.