The question we hear most often from hospital CIOs and clinical informatics directors before deploying any AI platform is not "does your model perform well?" It is "how do you protect patient data?" The answer matters far more than the performance metrics, because a breach involving Protected Health Information (PHI) carries consequences that no diagnostic accuracy score can offset.
This article describes the security architecture Pegasi built for healthcare AI, why we made specific design choices, and what questions any health system should be asking AI vendors before signing a contract.
Traditional enterprise software security focuses on protecting data at rest and in transit. Healthcare AI has a more complex problem: the model itself must process sensitive data to function, which means PHI enters the computational pipeline. Every design decision — where computation happens, how data flows, what gets logged — has security implications.
There are two broad architectural approaches healthcare AI vendors take. The first is cloud processing: PHI is sent to the vendor's cloud infrastructure, processed there, and the results are returned to the health system. The second is on-premises or in-environment processing: the AI model runs inside the health system's own secure environment, and PHI never leaves that perimeter. These approaches have very different security and compliance profiles.
Pegasi's diagnostic platform takes the second approach: all PHI is processed within the health system's own secure environment and never crosses into Pegasi's infrastructure.
Within the deployed environment, all data is encrypted using industry-standard algorithms, both at rest and in transit between platform components.
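For the in-transit half of that picture, the article later contrasts mTLS with plain TLS; a minimal sketch of what "mutual" means in configuration terms, using Python's standard `ssl` module (the certificate file paths and function names are illustrative, not Pegasi's actual setup):

```python
import ssl

def harden_for_mtls(ctx: ssl.SSLContext) -> ssl.SSLContext:
    """Require TLS 1.2+ and a valid certificate from the connecting peer."""
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.verify_mode = ssl.CERT_REQUIRED  # mTLS: reject peers without a cert
    return ctx

def server_context(certfile: str, keyfile: str, client_ca: str) -> ssl.SSLContext:
    """Server side of an internal hop: present our cert, and trust only the
    internal CA that signed the client certificates (paths are hypothetical)."""
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    ctx.load_verify_locations(cafile=client_ca)
    return harden_for_mtls(ctx)
```

The difference from ordinary TLS is the single `verify_mode` line: with `CERT_NONE` (the server-side default) only the server proves its identity, whereas `CERT_REQUIRED` makes every internal component authenticate itself before any data flows.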
Access to Pegasi's platform components follows the principle of least privilege: each user and service account is granted only the permissions its role requires, and nothing more.
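In code, least privilege reduces to a deny-by-default check. The role names and permissions below are hypothetical, invented for illustration rather than taken from Pegasi's access model:

```python
# Hypothetical roles: an action is allowed only if explicitly granted.
ROLE_PERMISSIONS = {
    "clinician":      {"read_result", "read_record"},
    "radiology_tech": {"upload_study"},
    "platform_admin": {"read_audit_log", "manage_config"},  # note: no PHI access
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles and ungranted actions both fail."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The design point is the default: an unrecognized role or action returns `False` rather than falling through to an allow, which is what separates least privilege from a blocklist.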
Audit logs capture user identity, access timestamp, a de-identified patient record identifier, action type, and the system component accessed. Logs are retained for at least six years to satisfy HIPAA's retention requirements and are exportable to the health system's SIEM.
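Two of those properties can be sketched together: de-identifying the patient identifier before it reaches the log, and chaining entries so the log is effectively append-only (tampering breaks the chain). This is an illustrative sketch, not Pegasi's implementation; the key and field names are hypothetical:

```python
import hashlib
import hmac
import json
import time

PID_KEY = b"rotate-me"  # hypothetical secret for de-identifying patient IDs

def deidentified(patient_id: str) -> str:
    # Keyed hash: stable per patient (so entries can be correlated in an
    # investigation) but not reversible from the log alone.
    return hmac.new(PID_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

def append_entry(log, user, patient_id, action, component):
    prev = log[-1]["digest"] if log else "0" * 64
    entry = {
        "user": user, "ts": time.time(),
        "patient": deidentified(patient_id),
        "action": action, "component": component, "prev": prev,
    }
    # Chain each entry to its predecessor: editing or deleting any earlier
    # entry invalidates every digest after it.
    entry["digest"] = hashlib.sha256(
        (prev + json.dumps(entry, sort_keys=True)).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log) -> bool:
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "digest"}
        expected = hashlib.sha256(
            (prev + json.dumps(body, sort_keys=True)).encode()
        ).hexdigest()
        if e["prev"] != prev or e["digest"] != expected:
            return False
        prev = e["digest"]
    return True
```

A real deployment would enforce append-only semantics at the storage layer as well (e.g. WORM storage or the SIEM itself); the hash chain makes after-the-fact edits detectable even if storage controls fail.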
Pegasi operates as a Business Associate under HIPAA. Before any deployment, we execute a Business Associate Agreement (BAA) with the covered entity, and our compliance program is structured around the obligations that agreement imposes.
Pegasi maintains SOC 2 Type II certification, covering the Trust Services Criteria for Security, Availability, and Confidentiality. The Type II designation means our controls were tested by an independent auditor across a 12-month observation period — not just designed on paper, but operating effectively over time. Health systems can request a copy of our most recent SOC 2 report under NDA by contacting privacy@pegasiio.com.
If you are evaluating AI platforms for clinical use, ask the questions that cut through marketing language to reveal actual security posture: where does PHI processing physically happen, who at the vendor can access it, and what does the audit trail capture?
The vendors who treat security as a compliance checkbox will produce documentation that sounds thorough and architectures that are not. The vendors who treat security as a core design constraint will make different product decisions from the start — choosing in-environment processing over cloud APIs, choosing append-only audit logs over editable records, choosing mTLS over TLS, choosing to limit their own access to patient data rather than maximizing it for model improvement.
At Pegasi, we made the choice early that we would not build our training data pipeline on access to production patient records from our deployed health system partners. Our models are trained on de-identified datasets from consented research programs. Our production deployments are designed so that Pegasi has no technical ability to access PHI from health system deployments, even if we wanted to. That constraint shapes everything downstream.
Security in healthcare AI is not optional and it is not separable from the clinical value proposition. If you have specific questions about Pegasi's security architecture for your institution's evaluation, contact our security team at privacy@pegasiio.com. We are happy to walk through our full technical security documentation with your IT and compliance teams.