Healthcare organizations lack formal AI oversight, even as 83% of IT and compliance leaders raise concerns about AI usage.
SAN FRANCISCO–(BUSINESS WIRE)–#ai–Artificial intelligence is being woven into daily workflows across hospitals, clinics, and health systems before most organizations have figured out how to secure it, leaving patient data at risk.
The latest research from Paubox, the leader in HIPAA-compliant email security, found that 95% of healthcare organizations say employees with access to protected health information (PHI) are already using AI tools in email, yet one in four admit they have not formally approved any AI use at all.
This unmonitored use of AI is known as shadow AI. Behind the scenes, generative AI assistants are summarizing patient notes, drafting billing responses, and even suggesting language for sensitive care communications, often without oversight, audit trails, or the business associate agreements (BAAs) that HIPAA requires. In fact, 75% of healthcare IT and compliance leaders believe employees mistakenly assume that tools like Microsoft Copilot are automatically HIPAA compliant.
Inside IT departments, alarm bells are ringing: 83% of healthcare IT and compliance leaders have raised concerns about AI security, but many say the push to adopt AI is coming from both ends of the organization: executives eager to boost productivity and frontline staff looking for faster ways to get work done. That combination has created an adoption race that security teams cannot keep up with.
“This is the classic case of innovation outpacing governance,” said Rick Kuwahara, Chief Compliance Officer at Paubox. “AI adoption is moving faster than its safeguards. Shadow AI is the new shadow IT. Employees are adopting AI tools, often in existing applications, before compliance teams even know they’re in use.”
The new research, based on a survey of 150 U.S. healthcare IT and compliance leaders, found:
- 83% have raised internal concerns about AI security risks
- 95% report staff are already using AI tools in email
- 25% have not approved any AI use at all
- 75% believe employees assume AI tools are HIPAA compliant
“AI is being treated like a harmless add-on, but it’s already handling PHI,” said Hoala Greevy, CEO of Paubox. “Without a BAA or proper controls, that’s a compliance failure waiting to happen.”
Paubox’s report urges IT and compliance leaders to establish formal AI usage policies, require BAAs from any AI vendor that touches PHI, and evaluate AI tools with the same scrutiny applied to any other data processor. Until that happens, healthcare organizations and their patients risk exposing PHI to hackers and big tech companies.
Download the complete report at: https://hubs.la/Q03NnH0z0
Contacts
Media Contact:
Dawn Halpin
[email protected]