
Organisations have made substantial investments in managed file transfer (MFT) security over the past decade. Encryption protocols have improved. Access controls have strengthened. Compliance frameworks have matured. Yet a recent industry survey reveals an emerging vulnerability that traditional MFT controls weren't designed to address.
The 2025 MFT Survey Report found that 26% of organisations have experienced AI-related data incidents in their MFT environments over the past year. A further 30% permit employees to use AI tools with sensitive files without formal controls, and 12% have not yet assessed AI-related data security risks.
These findings suggest a disconnect between MFT security capabilities and how employees handle data after it leaves the managed transfer environment and before it returns.
Understanding the Bidirectional Data Flow Gap
MFT systems control data movement through defined channels with strong security controls. They encrypt data in transit and at rest, maintain detailed audit logs, and enforce access policies. These capabilities effectively secure transfers between systems, partners, and locations.
However, MFT systems face vulnerabilities in both directions of data flow.
Outbound Risk: MFT systems have limited visibility once an authorised user downloads a file to their endpoint device. At that point, the data exists outside the managed environment, subject only to whatever endpoint security controls the organisation has deployed. Employees who download files from secure MFT portals may subsequently upload those files to AI platforms like ChatGPT, Claude, or other generative AI tools to assist with summarisation, analysis, or content creation. From the employee’s perspective, they’re using a productivity tool. From a security perspective, sensitive data has moved to a third-party commercial service outside organisational control.
Inbound Risk: Equally concerning is data entering MFT systems that has been processed, generated, or potentially poisoned by AI tools. Research shows that over one in four organisations have fallen victim to AI data poisoning, where threat actors manipulate training data or outputs to compromise integrity. When employees use AI tools to create content, generate reports, or analyse data, and then upload those AI-generated outputs back into MFT systems, organisations may be ingesting something they shouldn't. This could be poisoned data designed to corrupt datasets, hallucinated information that AI models present as fact, embedded malicious content hidden within seemingly legitimate files, or manipulated analysis subtly altered to benefit threat actors.
This bidirectional exposure means MFT systems, designed to be secure pipelines for data exchange, can become conduits for both data exfiltration and data contamination. Traditional content inspection tools may not detect these risks because AI-generated content often appears legitimate in format and structure, even when the substance is compromised.
Compliance Implications
The regulatory implications of this data movement pattern are significant across multiple frameworks.
For healthcare organisations, uploading patient information to public AI platforms likely constitutes a HIPAA violation. The survey found that healthcare organisations, despite achieving 100% end-to-end encryption in transit, protect only 11% of data at rest with AES-256 encryption. This sector reported a 44% incident rate, including an 11% breach rate, among the highest across industries surveyed.
Financial services firms, meanwhile, face potential violations of Regulation FD if material non-public information reaches AI systems. The sector achieves a better balance, with a 25% incident rate, the lowest among major sectors. This appears to correlate with more consistent implementation across multiple security dimensions.
Government agencies must consider NIST SP 800-171 requirements and data sovereignty concerns. Despite strong policy frameworks, only 8% of government agencies implement AES-256 encryption at rest, according to the survey, and 50% reported MFT security incidents in the past year.
For defence contractors subject to CMMC requirements, CUI (Controlled Unclassified Information) reaching commercial AI platforms represents a direct violation of DFARS clauses and could result in loss of certification, contract suspension, and civil penalties.
Five Common MFT AI Risk Scenarios
Both survey data and industry observations suggest several recurring patterns of AI-related data exposure:
Financial Services: Analysts downloading earnings data or financial projections from secure MFT portals and using AI tools to create executive summaries or investor presentations. This potentially exposes revenue forecasts, strategic initiatives, and other material non-public information. When those AI-generated summaries are uploaded back into the MFT system for distribution, organisations cannot verify the accuracy of AI-generated financial analysis.
Healthcare: Clinical staff downloading patient records or transfer summaries and using AI to help write discharge instructions, care plans, or documentation. This transmits protected health information to unauthorised third parties without patient consent or business associate agreements. AI-generated clinical documentation returned to the MFT system may contain hallucinated medical information presented as fact.
Manufacturing: Engineers downloading proprietary designs, specifications, or CAD files and using AI for optimisation suggestions or technical analysis. This potentially exposes trade secrets including tolerances, materials, and manufacturing processes. Modified designs returned to the MFT system may contain subtle alterations that compromise product integrity or performance.
Legal Services: Paralegals or attorneys downloading client communications and using AI to help draft briefs, motions, or legal analyses. This may waive attorney-client privilege by disclosing confidential information to third parties. AI-generated legal arguments uploaded to the MFT system may include fabricated case citations, a problem that has already resulted in sanctions for attorneys in multiple jurisdictions.
Defence Industrial Base: Program managers or engineers downloading contract documents, technical specifications, or performance data and using AI to create summaries or proposals. This exposes CUI in violation of CMMC and NIST requirements. AI-generated content returned to the MFT system cannot be verified against classification requirements, potentially mixing unclassified AI outputs with classified source material.
Why Traditional Controls Don’t Address This Risk
The survey points to several gaps in how organisations approach this emerging threat.
Lack of Integration: Only 37% of organisations have integrated their MFT systems with security information and event management (SIEM) or security operations centre (SOC) platforms. This means 63% cannot correlate MFT download events with subsequent AI platform access, even if both activities are individually logged.
Manual Enforcement Challenges: While 48% report conducting regular AI risk reviews, 40% rely primarily on manual enforcement through training and periodic audits rather than technical controls. However, this approach has limited effectiveness, as organisations with strong policies still experience incidents.
Content Inspection Limitations: Only 27% of organisations have deployed content disarm and reconstruction (CDR) capabilities that could potentially detect anomalies in AI-generated content. Traditional antivirus (63% adoption) and DLP (63% adoption) tools were not designed to identify hallucinated information, poisoned datasets, or subtly manipulated content that AI tools might introduce.
What Effective Controls Look Like
Organisations that report no AI-related incidents share several common characteristics:
Technical Controls: These organisations deploy data loss prevention systems configured to recognise AI platforms as potential data exfiltration risks. Rules monitor not just file uploads but also clipboard operations and API calls to AI services. They also implement validation processes for content entering MFT systems from external sources.
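As a minimal sketch of the outbound side, assuming a DLP agent or web proxy surfaces destination URLs and event types, a rule along these lines could flag traffic to known AI platforms. The domain list, event fields, and dispositions are illustrative assumptions, not any vendor's configuration.

```python
# Illustrative sketch only: flagging outbound events whose destination matches a
# known AI platform. Domain list, event fields, and actions are assumptions.
from urllib.parse import urlparse

AI_PLATFORM_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com",
}

def is_ai_destination(url: str) -> bool:
    """True when the URL's host is a known AI platform or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in AI_PLATFORM_DOMAINS)

def evaluate_event(event: dict) -> str | None:
    """Classify an outbound event captured by a DLP agent or web proxy."""
    risky_actions = {"file_upload", "clipboard_paste", "api_call"}
    if event.get("action") in risky_actions and is_ai_destination(event.get("destination", "")):
        return "block" if event.get("sensitivity") == "restricted" else "alert"
    return None

# Example: a file upload to an AI platform involving restricted content.
print(evaluate_event({
    "action": "file_upload",
    "destination": "https://chatgpt.com/",
    "sensitivity": "restricted",
}))  # -> "block"
```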
System Integration: They connect MFT audit logs with endpoint detection systems and DLP platforms. When someone downloads a file from the MFT system and subsequently accesses an AI platform, this correlation triggers alerts for investigation. Similarly, uploads to MFT systems following AI platform usage receive additional scrutiny.
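A hedged sketch of that correlation logic follows, assuming MFT audit logs and endpoint or proxy logs can be reduced to simple per-user events; the event shapes and the 30-minute window are assumptions chosen for illustration.

```python
# Illustrative sketch: pairing MFT download events with subsequent AI platform access
# by the same user. Event shapes and the 30-minute window are assumptions.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)

def correlate(mft_downloads: list[dict], ai_access_events: list[dict]) -> list[dict]:
    """Return an alert for each download followed by AI platform access within WINDOW."""
    alerts = []
    for dl in mft_downloads:
        for ai in ai_access_events:
            gap = ai["timestamp"] - dl["timestamp"]
            if ai["user"] == dl["user"] and timedelta(0) <= gap <= WINDOW:
                alerts.append({
                    "user": dl["user"],
                    "file": dl["file"],
                    "ai_destination": ai["destination"],
                    "minutes_after_download": round(gap.total_seconds() / 60, 1),
                })
    return alerts

# Example: a download followed twelve minutes later by a visit to an AI platform.
t0 = datetime(2025, 6, 2, 9, 0)
print(correlate(
    [{"user": "alice", "file": "q2_forecast.xlsx", "timestamp": t0}],
    [{"user": "alice", "destination": "chatgpt.com", "timestamp": t0 + timedelta(minutes=12)}],
))
```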
Sanctioned Alternatives: Rather than blanket prohibition, they provide approved AI tools with enterprise agreements. These contracts typically include stronger privacy protections, prohibit training on customer data, and may include business associate agreements meeting HIPAA requirements. Importantly, these enterprise tools maintain better audit trails of AI interactions with corporate data.
Content Verification: Organisations implement processes to verify AI-generated content before it enters MFT systems. This may include human review of AI outputs, automated fact-checking against authoritative sources, or metadata tagging identifying content as AI-generated for downstream recipients.
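One lightweight way to implement such tagging is sketched below, under the assumption that a sidecar manifest is acceptable to downstream recipients; the field names are hypothetical rather than any MFT product's schema.

```python
# Illustrative sketch: recording provenance for AI-generated content in a sidecar
# manifest before upload. Field names are hypothetical, not an MFT product schema.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def tag_ai_generated(path: str, tool: str, reviewed_by: str | None = None) -> Path:
    """Write a sidecar manifest with a content hash and AI-provenance metadata."""
    source = Path(path)
    manifest = {
        "file": source.name,
        "sha256": hashlib.sha256(source.read_bytes()).hexdigest(),
        "ai_generated": True,
        "generating_tool": tool,
        "human_reviewed_by": reviewed_by,  # stays None until a reviewer signs off
        "tagged_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = source.with_name(source.name + ".provenance.json")
    sidecar.write_text(json.dumps(manifest, indent=2))
    return sidecar
```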
Recommended Actions
Based on the survey findings, security leaders responsible for MFT systems should consider the following steps:
Assess Current State: Evaluate whether the organisation can detect when files downloaded from MFT systems are subsequently uploaded to AI platforms, and whether AI-generated content entering MFT systems can be identified and validated. This requires examining integration between MFT audit logs, endpoint monitoring, and DLP systems.
Implement Bidirectional Controls: Deploy DLP rules specifically designed to detect AI platform access following MFT downloads. Establish validation processes for content entering MFT systems, particularly when that content follows AI platform usage. Consider metadata tagging to identify AI-generated content throughout its lifecycle.
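Building on the tagging sketch above, an inbound validation gate might check that manifest before an upload is accepted into the MFT system; the checks and dispositions shown are assumptions for illustration.

```python
# Illustrative sketch: an inbound gate that checks the provenance manifest before an
# upload is accepted into the MFT system. Checks and dispositions are assumptions.
import hashlib
import json
from pathlib import Path

def validate_inbound(path: str) -> tuple[bool, str]:
    """Accept only content whose manifest exists, matches the file, and shows human review."""
    source = Path(path)
    sidecar = source.with_name(source.name + ".provenance.json")
    if not sidecar.exists():
        return False, "no provenance manifest: route to manual review"
    manifest = json.loads(sidecar.read_text())
    if hashlib.sha256(source.read_bytes()).hexdigest() != manifest.get("sha256"):
        return False, "content changed after tagging: reject"
    if manifest.get("ai_generated") and not manifest.get("human_reviewed_by"):
        return False, "AI-generated content not yet human-reviewed: hold"
    return True, "accepted"
```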
Establish Governance Framework: Develop policies that specifically address AI tool usage with data from MFT systems. Define what data categories can be processed through which AI services under what conditions, and what verification is required before AI-generated content can enter MFT systems.
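Such a policy can also be made machine-readable so that technical controls and audits reference the same source of truth. The sketch below encodes a hypothetical mapping from data classification to permitted AI services; the classifications and service names are placeholders, not recommendations.

```python
# Illustrative sketch: a machine-readable policy mapping data classification to
# permitted AI services. Classifications and service names are placeholders.
POLICY = {
    "public":       {"any"},
    "internal":     {"enterprise_copilot", "enterprise_chatgpt"},
    "confidential": {"enterprise_copilot"},  # enterprise agreement, no training on customer data
    "regulated":    set(),                   # PHI, CUI, MNPI: no AI processing permitted
}

def is_permitted(classification: str, ai_service: str) -> bool:
    """Deny by default: unknown classifications allow nothing."""
    allowed = POLICY.get(classification, set())
    return "any" in allowed or ai_service in allowed

assert is_permitted("internal", "enterprise_copilot")
assert not is_permitted("regulated", "enterprise_chatgpt")
assert not is_permitted("unknown", "enterprise_copilot")
```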
Provide Sanctioned Tools: Research enterprise AI agreements that include appropriate privacy protections, data retention limits, and contractual safeguards. Providing approved alternatives reduces pressure to use unauthorised tools and improves audit trail visibility.
Improve Integration: Connect MFT audit logs with security monitoring platforms to enable correlation of download and upload events. This integration allows detection of the bypass pattern in both directions.
Enhance Content Inspection: Move beyond traditional antivirus and DLP to implement CDR and other advanced inspection capabilities. Consider AI-powered tools that can detect anomalies in AI-generated content, though recognise the irony of using AI to detect AI-related threats.
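To illustrate the disarm-and-reconstruct idea in the simplest possible terms, the sketch below rebuilds an Office Open XML archive without its macro payload. Real CDR engines go much further, rewriting content types and relationships and regenerating the document from a sanitised model, so this is a conceptual illustration only.

```python
# Highly simplified illustration of disarm-and-reconstruct: rebuild an Office Open XML
# archive without its macro payload. A real CDR engine would also rewrite content types
# and relationships and regenerate the document from a sanitised object model.
import zipfile

MACRO_PARTS = ("vbaProject.bin", "vbaData.xml")

def strip_macros(src: str, dst: str) -> list[str]:
    """Copy a .docm/.xlsm archive entry by entry, dropping known macro parts."""
    removed = []
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w", zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            if item.filename.endswith(MACRO_PARTS):
                removed.append(item.filename)
                continue
            zout.writestr(item, zin.read(item.filename))
    return removed
```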
Looking Ahead
Unfortunately, the risk will only intensify as AI tools become more capable and widely adopted, as threat actors develop more sophisticated data poisoning techniques, and as regulators build out enforcement frameworks.
The 26% who have experienced incidents represent an early warning. The 30% permitting uncontrolled AI access face risks in both directions – data exfiltration and contamination. The 12% not assessing AI-related threats may discover exposure through regulatory enforcement rather than internal detection.
For organisations that have invested in MFT security, closing this gap extends existing controls rather than requiring fundamental redesign. Fortunately, the needed capabilities (endpoint monitoring, system integration, AI-specific DLP rules, and content validation) are all available with current technology.
The question facing security leaders is one of timing: whether to address this vulnerability through planned implementation or through incident response. With AI adoption accelerating across the workforce, the window for proactive action is narrowing.


