How AI Companies Should Think About A Compliance Roadmap

AI-powered SaaS solutions are quickly becoming critical to the infrastructure of all types of businesses. Machine learning is rapidly creating value out of large data sets. However, those same data sets represent an outsized risk to buyers and sellers.

Closing even a pilot adoption requires that your early sales materials clearly describe your organization’s security practices. What strategies can AI companies use to earn the credentials needed to be entrusted with customer data?

Follow the money

The fastest way to make rapid leaps in security credentials is to minimize the amount of change required in your organization. Anticipate what your buyers will require of you to meet their procurement criteria. Identifying buyers’ demands for trust keeps you focused on the necessary standards; for example, solutions that target North American financial institutions will likely need to pass a SOC 2 audit and, depending on the volume of credit card transactions processed, a PCI DSS assessment.

If your customers are primarily located in Europe and your solution is considered marketing technology, you may need ISO 27001 certification and will certainly need to be GDPR compliant. By focusing your efforts on the laws, regulations, certifications, or audits required by specific customer personas, you can meet their requirements more quickly and efficiently.
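One lightweight way to operationalize this is to keep a machine-readable map from buyer personas to the frameworks they typically demand, and let it drive your roadmap. The sketch below is illustrative only; the persona names and the default are assumptions to replace with your own sales data.

```python
# Illustrative only: map hypothetical buyer personas to the trust
# frameworks they most commonly require. Validate each entry against
# your actual customer base before committing to a roadmap.
BUYER_TRUST_REQUIREMENTS = {
    "north_american_financial": ["SOC 2 Type II", "PCI DSS"],
    "european_martech": ["ISO 27001", "GDPR"],
    "us_federal_contractor": ["NIST SP 800-53"],
}

def frameworks_for(persona: str) -> list[str]:
    """Return the compliance targets to prioritize for a given persona."""
    # Assumed default: SOC 2 is the most broadly requested attestation.
    return BUYER_TRUST_REQUIREMENTS.get(persona, ["SOC 2 Type II"])

print(frameworks_for("european_martech"))  # ['ISO 27001', 'GDPR']
```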

A riskier tactic is adopting a very difficult standard before your customers require it. For instance, an audit against NIST standards (such as NIST SP 800-53) is usually reserved for federal agencies and government contractors. While it is a prestigious audit to accomplish, few commercial customers will require it.

Know your data

With the introduction of cloud computing, the cost of storing data has plummeted. The acquisition of massive data sets continues to drive innovation, especially with machine learning-driven solutions.

However, there is a hidden cost to consider. Long-term data storage may seem like a static practice, but with AI products, data scientists must constantly add new datasets, create new models, and migrate data rapidly as the technology evolves. The complexity of long-term data storage, the value of stolen data sets, and an ever-changing networked environment mean that a data breach is not merely a risk; it is likely.

While developing your AI product, it’s essential to design the life-cycle of your data.

Diagramming the data life-cycle for your organization gives you the proper perspective on what data is needed, and when. It also gives your CTO or Chief Data Scientist the ability to limit data acquisition to in-scope product development. Tactics that will emerge from these diagrams and documentation include the following (a code sketch follows the list):

  • To develop and train models, can a de-identified form of personally identifiable data suffice?
    • Minimizing sensitive data reduces the dark-web value of your data set, making your organization less of a target and any breach less impactful to the business.
    • By scrubbing data of sensitive information, you can “de-fang” the impact of any breach.
  • How long is data effective for developing accurate models?
    • Identifying and removing data that has exceeded its usefulness reduces the complexity and surface area you need to secure.
    • Reducing the history of the data managed also minimizes the extent of damage a breach can have.
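A minimal sketch of how these tactics might look in a pre-processing step, assuming a pandas DataFrame whose column names (collected_at, email, and so on) are hypothetical placeholders for your own schema:

```python
import hashlib
from datetime import datetime, timedelta, timezone

import pandas as pd  # assumed dependency: pip install pandas

# Hypothetical identifiers; substitute the sensitive fields in your schema.
DIRECT_IDENTIFIERS = ["name", "email", "ssn"]
RETENTION = timedelta(days=730)  # assumed window: ~2 years of usefulness

def pseudonymize(value: str) -> str:
    """One-way hash so records stay joinable without exposing raw PII."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()[:16]

def prepare_training_data(df: pd.DataFrame) -> pd.DataFrame:
    # 1. Enforce the retention window: stale rows add breach surface
    #    area without improving model accuracy. Assumes collected_at
    #    is a timezone-aware datetime column.
    cutoff = datetime.now(timezone.utc) - RETENTION
    df = df[df["collected_at"] >= cutoff]

    # 2. Keep a pseudonymous join key, then drop the direct identifiers
    #    before the data ever reaches model training.
    df = df.assign(user_key=df["email"].map(pseudonymize))
    return df.drop(columns=DIRECT_IDENTIFIERS)
```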

Testing is your friend

For most security certifications, a penetration test by an independent, accredited penetration tester is required. Penetration testers will execute a series of tests against your web application, APIs, and cloud infrastructure. A failed test, such as a successful SQL injection, identifies a vulnerability your organization can then fix. Often, the penetration tester is brought back to retest after vulnerabilities have been addressed to confirm resolution.
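To make that concrete, the classic remediation for a SQL injection finding is replacing string-built queries with parameterized ones. A minimal sketch using Python’s standard sqlite3 module, with a hypothetical users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")

def find_user_vulnerable(email: str):
    # BAD: string interpolation lets an attacker inject SQL; an input
    # like "' OR '1'='1" would return every row in the table.
    return conn.execute(
        f"SELECT id, email FROM users WHERE email = '{email}'"
    ).fetchall()

def find_user_fixed(email: str):
    # GOOD: a parameterized query treats the input strictly as data.
    # This is the kind of fix a pen tester will verify on retest.
    return conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    ).fetchall()
```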

The resulting penetration test report should remain internal and confidential, but it can be shared with customers once a mutual NDA has been signed. Sharing your pen test results can help you negotiate a six-to-nine-month compliance roadmap while pursuing vital customer adoptions in parallel.

If your AI product is heavily consumed through web-based APIs, it may be important to include break/stress testing in the penetration testing process. Many AI solutions on the market provide secure endpoints that let customers submit and receive a large number of transactions efficiently.

For example, if your product is predicting the likelihood of credit card fraud, you may need to process millions of transactions every day. Assessing the capability of your infrastructure to support a certain level of processing integrity and availability will help box out competitors who haven’t been as rigorous with their solution. In this example, a stress test may provide valuable insights.
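Before engaging a formal load-testing tool, a team might start with a crude concurrency script like the sketch below. The endpoint, payload, and request counts are hypothetical placeholders, and the requests library is an assumed dependency:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # assumed dependency: pip install requests

# Hypothetical scoring endpoint; replace with your real API and auth.
ENDPOINT = "https://api.example.com/v1/fraud-score"
PAYLOAD = {"amount": 125.00, "currency": "USD", "card_token": "tok_test"}

def one_call() -> tuple[int, float]:
    """Send one scoring request and return (status_code, latency_s)."""
    start = time.perf_counter()
    resp = requests.post(ENDPOINT, json=PAYLOAD, timeout=10)
    return resp.status_code, time.perf_counter() - start

def stress(total_requests: int = 1000, concurrency: int = 50) -> None:
    """Fire requests in parallel and report error rate and latency."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lambda _: one_call(), range(total_requests)))
    latencies = sorted(latency for _, latency in results)
    errors = sum(1 for status, _ in results if status >= 500)
    print(f"error rate:  {errors / total_requests:.2%}")
    print(f"p50 latency: {statistics.median(latencies):.3f}s")
    print(f"p95 latency: {latencies[int(0.95 * len(latencies))]:.3f}s")

if __name__ == "__main__":
    stress()
```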

I regularly speak with pre-seed AI startups aiming for an IT security certification, and my advice is usually to begin the security and compliance journey early. To accelerate that journey, choose an IT security certification that is relevant to your buyers. Focusing on your data also speeds the compliance journey, because data sits at the center of most security concerns.

Securing data early in the development of your application is not difficult if you have a good plan. Adjusting your architecture after a vulnerability has been identified, by contrast, can be a costly and time-consuming disruption to your business. Finally, stay focused on the revenue-generating problem you’re trying to solve: trust.

For example, even before a SOC 2 audit, achievements like a successful penetration test help establish a validated security posture for potential buyers. AI companies ask customers to share their most valuable asset, their data, which brings heightened scrutiny during the sales process. Being prepared ensures your business can meet its customer acquisition targets.

Author

  • Justin Beals

    Justin is the Co-Founder & CEO of Strike Graph, a security compliance company, which he incubated at Madrona Venture Labs in early 2020. As a serial entrepreneur with expertise in AI, cybersecurity and governance, he started Strike Graph to eliminate the confusion related to cybersecurity audit and certification processes. He likes making arcane cybersecurity standards plain and simple to achieve. As the CEO, Justin organizes strategic innovations at the crossroads of cybersecurity and compliance and focuses on helping customers get outsized value from Strike Graph. He also sets a foundational culture of employee growth. Based in Seattle, he previously served as the CTO of NextStep and Koru, which won the 2018 Most Impactful Startup award from Wharton People Analytics. Justin is a board member for the Ada Developers Academy, VALID8 Financial and Edify Software Consulting. He is the creator of the Training, Tracking & Placement System US Patent and the author of “Aligning curriculum and evidencing learning effectiveness using semantic mapping of learning assets,” which was published in the International Journal of Emerging Technologies in Learning (iJet). Justin earned a BA in English and Theater from Fort Lewis College.

