IoT Security and the Protection of AI/ML Models at the Edge

We are witnessing a true wave of innovation at the network edge. According to Enterprise Management Associates, more than 60 billion smart devices are expected to be online in the coming years. Even more exciting is the level of processing power and intelligence being built into smart IoT devices. Factory and building control systems, medical devices, autonomous vehicles and machines, and solutions for the home will be applying machine learning (ML) and artificial intelligence (AI) at the edge in unprecedented volume. In fact, Gartner predicts that edge computing will process 75% of all data generated. The capability to make our industrial world smarter, safer and more efficient is increasing exponentially.

While this development is exciting, it underscores the ever-increasing importance of security at the network edge. The proliferation of connected IoT devices performing mission-critical tasks means that there are more vulnerable access points than ever. To enjoy the benefits provided by “smart” devices – especially those implementing ML and AI in their applications – it becomes essential to protect the intellectual property that represents a large part – if not all – of the solution’s value. These products must also inherently be more interconnected, requiring internet access to receive critical updates to AI models, firmware or file systems. While the data center may be thought to be safe from outside intrusion, IoT devices are far more exposed, making them an ideal entry point for those looking for an easier way to exploit systems.

IoT device developers need to ensure their products are protected from attacks, safe and secure throughout the manufacturing process, and able to be managed securely over the life of the product. Without the appropriate implementation of IoT security, vendors risk damage to their products, credibility and brand, as well as the loss of critical intellectual property. At the forefront of this effort is the protection of AI models at the edge.

The approach for protecting AI models (verifying that the model is authentic and hiding it from threats) on a smart device can vary, depending on the nature of the hardware and its application. Factors to consider include:

  1. The memory available to house and encrypt the model or application
  2. The level of risk in the surrounding environment (for example, will the model coexist with other less trusted applications on the same device?)
  3. The hardware being used to execute the application (for example, is a GPU involved in running the AI model and delivering an inference?)

After understanding the environment, developers can protect AI at the edge with an approach optimized for their needs. Here are some examples of best practices.

Encrypting the Application at Rest. This method protects the model while it is not being actively used by the device. The approach involves storing an encrypted version of the model in non-volatile memory. Whenever the model is needed, it is authenticated and decrypted. Once the inferences are complete, any unencrypted data is removed from volatile memory. This greatly reduces the time the model is exposed, as well as the associated attack surface.
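
To make the decrypt-use-wipe cycle concrete, here is a minimal sketch in C using mbedTLS. It assumes the device has already retrieved its AES-256 key from secure key storage, and run_inference() is a hypothetical stand-in for the device’s own inference engine. AES-GCM is used so that decryption also authenticates the model before it is ever executed.

```c
#include <stdint.h>
#include <stdlib.h>
#include <mbedtls/gcm.h>
#include <mbedtls/platform_util.h>   /* mbedtls_platform_zeroize() */

/* Hypothetical hook into the device's inference engine. */
extern int run_inference(const uint8_t *model, size_t model_len,
                         const float *input, float *output);

int infer_with_protected_model(const uint8_t key[32],   /* from secure key storage */
                               const uint8_t *ct, size_t ct_len,
                               const uint8_t iv[12], const uint8_t tag[16],
                               const float *input, float *output)
{
    uint8_t *model = malloc(ct_len);   /* plaintext model lives only here */
    if (model == NULL)
        return -1;

    mbedtls_gcm_context gcm;
    mbedtls_gcm_init(&gcm);
    mbedtls_gcm_setkey(&gcm, MBEDTLS_CIPHER_ID_AES, key, 256);

    /* AES-GCM decrypts and authenticates in one step: a tampered
     * model fails the tag check and is never executed. */
    int rc = mbedtls_gcm_auth_decrypt(&gcm, ct_len, iv, 12, NULL, 0,
                                      tag, 16, ct, model);
    if (rc == 0)
        rc = run_inference(model, ct_len, input, output);

    /* Shrink the attack window: wipe the plaintext as soon as the
     * inferences are complete. */
    mbedtls_platform_zeroize(model, ct_len);
    free(model);
    mbedtls_gcm_free(&gcm);
    return rc;
}
```

The important property is that the plaintext model exists only inside this function, and is zeroized before it returns.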

Isolating the model using ARM® TrustZone®. In an ARM TrustZone architecture, it is possible to house sensitive portions of the model in a secure partition of memory, or secure enclave. Using this approach, the model is never exposed to the non-secure application environment (e.g. Linux). This method requires allocating memory in the secure area and some development time to build the application to the secure enclave specification, but it is the most secure way to protect AI models.
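
For illustration, the sketch below shows how a normal-world application might invoke a model held entirely inside the secure world, using the GlobalPlatform TEE Client API that OP-TEE implements on TrustZone hardware. The trusted application’s UUID and command ID are hypothetical placeholders for values the secure-enclave application would define; only inputs and outputs ever cross into the non-secure world.

```c
#include <stdint.h>
#include <string.h>
#include <tee_client_api.h>

#define CMD_RUN_INFERENCE 0   /* hypothetical command ID */

static const TEEC_UUID TA_MODEL_UUID = { /* hypothetical TA UUID */
    0x12345678, 0x0000, 0x0000,
    { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 } };

int secure_inference(const float *input, size_t in_len,
                     float *output, size_t out_len)
{
    TEEC_Context ctx;
    TEEC_Session sess;
    TEEC_Operation op;
    uint32_t origin;

    if (TEEC_InitializeContext(NULL, &ctx) != TEEC_SUCCESS)
        return -1;
    if (TEEC_OpenSession(&ctx, &sess, &TA_MODEL_UUID,
                         TEEC_LOGIN_PUBLIC, NULL, NULL, &origin)
            != TEEC_SUCCESS) {
        TEEC_FinalizeContext(&ctx);
        return -1;
    }

    /* Pass the input in, get the inference out; the model weights
     * never appear in the non-secure (e.g. Linux) address space. */
    memset(&op, 0, sizeof(op));
    op.paramTypes = TEEC_PARAM_TYPES(TEEC_MEMREF_TEMP_INPUT,
                                     TEEC_MEMREF_TEMP_OUTPUT,
                                     TEEC_NONE, TEEC_NONE);
    op.params[0].tmpref.buffer = (void *)input;
    op.params[0].tmpref.size   = in_len * sizeof(float);
    op.params[1].tmpref.buffer = output;
    op.params[1].tmpref.size   = out_len * sizeof(float);

    TEEC_Result res = TEEC_InvokeCommand(&sess, CMD_RUN_INFERENCE,
                                         &op, &origin);
    TEEC_CloseSession(&sess);
    TEEC_FinalizeContext(&ctx);
    return res == TEEC_SUCCESS ? 0 : -1;
}
```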

Enabling a model to access multiple hardware blocks and run-time environments using virtualization. If dedicated hardware is used to run the model – for example, a GPU – the model may be housed in one virtual environment (e.g. Linux) and accessed by another environment that controls the hardware (GPU) for processing and generating inferences. Using virtualization, the application can be authenticated, decrypted, and then run in a dedicated environment housed in an isolated virtual machine. The running model can be accessed only by the isolated environment that controls the GPU, with inferences sent securely back to the run-time environment that houses the encrypted model.
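
As a rough illustration of the boundary this creates, the sketch below sends an inference request from a Linux guest to the isolated VM that owns the GPU, over a virtio vsock channel. The CID, port, and length-prefixed wire format are assumptions made for the example, not any specific product’s protocol; the point is that only input tensors and results cross the link, so the decrypted model never leaves the GPU VM.

```c
#include <stdint.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

#define GPU_VM_CID   3      /* hypothetical CID of the isolated GPU VM */
#define INFER_PORT   5005   /* hypothetical inference service port */

/* Send raw input bytes, read back the inference result.
 * Model weights stay inside the GPU VM; only I/O crosses the link. */
int request_inference(const void *in, uint32_t in_len,
                      void *out, uint32_t out_cap)
{
    struct sockaddr_vm addr = {
        .svm_family = AF_VSOCK,
        .svm_cid    = GPU_VM_CID,
        .svm_port   = INFER_PORT,
    };

    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }

    /* Length-prefixed request, then the input payload. */
    if (write(fd, &in_len, sizeof(in_len)) != sizeof(in_len) ||
        write(fd, in, in_len) != (ssize_t)in_len) {
        close(fd);
        return -1;
    }

    ssize_t got = read(fd, out, out_cap);   /* inference result */
    close(fd);
    return got > 0 ? (int)got : -1;
}
```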

Because endpoint devices face a vast number of potential IoT security threats, manufacturers and integrators must remain focused on the search for best-in-class security strategies and products capable of locking down their products and the IP contained within, as that IP simply cannot be allowed to be compromised. AI model protection methods like these bring security to smart devices and enable a new era of intelligence at the edge.

Author

  • Philip Attfield

    Philip Attfield is the CEO of Sequitur Labs Inc. He brings a strong background in computing, networking, security and systems modeling. He has more than 20 years of industry experience in large enterprises and small entrepreneurial firms. Starting his career at Nortel, Phil was a member of its scientific staff and developed software tools and in-house products for modeling, synthesis and verification of telecom and network equipment hardware.
