
Microcontroller Tips

Microcontroller engineering resources, new microcontroller products and electronics engineering news


How software segregation minimizes the impact of AI/ML on safety-critical software

July 11, 2024 By Aimee Kalnoskas

With the growing use of artificial intelligence and machine learning in safety-critical software, developers are considering software segregation and guardian applications to mitigate functional safety risks.

By Mark Pitchford, LDRA

The push toward artificial intelligence (AI) and machine learning (ML) in embedded systems raises questions about adapting functional safety processes and tools to achieve compliance. While markets demand innovations based on AI/ML, developers cannot compromise safety-critical guidelines and principles – nor can businesses risk financial and reputational damage.

With few examples to learn from and standards bodies lagging behind innovation, we can forge ahead by examining existing risk mitigation strategies and adapting them to new AI/ML-based development.

Understanding artificial intelligence and machine learning

With all the uncertainty around AI/ML – fueled by ChatGPT, Microsoft Copilot, and similar generative AI products – it is useful to adopt a broader definition of terms that addresses the complete set of capabilities available to embedded software developers.

The Oxford English Dictionary (OED) defines artificial intelligence as “the capacity of computers or other machines to exhibit or simulate intelligent behavior” and machine learning as “the capacity of computers to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyze and infer from patterns in data.”

There are two major classifications of AI:

  • Narrow (or weak) AI, which excels at the task it has been trained for but lacks general human-like intelligence.
  • General (or strong) AI, which possesses human-like intelligence.

One could argue that certain safety-critical applications built over the past 20 years fall under the category of weak AI. For example, an automotive ECU and a medical infusion pump are built to perform specific tasks and do not exhibit human-like intelligence. Similarly, a flight management computer can adapt to changing aircraft and environmental conditions, but only within a certain predefined and deterministic envelope.

It is conceivable that the functional safety techniques used to develop such systems can also apply to new AI/ML-based components, namely, software segregation.

Segregation consolidates risk into separate components

Within the medical device industry, for example, software segregation is used to protect certain software items from the unintended influence of other software items. Essentially, it is the partitioning of software to control and reduce functional safety risks. Among its other uses, segregation avoids the need to classify all software within a system at the highest safety level, reducing effort and costs across planning, development, testing, and verification.

Guidelines within the IEC 62304:2006 +AMD1:2015 standard help to minimize development overhead by permitting software items to be segregated. As stated, “The software ARCHITECTURE should promote segregation of software items required for safe operation and should describe the methods used to ensure effective segregation of those SOFTWARE ITEMS.” The standard further states the following:

  • Annex B.4.3 outlines that each software item can have its own software safety classification.
  • Section 4.3 defines how software safety classifications shall be assigned.
  • Section 4.3.d calls for a rationale for each of the different classifications, explaining how the software items are effectively segregated so that they may qualify for separate software safety classifications.

Amendment 1 clarifies the position on software segregation by stating that it is not restricted to physical separation and permits “any mechanism that prevents one SOFTWARE ITEM from negatively affecting another.”

Figure 1. An example of partitioning software items according to IEC 62304:2006+AMD1:2015

Figure 1 shows the standard’s example. In it, a software system has been designated Class C. That system can be segregated into one software item to deal with functionality with limited safety implications (software item X) and another to handle highly safety-critical aspects of the system (software item Y). This principle can be repeated hierarchically, such that software item Y can be segregated into software items W and Z, and so on.

The underlying assumption is that no segregated software item can negatively affect another. In principle, such segregation can be applied equally to AI/ML-based applications.
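IEC 62304 does not prescribe a segregation mechanism, only that one software item must not be able to negatively affect another. As an illustration only, the sketch below shows a higher-classification item accepting data from a lower-classification item through a single narrow, validated interface; the item names, dose-rate limit, and checksum are all invented for the example.

```python
# Hypothetical sketch (names and limits invented): software item Y
# (higher safety class) accepts data from software item X only through
# this one narrow, validated interface, so a fault in X cannot push Y
# outside its safe operating range.

from typing import Optional

DOSE_RATE_MAX_ML_PER_H = 500  # assumed device limit for the example


def checksum(value: int) -> int:
    """Placeholder integrity check standing in for a real CRC."""
    return value ^ 0xA5A5


def item_y_accept_request(dose_rate: int, crc: int) -> Optional[int]:
    """Return the requested dose rate only if it is intact and plausible."""
    if crc != checksum(dose_rate):
        return None  # message corrupted in transit between items
    if not 0 <= dose_rate <= DOSE_RATE_MAX_ML_PER_H:
        return None  # outside the predefined safe envelope
    return dose_rate


print(item_y_accept_request(120, checksum(120)))    # 120 (accepted)
print(item_y_accept_request(9000, checksum(9000)))  # None (out of range)
```

The point of the single entry function is that item Y's safety argument only has to reason about what can pass this check, not about everything item X might do.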

Segregation and AI/ML risk mitigation

Few, if any, embedded systems are based entirely on AI/ML. Conventionally engineered applications more commonly ingest the outputs of AI/ML-based algorithms, whether they are part of the same system or a separate one. It is, therefore, logical to extend the principles of domain separation as promoted by IEC 62304. This approach contains AI/ML algorithms within dedicated domains and enables domains that receive AI/ML-generated data to mitigate risks.

Conventional wisdom for any segregated software item is to regard any incoming data as potentially tainted, and this arrangement is no different. Taint analysis tools can be used to identify potential risks, and developers are already familiar with “sanity checking” incoming and outgoing data, whether it comes from an AI/ML software item or elsewhere.
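As a loose illustration of that “treat it as tainted” discipline (not the API of any particular taint-analysis tool), the sketch below wraps an ML-produced value so it cannot be read until a plausibility check clears it; the speed bound is an assumed example value.

```python
# Hypothetical sketch: a value arriving from an AI/ML software item is
# held in a Tainted wrapper and can only be read by passing a sanity
# check, mirroring how taint analysis treats untrusted inputs.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Tainted:
    _value: float

    def sanitize(self, check: Callable[[float], bool]) -> float:
        """Release the value only if the supplied check accepts it."""
        if not check(self._value):
            raise ValueError("tainted value failed sanity check")
        return self._value


def plausible_speed(v: float) -> bool:
    # Assumed plausibility bound for a vehicle-speed estimate, in km/h.
    return 0.0 <= v <= 250.0


ml_output = Tainted(87.5)  # value inferred by an ML model
speed = ml_output.sanitize(plausible_speed)
print(speed)  # 87.5
```

A negative or absurdly large estimate never reaches the consuming logic; it raises at the segregation boundary instead, where the conventionally engineered item can apply its fallback strategy.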

Introducing guardian applications to mitigate risk

In their study “On Safety Assessment of Artificial Intelligence,” authors Braband and Schäbe generalize the role of functional safety by referring to a conventional electrical/electronic/programmable electronic (E/E/PE) control system. They reference equipment under control, information from sensors entering the control system, and actuators operated by the control system. Figure 2 illustrates such a system.

Figure 2. An example of a conventional safe control system

Depending on the consequences of the control system’s faulty behavior, functional safety standards dictate that the actuators be assigned some form of safety integrity level (e.g., SIL, ASIL, Class) and that a solution be developed per the standards to ensure risk is within acceptable bounds.

Braband and Schäbe argue that because the SIL evaluation occurs before any code is written, the control system can be treated as a black box, provided it can be shown to keep risk within acceptable bounds. That black box can, therefore, be an AI/ML system, subject to the same caveat. Figure 3 illustrates this approach.

Figure 3. An example of a “Black Box”–based control system

Braband and Schäbe discuss mathematical proofs designed to show what is required to limit risk and offer a way to do this in software.

As presented in Figure 4, if a “Guardian” application can be shown to mitigate the risk posed by any rogue black box (AI/ML) output data, then the system as a whole keeps risk within acceptable bounds. Such guardians can also be developed in accordance with existing functional safety principles.

Figure 4. An example of mitigated “Black Box”–based control system
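A guardian of this kind can be sketched in a few lines. The limits, fallback value, and slew-rate rule below are invented for illustration and are not taken from Braband and Schäbe; they simply show a deterministically verifiable layer between a black-box controller and an actuator.

```python
# Hypothetical sketch: a guardian sits between a black-box (AI/ML)
# controller and the actuator. It bounds both the commanded value and
# its rate of change, substituting a safe fallback on violation.

SAFE_FALLBACK = 0.0  # assumed fail-safe actuator command
LIMIT = 100.0        # assumed absolute command limit
MAX_STEP = 10.0      # assumed maximum change per control cycle


class Guardian:
    def __init__(self) -> None:
        self.last_output = SAFE_FALLBACK

    def filter(self, command: float) -> float:
        """Pass a command through only if it stays in the safe envelope."""
        if not -LIMIT <= command <= LIMIT:
            command = SAFE_FALLBACK  # rogue value: fail safe
        elif abs(command - self.last_output) > MAX_STEP:
            # Rogue jump: slew-limit toward the request instead.
            step = MAX_STEP if command > self.last_output else -MAX_STEP
            command = self.last_output + step
        self.last_output = command
        return command


g = Guardian()
print(g.filter(5.0))    # 5.0  -- within limits, passed through
print(g.filter(500.0))  # 0.0  -- out of range, safe fallback
print(g.filter(50.0))   # 10.0 -- slew-limited step from 0.0
```

Because the guardian contains no learned behavior, it can be verified with the conventional techniques the standards already describe, which is the crux of the argument.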

Limiting the impact of AI/ML going forward

Ensuring that data from an AI/ML-based algorithm is treated with suspicion, and minimizing the associated risks, aligns with existing best practices across safety-critical sectors. As AI/ML-based components become more common in embedded systems, developers must separate areas of high risk from areas of low risk and consider introducing guardian applications for further protection.

AI/ML components will operate inside conventionally developed systems for the foreseeable future. New and updated safety standards are being developed to guide teams on the future of these technologies, and developers must know how to combine them with traditional verification techniques to achieve their goals.


Copyright © 2025 · WTWH Media LLC and its licensors. All rights reserved.