When building a safety-critical system—whether for an ECU in automotive, a flight controller in aerospace, or a train signaling system—one of the earliest architectural decisions can have massive downstream impact:
👉 How do you model the MCU or SoC in your safety architecture?
As a monolithic black box? Or as a set of interconnected internal components?
This isn’t just a modeling detail. It directly affects your ability to decompose safety goals, manage ASIL tailoring, reuse vendor claims, and minimize diagnostic burden across your system.
Let’s dive in.
🎯 The Black Box Trap
It’s tempting to treat your MCU or SoC as a single component in your SYS.1/SYS.2 architecture:
- It simplifies your block diagrams.
- It reduces the number of interfaces to analyze.
- It “contains” the complexity in one place.
But here’s the problem: when you treat a multi-core SoC as one indivisible unit, you lose visibility into the fault containment regions (FCRs), freedom from interference (FFI), and safety mechanism boundaries that ISO 26262 expects you to consider.
What’s the consequence?
- ASIL propagation: You can’t argue that a QM core is independent of an ASIL D one, because you haven’t modeled that separation.
- Diagnostic blow-up: You have to apply high diagnostic coverage (DC) to everything, and carry the full PMHF budget for blocks that don’t actually need it.
- Decomposition blockers: You can’t show the architectural independence needed for safety goal decomposition.
The result: a higher system ASIL, conservative hardware metrics, and longer (often painful) DFA and audit discussions.
🧩 A Smarter Approach: Granular Modeling
Instead, consider breaking the MCU/SoC into internal building blocks during your safety architecture phase:
- Cores: Core A (ASIL D), Core B (QM), Safety Island
- Memory areas: Flash, RAM, ROM with ECC support
- Peripherals: CAN, LIN, SPI, etc.
- Clock & power domains: Often separate and independently diagnosable
- Safety mechanisms: WDG, LBIST, MBIST, error injectors, SMU, etc.
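As a minimal sketch, this granular model can be captured as plain data that tooling can check. All block names, ASIL assignments, and isolation claims below are illustrative placeholders, not taken from any vendor datasheet:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    name: str
    asil: str   # "QM", "A", "B", "C", or "D"
    fcr: str    # fault containment region this block belongs to

# Hypothetical SoC model (names and ASILs are made up for illustration).
blocks = [
    Block("CoreA", "D", "fcr_main"),
    Block("CoreB", "QM", "fcr_main"),
    Block("SafetyIsland", "D", "fcr_island"),
    Block("CAN", "B", "fcr_periph"),
]

# Isolation mechanisms claimed between FCR pairs (e.g. SMU, MPU).
isolation = {frozenset({"fcr_main", "fcr_island"}): "SMU + MPU"}

def ffi_gaps(blocks, isolation):
    """Mixed-ASIL block pairs whose FCRs lack a claimed isolation mechanism.

    Each returned pair is a place where a freedom-from-interference
    argument is still missing and interference must be assumed.
    """
    gaps = []
    for i, a in enumerate(blocks):
        for b in blocks[i + 1:]:
            if a.asil != b.asil:
                key = frozenset({a.fcr, b.fcr})
                if a.fcr == b.fcr or key not in isolation:
                    gaps.append((a.name, b.name))
    return gaps
```

Running `ffi_gaps` on this toy model flags CoreA vs. CoreB (same FCR, different ASIL, no separation modeled), which is exactly the black-box problem made explicit.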
This level of detail allows you to:
✅ Claim freedom from interference between blocks
✅ Allocate different ASIL levels within the same chip
✅ Justify decomposition based on independence (ISO 26262-9, clause 5)
✅ Reuse vendor safety mechanisms to reduce PMHF/SPFM burden
✅ Improve safety mechanism mapping and avoid duplication
✅ Target diagnostics precisely (less testing, better performance)
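To make the PMHF point concrete, here is a back-of-the-envelope sketch using the usual single-point-fault approximation, where a block’s residual contribution is roughly its raw failure rate times (1 − DC). All FIT values and coverage figures below are invented for illustration, not vendor data:

```python
# Illustrative only: FIT values and DC figures are made up, not vendor data.
# Residual contribution of a block ~= raw_fit * (1 - DC), the common
# single-point fault approximation.

def residual_fit(raw_fit: float, dc: float) -> float:
    """Residual failure rate (FIT) left uncovered by diagnostics."""
    return raw_fit * (1.0 - dc)

blocks = {
    # name: (raw FIT, diagnostic coverage claimed for that block)
    "CoreA_lockstep": (50.0, 0.99),  # e.g. vendor lockstep + LBIST claim
    "RAM_ecc":        (80.0, 0.99),  # e.g. ECC claim reused from safety manual
    "CAN_periph":     (20.0, 0.90),  # e.g. end-to-end protection in software
}

total = sum(residual_fit(fit, dc) for fit, dc in blocks.values())
print(f"residual contribution: {total:.2f} FIT")
```

The per-block structure is the point: with a granular model you can claim 99% coverage only where a vendor mechanism actually provides it, instead of having to assert one blanket coverage figure for the whole chip.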
Yes, this means more effort in the early SYS.1/SYS.2 stages.
But the benefits downstream—in SYS.3, HWFM/HWTF, software safety, and tool qualification—often outweigh the upfront cost.

📉 A Real-World Example
At Kentia, we helped a client working on a domain controller for EVs. Initially, their architecture treated the SoC (a multi-core Renesas RH850 with safety island) as a single block. This led to:
- ASIL D propagation to all software
- PMHF > 200 FIT
- Unjustified DFA assumptions
After refactoring the model to split the SoC into individual cores, memories, and safety functions, and establishing FFI via the SMU and memory protection units (MPUs), the system was able to:
- Assign ASIL QM to the OTA module
- Allocate partial diagnostics to vendor SMs
- Reduce PMHF by 40%
- Achieve successful decomposition of safety goals into parallel architectural paths
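The decomposition step in the list above has to follow the permitted schemes of ISO 26262-9 (clause 5). As a sketch, those schemes can be encoded and checked mechanically; note that a valid pairing is necessary but not sufficient, since independence between the two elements must still be demonstrated (e.g. via DFA):

```python
# Permitted ASIL decomposition schemes per ISO 26262-9, clause 5.
# E.g. an ASIL D goal may be split into ASIL B(D) + ASIL B(D) over
# sufficiently independent elements.
DECOMPOSITIONS = {
    "D": [("C", "A"), ("B", "B"), ("D", "QM")],
    "C": [("B", "A"), ("C", "QM")],
    "B": [("A", "A"), ("B", "QM")],
    "A": [("A", "QM")],
}

def valid_decomposition(goal_asil: str, a: str, b: str) -> bool:
    """True if (a, b) is a permitted split of goal_asil (order-insensitive).

    Independence of the two elements still has to be argued separately.
    """
    pairs = DECOMPOSITIONS.get(goal_asil, [])
    return (a, b) in pairs or (b, a) in pairs
```

In the example above, the granular model is what made the independence argument possible in the first place: the SMU and MPU boundaries gave the two parallel paths their freedom from interference.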
🧠 Final Thoughts
Modeling is never just about diagrams.
It’s a strategy for managing risk, cost, and certification effort.
When it comes to the MCU or SoC, don’t just draw a black box and move on.
Take the time to understand and model its internal architecture—you’ll save weeks of rework, reduce your safety case complexity, and gain more flexibility in decomposition and reuse.
Because in functional safety, simplifying too early often means complicating too late.
💬 How do you model your SoCs for ISO 26262 safety analyses?
Do you go granular, or keep it simple? What have your auditors said?
Let’s share some lessons.