I've written about the ways to attack or undermine systems with highly optimized tolerance in the past (systems that fit this description include everything from financial networks to electricity networks). Generally, my conclusion was that the best approach is to focus on design flaws. However, this analysis was less than complete. Systems that exhibit highly optimized tolerance are vulnerable to ALL of the following:
- Design flaws. These attacks exploit variables outside the system's design (exogenous variables). In warfare, the design of COIN (the US military's counter-insurgency doctrine) didn't account for open source insurgency. As a result, it was of little use in Iraq. Fortunately (or unfortunately, since it took four years of failure to grasp that the model was flawed), new methods/models of counter-insurgency were designed on the fly (based on local observations) that created the Anbar Awakening. In finance, we designed a system of national economic investment that relied nearly exclusively on capital markets (at the expense of individuals/incomes). The result was that capital markets, deprived of the decisions/input of hundreds of millions of Americans making daily investment decisions, became completely divorced from reality.
- Exploitation. I touch on this in my recent brief on "Bow-Tie Control Systems." In short, it's often possible for bad actors to access the core features/services of complex systems. This allows them to exploit those features for personal gain. If the exploitation becomes too intense, the entire system fails. In biology, cancer exploits the core machinery of the cell to propagate/function. In finance, investment banks accessed the mortgage market to exploit its cash flow for personal gain. In each case, too much success resulted in the death of the host system.
- Black Swans (events with small probabilities that yield exceptional or disastrous outcomes). Basically, this is what happens when a variable in your system's design (one that you think you understand) blows up. Why does this happen? In most cases, we don't have nearly enough experience with our complex systems to truly understand the magnitude of potential variation in the variables we use (as in: hindsight doesn't work if the period of observation is too short). Let's use another example from Iraq (warfare). While we understood that terrorists could make things worse through targeted attacks, we completely underappreciated the magnitude of disruption possible. As a result, the attack on the Golden Mosque sent the country into civil war for a year. In finance, the historical probability distribution of defaults on home mortgages was wrong by several orders of magnitude. The result was systemic financial failure as top-rated assets quickly became worthless junk. A rough numerical sketch of that kind of underestimation follows below.
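To make the "short observation window" point concrete, here is a minimal, purely hypothetical sketch in Python. It fits a Gaussian model to a short sample drawn from a fat-tailed loss process (a Student's t distribution) and compares the model's estimate of an extreme loss probability against the long-run frequency. The choice of distribution, window length, and loss threshold are all illustrative assumptions, not a model of the actual mortgage market.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical loss process with a fat tail (Student's t, 3 degrees of freedom).
# The "true" process is not Gaussian, but the analyst only ever sees a short history.
true_losses = rng.standard_t(df=3, size=1_000_000)

short_window = true_losses[:250]            # roughly one year of daily observations
mu, sigma = short_window.mean(), short_window.std()

threshold = 8.0                             # an "inconceivable" loss, in the analyst's units

# Probability of exceeding the threshold under the Gaussian model fit to the short window.
z = (threshold - mu) / sigma
p_model = 0.5 * math.erfc(z / math.sqrt(2))

# Empirical frequency of that loss over the full (long) history of the true process.
p_actual = (true_losses > threshold).mean()

print(f"Model estimate of P(loss > {threshold}): {p_model:.2e}")
print(f"Long-run frequency of that loss:        {p_actual:.2e}")
print(f"Underestimation factor: ~{p_actual / p_model:,.0f}x")
```

Under these assumptions the fitted model typically puts the extreme loss at around one-in-a-million odds while the fat-tailed process actually produces it on the order of once in five hundred draws, i.e. the model is off by several orders of magnitude, which is the same failure mode as the mortgage default distributions described above.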