Unpredictable Patterns #97: Red teaming
Attacking yourself, complexity and alternative analysis, policy threat modelling and Deng Xiaoping's concise strategy
Dear reader,
November is fun in many ways - we are trying to cram a lot into this "Norway of the year", as Emily Dickinson called it. There is a lot to do, and little time to step back and reflect. That is why I thought it could be a good time to think more carefully about a technique we use far too rarely in policy - red teaming - and how it can help us prioritise and think about the future in strategic ways.
Thank you for all the comments on the last note. It is clear that geopolitics is increasingly relevant to everyone - not just us tech policy nerds - and I hope to return to the subject in the near future. There is much more to say here, for sure.
Attacking yourself
The original idea behind red teaming is as simple as it is brilliant. It simply says that you should, given that you know so much about yourself, be able to attack yourself in interesting and surprising ways. And if you figure out what those ways are, you are much more likely to be able to defend yourself well. It goes back to one of the most profound insights in Sun Zi - an insight that we have come back to again and again: that you cannot guarantee victory in attack, but that you can build your own position so that it becomes unassailable, and then wait for your enemy to make a mistake. That there is no guaranteed way to win by attacking is a key insight in all kinds of conflict - from war to office politics - and building a position that is unassailable is a strategy that dominates all others.
But in order to be as close to unassailable as possible you have to figure out how you should be attacked. And this, correspondingly, is about identifying mistakes that you have made. This is why red teaming is painful for many organisations, since the implication behind a successful attack - and this follows from Sun Zi - is that a mistake was made somewhere in our own organisation. Identifying those mistakes, however, is of the utmost importance, since they provide a path to a stronger position.
The idea that there is a way you should be attacked is important here. It is based on the observation that for any conflict there is a clear set of possible options that can be ranked in effectiveness and cost, and that uncovering those is key to figuring out how to change for the better.
Red teaming, then, in this sense, is not just a game where we try to outsmart our own defences - it is the discovery of what the preferred strategy of attack against our position would be. Every leader should be able to answer this simple question: how should I be attacked? - and then draw the necessary conclusions.
Now, to become truly unassailable is impossible. But what we can do is to at least minimise the probability that the most rational attack against our position succeeds, and that is an important step towards a better position. In the end evolution’s imperative is key here: it is not about absolute advantage, but relative advantage — and then making sure that relative advantage compounds.
Designing attacks
In attacking ourselves we can use any number of different models. In the following we will explore a couple of mental models that can be helpful in figuring out where we should be attacked.
The first model is a simplistic evolutionary model. Evolution should be understood in at least three different dimensions. The first is how we evolve to adapt to our environment. The environment is everything that we can keep fairly stable in our analysis - or put differently: anything that does not adapt back. The second dimension, then, is the other actors in our environment. The third dimension is drift - the random changes in us that occur outside of any selection pressures.
Each dimension offers possible vectors of attack. If we look at the environment, we can find key dependencies and resources that we are locked into. The existence of water in an environment is often key for life, and removing the water is a decisive attack. By exploring how an organisation depends on its environment - the key resources it needs - we can uncover environmental weaknesses that we should expect to be attacked.
The second dimension, the other actors, also provides an interesting set of options. Are there key dependencies on others that also need to adapt? Here the vectors of attack include essential customer groups, concentrated revenue streams from certain sectors of the economy, and the need to keep up with competitors. One interesting vector of attack is to strengthen a competitor to the point where more resources need to go into keeping up with them, weakening our position in other places. The Cold War evolved into a dynamic like this, where the Soviet Union needed to keep up with the investments in military technology and so - according to some theories - was unable to continue to produce a middle class at a reasonable pace. Many other authoritarian governments are equally dependent on their ability to continue to produce a middle class at pace, and will have corresponding weaknesses.
Drift, the third dimension, provides us with an opportunity to explore where an organisation has not been challenged in a long time - and what that means in terms of blindness and weakness. An organisation in drift accrues inefficiencies and slows down, building complexity without any corresponding capability, and that means that its entire ability to react is weakened. Identifying where an organisation has been in drift for an extended time is often among the most fruitful ways of finding weaknesses in its position.
The seeds of our destruction are often found in our greatest successes - because these successes set us drifting without any clear selection pressures.
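To make this a little more concrete: a red team can simply list candidate attack vectors along the three dimensions and rank them the way a rational attacker would. The sketch below (in Python, with entirely made-up vectors and scores) is one minimal way to structure that exercise - an illustration of the framework, not a prescription.

```python
# A minimal sketch: candidate attack vectors organised by evolutionary dimension.
# All vectors and scores are hypothetical placeholders for a real red-team workshop.
from dataclasses import dataclass

@dataclass
class AttackVector:
    dimension: str      # "environment", "actors" or "drift"
    description: str
    effectiveness: int  # 1 (weak) to 5 (decisive)
    cost: int           # 1 (cheap) to 5 (expensive) for the attacker

vectors = [
    AttackVector("environment", "Cut off access to a scarce resource we depend on", 5, 4),
    AttackVector("actors", "Strengthen a competitor so we must overspend to keep up", 3, 2),
    AttackVector("drift", "Exploit a process that has not been challenged in years", 4, 1),
]

# Rank by the attacker's rational preference: high effectiveness, low cost.
for v in sorted(vectors, key=lambda v: v.effectiveness - v.cost, reverse=True):
    print(f"[{v.dimension}] {v.description} (effect {v.effectiveness}, cost {v.cost})")
```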
The evolutionary model is coarse-grained and fairly general, but it provides a simple framework to think about weaknesses and attacks that we should expect to see. Other red teaming methods can be more specific.
A very common method is exploring the business model and cost structure of an organisation, to figure out where the organisation is weakest. A company that has 98% of its revenues from a single undifferentiated source should expect that source to be a key vector of attack, and seek to protect it. Understanding the economic flows in an organisation is key to attacking it - and so the better you understand your own economic realities, the better you will be able to find those weaknesses.
The economic model can even be broken down into individual customers, or subsets of customers, to understand which ones would create significant harm if they were captured by someone else. This can be done at a very sophisticated level, looking into network effects and dependencies where some customers are core customers and others edge customers - core customers take whole ecosystems with them, whereas edge customers are more isolated.
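As a toy illustration of that kind of analysis, here is a short Python sketch - with invented customers, revenues and "ecosystem weights" - that flags where revenue is concentrated and which customers would be rational targets because an ecosystem would follow them out the door:

```python
# A sketch of an economic red-team pass over hypothetical revenue data.
# Customer names, revenues and ecosystem weights are invented for illustration.
revenues = {"Customer A": 98.0, "Customer B": 1.5, "Customer C": 0.5}          # revenue, e.g. in $M
ecosystem_weight = {"Customer A": 0.9, "Customer B": 0.2, "Customer C": 0.1}   # 0..1: how much of an ecosystem follows them

total = sum(revenues.values())
shares = {c: r / total for c, r in revenues.items()}

# Herfindahl-style concentration: closer to 1.0 means a single dominant source.
concentration = sum(s ** 2 for s in shares.values())
print(f"Revenue concentration index: {concentration:.2f}")

# A customer is a rational target if losing it hurts a lot (share)
# and it drags an ecosystem with it (core rather than edge).
for customer in sorted(shares, key=lambda c: shares[c] * ecosystem_weight[c], reverse=True):
    kind = "core" if ecosystem_weight[customer] > 0.5 else "edge"
    print(f"{customer}: {shares[customer]:.0%} of revenue, {kind} customer")
```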
A third model for red teaming - one specific to the craft of public policy - consists of designing policy proposals and legislation that target weak points in the organisation. Red teaming here essentially means putting yourself in the shoes of an aggressive, rival policy team - asking what policy moves would be most problematic for a competitor.
Such red teaming needs to be developed in two different ways.
The first looks at the existing product or service or organisation, and tries to figure out if there are policy proposals that would uniquely and asymmetrically affect our organisation. The qualification here is important: “uniquely and asymmetrically” means that we are less interested in attacks on our industry overall, and more focused on policy proposals that would make it relatively harder for us to survive and continue to evolve.
The second asks whether there are policy proposals that would severely constrain our ability to grow the product, service or organisation in ways that we think are necessary or integral to our strategy. This is actually the more important red teaming exercise - since it is about attacking what the organisation wants to become, rather than attacking what it is.
A simple example of red teaming that should have happened is an exercise at any company in the so-called gig economy. The objective would have been to understand what the application of legacy employment law would mean for the business model. The relative advantage of these companies - and the one everyone realised would be attacked - was precisely the assertion that they had no employer-employee relationship with the workers in their business. Understanding that, and looking at possible defences and mitigations, is essential to the current-day efforts of many of these companies - and that this would happen was clearly predictable. The companies best prepared to deal with it would have had a significant advantage.
This also means that any company that saw this coming, prepared, and then came out in favour of a policy proposal it knew would come, would have created an unassailable position and forced the others to eat the cost of their mistakes. We have seen some of this in the competition dynamics in the sector, but if there had been extensive red teaming, we should have expected more.
Red teaming the future design space of a product, service or the organisation as a whole requires some idea of what key assets will be important. One way to do this is to construct asset maps - maps of all the things that are needed to create value - and figure out how each such asset class can be weakened or attacked. An asset map can include things like personal data, compute, talent, customer inertia — and many other things. Modelling your product, service or organisation as a set of assets also allows you to think about how to invest in protecting the assets over time.
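A minimal sketch of what such an asset map might look like in practice follows - the assets, attacks and protective investments below are hypothetical examples, not recommendations:

```python
# A sketch of an asset map: what is needed to create value, how each asset
# could be weakened, and what protecting it might look like. Entries are hypothetical.
asset_map = {
    "personal data": {
        "attacks": ["data-minimisation mandates", "consent requirements that cut collection"],
        "protection": ["reduce dependence on sensitive data", "invest in privacy-preserving methods"],
    },
    "compute": {
        "attacks": ["export controls on chips", "energy-pricing or siting restrictions"],
        "protection": ["diversify suppliers and regions", "improve model and system efficiency"],
    },
    "talent": {
        "attacks": ["immigration restrictions", "labour-market rules that favour rivals"],
        "protection": ["distributed hiring", "retention and training programmes"],
    },
}

for asset, entry in asset_map.items():
    print(f"Asset: {asset}")
    for attack in entry["attacks"]:
        print(f"  possible attack: {attack}")
    for move in entry["protection"]:
        print(f"  protective investment: {move}")
```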
Another way to do this is functional analysis. Functional analysis essentially entails asking of your product, service or organisation how it breaks. Figuring out how something breaks is key to understanding how it should be attacked. If you have an advertising-funded business model, you know that it breaks if you do not have the ad revenue - but how, then, does the ads component break? Is it through reduced accuracy in the ads? Ads being prohibited altogether? The evolution of user interfaces and interaction models where ads cannot be inserted? All of these attack vectors are interesting to explore from a policy perspective.
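One way to capture this is as a small failure tree: start from the function and recursively ask how each part breaks. The sketch below encodes the (hypothetical) ad-funded example from the paragraph above:

```python
# A sketch of functional analysis as a failure tree: start from the function
# ("the business runs on ad revenue") and keep asking how each part breaks.
# The tree is a hypothetical illustration of the ad-funded example.
failure_tree = {
    "ad revenue": {
        "targeting accuracy degrades": ["restrictions on data use", "signal loss from platform changes"],
        "ads prohibited or capped": ["sector-specific ad bans", "limits on ad load"],
        "ads cannot be inserted": ["new interfaces without ad surfaces", "agents that strip ads"],
    }
}

def walk(node, depth=0):
    """Print each function and the ways it can break, one indent level per layer."""
    for name, children in node.items():
        print("  " * depth + f"- {name}")
        if isinstance(children, dict):
            walk(children, depth + 1)
        else:
            for cause in children:
                print("  " * (depth + 1) + f"- {cause}")

walk(failure_tree)
```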
A good policy red teaming exercise should produce two basic outcomes. The first is a list of policy proposals - formulated as concretely as possible - that represent rational attacks (unique and asymmetric) on the organisation. The second is a set of possible moves in response, and here it is important to index on the asymmetry: an attack that is symmetric lends itself to a first-mover advantage in embracing the policy, if you believe that your cost of adapting to it is lower than your competitors'.
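Those two outcomes can be captured in something as simple as a table or a few lines of code. The sketch below - with invented proposals and adaptation costs - encodes the decision rule described above: mitigate asymmetric attacks, and consider embracing symmetric ones early if your cost of adapting is lower than your competitors':

```python
# A sketch of the two outputs of a policy red-team exercise: concrete proposals,
# each tagged as symmetric or asymmetric, plus a candidate response.
# All proposals and costs are hypothetical.
from dataclasses import dataclass

@dataclass
class Proposal:
    text: str
    asymmetric: bool            # hits us uniquely, or hits the whole industry?
    our_adaptation_cost: float
    rival_adaptation_cost: float

proposals = [
    Proposal("Reclassify gig workers as employees", asymmetric=False,
             our_adaptation_cost=2.0, rival_adaptation_cost=5.0),
    Proposal("Interoperability mandate aimed at our core network effect", asymmetric=True,
             our_adaptation_cost=8.0, rival_adaptation_cost=1.0),
]

for p in proposals:
    if p.asymmetric:
        response = "mitigate: redesign so the attack no longer lands asymmetrically"
    elif p.our_adaptation_cost < p.rival_adaptation_cost:
        response = "embrace early: first-mover advantage, competitors eat the adjustment cost"
    else:
        response = "contest or delay: adapting costs us more than it costs rivals"
    print(f"{p.text} -> {response}")
```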
Red teaming applications and methods
Red teaming originated as a military term, where the opposing force (OPFOR) was often referred to as the red team. The idea then evolved into what in intelligence circles is sometimes called alternative analysis. Red teaming is becoming more and more important as the complexity and capability of opponents increase.1
Today red teaming is increasingly used in developing large language models, since these have now reached a level of complexity where red teaming beats other debugging methods (and in one way we can think - correspondingly - of red teaming as debugging an organisation!).2 The use of artificial intelligence in red teaming - for analysis and attack development - is also likely to grow fast, even automating some uses of red teaming in cyber security.3
The most extensive uses of red teaming today may well be in the field of information security, where penetration testing and ethical hacking are two examples of possible red teaming applications. These red teaming efforts can also include - and probably should include - organisational penetration tests and attacks, exploring more general weaknesses. This is a domain of its own, however, so I will not dig more deeply into it - just note that it is a response to the increasing complexity of our systems and the growing number of attacks on these systems.
Red teaming is surprisingly simple to get started with, and there are plenty of resources for those who are interested. One good place to start is the UK government's Red Teaming Handbook (3rd edition).4 The handbook combines scenario thinking - identifying the key drivers that determine an outcome and varying them across uncertainties - with red teaming in an interesting way. Red teaming here becomes a way to challenge the assessment of the key drivers and where they are likely to end up.
Another version of red teaming is built on what is called assumption-based planning, where the key is to articulate all the assumptions we make when we plan, so that we can scrutinise them clearly.5 Here the red teaming effort consists in both uncovering lost or hidden assumptions and challenging the assumptions in different ways. The methodology consists of surveys asking for assumptions and then gathering a team to explore them.
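In practice this can be as simple as an assumption register: each assumption, a rough sense of how load-bearing it is, and the red team's challenge to it. A minimal sketch, with hypothetical entries:

```python
# A sketch of an assumption register in the spirit of assumption-based planning:
# surface the assumptions behind a plan, note how load-bearing each one is,
# and attach the red team's challenge. Entries are hypothetical.
assumptions = [
    {"assumption": "Regulators will keep treating our workers as contractors",
     "load_bearing": 5, "challenge": "Several jurisdictions are drafting reclassification rules"},
    {"assumption": "Ad-funded growth continues at current margins",
     "load_bearing": 4, "challenge": "Signal loss and ad-load limits compress margins"},
    {"assumption": "Talent supply keeps pace with our hiring plan",
     "load_bearing": 3, "challenge": "Competitors are bidding up the same narrow pool"},
]

# Review the most load-bearing assumptions first: those are where a broken
# assumption breaks the plan.
for a in sorted(assumptions, key=lambda a: a["load_bearing"], reverse=True):
    print(f"({a['load_bearing']}/5) {a['assumption']}\n    challenge: {a['challenge']}")
```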
A lot of red teaming is about discovering cognitive vulnerabilities in your own organisation - biases and heuristics that work in a majority of cases, but create serious weaknesses in a subset of critical situations. This work - finding your organisation's blind spots - is really hard, and sometimes it is worthwhile bringing in third parties to help. This, too, presents a challenge: it is really hard to find good consultants who will tell you where you are wrong. The economics of this makes it tricky, since the consultant will hope to have a long relationship with you. One possible technique here is the one-off consultant contract that clearly stipulates an end to the relationship and no continuing business for at least 5 years. Few consultants offer such contracts, even though they should be able to do so at a premium, and a rational buyer should want some percentage of consultants on such contracts, to extract maximum value in honesty and disagreement.
It is also worth coming back to the concept of "alternative analysis" as another method to explore. Here the key idea is to provide a different and conflicting version of the answer to the Rumeltian question "What's going on here?" and really dig into the differences. In the Rumelt tradition of strategy, all strategy starts from a diagnosis, and this is one key point where it might be helpful to really seek alternative understandings of the environment we act in, as well as of what the problem we are dealing with actually is. A simple version of red teaming is simply to argue that we are, collectively, as an organisation, answering the wrong question.
So what?
Red teaming is increasingly attractive in complex environments, and can be used as a key means of securing our own position. Statistics indicate that red teaming - especially in information security - is becoming more and more common.6
One of our time's most subtle strategic thinkers - Deng Xiaoping - summed up his strategy for China in a very simple statement: "to observe carefully, secure our position, hide our capacities, bide our time, be good at maintaining a low profile, never claim leadership".7 This concise strategic statement captures a lot of what we have been circling around in these notes too: the focus on capacities or capabilities, the extreme importance of observation, the virtue of hiding and biding time (waiting), and carefully avoiding putting one's head above the parapet. Red teaming can help with both securing our position and biding our time, and so should be a key tool in our toolbox.
In the end we win because we know our own weaknesses better than anyone else.
Thanks for reading,
Nicklas
See e.g. Gold, Ted, and Bob Hermann. The Role and Status of DoD Red Teaming Activities. Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, Washington, DC, 2003. https://apps.dtic.mil/sti/pdfs/ADA417931.pdf
See e.g. Rando, Javier, et al. "Red-Teaming the Stable Diffusion Safety Filter." arXiv preprint arXiv:2210.04610 (2022), Ganguli, Deep, et al. "Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned." arXiv preprint arXiv:2209.07858 (2022) and Perez, Ethan, et al. "Red teaming language models with language models." arXiv preprint arXiv:2202.03286 (2022).
See e.g. Yuen, Joseph. "Automated cyber red teaming." (2015). https://apps.dtic.mil/sti/pdfs/ADA618584.pdf and de la Vallée, Paloma, Georgios Iosifidis, and Wim Mees. "Cyber Red Teaming: Overview of Sly, an Orchestration Tool."
Available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1027158/20210625-Red_Teaming_Handbook.pdf; other handbooks include Kardos, Monique, and Patricia Dexter. A Simple Handbook for Non-Traditional Red Teaming. Defence Science and Technology Group, Edinburgh, SA, 2017.
See e.g. Popper, Steven W. "Designing a Robust Decision-Based National Security Policy Process: Strategic Choices for Uncertain Times." Adaptive Engagement for Undergoverned Spaces (2022): 287, and Dewar, James A., et al. Assumption-Based Planning: A Planning Tool for Very Uncertain Times. RAND Corporation, Santa Monica, CA, 1993, developed further in Dewar, James A. Assumption-Based Planning. Cambridge University Press, 2002.
See https://www.exabeam.com/security-operations-center/2020-red-and-blue-team-survey/ and https://www.darkreading.com/endpoint/68-of-companies-say-red-teaming-beats-blue-teaming
See Black, J., Military Strategy: A Global History (2022), p. 255. Black notes that China has now abandoned this strategy, and his prediction is that this will severely weaken their position.