Most people start their speeches by saying their impact outweighs, listing off impact calc criteria they learned as a novice, and then just reexplaining the impact under each one. There is a better way.
Here’s an example of a common impact overview:
“Econ decline outweighs and turns the case on probability. It erodes trade connections which lessens the opportunity cost of conflict. It promotes domestic crisis which incentivizes leaders to produce ‘rally around the flag’ effects through diversionary military action. When leaders initiate a crisis there is an inherent risk of escalatory spirals driven by miscalculations causing nuclear war. Outweighs on magnitude because you can only die once and time frame because impacts are triggered by the mere perception of impending decline.”
This sounds reasonable. The problem is that, with respect to helping the judge decide the debate, it only minimally moves the needle. It tells the judge what your impact is and gets a head start on answering defense – nothing more. It pretends to do a comparison by saying the word ‘outweighs,’ but doesn’t do any actual comparison. You can tell because if there were a comparison, there would be some talk of what the other side has said and why it is worse.
By giving a canned overview like this you are giving up an opportunity to do something way more useful. Let’s get into why this is a problem and how we can do impact debating better.
Why Do We Care About Impactful Comparisons?
Everything you do in a 2AR / 2NR should be filtered through one lens: maximizing the number of robust paths to victory. Whenever you make an argument, it should be doing one of two things: a) strengthening an existing path to victory, or b) creating a new path to victory. Any words that do not do this are hurting you by trading off with better words that would help.
We talk a lot about efficiency in terms of the number of words used to express an idea. This gets at a more fundamental issue of strategic inefficiency: spending time making arguments that do not increase your odds of victory. Judged by this criterion, there are two broad ways to ‘waste’ ideas in your final rebuttal:
‘Win more’ arguments. This happens when arguments you make only matter under a set of conditions where you are already extremely likely to win. A common example is when you have an impact, your opponent has conceded that it would be very bad if it happened but loaded up on arguments about why it wouldn’t happen, and you spend lots of speech time repeating why it would be bad if it happened. This helps you if you have already won that your impact will happen, but if you’ve won that, you will probably win anyway.
Fighting doomed battles. This happens when arguments you make only matter under a set of conditions that is out of reach. An example is when you rant about how much probability matters even though you have a very improbable impact, or when you go for an advantage that doesn’t connect to a solvency deficit against the neg’s CP. Doomed battles can also be produced by overstretch. Remember – strategic choice is very important. If you spend two sentences going for conditionality and it’s not dropped, it might seem like you have created a new path to victory, but in reality, you have likely wasted those two sentences because the odds of a judge caring about your throwaway argument are very low.
Evaluating our example impact overview according to these metrics, it is easy to see why it is suboptimal.
First, it is a ‘win more’ argument because it is strategically duplicative with answers to impact defense. Assuming the best-case scenario – the judge believes every word you have said – the only thing that is accomplished is having your own impact. If you have already defeated your opponent’s impact, that might be enough. But if you’ve already defeated your opponent’s impact, you were probably already going to win. Your pretty description does not create any ballots in your favor that didn’t exist before.
What a judge wants to hear are instructions on what to do if both you AND your opponent have won some chance of an impact. What are some criteria that they can use as tiebreakers? Which impact best passes the ‘laugh’ test, beyond the granular details of the flow? Both impacts might be fast, but which is faster, and how can the judge tell?
Second, impact overviews are often fighting doomed battles because they are occurring in a vacuum. Merely describing your argument does not demonstrate any awareness of what is going on in the debate, whether that be CPs that solve a large risk of your impact, or defense that, if we’re being honest, makes your impact a lot less probable than you want it to be. This is a problem because even if you were making a real comparison, it would not help the judge vote for you since the prerequisite to your comparison being a reason your side is better has not been satisfied.
Better Impact Description: Reasonable Core x Uncertainty Multiplier x Worst Case Multiplier
Before getting into examples of specific factors or benchmarks, I want to suggest a different way to think about impact description. You have probably heard of:
Risk = Probability x Magnitude
This is a helpful way to analyze risks, but it is too abstract and too connected to impact defense to be helpful in comparing them.
Let’s assume an actual debate is occurring, everyone’s impacts are mitigated, no one has dropped everything, and there is approximate parity between the quality of impact and impact defense evidence.
The following components help move the needle in this situation:
Impact = Reasonable Core x Uncertainty Multiplier x Worst Case Multiplier
Let’s unpack each of these.
Reasonable Core
Every stupid impact can be massaged into a non-stupid related premise. Start with this reasonable version. Doing this first shuts down presumption ballots since it gets a probable version of your impact off the ground and establishes that some version of what you have said corresponds to the judge’s experience of reality. You are looking for language that would not sound out of place in a Congressional hearing involving serious, qualified people.
One species going extinct might not kill everyone, but the theory behind biodiversity has reasonable-sounding concepts like keystone species and ecological interdependencies. Disease impacts killing everyone seems implausible at face value, but disease creating broad-based disruption to society and killing tens of millions of people is within our range of experience. Economic declines causing nuclear wars seems implausible at face value, but economic declines contributing to radicalization, social dislocation, populism, and nationalism seems much more likely. AI might not turn into a computer god that enslaves humanity, but it might result in social disruption as it displaces workers or make military hardware more dangerous or unpredictable as its integration into combat increases.
Even before going further, it is worth remembering that while this ‘reasonable core’ often won’t rise to the level of extinction (it probably doesn’t in any of the above examples, aside from biodiversity), it is often still a high-probability, high-magnitude impact that can outweigh what your opponent has said by itself.
Uncertainty Multiplier
How sure are you about the range of possible outcomes if the impact is allowed to happen? And at a more fundamental level, how sure is it even possible to be? When you have a stupid impact, you want this uncertainty to be as large as possible. To vote for impact defense is ultimately to roll the dice; the judge is gambling that the impact won’t be as bad as you say based on the information they are provided in the debate. Make this gamble explicit.
We’ll stick with AI, since it is a common terminal impact. To vote neg on AI impact defense, a judge must evaluate the sum of technical and political science evidence and arguments presented by a variety of cards and experts and conclude the risks of AI are overstated. You should make a fundamental epistemological challenge to the judge’s and impact defense authors’ ability to do this.
Whatever else you might think about AI, it is a transformative technology. It interfaces with the internet and with the military—two of the most powerful forces in society—and allows them to make decisions at a rate and level of autonomy unparalleled in human history. It also has the potential to recursively improve, since AI can support inventive processes that yield new generations of itself. The result could be an explosion of technological growth that unfetters humans, allowing us to achieve things that previous generations can only dream about.
It is the height of hubris to assume that anyone from their vantage point in the present moment can definitively say how this will go, especially if their reasoning is based on technological constraints. It is like asking someone in 1980 to predict the effects of the internet. The idea that one can be confident about the effects—one way or the other—is farcical.
Worst Case Multiplier
You have set up that to let your impact happen is, at best, a gamble. Now, set the stakes. Assume your ‘reasonable core’ metastasizes and takes on its most destructive form. You may know this as the ‘tail-end risk’ in discussions of climate change. How bad can it get?
Let’s assume AI has suffered a control failure. Remember from earlier—the exact probability of this happening is unknown and unknowable. It is surely not zero. An out-of-control AI can wreak all sorts of havoc. Its interests might be misaligned with humans, causing it to derail society in subtle and unpredictable ways. It may even view humans as an impediment, causing it to work against our interests and even directly damage us. In the context of world-threatening technologies like nuclear weapons, something as simple as a bug could cause dangerous weapons to be used, threatening humanity.
This combination gets at the same issues as ‘probability times magnitude,’ but in a manner that is more helpful to a judge performing evaluations. It doesn’t just answer impact defense, it builds a framework for how a judge should evaluate impact defense in the first place.
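To make the contrast with ‘probability times magnitude’ concrete, here is a toy sketch of the three-factor framing. All of the numbers and both example impacts are invented purely for illustration – in a real debate these judgments are qualitative, not numeric – but the arithmetic shows how a modest ‘reasonable core’ can still dominate once the uncertainty and worst-case multipliers are factored in:

```python
# Toy illustration of Impact = Reasonable Core x Uncertainty Multiplier x Worst Case Multiplier.
# Every number here is invented for illustration; the framework is qualitative in practice.

def impact_score(reasonable_core: float, uncertainty: float, worst_case: float) -> float:
    """Combine the three factors multiplicatively, mirroring the formula above."""
    return reasonable_core * uncertainty * worst_case

# Hypothetical AI advantage: a modest reasonable core (labor and military disruption),
# but a huge uncertainty multiplier and a catastrophic worst case.
ai = impact_score(reasonable_core=3.0, uncertainty=4.0, worst_case=5.0)

# Hypothetical great-power-war DA: a larger reasonable core, but leaders'
# self-preservation instincts cap both the uncertainty and the worst case.
great_power_war = impact_score(reasonable_core=4.0, uncertainty=1.5, worst_case=2.0)

# The AI impact scores higher despite the smaller reasonable core, because the
# multipliers do the work that raw 'probability times magnitude' talk obscures.
print(ai, great_power_war)
```

The point of the sketch is only that the framework is multiplicative: cutting any one factor to near zero (which is what good impact defense does) collapses the whole product, which is why the uncertainty and worst-case arguments are worth defending explicitly.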
Better Metrics For Comparison
So far, none of this is a comparison—we still don’t know how any of these criteria relate our impact to our opponent’s. How do you do that?
In short: use your brain! Explain why your advantage matters more than what the other side is going for, the way you would to an actual smart person. Pretend you are talking to your social studies teacher or writing a letter to your congressperson.
Let’s get into a few factors you can use that gel with our newly refined impact explanation and might register with a normal person. Remember that this is not a checklist; you should use comparisons that make sense in each context. This list also isn’t exclusive; you should obviously feel free to come up with your own comparisons if these are a poor fit. The key is to provide specific direction to a judge, beyond simply ‘magnitude’ or ‘probability.’
Self-Preservation
Most people agree that it is the default tendency of humans to try to avoid self-destruction. The international relations theory of realism says that this premise applies to heads of state and that heads of state translate this premise to avoiding national self-destruction as well. These assumptions make it extremely improbable, at a baseline, that certain types of extinction events will occur.
With a few debatable exceptions, there has not been a conflict in which great powers fought one another directly since World War Two. There have certainly been close calls. However, a common thread connecting these close calls has been the extreme care taken by the participating leaders to avoid the worst-case scenario. Kennedy and Khrushchev both made massive concessions for fear of nuclear escalation during the Cuban Missile Crisis. The Russian invasion of Ukraine is another case in point. Debaters have often read Russia war impacts that simply assumed any conflict would go nuclear because of ‘miscalculation,’ the ‘fog of war,’ or ‘escalation spirals.’ But experience has shown that, in reality, leaders in NATO and Russia have painstakingly avoided taking steps that would cross the other side’s escalatory red lines, and a broader war has been avoided.
Other kinds of threats can be subjected to a similar analysis. Nuclear technology is designed with fail-safes and tens of thousands of people have dedicated their lives to ensuring existential accidents don’t occur. A similar idea holds for blackouts and critical infrastructure systems.
The upshot is that wars (especially between great powers, and especially involving the use of nuclear weapons), as well as existential catastrophes that require leaders to choose self-destruction or an extremely large number of experts to simultaneously suffer a critical lapse in judgment, are extremely unlikely to occur. By contrast, impacts that are the product of a coordination or collective action problem (no single set of individuals can ‘veto’ the impact or is motivated to do so), or impacts that are not the product of human decisions at all, have a higher probability of reaching their worst possible form. Such impacts have a higher uncertainty multiplier.
This would suggest that one should be more concerned about harms stemming from the environment or from rogue actors deploying disproportionately powerful technologies than from rational heads of state taking steps that will result in self-annihilation.
Historical Precedent
What does history tell us about our impacts? This can go in both directions.
Some impacts have enough similarities to events that have taken place before that history should make us more concerned about them. In other cases, historical precedent reveals that concerns are overblown.
Either way, history can be useful to invoke. The world is very complicated. It is very difficult to make predictions about things. We can get a better sense of what things will be like in the future if those things or close analogs have happened before. We should cast a critical eye on theories of threat that presume a fundamental departure from the way similar situations have played out in the past.
There is also value to pointing out that an event is unprecedented in history. If one side’s impact has happened before and the other’s hasn’t, the impact that hasn’t happened before benefits from a larger uncertainty multiplier.
‘Threat Multiplier’ / Generator Function
Daniel Schmachtenberger is an existential risk scholar whom most people don’t read. He is a founding member of The Consilience Project, which aims to elevate the discourse on existential risk.
Schmachtenberger suggests that we should think about existential risk using the lens of ‘generator functions’: simple processes which, when layered or combined, produce complex, unpredictable, and negative effects. He argues that certain generator functions contribute to structurally increasing the probability of existential risks occurring. In the context of climate change, this idea is often expressed using the term ‘threat multiplier,’ which suggests that pre-existing risks will interact with warming in a way that makes them more dangerous.
Though this way of thinking is at odds with the scenario-based approach that pervades many 1ACs, it offers a helpful vocabulary for describing ‘structural’ threats. Consider an advantage such as ‘education.’ A common way to impact improvements in educational attainment is to focus on its economic benefits, and to say that there are conflict risks from economic decline or losses in economic competitiveness.
This does not really get at the main way educational attainment interacts with existential risk, which is that a society with lots of smart people is stronger, more nimble, more adaptive, and more likely to learn from the mistakes of the past. This is difficult to connect to a ‘scenario’ in the traditional sense, but that is a feature, not a bug. A more educated population might not have voted for Trump. A more educated population might experience lower levels of poverty, reducing the kinds of social alienation that contribute to xenophobia and right-wing nationalism. A more educated population might be more likely to think critically about climate skepticism, building political momentum for avoiding ecological disasters. A more educated population might be more likely to produce a genius inventor who discovers a universal vaccine or a genius political scientist who figures out how society might finally rid itself of weapons of mass destruction. Undereducation is thus a ‘generator function’ that contributes to many different existential risks—even ones we cannot yet imagine. To vote against such an impact because of impact defense that mitigates the probability of one threat, such as climate change, would be nonsensical.
Another example is disinformation. Disinformation certainly connects to traditional impacts—spoofing of nuclear systems is one example of disinformation and can certainly have very dangerous effects. But disinformation also has broader ramifications. Democracy works by translating what people want into decision-making by elected government officials. When what people want is informed by ideas about the world that don’t correspond with reality, that translates into government decisions that are based on inaccurate priors.
The harms of this can’t be predicted in advance, but we can look back at history to get some idea of what this kind of distortion causes. Would the Iraq war have happened if our epistemic commons remained healthy in the face of post 9/11 opportunistic securitization? Would ISIS have formed in the vacuum that followed US intervention? Would the trillions of dollars the US committed to the conflict have instead been spent on a robust welfare state that broadly reduced social and economic dislocation? Would Iran have been able to exploit the instability to intervene in the Syrian civil war? Would Russia have gained military and operational confidence from its participation in the conflict? No one can say for sure. What one CAN say is that the stakes of the most powerful nation on Earth charting its foreign policy course based on reality instead of fantasy are massive.
This explanation helps increase your uncertainty multiplier as well as your worst-case multiplier. The range of threats that a generator function can cause is massive and terrifying, and the magnitude of the internal link is unknowable in advance but could be quite high.
Such broad-based interventions against existential risk may therefore be more valuable than narrow interventions whose benefits are contingent on the truth of a series of mutually dependent internal link arguments. Though the effect in any individual situation may not be decisive, the sheer number of situations in which mitigating an x-risk generator function may be beneficial makes it highly likely to be decisive in some number of contexts over time. While this broad intuition resembles the inane ‘conjunctive fallacy’ argument that some teams would like to use to categorically dismiss ‘DAs’ or ‘internal link chains,’ combining that frame with specific internal link mitigation and terminal impact arguments of your own can produce genuinely impactful comparisons.
Timeframe
While this is one of the ‘traditional’ categories of impact comparison, it is more straightforward than the others, so its invocation isn’t as prone to replacing a nuanced explanation with something vacuous or boilerplate.
Some impacts are clearly faster than others. This matters for obvious reasons. It is better to die later than sooner. Avoiding something imminent should be a higher priority than avoiding something far in the future since we can find solutions to distant threats later down the line. An earlier impact can cause a later impact, but not the other way around. These questions implicate the reasonable core, the uncertainty multiplier, and the worst-case multiplier. Urgent threats are worse than distant ones and have a lower probability of naturally resolving themselves.
The key pitfall to avoid is saying your impact is fast without drawing a comparison. Make sure you explain why your impact is faster than your opponent’s, not just urgent in the abstract.
Turning/Subsuming the Other Side
When one side’s impact includes the other’s, that is a good way for the judge to decide that the larger impact outweighs the smaller one without determining the absolute risk of either. This is easier to do when you are going for a ‘generator function’ impact but worth attempting regardless of what kind of impact you have.
Proportionality To Internal Link
Impacts only matter insofar as they connect to an internal link that someone uniquely solves. This means your impact description should be contextualized to your unique internal link.
Suppose you are aff against a process CP. The neg has essentially conceded your case, but they are likely to win that they solve most of the advantage. Your best hope is a mediocre certainty argument.
The typical 2AR will start by describing the impact in a vacuum, saying it is important, and saying it is dropped. A better 2AR will describe the proportion of the conceded impact that is produced by uncertainty and compare that section of the impact to the neg’s internal net benefit.
A helpful approach in this situation is to add linearity or an invisible brink argument – an uncertainty multiplier specific to an impact to a solvency deficit. The CP might solve a lot of biodiversity, but given the complexity of the biosphere, how can we be sure it is enough? The CP might solve a lot of AI research, but R&D is inherently probabilistic and expensive, so any increment of deficit might mean the critical experiment isn’t conducted or the critical hire isn’t made, and innovation lapses. Are you willing to roll those dice for the sake of the net benefit? The way you describe your impact in the first place should synergize with this strategy on the CP page so that your speech has a coherent narrative about what thresholds are important.
So far, we have avoided focusing too much on debate conventions. In a real debate, however, these are worth emphasizing. Conceded defense is one of the surest ways to reduce the uncertainty or worst-case multipliers for your opponents’ impacts, or to increase the size or certainty of your own. If you have a conceded impact or there is conceded defense to your opponent’s impact, it is always worth instructing the judge to give this fact significant weight in their evaluation.
At bottom, there is no easy trick to good impact comparison—only guidelines that can suggest good criteria when they are specifically applied. Everyone can do this at some level, but many debaters fall into the trap of using impact buzzwords that mean nothing by themselves and crowd out substantive comparisons. Hopefully these ideas have inspired you to think of new, real-world, jargon-free ways to explain why your arguments are more important than your opponent’s.