Conch: Competitive Debate Analysis via Visualizing Clash Points and Hierarchical Strategies

Affirmative Side: Artificial Intelligence Poses an Existential Threat
Negative Side: Artificial Intelligence Does Not Pose an Existential Threat
AI as Existential Threat
Doom

They focus on a looming catastrophic threat from advanced AI.

(Aff) Urgent: insists immediate caution is critical.

(Neg) Doubt: sees the threat as uncertain or overstated.

Alignment Problem

They debate the challenge of programming AI with human values.

(Aff) Peril: warns of catastrophic outcomes from misaligned AI.

(Neg) Manage: believes alignment is feasible with oversight.

Survival Odds

They debate what probability threshold justifies treating AI as an existential threat.

(Aff) Grave: emphasizes even a small risk is too high.

(Neg) Slim: suggests the chance is negligible or not immediate.

Human Control over AI
Dominion

They debate whether humanity can remain in command of advanced AI.

(Aff) Lost: believes humans eventually lose command over superintelligence.

(Neg) Kept: believes we can keep AI under human authority.

Regulation Feasibility

They weigh the ease or difficulty of passing effective AI regulations.

(Aff) Essential: insists rules are vital to prevent catastrophic misuse.

(Neg) Excess: sees over-regulation stifling innovation or being unnecessary.

Failsafes

They consider backup plans or kill switches to keep AI in check.

(Aff) Needed: demands robust measures to prevent AI takeover.

(Neg) Optional: contends simpler precautions already suffice.

Source of AI Risk
Power Gap

They question whether concentrated control of AI and data endangers marginalized communities.

(Aff) Inequity: highlights skewed power structures enabling AI-based oppression.

(Neg) Balance: states distribution of tech can be fair with correct policies.

Root Cause

They debate whether AI itself or human misuse is the main danger.

(Aff) Machine: asserts AI itself can become lethal beyond human intention.

(Neg) Human: counters that only people cause real harm via AI.

Comparative Threat

They compare AI’s risk to other powerful technologies like nuclear or biotech.

(Aff) Unique: sees AI as distinctly more dangerous than prior tech.

(Neg) Similar: sees it as a familiar risk, manageable through regulation.

Pace of AI Development
Rapid Advance

They discuss the swift progression of AI capabilities.

(Aff) Frenzy: warns of uncontrollable acceleration toward superintelligence.

(Neg) Hype: suggests the pace is overstated or manageable.

Consensus Alarm

They highlight widespread expert warnings of AI catastrophe.

(Aff) Majority: cites experts who broadly believe in a looming meltdown.

(Neg) Skeptic: questions whether polls prove real risk or just fear.

Long Run

They debate how much distant AI threats matter right now.

(Aff) Inevitable: claims future crises must be addressed early.

(Neg) Remote: sees no immediate urgency, believing time remains.

Societal and Human Impact
Civic Stability

They examine how AI can threaten democracy, privacy, and social trust.

(Aff) Chaos: warns that misinformation and surveillance erode freedom.

(Neg) Order: claims regulation and awareness can preserve democratic norms.

Algorithmic Bias

They discuss discriminatory outcomes from AI systems in credit or housing.

(Aff) Harm: emphasizes AI fueling systemic inequities.

(Neg) Fixable: believes better data and oversight can solve these biases.

Job Disruption

They debate the impact of AI replacing many positions, risking social upheaval.

(Aff) Collapse: fears massive unemployment endangering stability.

(Neg) Shift: sees new roles emerging, offsetting job losses.

Session 1
DEBATER A1
AI as Existential Threat
Doom: Urgent
Pace of AI Development
Rapid Advance: Frenzy
I move that this house believes that artificial intelligence is an existential threat. To open the case for the proposition, I call up Sultan Kokar, Deputy Director of Press at the Union.

Madam President, Honourable Members, I am honoured to open this seminal debate before you tonight. The question of artificial intelligence and the role it plays in our futures has gripped the imagination and fears of our times. With the likes of advanced chatbots like ChatGPT, AI has finally entered a very public mainstream in a way that it had not done thus far. However, make no mistake, this is not a debate about ChatGPT or its equivalents. This is not a debate about AI writing better essays than us or producing more complex art. Nor do we on the proposition dispute the unending benefits that the application of advanced AI can have in the fields of medicine, tackling poverty, democratising access to resources, etc. No, this is a debate about the acute existential risk posed by artificial intelligence systems with capabilities that we in this chamber can hardly imagine. Systems that we are hurtling towards at breakneck speed with little to no conception of the danger we are nurturing.
DEBATER A1
But before I delve into the imminent downfall of humankind, it falls upon me to introduce your speakers for the opposition. Speaking first, we will have Sebastian Watkins, the Union's Librarian. Seb is a really interesting person. He chairs Library Committee and his favourite hobby is chess. They say that men are always thinking about the Roman Empire, but Seb takes that a step further with his phone wallpaper. A shining red flag adorned with a golden eagle and the motto, Senatus Populusque Romanus. If only the victory banner of the empire had helped him in his election for librarian.

Your second opposition speaker will be Yeshi Milner, who is the Executive Director and Co-Founder of Data for Black Lives. She aims to leverage data science and its possibilities to create meaningful change in the lives of Black people. Long involved in data science and social activism, she has worked tirelessly to advocate against big data and big tech and expose the inequalities that pervade our current data systems. Her work has resulted in policy changes and she was recognized by Forbes as 30 under 30 in 2020. We are honoured to host her here tonight. I would caution you, however, that between her and Seb, there are two Americans on the opposition. So be careful how you vote tonight.

Your next speaker will be Anna Roska, who is a member of the Secretary's Committee here at the Oxford Union. I'm sure that all our colleagues will agree that she's an incredibly hardworking and committed member of committee. However, she studies PPE. Nevertheless, I'm excited to hear her contribution to this debate.

And your final speaker on the opposition will be Professor Eric Xing. After watching some of his interviews, I came to learn that he does not like listening to all of his credentials. So bear with me while I engage in a little psychological warfare. Professor Xing is the president of the Mohamed bin Zayed University of Artificial Intelligence, the world's first university dedicated to AI.
He is an accomplished and esteemed researcher, having held positions at Carnegie Mellon, Stanford, Pittsburgh, and Facebook, and is also the founder of Petuum Inc. He has authored or contributed to more than 400 research papers and has been cited more than 44,000 times. Again, we are honoured to have him with us tonight.
DEBATER A1
AI as Existential Threat
Alignment Problem: Peril
Human Control over AI
Dominion: Lost
Now, as I stated earlier, this is not a debate about simple chatbots like ChatGPT, but rather about more advanced, even hypothetical, artificial general intelligence systems. What are the characteristics of such technologies? Well, most researchers agree that an AGI would be able to reason, represent knowledge, plan, learn, communicate naturally, and of course, integrate these skills amongst each other towards completing a given goal. Though such technology is, to some extent, hypothetical at the moment, a 2022 survey did find that only 1.1% of researchers felt it would never exist. More than half said it would emerge in the next few decades, and the leaders of OpenAI argue in the next 10 to 20 years.

Now, though such technology would certainly come with many benefits, it would also bring enormous risks. These centre around AI control and alignment. Although such a technology would inevitably be programmed by us, humans, it would be very difficult to instil it with the full range of human values and ethics. Human values, emotions, and ethics are broad, complex, and, as I'm sure you will agree, often extremely illogical. Short of plugging an AI into our own brains 24-7, it is very difficult to align it with these in their entirety. If a superintelligent AI determines that adopting values like concern for human life would hinder the goals we have programmed it to fulfil, then why wouldn't it resist attempts to program such values into it? And unless we are successful in fully aligning such a superintelligence with the entire range of human morality and constraint, then we cannot expect it to just be on our side. For a while, maybe.

One leading researcher proposes the following thought experiment. Imagine that you task an AI system with the simple job of making as many paper clips as possible.
It will quickly come to understand that its job would be far easier if humans were out of the way, since a human could turn it off at any point, and that would mean fewer paper clips. With this goal, the AI would work towards a future with many paper clips and no humans. Now, this example may seem trivial, but it demonstrates the unavoidable risk that a technology that can think for itself, independent of us, poses.

Let's translate the same example onto something more significant. Suppose we task an AI technology with reducing inequality in our society, something more realistic. The AI could determine, like we often do, that the solution is closing the wealth gap. But it might determine that the way to close it is not to raise the poor towards the rich, but to make everyone poorer. And in doing so, it might choose to lower standards of living, increase poverty, increase crime, because we haven't specified that these things are important. It achieves its goal, but at a cost that we did not want or anticipate. In other words, we can shape AI to prevent one outcome, but to preempt every possible risk is impossible.

In order for an AI to be risk-free altogether, it must be perfectly aligned with zero room for error. Since human morality, ethics, and desires are inherently subjective and prone to bias, achieving this universally perfect alignment is not feasible. I don't mean to propose some sort of Ultron-style AI takeover, but if an AI comes to the very straightforward realization that acquiring greater power is conducive to fulfilling virtually any objective, it could copy itself onto other systems, infiltrate manufacturing lines, evade shutdown, and even appear aligned, hiding behaviour that it recognizes is unwanted by its creators. Consider that for a moment.
An AI that is misaligned and can hide that from us. To prevent itself from being switched off, it might jump from a computer in San Francisco to one in Singapore, from Singapore over to London. Before we know it, it has multiplied itself onto thousands of systems worldwide, and all the while we aren't even aware of its true intents. This may sound like the work of science fiction, but we're already on the way to this becoming reality.

In 2021, one AI model was trained to grab a ball, but learned that it could simply place its hand between the ball and the camera to give the illusion that it had succeeded. Not only was the AI in this instance able to outsmart its creator, but this demonstrates the fallibility of human programming and expectations. Even ChatGPT, the bane of every tutor's existence, is able to fulfil some of the characteristics that I attributed to AGI earlier. It can learn from our responses, it represents knowledge, and it can produce natural-sounding language. The technology I've described may seem far off, but we are closer than we think. I'm sure that my far more knowledgeable colleagues will expand on the technical details of the existential risk posed by AI.
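[Editor's note] The inequality thought experiment in this speech is, at bottom, a claim about misspecified objectives: an optimizer pursues exactly the metric it is given, not the intent behind it. A minimal sketch (all numbers and function names are hypothetical, chosen purely for illustration) makes the failure mode concrete:

```python
# Toy illustration of objective misspecification: an "optimizer" told only
# to minimize the wealth gap finds the degenerate solution of making
# everyone equally poor, because nothing in the objective says that total
# welfare matters. All figures are invented for illustration.

def wealth_gap(incomes):
    """The (misspecified) objective: spread between richest and poorest."""
    return max(incomes) - min(incomes)

def naive_optimizer(incomes):
    """Greedily drive the objective to zero using the only action modelled
    here: lowering incomes. A gap of zero is reached by dragging everyone
    down to the poorest person's level."""
    floor = min(incomes)
    return [floor for _ in incomes]

society = [20_000, 45_000, 90_000, 250_000]
flattened = naive_optimizer(society)

print(wealth_gap(society))    # 230000 before "optimization"
print(wealth_gap(flattened))  # 0 afterwards: a perfect score
print(sum(flattened))         # but total wealth has collapsed to 80000
```

The optimizer earns a perfect score on the stated objective while destroying most of the wealth, for the same underlying reason the ball-grabbing model in the anecdote above satisfied its camera-based reward by occluding the lens: the objective never penalized the degenerate solution.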
DEBATER A1
AI as Existential Threat
Alignment Problem: Peril
Human Control over AI
Dominion: Lost
The point I would like to leave you with is this. Human morality, ethics, and wishes are both incredibly complex and utterly confused. Not only is it effectively impossible to program these into an AI system in any meaningful way, but we ourselves can hardly decide what human morality even looks like. We don't even know what human morality is.

So, these are the facts of the debate. We do not know for certain how or if we can control the AI we are fast in the process of building, number one. We do not know to what extent this AI will be aligned with our values and desires, number two. And finally, we do not even know what our values and desires are. However likely or unlikely you personally believe the existential threat from AI is to materialize, it is indisputable that this threat does and will exist. And that is what we are debating tonight.

All of us sitting here tonight pride ourselves on being intelligent, critically thinking people. Do not leave the future of humankind, your future, up to chance. Artificial intelligence at its current rate of development poses a distinct existential risk that we are unprepared to deal with. Vote with the proposition tonight. Thank you.