Conch: Competitive Debate Analysis via Visualizing Clash Points and Hierarchical Strategies
They debate whether advanced AI poses a looming catastrophic threat.
(Aff) Urgent: insists immediate caution is critical.
(Neg) Doubt: sees the threat as uncertain or overstated.
They debate the challenge of programming AI with human values.
(Aff) Peril: warns of catastrophic outcomes from misaligned AI.
(Neg) Manage: believes alignment is feasible with oversight.
They debate what probability threshold justifies treating AI as an existential threat.
(Aff) Grave: emphasizes even a small risk is too high.
(Neg) Slim: suggests the chance is negligible or not immediate.
They argue over whether humanity can remain in command of advanced AI.
(Aff) Lost: believes humans eventually lose command over superintelligence.
(Neg) Kept: believes we can keep AI under human authority.
They weigh the ease or difficulty of passing effective AI regulations.
(Aff) Essential: insists rules are vital to prevent catastrophic misuse.
(Neg) Excess: sees over-regulation stifling innovation or being unnecessary.
They consider backup plans or kill switches to keep AI in check.
(Aff) Needed: demands robust measures to prevent AI takeover.
(Neg) Optional: contends simpler precautions already suffice.
They question whether concentrated control of AI data endangers marginalized communities.
(Aff) Inequity: highlights skewed power structures enabling AI-based oppression.
(Neg) Balance: holds that technology can be distributed fairly under the right policies.
They debate whether AI itself or human misuse poses the main danger.
(Aff) Machine: asserts AI itself can become lethal beyond human intention.
(Neg) Human: counters that only people cause real harm via AI.
They compare AI's risks to those of other powerful technologies like nuclear weapons or biotech.
(Aff) Unique: sees AI as distinctly more dangerous than prior tech.
(Neg) Similar: sees it as a familiar risk, manageable through regulation.
They discuss the swift progression of AI capabilities.
(Aff) Frenzy: warns of uncontrollable acceleration toward superintelligence.
(Neg) Hype: suggests the pace is overstated or manageable.
They weigh widespread expert warnings of AI catastrophe.
(Aff) Majority: cites the many experts who warn of a looming catastrophe.
(Neg) Skeptic: questions whether expert polls prove real risk or merely reflect fear.
They debate how much distant AI threats should matter right now.
(Aff) Inevitable: claims future crises must be addressed early.
(Neg) Remote: sees no immediate urgency, believing time remains.
They examine how AI can threaten democracy, privacy, and social trust.
(Aff) Chaos: warns that misinformation and surveillance erode freedom.
(Neg) Order: claims regulation and awareness can preserve democratic norms.
They discuss discriminatory outcomes from AI systems in credit and housing decisions.
(Aff) Harm: emphasizes AI fueling systemic inequities.
(Neg) Fixable: believes better data and oversight can solve these biases.
They debate the impact of AI displacing many jobs and the resulting risk of social upheaval.
(Aff) Collapse: fears massive unemployment endangering stability.
(Neg) Shift: sees new roles emerging, offsetting job losses.
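To make the shape of these entries explicit, here is a minimal sketch of how a clash point and its two opposing stances could be encoded for visualization. The `ClashPoint` and `Stance` names, fields, and the rendering step are illustrative assumptions for this example, not Conch's actual data schema.

```python
from dataclasses import dataclass

# Assumed, illustrative schema: Conch's real data model may differ.
@dataclass
class Stance:
    side: str      # "Aff" or "Neg"
    label: str     # one-word stance tag, e.g. "Urgent" or "Doubt"
    summary: str   # one-sentence gloss of the argument

@dataclass
class ClashPoint:
    topic: str     # the contested issue shared by both sides
    aff: Stance
    neg: Stance

# Encode the first clash point from the list above.
threat = ClashPoint(
    topic="A looming catastrophic threat from advanced AI",
    aff=Stance("Aff", "Urgent", "Insists immediate caution is critical."),
    neg=Stance("Neg", "Doubt", "Sees the threat as uncertain or overstated."),
)

# Compact textual rendering of the clash, mirroring the list format above.
print(f"{threat.topic}\n  (Aff) {threat.aff.label}: {threat.aff.summary}"
      f"\n  (Neg) {threat.neg.label}: {threat.neg.summary}")
```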