Conch: Competitive Debate Analysis via Visualizing Clash Points and Hierarchical Strategies

Authors

Qianhe Chen1

[email protected]

Yong Wang2

[email protected]

Yixin Yu1

[email protected]

Xiyuan Zhu1

Xuerou Yu1

Ran Wang1,3 *

[email protected]

1 School of Journalism and Information Communication, Huazhong University of Science and Technology

2 College of Computing and Data Science, Nanyang Technological University

3 School of Future Technology, Huazhong University of Science and Technology

* Corresponding author

Demo Video

Demo video coming soon

Resources

Try Conch Demo
Paper (Coming Soon)
arXiv
The user interface of Conch. (A) The Overview consists of two parts: the Process View (A0) with its Session View (A1), which show the evolution and interactions of the debate among blocks based on clash points, disagreements, and viewpoints; and the Strategy View (A2), which displays the usage and co-occurrence of debate strategies. (B) The Content View presents the specific textual content (B1), allowing users to closely examine arguments and strategies. Together, these views help users intuitively understand how disagreements and strategies develop and interact over time.

Abstract

In-depth analysis of competitive debates is essential for participants to develop argumentative skills, refine strategies, and ultimately improve their debating performance. However, manually analyzing unstructured and unlabeled textual records of debates is time-consuming and ineffective, as it is challenging to reconstruct contextual semantics and track logical connections from raw data. To address this, we propose Conch, an interactive visualization system that systematically analyzes both what is debated and how it is debated. In particular, we propose a novel parallel spiral visualization that compactly traces the multidimensional evolution of clash points and participant interactions throughout the debate process. In addition, we leverage large language models with well-designed prompts to automatically identify critical debate elements such as clash points, disagreements, viewpoints, and strategies, enabling participants to understand the debate context comprehensively. Finally, through two case studies on real-world debates and a carefully-designed user study, we demonstrate Conch's effectiveness and usability for competitive debate analysis.

Keywords: Competitive debate, debate analysis, clash point, visual analytics

Introduction

Competitive debate is a structured, adversarial form of communication that challenges participants' comprehensive abilities, including logical thinking, expression skills, rapid analysis, argument construction, and rebuttal techniques. By analyzing previous debates, debaters can learn effective strategies, identify common mistakes, and understand how successful arguments are built. However, this process currently relies on manually reviewing long transcripts or videos, which is time-consuming and makes it hard to track how arguments develop or connect across different parts of a debate. For example, a team might establish a strong argument early but fail to defend it later, and such tactical details often go unnoticed during conventional manual analysis. Therefore, automated methods are essential to help debaters and coaches uncover hidden patterns and interaction dynamics in historical debates effectively and efficiently, and thereby improve performance.

Previous work on debate analysis (e.g., online, formal, political debates) has contributed to identifying argument components, labeling claims, and applying predictive models to assess persuasiveness. These approaches offer valuable insights into what is being said in a debate, especially at the sentence level. However, they remain insufficient to address two key challenges faced by debaters and coaches when analyzing competitive debates. On one hand, most existing methods focus on extracting claims, arguments, or keywords from individual sentences. While this is useful for basic content analysis, it overlooks key elements that are essential to competitive debating, such as clash points, disagreements, viewpoints, and strategies. These elements usually connect multiple speaking turns and require examining the full debate context to properly identify and interpret them.

On the other hand, current approaches pay little attention to how debates unfold over time. Competitive debates involve multiple sessions, speaker roles, and strategy interactions across different stages. Debaters often respond to earlier disagreements, adapt their strategies, and establish dominance at critical clash points as the debate progresses. However, most existing methods for debate analysis typically process debates as disconnected claims or arguments, overlooking how opposing views develop over time and interact across different debate stages to form meaningful argument structures. As a result, existing approaches struggle to adequately address two questions fundamental to competitive debate practitioners: what to debate and how to debate.

To address these challenges, we first introduce clash points and refutation strategies as essential elements for analyzing competitive debates. Clash points, representing the core disagreements between the two sides, provide a focused understanding of critical conflict areas. Meanwhile, refutation strategies, such as evidence-based or reasoning-based refutation, illustrate how debaters systematically dismantle opposing arguments. By combining these two elements, we can track the progression of debates more effectively, highlighting both the key points of contention and the strategic approaches used by debaters. This allows for a more detailed and dynamic analysis of debate interactions, capturing both the structure and the tactics central to competitive debates.

We propose Conch, an interactive visualization system that enhances the analysis of debate competitions by summarizing debate dynamics and visualizing key logical interactions. The system consists of two primary views: an overview and a detail view. The overview is divided into two parts: (1) a process view, which illustrates the evolution of clash points, their progression, and the interactions between debaters over time; and (2) a strategy view, which depicts the distribution and co-occurrence of the refutation strategies used throughout the debate. These visualizations give users a comprehensive, in-depth understanding of both the argumentative structure and the strategic choices. Additionally, the detail view complements the overview by displaying the specific textual content of the debate, allowing users to closely examine the arguments and strategies presented. The spiral-shaped design of Conch, inspired by the natural growth patterns of conch shells, uses circular timelines to show how competitive debates develop and interact over time. The effectiveness of Conch was demonstrated through two case studies and a carefully-designed user study. In summary, our main contributions are:

  1. Conch, to the best of our knowledge the first interactive system for hierarchically analyzing clash points and strategy interactions in competitive debates, designed for debaters and coaches.
  2. A compact parallel spiral visualization is designed to represent the temporal and structural evolution of competitive debates centered on clash points. Inspired by Archimedean spirals, the layout is arranged to optimize space while enabling block-wise exploration of debate dynamics across sessions.
  3. An augmented stacked bar chart design is proposed to enable detailed comparison of strategies both within and across debate sessions, providing a comprehensive view of strategy patterns.
  4. Two case studies and a user study are conducted on two distinct datasets with experts on competitive debates to show the usefulness and effectiveness of our system.
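The Archimedean spiral underlying the process view can be illustrated with a minimal layout sketch. The function and parameters below (`spiral_layout`, the inner radius `a`, turn spacing `b`, and angular step) are hypothetical illustrations of the general technique, not the system's actual implementation:

```python
import math

def spiral_layout(n_blocks, a=10.0, b=4.0, step=0.5):
    """Place n_blocks along an Archimedean spiral r = a + b * theta.

    `a` sets the inner radius, `b` controls the spacing between
    successive turns, and `step` is the angular increment (in radians)
    between consecutive blocks. Returns a list of (x, y) coordinates.
    """
    points = []
    for i in range(n_blocks):
        theta = i * step           # angle grows linearly with block index
        r = a + b * theta          # radius grows linearly with angle
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

Because the radius grows linearly with the angle, consecutive turns stay a constant distance apart, which is what makes this family of spirals space-efficient for laying out a long sequence of debate blocks in a compact area.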

User Study Results

Upon completing the debate analysis task, the experimental group using Conch showed shorter analysis times than the other two control groups, as well as a better cognitive load profile for learning.

Completion time. Participants in the experimental group completed the tasks faster than those in the video control group and the text control group. Specifically, participants using Conch (M = 51.1, SD = 20.2) spent 32.8% less time than participants watching the debate video (M = 76.1, SD = 17.8) and 45.9% less than those reading the debate text (M = 94.4, SD = 23.8); both differences were statistically significant (p < 0.01) according to the Mann-Whitney U test. Since debate analysis requires a global consideration of the entire content, individual tasks could not be completed and timed independently.

Total task completion time for the experimental group and two control groups.

Cognitive Load. Cognitive load consists of intrinsic load (IL), extraneous load (EL), and germane load (GL). IL relates to task complexity and participants' prior knowledge, EL arises from unhelpful instructional features, and GL results from beneficial instructional features. Thus, lower EL and higher GL indicate that a system provides better support for users in performing tasks effectively.

We found that IL scores were similar across all three groups (experimental: 3.67 ± 2.17, control-video: 3.26 ± 1.73, control-text: 3.96 ± 2.02), indicating comparable task difficulty and participant knowledge. Using the Mann-Whitney U test, the experimental group had significantly lower EL than the text control group (1.15 ± 1.12 vs. 3.15 ± 2.33, U = 17.5, p = 0.046). Compared to the video control group, the experimental group's EL was lower on average, but the difference was not statistically significant (1.15 ± 1.12 vs. 1.59 ± 2.34, U = 44.5, p = 0.755). Additionally, the experimental group's GL was significantly higher compared to both the video-based group (7.03 ± 1.85 vs. 4.22 ± 2.58, U = 66.5, p = 0.024) and the text-based group (7.03 ± 1.85 vs. 3.94 ± 2.58, U = 64.0, p = 0.042).
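The Mann-Whitney U test used in the comparisons above can be computed from rank sums alone. The sketch below is a minimal pure-Python illustration of the statistic (following the U = R1 − n1(n1+1)/2 convention, with average ranks for ties); the data in it are hypothetical and unrelated to the study's measurements:

```python
def mann_whitney_u(a, b):
    """Return the Mann-Whitney U statistic for sample `a` vs. sample `b`.

    Pools both samples, assigns 1-based ranks (ties receive the average
    of the ranks they span), then computes U = R1 - n1*(n1+1)/2, where
    R1 is the rank sum of sample `a`.
    """
    pooled = sorted((v, i) for i, v in enumerate(a + b))
    ranks = {}
    i, n = 0, len(pooled)
    while i < n:
        j = i
        # extend j over a run of tied values
        while j + 1 < n and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    r1 = sum(ranks[idx] for idx in range(len(a)))  # rank sum of sample a
    n1 = len(a)
    return r1 - n1 * (n1 + 1) / 2
```

In practice one would use a library routine that also supplies the p-value, but the statistic itself reduces to this rank-sum arithmetic, which is why the test is appropriate for the small, non-normal samples reported here.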

Cognitive load and performance comparison among the experimental and control groups. (a) Comparison of intrinsic, extraneous, and germane cognitive load among the experimental and control groups. (b) Scores for each question across the experimental and control groups.

Questionnaire. Participants in the experimental group gave high ratings to Conch, with most of the closed-ended questions receiving positive feedback. Specifically, participants expressed satisfaction with the effectiveness and usability of the system, as they rated questions in these dimensions around 6 out of 7. They also provided generally positive ratings for visual design and user interaction, although scores in these two dimensions were slightly less consistent. However, scores for "easy to understand" and "interaction is easy for users with any level of debate experience" were relatively lower. For the understanding aspect, participants indicated that the visual design was somewhat challenging to understand due to the introduction of multiple visual components and novel visualization methods. Regarding user experience levels, participants indicated that the system offered different benefits based on user experience. For beginners, Conch served as a guide to understand the core elements of a debate, such as clash points and strategies; for experts, it functioned as a tool to make their analysis more efficient.

In the open-ended feedback, participants suggested that Conch was broadly beneficial. For example, experts noted that the Process View was effective for tracking the evolution of arguments, showing which points were introduced, dropped, or modified over time, and clearly highlighted the main conflicts. The Strategy View was also widely praised for summarizing common tactics and identifying strategic strengths or weaknesses. Finally, experts found the Content View useful for filtering information to focus on the most meaningful content. Overall, most users reported that Conch effectively helped them quickly learn main arguments, different strategies, and the clear evolution of debates.

User interview questionnaire results, including the effectiveness, visual design, interaction, and usability, rated on a 7-point Likert scale.

Acknowledgments

This work was supported in part by the Taihu Lake Innovation Fund for Future Technology, Huazhong University of Science and Technology (HUST), under Grant 2023-B-8, and in part by the Fundamental Research Funds for the Central Universities, HUST, under Grant 82400049. The computation was completed on the HPC platform of Huazhong University of Science and Technology. Ran Wang is the corresponding author ([email protected]).

Citation

@article{}