Call for Extended Abstracts
To foster the growth of an active community, we aim to involve both senior and junior researchers in our workshop. To curate an insightful and engaging program, we invite submissions of extended abstracts of up to 2 pages for research at various stages: already published work, ongoing work not yet submitted elsewhere, and finished work that has not yet been published. The workshop will also give authors of in-progress work the opportunity to receive constructive feedback from the community to help refine their papers. Authors of accepted contributions will have the chance to present their work in a short lightning talk as well as a poster in a dedicated session.
Topics of interest may include but are not limited to:
- Novel Methods and Models for Explainability: Development of new algorithms, techniques, or models to explain IR systems.
- Internal Analysis and Model Interpretability: Novel techniques for analyzing the inner workings of IR systems, and insights gained from such analysis and interpretation.
- Evaluation: Evaluation of explanation quality, metrics for explainable systems.
- Human-Centered Explainability: Work focused on identifying, investigating, or integrating users’ explainability needs in IR systems.
- Practical Applications of Explainability: Explainability in high-stakes domains (e.g., healthcare, legal, finance, etc.), applications to personalized search/recommendation, bias mitigation and fairness.
- Other: Other topics not mentioned in this list but still pertaining to explainability in IR.
Perspectives on the Challenges of Explainability in IR
In addition to “traditional” topics in explainability, we also invite extended abstract submissions on Perspectives on the Challenges of Explainability in IR, which include but are not limited to the following topics:
- Explainable IR in the Era of LLMs: What challenges for explainability arise with the emergence of LLMs in the IR pipeline, and how should we begin to solve them?
- Defining and Evaluating Explainability: How can we unify different notions and definitions of explainability and achieve consensus over what constitutes a “good” explanation?
- Insights from Other Fields: What takeaways or solutions from fields outside of IR can be used to advance XIR research?
- Interdisciplinary XIR: How do ethical concerns, regulatory requirements (e.g., GDPR), and domain-specific constraints (e.g., health, legal, finance, etc.) affect explainability research?
Submission guidelines
Extended abstracts (up to 2 pages, double column, references not included in the page limit) can be submitted via the workshop’s EasyChair Link. All submissions will be peer reviewed (single blind) by the program committee and judged on their relevance to the workshop and the themes identified above. All submissions must be written in English and formatted according to the latest ACM SIG proceedings template, available at https://www.acm.org/publications/proceedings-template, using the following document class:
\documentclass[sigconf, review, anonymous=false]{acmart}
If you prefer, you may instead submit with anonymous=true. Please note that at least one author of each accepted paper must register for the workshop and present the paper through a lightning talk as well as a poster in the corresponding sessions.
Important Dates
- Submission Deadline: April 23, 2025 (extended from April 16, 2025)
- Acceptance Notifications (tentative): May 14, 2025
- Workshop Date: July 17, 2025
All deadlines are 11:59 pm, Anywhere on Earth (AoE).
Contact
For any questions, you may contact the workshop organizers at sigir25xirworkshop@gmail.com.