Call for Papers
Introduction
Artificial Intelligence and Machine Learning (AI/ML)-enabled medical devices are advancing rapidly to address the needs of patients, clinicians, and manufacturers in the MedTech industry. However, the pace of technological innovation has outstripped the development of evaluation methods in some instances, creating challenges for developers and for regulatory bodies charged with ensuring safety and efficacy.
The workshop aims to introduce regulatory science for AI-enabled devices to the ISBI community, with a focus on the assessment of medical video AI and on methods for uncertainty quantification. These areas present unique technical challenges, including frame-to-frame variability, motion artifacts, and temporal consistency. We aim to foster dialogue among researchers, clinicians, and regulators, to discuss technical and regulatory science challenges, and to help narrow the gap between the development of novel AI technologies and their clinical adoption. We will discuss the development of regulatory science tools, testing methods, and metrics for assessing AI-assisted devices. While the workshop will focus on medical video AI systems, many of the assessment concepts generalize to other device types.
Rationale
The rapid adoption of AI across a broad range of medical devices highlights the need for robust evaluation frameworks and assessment approaches tailored to AI-based technologies. Critical regulatory science challenges include assessing generalizability and quantifying uncertainty. Addressing these challenges is essential to support evidence-based regulatory decision-making and to ensure that innovators develop AI-enabled medical devices that are safe and effective.
Extending these considerations to medical video AI introduces additional complexities. Medical video AI systems are relatively new and are designed to assist care providers by improving the identification of abnormalities within temporal streams of imaging data. Colonoscopy computer-aided detection (colon-CADe) devices are a case in point: six colon-CADe devices are authorized for marketing in the US. Disease detection in video imaging procedures differs from many radiology AI applications, which typically operate on static images and permit non-real-time image interpretation. In contrast, video AI systems must handle challenges such as frame-to-frame variability, motion artifacts, and lesion persistence across time, and they often must prompt the user in real time, because diagnoses are commonly made during the clinical procedure, while the video is being collected, so that timely intervention can be performed. It remains unclear which performance metrics and standalone study protocols are adequate for comparing colon-CADe algorithms, as testing protocols and metrics characterizing video-based AI performance are still evolving. While many ISBI workshops have explored algorithm development, novel applications, and platforms, a significant gap remains in tackling practical assessment challenges, real-world deployment hurdles, and the core regulatory science questions necessary for the development of innovative technology.
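To make the distinction between frame-level and lesion-level evaluation concrete, the following minimal Python sketch derives lesion-level detection and detection latency from per-frame detector alerts. The function name, the reference-standard encoding, and the toy data are all hypothetical illustrations, not a prescribed protocol:

```python
# Minimal, hypothetical sketch: deriving lesion-level detection and latency
# from per-frame CADe alerts. All names and the toy data are illustrative.

def lesion_metrics(frame_preds, lesion_frames, fps=30.0):
    """frame_preds: list of per-frame booleans (True = detector alert raised).
    lesion_frames: dict mapping lesion id -> sorted frame indices in which
    the lesion is visible according to the reference standard."""
    results = {}
    for lesion_id, frames in lesion_frames.items():
        hits = [f for f in frames if frame_preds[f]]
        detected = bool(hits)
        # Detection latency: time from first visibility to first alert.
        latency_s = (hits[0] - frames[0]) / fps if detected else None
        results[lesion_id] = {"detected": detected, "latency_s": latency_s}
    return results

# Toy case: a lesion visible in frames 10-19; alerts fire from frame 14 onward.
preds = [False] * 30
for f in range(14, 20):
    preds[f] = True
print(lesion_metrics(preds, {"lesion_1": list(range(10, 20))}))
```

In this toy case the detector alerts on 6 of 10 visible frames (60% frame-level sensitivity) yet detects the lesion itself within roughly 0.13 seconds, illustrating how frame-level, lesion-level, and latency metrics can tell different stories about the same output.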
If you are working on research related to medical AI, clinical deployment, or evaluation, then the Medical Video AI Assessment and Uncertainty Quantification workshop is the right place for your work!
Topics that will be covered in the workshop:
- Study designs for benchmarking computer-aided detection models in medical video-based applications
- Standardized evaluation frameworks/methods for AI-enabled video devices
- Evaluation metrics at potentially multiple levels (e.g., frame-, lesion-, and patient-level) for performance, video detection latency, temporal consistency, false-positive burden, etc.
- Development and curation of reference datasets for medical video AI evaluation
- Methods to quantify, calibrate, and report predictive uncertainty in AI-enabled medical devices, such as computer-aided detection (CADe) devices (a minimal calibration sketch follows this list)
- Frameworks for uncertainty analysis and interpretability
- Benchmarking and validation of AI-enabled devices using uncertainty metrics
- Strategies to integrate uncertainty estimates into real-time clinical AI workflows
- Analysis revealing disconnects between AI development, evaluation metrics, and regulatory requirements
- Comparative regulatory science across countries: lessons for global alignment
- Perspective and opinion papers on limitations of current evaluation paradigms for medical video AI
- Viewpoints on dataset bias, annotation uncertainty, and ground truth definition in AI-enabled devices
- Bridging gaps between AI development, evaluation and regulation
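As one concrete example of the uncertainty topics above, the following Python sketch computes the expected calibration error (ECE), a common summary of how well predicted confidences match observed accuracy. The equal-width binning scheme and the toy data are our own illustrative assumptions, not a required metric for submissions:

```python
# Hypothetical sketch of expected calibration error (ECE); the binning
# scheme and toy data are illustrative assumptions.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: predicted probabilities for the predicted class.
    correct: 1 if the prediction matched the reference standard, else 0."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight gap by fraction of samples in bin
    return ece

# Toy case: an overconfident detector (~92% mean confidence, ~67% accuracy).
conf = [0.95, 0.9, 0.92, 0.88, 0.97, 0.91]
hit = [1, 0, 1, 0, 1, 1]
print(f"ECE = {expected_calibration_error(conf, hit):.3f}")
```

A well-calibrated model's confidences track its accuracy bin by bin; the deliberately overconfident toy data here yield a large ECE.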
Proceedings
Accepted papers will be published in the ISBI conference proceedings.
Paper Format & Submission
Submissions must be anonymized, use the official IEEE ISBI format, and contain no more than 4 pages of main content plus up to 1 page for Ethical Standards, Acknowledgements, and References, for a maximum length of 5 pages. Manuscripts exceeding 5 pages will be rejected. The fifth page requires a $200 fee, paid during registration.
Submit your manuscript via OpenReview: OpenReview Submission Site.
Submission Evaluation Criteria
- Relevance: Alignment with AI evaluation, regulatory science, or deployment challenges in healthcare.
- Clarity & Structure: Logical organization, clear writing, and accessibility to a multidisciplinary audience.
- Empirical Rigor: Robust measurement or validation of new or existing concepts (for empirical work).
All submissions undergo double-blind review. Please omit author names, affiliations, and self-identifying references.
Important Dates (Anywhere on Earth)
- Full paper deadline: Feb 20, 2026 (extended from Feb 15, 2026)
- Notification of acceptance: March 2, 2026 (updated from Feb 28, 2026)
- Camera-ready deadline: March 7, 2026 (no deadline extension)
- Workshop date: April 9, 2026
Questions?
Reach us at videoai.isbi2026@gmail.com