Call for Workshop Papers
Benchmarking Weakly Supervised, Segmentation-Free AI Models for Autonomous Deep Vein Thrombosis Detection
Introduction
Pulmonary Embolism (PE) is an acute, life-threatening condition, estimated to cause nearly 300,000 deaths per year in the US. PE is typically caused by blood clots that detach from their primary formation site and block an artery in the lungs. These clots often originate in the deep veins of the lower limbs, a condition known as Deep Vein Thrombosis (DVT). Prompt diagnosis and treatment of DVT are therefore essential to prevent pulmonary embolism.
In clinical practice, patients with apparent clinical symptoms of DVT are referred for compression ultrasonography. However, there are circumstances in which a complete examination by an expert ultrasonographer may not be possible within a clinically relevant timeframe, e.g. in emergency departments.
In recent years, machine learning and deep learning (ML/DL) models have emerged as valuable tools for supporting prompt diagnosis. Current state-of-the-art models rely on pixel-wise segmentation of ultrasound videos to assess vein compressibility, a key indicator of the presence or absence of DVT. However, preparing and validating such annotated datasets for model training requires an enormous amount of time and effort.
To address this challenge, this workshop aims to foster the development of weakly supervised, segmentation-free AI/ML models, thereby reducing the burden of manual labeling.
A dataset of compression ultrasound videos has been made publicly available for this purpose. Based on this dataset and the corresponding labels, the workshop focuses on the development of AI/ML models for the following clinically relevant tasks (an illustrative sketch is provided after the dataset links below):
- Anatomical site identification: Classify the ultrasound video according to vein location (Common Femoral Vein, Great Saphenous Junction, Femoral Vein, Popliteal Vein).
- DVT prediction: Determine deep vein thrombosis (DVT) presence based on vein compressibility.
Prospective participants are invited to submit papers describing their methodology and validation results. To maintain scientific integrity, we encourage participants to also submit their results to the Kaggle data competitions for automated assessment of model performance, available at:
- https://www.kaggle.com/competitions/thrombus_chalenge_2_1
- https://www.kaggle.com/competitions/thrombus_chalenge_2_2
The datasets are available from the Zenodo repository at the following links:
Training: https://zenodo.org/records/17659415
Testing: https://zenodo.org/records/17664207
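
For orientation only, the sketch below outlines one possible segmentation-free, weakly supervised baseline that learns from video-level labels alone. It is not an official baseline: the tensor shapes, label encodings, and hyperparameters are illustrative assumptions rather than part of the challenge specification, and participants are free to use any architecture or framework.

# A minimal sketch, NOT the official baseline: a segmentation-free,
# weakly supervised video classifier trained only on video-level labels.
# Assumptions (hypothetical, not part of the challenge specification):
# clips arrive as tensors of shape (batch, frames, 1, H, W), sites are
# encoded as integers 0-3, and DVT presence is a binary target.
import torch
import torch.nn as nn


class WeaklySupervisedClipClassifier(nn.Module):
    """Encodes each frame with a small CNN, mean-pools over time, and
    predicts two video-level outputs: anatomical site and DVT presence."""

    def __init__(self, num_sites: int = 4):
        super().__init__()
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch*frames, 32)
        )
        self.site_head = nn.Linear(32, num_sites)    # vein-location logits
        self.dvt_head = nn.Linear(32, 1)             # DVT-presence logit

    def forward(self, clips: torch.Tensor):
        b, t, c, h, w = clips.shape
        feats = self.frame_encoder(clips.reshape(b * t, c, h, w))
        pooled = feats.reshape(b, t, -1).mean(dim=1)  # temporal mean pooling
        return self.site_head(pooled), self.dvt_head(pooled).squeeze(-1)


if __name__ == "__main__":
    # Synthetic stand-in for a batch of grayscale compression clips.
    clips = torch.randn(2, 8, 1, 64, 64)             # (batch, frames, C, H, W)
    site_labels = torch.tensor([0, 3])               # hypothetical site encoding
    dvt_labels = torch.tensor([0.0, 1.0])            # 1.0 = DVT present

    model = WeaklySupervisedClipClassifier()
    site_logits, dvt_logits = model(clips)
    loss = (nn.functional.cross_entropy(site_logits, site_labels)
            + nn.functional.binary_cross_entropy_with_logits(dvt_logits, dvt_labels))
    loss.backward()
    print(site_logits.shape, dvt_logits.shape, float(loss))

The point of the sketch is that no pixel-wise masks are required: each clip contributes only a single site label and a single DVT label, and simple temporal pooling over frame features stands in for an explicit compressibility assessment.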
Topics
The workshop encourages multidisciplinary collaboration among clinicians, sonographers, ML researchers, and health informatics professionals. The proposed workshop aligns with several core themes of the conference, including:
- Trustworthy and explainable AI in cloud infrastructures
- Edge–cloud–fog continuum and decentralized AI frameworks
- Real-time inference, model optimization, and deployment pipelines in the cloud
- Cloud-based digital twins, autonomous agents, and decision-making systems
- Benchmarks, performance evaluation, and reproducibility in AI-cloud research
- Cloud Computing, Edge Computing and Edge-to-Cloud
Important Dates
- Submission deadline: 30th April 2026
- Notification deadline: 15th May 2026
- Camera-ready deadline: 30th May 2026
Workshop Publication
Accepted and presented papers will be published alongside the main conference proceedings as a dedicated sub-section/chapter. Papers should therefore follow the templates of the main conference publisher.
Submission Guidelines
- Go to Confy+ website.
- Log in or sign up as a new user.
- Select your desired track.
- Click the ‘Submit Paper’ link within the track and follow the instructions.
Alternatively, go to the Confy+ homepage and click on “Open Conferences.”
Submission requirements:
- All papers must be submitted in English.
- Submitted PDFs should be anonymized.
- Previously published work cannot be submitted, nor can it be concurrently submitted to any other conference or journal. These papers will be rejected without review.
- Papers must follow the Springer formatting guidelines (available in the Author’s Kit section).
- Authors must read and agree to the Publication Ethics and Malpractice Statement.
- As per new EU accessibility requirements, all figures, illustrations, tables, and images should be accompanied by descriptive text. Please refer to the document below, which will assist you in crafting Alternative Text (Alt Text).
Paper Submission
Papers should be submitted through the EAI Confy+ system and must comply with the Springer format (see the Author’s Kit section).
- Workshop papers should be 6-11 pages in length.
All conference papers undergo a thorough peer review process prior to the final decision and publication. This process is facilitated by experts in the Technical Program Committee during a dedicated conference period. Standard peer review is enhanced by EAI Community Review which allows EAI members to bid to review specific papers. All review assignments are ultimately decided by the responsible Technical Program Committee Members while the Technical Program Committee Chair is responsible for the final acceptance selection. You can learn more about Community Review here.
Springer AI Policies and Guidance
Full information: https://www.springernature.com/gp/policies/book-publishing-policies
AI Authorship Policy
Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably, an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. We thus ask that the use of an LLM be properly documented in the Acknowledgements, or in the Introduction or Preface of the manuscript.
The use of an LLM (or other AI-tool) for “AI assisted copy editing” purposes does not need to be declared. In this context, we define the term “AI assisted copy editing” as AI-assisted improvements to human-generated texts for readability and style, and to ensure that the texts are free of errors in grammar, spelling, punctuation and tone. These AI-assisted improvements may include wording and formatting changes to the texts, but do not include generative editorial work and autonomous content creation. In all cases, there must be human accountability for the final version of the text and agreement from the authors that the edits reflect their original work. This reflects a similar stance taken on the AI generative figures policy, where it was acknowledged that there are cases where AI can be used to generate a figure without being concerned about copyright e.g. to generate a graph based on data provided by the author.
AI Authorship Guidance
Authors should familiarise themselves with the current known risks of using AI models before using them in their manuscript. AI models have been known to plagiarise content and to create false content. As such, authors should carry out due diligence to ensure that any AI-generated content in their book is correct, appropriately referenced, and follows the standards laid out in our Book Authors’ Code of Conduct.
AI-generated Images Policy
The fast-moving area of generative AI image creation has resulted in novel legal copyright and research integrity issues. As publishers, we strictly follow existing copyright law and best practices regarding publication ethics. While legal issues relating to AI-generated images and videos remain broadly unresolved, Springer Nature journals and books are unable to permit their use for publication.
Exceptions:
- Images/art obtained from agencies that we have contractual relationships with that have created images in a legally acceptable manner.
- Images and videos that are directly referenced in a piece that is specifically about AI and such cases will be reviewed on a case-by-case basis.
- The use of generative AI tools developed with specific sets of underlying scientific data that can be attributed, checked and verified for accuracy, provided that ethics, copyright and terms of use restrictions are adhered to.
* All exceptions must be labelled clearly as generated by AI within the image field.
As we expect things to develop rapidly in this field in the near future, we will review this policy regularly and adapt if necessary.
Note: Examples of image types covered by this policy include: video and animation, including video stills; photography; illustration such as scientific diagrams, photo-illustrations and other collages, and editorial illustrations such as drawings, cartoons or other 2D or 3D visual representations. Not included in this policy are text-based and numerical display items, such as: tables, flow charts and other simple graphs that do not contain images. Please note that not all AI tools are generative. The use of non-generative machine learning tools to manipulate, combine or enhance existing images or figures should be disclosed in the relevant caption upon submission to allow a case-by-case review.
AI-generated Images Guidance
For more information on the inclusion of third party content (i.e. any work that you have not created yourself and which you have reproduced or adapted from other sources) please see Rights, Permissions, Third Party Distribution.
Author’s kit – Instructions and Templates
Papers must be formatted using the Springer LNICST Authors’ Kit.
Instructions and templates are available from Springer’s LNICST homepage:
Please make sure that your paper adheres to the format as specified in the instructions and templates.
When uploading the camera-ready copy of your paper, please be sure to upload both:
- a PDF copy of your paper formatted according to the above templates, and
- an archive file (e.g. zip, tar.gz) containing both a PDF copy of your paper and the LaTeX or Word source material prepared according to the above guidelines.
Workshop Organizers
Dr Stylianos Didaskalou, ATHENA Research Center, Greece
Dr Alaa AlZoubi, Data Science Research Centre, University of Derby
Workshop TPC members
Dr Alaa AlZoubi, Data Science Research Centre, University of Derby
Dr Stylianos Didaskalou, ATHENA Research Center, Greece
Dr Harry Yu, Data Science Research Centre, University of Derby
Prof. Longzhi Yang, Northumbria University

