Chinedu Nwoye


AI Research Scientist, B.Sc., M.Sc., Ph.D.
Post-doctoral research fellow at ICube Lab, IHU Strasbourg, University of Strasbourg, France.


© Website is maintained by CIDSoft ®

I’m interested in computer vision, machine learning, optimization, and image processing. Much of my research focuses on applying these AI methods to the analysis and modeling of surgical activities in real endoscopic videos. Representative papers are highlighted.

Selected publications

CholecTriplet2022: Show me a tool and tell me the triplet - an endoscopic vision challenge for surgical action triplet detection
C.I. Nwoye, T. Yu, S. Sharma, A. Murali, D. Alapatt, A. Vardazaryan, K. Yuan, J. Hajek, W. Reiter, A. Yamlahi, F. Smidt, X. Zou, G. Zheng, B. Oliveira, H.R. Torres, S. Kondo, S. Kasai, F. Holm, E. Ozsoy, S. Gui, H. Li, S. Raviteja, R. Sathish, P. Poudel, B. Bhattarai, Z. Wang, G. Rui, M. Schellenberg, J.L. Vilaca, T. Czempiel, Z. Wang, D. Sheet, S.K. Thapa, M. Berniker, P. Godau, P. Morais, S. Regmi, T.N. Tran, J. Fonseca, J. Nolke, E. Lima, E. Vazquez, L. Maier-Hein, N. Navab, P. Mascagni, B. Seeliger, C. Gonzalez, D. Mutter, N. Padoy
An endoscopic vision challenge organized at MICCAI 2022 for the recognition and localization of surgical action triplets in laparoscopic videos. The challenge provided private access to the large-scale CholecT50 dataset and a summary and assessment of 11 state-of-the-art deep learning methods proposed by the participants. The paper analyzes the significance of the results obtained by the presented approaches and offers a thorough methodological comparison, an in-depth result analysis, a rich selection of qualitative results across different surgical conditions and visual challenges, a survey, and a highlight of interesting directions for …
Submitted to Medical Image Analysis 2023 (IF: 8.545)
project page / journal / arXiv / code / bibtex


Data splits and metrics for method benchmarking on surgical action triplet datasets
C.I. Nwoye and N. Padoy
This work introduces standard splits for the CholecT50 and CholecT45 datasets and shows how they compare with existing uses of the data. CholecT45 is the first public release of 45 videos from the CholecT50 dataset. We also develop a metrics library, ivtmetrics (now an open-source project), for model evaluation on surgical triplets. Furthermore, we conduct a benchmark study by reproducing baseline methods in the most widely used deep learning frameworks (PyTorch and TensorFlow), evaluating them with the proposed data splits and metrics, and releasing them publicly to support future research.
arXiv report 2022 (…updating state of the art…)
arXiv / code / bibtex
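The metric at the core of triplet benchmarking is class-wise mean average precision (mAP) over the triplet classes. A minimal numpy sketch of that computation follows; it illustrates the metric itself, not the actual ivtmetrics API, and the function names here are hypothetical:

```python
import numpy as np

def average_precision(labels, scores):
    """AP for one class: mean precision at the rank of each positive."""
    order = np.argsort(-scores)        # rank frames by descending score
    labels = labels[order]
    if labels.sum() == 0:
        return np.nan                  # class absent from ground truth
    cum_tp = np.cumsum(labels)
    precision = cum_tp / (np.arange(len(labels)) + 1)
    return precision[labels == 1].mean()

def mean_ap(label_matrix, score_matrix):
    """Class-wise mAP over (N frames x C triplet classes) matrices,
    skipping classes that never occur in the ground truth."""
    aps = [average_precision(label_matrix[:, c], score_matrix[:, c])
           for c in range(label_matrix.shape[1])]
    return float(np.nanmean(aps))
```

The same idea extends to the individual instrument, verb, and target components as well as the composed triplet, which is what a dedicated library makes convenient to compute consistently.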


Dissecting self-supervised learning methods for surgical computer vision
S. Ramesh, V. Srivastav, D. Alapatt, T. Yu, A. Murali, L. Sestini, C.I. Nwoye, I. Hamoud, A. Fleurentin, G. Exarchakis, A. Karargyris, N. Padoy
In this work, we address this critical need by investigating four state-of-the-art SSL methods (MoCo v2, SimCLR, DINO, SwAV) in the context of surgical computer vision. We present an extensive analysis of their performance on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding: phase recognition and tool presence detection. We examine their parameterization, then their behavior with respect to training data quantities in semi-supervised settings. Correct transfer of these methods to surgery, as described and conducted in this work, leads to substantial performance gains over generic uses of SSL...
Submitted to Medical Image Analysis 2022 (IF: 8.545)
journal / arXiv / poster / supplement / code / bibtex


CholecTriplet2021: A benchmark challenge for surgical action triplet recognition
C.I. Nwoye, D. Alapatt, T. Yu, A. Vardazaryan, F. Xia, Z. Zhao, T. Xia, F. Jia, Y. Yang, H. Wang, D. Yu, G. Zheng, X. Duan, N. Getty, R. Sanchez-Matilla, M. Robu, L. Zhang, H. Chen, J. Wang, L. Wang, B. Zhang, B. Gerats, S. Raviteja, R. Sathish, R. Tao, S. Kondo, W. Pang, H. Ren, J.R. Abbing, M.H. Sarhan, S. Bodenstedt, N. Bhasker, B. Oliveira, H.R. Torres, L. Ling, F. Gaida, T. Czempiel, J.L. Vilaça, P. Morais, J. Fonseca, R.M. Egging, I.N. Wijma, C. Qian, G. Bian, Z. Li, V. Balasubramanian, D. Sheet, I. Luengo, Y. Zhu, S. Ding, J. Aschenbrenner, N.E. Kar, M. Xu, M. Islam, L. Seenivasan, A. Jenke, D. Stoyanov, D. Mutter, P. Mascagni, B. Seeliger, C. Gonzalez, N. Padoy
An endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge provided private access to the large-scale CholecT50 dataset and a summary and assessment of 20 state-of-the-art deep learning methods proposed by the participants. The paper analyzes the significance of the results obtained by the presented approaches and offers a thorough methodological comparison, an in-depth result analysis, a novel ensemble method for enhanced recognition, and a highlight of interesting directions for …
Accepted at Medical Image Analysis 2022 (IF: 8.545)
project page / journal / arXiv / poster / supplement / code / bibtex


Rendezvous: Attention mechanisms for the recognition of surgical action triplets in endoscopic videos
C.I. Nwoye, T. Yu, C. Gonzalez, B. Seeliger, P. Mascagni, D. Mutter, J. Marescaux, and N. Padoy
A transformer-inspired neural network that detects surgical action triplets directly from surgical videos by leveraging attention at two levels: (1) spatial attention to capture the individual action triplet components in a scene, via a Class Activation Guided Attention Mechanism (CAGAM), and (2) semantic attention to resolve the relationships between instruments, verbs, and targets using self- and cross-attention, in a module called the Multi-Head of Mixed Attention (MHMA).
Medical Image Analysis 2022 (IF: 8.545)
journal / arXiv / video / poster / supplement / code / bibtex
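The semantic attention in MHMA builds on standard scaled dot-product attention, with cross-attention letting one triplet component (e.g. the verb) attend to another (e.g. the instrument). A minimal numpy sketch of that primitive, as an illustration of the mechanism rather than the Rendezvous implementation (toy sizes, single head):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d), axis=-1)
    return weights @ V, weights

# Cross-attention: verb features query instrument features.
rng = np.random.default_rng(0)
verb_q = rng.normal(size=(10, 64))    # 10 verb query vectors
inst_kv = rng.normal(size=(6, 64))    # 6 instrument key/value vectors
out, weights = attention(verb_q, inst_kv, inst_kv)
```

Self-attention is the same call with queries, keys, and values drawn from a single component; a multi-head variant runs several such attentions in parallel on learned projections and concatenates the results.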


Comparative validation of machine learning algorithms for surgical workflow and skill analysis with the HeiChole benchmark
M. Wagner, B.P. Müller-Stich, A. Kisilenko, D. Tran, P. Heger, L. Mündermann, D.M. Lubotsky, B. Müller, T. Davitashvili, M. Capek, A. Reinke, T. Yu, A. Vardazaryan, C.I. Nwoye, N. Padoy, X. Liu, E.J. Lee, C. Disch, H. Meine, T. Xia, F. Jia, S. Kondo, W. Reiter, Y. Jin, Y. Long, M. Jiang, Q. Dou, P.A. Heng, I. Twick, K. Kirtac, E. Hosgor, J.L. Bolmgren, M. Stenzel, B. von Siemens, H.G. Kenngott, F. Nickel, M. von Frankenberg, F. Mathis-Ullrich, L. Maier-Hein, S. Speidel, S. Bodenstedt
The purpose of this study was to establish an open benchmark for surgical workflow and skill analysis by providing a state-of-the-art comparison of machine learning algorithms on a novel and publicly accessible dataset that improves representativeness with multi-centric clinical data.
Accepted at Medical Image Analysis 2022 (IF: 8.545)
project page / journal / arXiv / poster / supplement / code / bibtex


Recognition of instrument-tissue interactions in endoscopic videos via action triplets
C.I. Nwoye, C. Gonzalez, T. Yu, P. Mascagni, D. Mutter, J. Marescaux, and N. Padoy
First research to tackle the recognition of fine-grained surgical activities modeled as action triplets (instrument, verb, target). It led to the creation of the first triplet dataset, CholecT40, and to the development of Tripnet, the first deep learning method to recognize these triplets directly from video data. Tripnet leverages two novel modules, the class activation guide (CAG) and the 3D interaction space (3Dis), to respectively capture the individual triplet components and resolve their association as triplets.
MICCAI 2020 (Oral presentation)
journal / arXiv / video 1 / video 2 / poster / supplement / code / bibtex
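The 3D interaction space idea can be pictured as jointly scoring every ⟨instrument, verb, target⟩ combination from the per-component scores. The toy numpy sketch below is my own simplification using a plain outer product; the actual 3Dis module is learned, and the class names in the comments are illustrative:

```python
import numpy as np

def interaction_space(inst, verb, targ):
    """Toy 3D interaction space: a joint score for every <i, v, t> triplet
    as the outer product of the component score vectors."""
    return np.einsum('i,v,t->ivt', inst, verb, targ)

inst = np.array([0.9, 0.1])   # e.g. grasper, hook (illustrative classes)
verb = np.array([0.2, 0.8])   # e.g. grasp, retract
targ = np.array([0.7, 0.3])   # e.g. gallbladder, liver
space = interaction_space(inst, verb, targ)
best = np.unravel_index(np.argmax(space), space.shape)  # most likely triplet
```

The appeal of such a joint space is that the triplet decision is made over all combinations at once, rather than by independently thresholding each component and hoping the pieces match.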


2020 CATARACTS semantic segmentation challenge
I. Luengo, M. Grammatikopoulou, R. Mohammadi, C. Walsh, C.I. Nwoye, D. Alapatt, N. Padoy, Z.L. Ni, C.C. Fan, G.B. Bian, Z.G. Hou, H. Ha, J. Wang, H. Wang, D. Guo, L. Wang, G. Wang, M. Islam, B. Giddwani, R. Hongliang, T. Pissas, C.R.M. Huber, J. Birch, J.M. Rio, L. da Cruz, C. Bergeles, H. Chen, F. Jia, N. KumarTomar, D. Jha, M.A. Riegler, P. Halvorsen, S. Bano, U. Vaghela, J. Hong, H. Ye, F. Huang, D.H. Wang, D. Stoyanov
A 2020 MICCAI EndoVis challenge with three sub-tasks assessing participating solutions on pixel-wise semantic segmentation of anatomical structures and instruments in cataract surgery videos...
arXiv report 2021
arXiv / project page / bibtex


Weakly supervised convolutional LSTM approach for tool tracking in laparoscopic videos
C.I. Nwoye, D. Mutter, J. Marescaux, and N. Padoy
A deep learning method for surgical tool tracking trained on tool binary presence labels only. It exploits temporal information in laparoscopic videos using a convolutional LSTM. The model achieved state-of-the-art performance on tool detection, localization, and tracking among weakly supervised models...
IPCAI 2019 (Oral presentation, Audience choice award: Best paper presentation)
journal / arXiv / video 1 / video 2 / poster / supplement / code / bibtex
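Weakly supervised localization of this kind typically reads tool positions off class activation maps: the map's peak gives a point estimate, and a thresholded high-activation region gives a box. A numpy sketch under that assumption (the function name and threshold are illustrative, not the paper's exact procedure):

```python
import numpy as np

def localize_from_cam(cam, rel_threshold=0.5):
    """Locate a tool from a 2D class activation map (sketch):
    return the peak position and a box around the high-activation region."""
    peak = np.unravel_index(np.argmax(cam), cam.shape)  # (row, col) of max
    mask = cam >= rel_threshold * cam.max()             # relative threshold
    ys, xs = np.where(mask)
    bbox = (xs.min(), ys.min(), xs.max(), ys.max())     # (x1, y1, x2, y2)
    return peak, bbox

# Synthetic map with one bright region standing in for a detected tool.
cam = np.zeros((8, 8))
cam[2:4, 3:5] = 1.0
peak, bbox = localize_from_cam(cam)
```

Tracking then amounts to associating such per-frame localizations over time, which is where the temporal modeling of the convolutional LSTM helps.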


… standing on the shoulders of giants