Ultrasound Localization Microscopy (ULM) is an innovative super-resolution technique in medical ultrasound imaging. In a collaborative project with researchers from Sorbonne Université (Paris, France), this work aims to develop a fast and precise ULM method by harnessing the potential of raw data with our geometric deep learning framework. The animation presents accumulated localization frames in slow motion, with an exponentially accelerating frame rate for easier visual comprehension.
This video won the 1st prize in the video category at the Image Competition of the Swiss National Science Foundation (SNSF).
ToF sensing faces challenges such as ambient interference, limited accuracy and resolution, and multipath interference, which make inverse modelling from sparse ToF data intractable. To overcome these limitations, this project employs deep neural networks inspired by computer vision techniques. More information can be found in the paper published at ICASSP 2024 in Seoul, Korea.
The project, in collaboration with Intuitive, a leading developer of the da Vinci robot used in minimally invasive surgery (MIS), focuses on advancing scene perception and understanding within this context. Scene perception involves processing data from integrated sensors to create a comprehensive model of the surgical environment, encompassing surface geometry, texture, and semantic information related to various objects. This technology is integral to augmented reality guidance systems, enhancing surgeon comprehension of patient anatomy and aiding decision-making during surgery.
Our submission *Learning How To Robustly Estimate Camera Pose in Endoscopic Videos* won the Best Paper Award at IPCAI 2023.
This study introduces Parallax among Corresponding Echoes (PaCE), a novel 3-D Time-of-Flight localization model, which has been accepted for publication at ICRA.
The prototype consists of only one acoustic emitter and three receivers to locate a target moved by a robot. This technology has great potential as an alternative or extension to phased arrays. The IP is managed by Unitectra. Please feel free to reach out with any questions or opportunities for collaboration.
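The principle behind such echo-based localization can be illustrated with a plain trilateration toy example (a 2-D sketch of my own, not the actual PaCE model; all names and the receiver layout are assumptions). With the emitter at the origin, each measured time of flight gives a path length d_i = |t| + |t - r_i|, which becomes linear in the target position t and its range |t| after squaring:

```python
import numpy as np

def tof_trilaterate_2d(receivers, tofs, c=343.0):
    """Locate a 2-D target from emitter-to-target-to-receiver times of flight.

    The emitter sits at the origin; `receivers` is a (3, 2) array of receiver
    positions and `tofs` holds the measured times of flight. Each path length
    d_i = |t| + |t - r_i| yields, after squaring, the linear equation
    2 r_i . t - 2 d_i |t| = |r_i|^2 - d_i^2 in the unknowns (t_x, t_y, |t|).
    """
    d = np.asarray(tofs) * c                       # path lengths in metres
    A = np.hstack([2 * receivers, -2 * d[:, None]])
    b = np.sum(receivers**2, axis=1) - d**2
    tx, ty, _ = np.linalg.solve(A, b)              # third unknown is the range |t|
    return np.array([tx, ty])
```

With three receivers, the 3-by-3 system is solved directly; noisy or redundant measurements would call for a least-squares solve instead.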
torchimize offers parallelized numerical optimization methods for PyTorch data types. The main motivation for this project is to enable convex optimization on GPUs based on the torch.Tensor class, which (as of 2022) is widely used in the deep learning field. The package supports aggregation of multiple cost terms and can minimize several least-squares problems in parallel at each loop iteration.
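Independent of torchimize's actual API, the core idea of per-iteration parallelism over many small least-squares problems can be sketched in NumPy (function name, model and shapes are illustrative assumptions):

```python
import numpy as np

def batched_gauss_newton(x, y, p0, n_iter=50):
    """Fit y ~ a * exp(b * x) for a whole batch of curves at once.

    x: (B, N) sample points, y: (B, N) observations, p0: (B, 2) initial
    (a, b) guesses. Each Gauss-Newton step assembles and solves all B
    normal equations in a single batched call, mirroring the idea of
    minimizing many least-squares problems in parallel per iteration.
    """
    p = p0.astype(float).copy()
    for _ in range(n_iter):
        a, b = p[:, :1], p[:, 1:]                  # (B, 1) each
        e = np.exp(b * x)                          # (B, N)
        r = a * e - y                              # residuals
        J = np.stack([e, a * x * e], axis=-1)      # (B, N, 2) Jacobian
        JtJ = np.einsum('bni,bnj->bij', J, J)      # (B, 2, 2) normal matrices
        Jtr = np.einsum('bni,bn->bi', J, r)        # (B, 2) gradients
        p -= np.linalg.solve(JtJ, Jtr[..., None])[..., 0]  # B solves at once
    return p
```

On a GPU, replacing the batched `einsum` and `solve` calls with their torch counterparts is what makes this kind of parallelism pay off.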
In 2021, I gave an academic talk at the ARTORG Center (University of Bern) to present a research overview on stereo vision and 4-D light-field rendering algorithms. The slides can be found here.
optimizay is an online repository in which root-finding and numerical optimization methods are distilled into interactive Jupyter notebooks.
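In the spirit of those notebooks, the canonical root finder, Newton's method, fits in a few lines (a generic sketch, not code taken from the repository):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration: x_{k+1} = x_k - f(x_k) / f'(x_k).

    Stops once the update step falls below `tol` or `max_iter` is reached.
    """
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x
```

For example, `newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)` converges to the square root of 2 in a handful of iterations.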
depthy enables depth reconstruction from multiple views and supports *.pfm and *.ply file exports to store 2-D depth maps or 3-D point clouds.
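To illustrate the *.pfm export, here is a minimal writer of my own (not depthy's implementation): the PFM format takes a short ASCII header followed by raw float32 rows stored bottom-to-top, with a negative scale value marking little-endian data.

```python
import numpy as np

def write_pfm(path, depth, scale=1.0):
    """Write a single-channel float32 depth map as a grayscale PFM file."""
    depth = np.asarray(depth, dtype=np.float32)
    if depth.ndim != 2:
        raise ValueError('expected a 2-D depth map')
    with open(path, 'wb') as f:
        f.write(b'Pf\n')                                    # 'Pf' = grayscale
        f.write(f'{depth.shape[1]} {depth.shape[0]}\n'.encode())
        f.write(f'{-abs(scale)}\n'.encode())                # negative = little-endian
        f.write(np.flipud(depth).astype('<f4').tobytes())   # rows bottom-to-top
```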
| Source | Target | Result |
|---|---|---|
color-matcher enables color transfer across images, which comes in handy for automatic color grading of photographs, paintings, film sequences or light fields.
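To illustrate the principle (not color-matcher's actual API), the simplest form of color transfer matches each channel's mean and standard deviation, in the spirit of Reinhard-style statistics transfer:

```python
import numpy as np

def match_color_stats(src, ref):
    """Transfer the per-channel mean and standard deviation of `ref` to `src`.

    A minimal statistics-based transfer on (H, W, C) arrays; dedicated tools
    such as color-matcher offer more sophisticated methods, e.g. histogram
    matching or Monge-Kantorovich based transfers.
    """
    src = src.astype(np.float64)
    ref = ref.astype(np.float64)
    mu_s, std_s = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-12
    mu_r, std_r = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))
    return (src - mu_s) / std_s * std_r + mu_r
```

The result then shares the reference image's first- and second-order color statistics while keeping the source image's content.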
This talk unveils the underlying physical and computational concepts of the Lytro-type plenoptic camera in a concise and simplified manner, while presenting an open-source software tool capable of rendering light-field photographs.
Plenoptic cameras, with their ability to change focus and perspective view after the fact, have intrigued scientists, programmers, photographers and tech hobbyists worldwide. This presentation conveys the fundamentals of a light field captured by a plenoptic camera to a broader audience without requiring prior knowledge. Its primary goal is to raise awareness of this technology and invite peers to contribute to the presented open-source software tool [PlenoptiCam](https://github.com/hahnec/plenopticam).
More technical details and further educational material are available on my research website https://www.plenoptic.info.
This open-source software enables light-field rendering of raw image captures taken by a plenoptic camera such as those from Lytro. It is available for download on GitHub. The groundwork for this project was laid in my doctoral thesis.
This open-source application is meant to help understand the mechanisms of a plenoptic camera and to support its conceptual design stage. It can be downloaded from GitHub or run on my research website. The groundwork for this project was laid in my doctoral thesis.
While a research fellow at Brunel University and later the University of Bedfordshire, I devised the Standard Plenoptic Ray Tracing Model, which describes light rays travelling through a plenoptic camera. Based on geometrical optics, the proposed model helps explain computational refocusing, estimate the refocusing distance and determine the camera's baseline. In addition, related work on this project presented the first hardware architecture to accomplish real-time refocusing for a plenoptic camera.
Since September 2021, I have been affiliated with the University of Bern, developing artificial intelligence algorithms for medical imaging.
In February 2016, I joined trinamiX GmbH, a wholly owned subsidiary of BASF. Founded in 2015, trinamiX develops cutting-edge sensor technology. My responsibilities included research on laser imaging and image processing algorithms.
I started working for Pepperl+Fuchs SE in March 2019, focusing on depth reconstruction algorithms and synthetic aperture modeling for multi-channel ultrasonic sensor data.
Short stay to finish off research projects on plenoptic cameras.
Development of a software application which first acquires geometrical information about a captured scene and then provides physical distances to the user. The work was commissioned by Morrison Utility Services.
I am an Advanced Postdoctoral Researcher affiliated with the AI Medical Imaging (AIMI) Group at the ARTORG Center as part of the University of Bern. My current research interests concentrate on algorithm development in the fields of computer vision, audio processing and depth sensing.
During my undergraduate studies at the University of Applied Sciences in Hamburg, I began my professional career as an intern in the R&D departments of companies such as Rohde & Schwarz, where I evaluated digital video transmission protocols, and Arnold & Richter (ARRI), where I started researching plenoptic cameras.
After graduating in 2012, I enrolled in a guest studentship at Brunel University in West London. This led to an MPhil course, supervised by Prof. Amar Aggoun, to pursue plenoptic camera research through the development of an FPGA-based refocusing technique.
Following my doctoral advisor Amar, I transferred to the University of Bedfordshire in 2013, where I carried out image processing research in a bursary-funded PhD programme. During that time, I taught a Master's course in Embedded Systems using C programming, supervised postgraduate dissertations, developed software prototypes for commissioned work and built up a profound understanding of algorithm development, signal processing, parallel computing and English as a foreign language.
While finishing my doctoral studies in the UK, I was approached by the newly founded trinamiX GmbH, a subsidiary of BASF SE in Germany, which I happily joined in 2016. After three exciting years at the German spin-off, I had the opportunity to conduct research on acoustic signal algorithms for Pepperl+Fuchs SE, based in Mannheim (Germany), which resulted in the development of a complete ultrasound phased-array prototype.
“A good engineer thinks in reverse and asks himself about the stylistic consequences of the components and systems he proposes.” Helmut Jahn