Technology Portfolio

[PATENT] Image Processing Device, System, and Method

European patent (EP4050512) covering the main technical solutions associated with the intellectual property of MIDAX. Patent pending, filed in 2020.

Inventors:   Mahmood Nazari, Ph.D.   •   Karl Bäckström, Ph.D.   •   Andreas Kluge, M.D. Ph.D.

Explainable AI to improve acceptance of convolutional neural networks for automatic classification of dopamine transporter SPECT in the diagnosis of clinically uncertain parkinsonian syndromes

2022 European Journal of Nuclear Medicine and Molecular Imaging

This study employs layer-wise relevance propagation (LRP) to explain CNN-based classification of DAT-SPECT in patients with clinically uncertain parkinsonian syndromes.
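
For readers unfamiliar with the technique, layer-wise relevance propagation redistributes the network's output score backwards through the layers so that every input voxel receives a relevance value, which can then be displayed as a heat map. The snippet below is a minimal sketch of the LRP-ε rule on a tiny fully connected ReLU network in NumPy; the architecture, weights, and variable names are illustrative assumptions and not the CNN used in the study.

```python
# Minimal, illustrative LRP-epsilon sketch for a tiny ReLU network (NumPy only).
# The architecture and weights are hypothetical; they are NOT the model from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer network: 64 inputs -> 32 hidden (ReLU) -> 1 output score.
W1 = rng.normal(size=(64, 32))
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1))
b2 = np.zeros(1)

x = rng.normal(size=64)              # stand-in for a flattened image patch

# Forward pass, keeping the activations that LRP needs.
a1 = np.maximum(0.0, x @ W1 + b1)
out = a1 @ W2 + b2                   # classification score to be explained

def lrp_epsilon(a_in, W, relevance_out, eps=1e-6):
    """Redistribute relevance from a layer's output to its input (LRP-epsilon rule)."""
    z = a_in @ W                     # pre-activations (biases ignored for simplicity)
    z = z + eps * np.sign(z)         # stabilizer avoids division by near-zero values
    s = relevance_out / z            # per-output-neuron "message"
    return a_in * (W @ s)            # relevance assigned to each input neuron

R2 = out                             # relevance at the output layer
R1 = lrp_epsilon(a1, W2, R2)         # relevance at the hidden layer
R0 = lrp_epsilon(x, W1, R1)          # relevance heat map over the input

print("input relevance shape:", R0.shape)
print("approx. conservation (biases ignored):", R0.sum(), out.item())
```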

Authors:   Mahmood Nazari, Ph.D.   •   Andreas Kluge, M.D. Ph.D.   •   Ivayla Apostolova, M.D. Ph.D.   •   Susanne Klutmann, M.D. Ph.D.   •   Sharok Kimiaei, M.D. Ph.D.   •   Michael Schroeder, M.D. Ph.D.   •   Ralph Buchert, M.D. Ph.D.

ASAP.SGD: Instance-based Adaptiveness to Staleness in Asynchronous SGD

2022 International Conference on Machine Learning

Concurrent algorithmic implementations of Stochastic Gradient Descent (SGD) give rise to critical questions for compute-intensive Machine Learning (ML). We introduce (i) ASAP.SGD, an analytical framework capturing necessary and desired properties of staleness-adaptive step size functions, and (ii) tail-τ, a method for utilizing key properties of the execution instance, generating a tailored strategy that not only dampens the impact of stale updates, but also leverages fresh ones. We recover convergence bounds for adaptiveness functions satisfying the ASAP.SGD conditions for general, convex and non-convex problems, and establish novel bounds for ones satisfying the Polyak-Lojasiewicz property. We evaluate tail-τ with representative AsyncSGD concurrent algorithms for Deep Learning problems, showing that tail-τ is a vital complement to AsyncSGD, with (i) persistent speedup in wall-clock convergence time across the parallelism spectrum, (ii) considerably lower risk of non-convergence, as well as (iii) precision levels for which original SGD implementations fail.
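
To make the idea concrete, the sketch below scales a base step size by a simple function of each update's measured staleness, keeping fresh updates at the full rate and damping stale ones. The damping rule, the fixed average-staleness estimate, and the toy objective are illustrative assumptions, not the exact tail-τ construction from the paper.

```python
# Illustrative staleness-adaptive step size for asynchronous SGD (NumPy only).
# The damping rule below is a plausible stand-in, NOT the exact tail-tau function;
# the average staleness would be estimated online in a real execution.
import numpy as np

def adaptive_step(base_lr, tau, tau_avg):
    """Step size adapted to the staleness tau of an incoming gradient:
    updates no staler than average keep the full step, staler ones are damped."""
    if tau <= tau_avg:
        return base_lr
    return base_lr / (1.0 + tau - tau_avg)

# Toy strongly convex objective f(w) = 0.5 * ||w||^2 with gradient w.
rng = np.random.default_rng(1)
w = np.ones(8)
base_lr, tau_avg = 0.1, 2.0

for _ in range(200):
    tau = int(rng.integers(0, 6))          # simulated staleness of this update
    old_view_grad = w * (1.0 + 0.05 * tau) # crude stand-in for a gradient from an old view
    w = w - adaptive_step(base_lr, tau, tau_avg) * old_view_grad

print("final distance to optimum:", np.linalg.norm(w))
```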

Authors:   Karl Bäckström, Ph.D.   •   Marina Papatriantafilou, Ph.D.   •   Philippas Tsigas, Ph.D.

The Impact of Synchronization in Parallel Stochastic Gradient Descent

2022 International Conference on Distributed Computing and Internet Technology

In this paper, we discuss our own and related work in the domain of efficient parallel optimization using Stochastic Gradient Descent for fast and stable convergence in prominent machine learning applications. We outline the results in the context of aspects and challenges regarding synchronization, consistency, staleness, and parallel-aware adaptiveness, focusing on their impact on overall convergence.

Authors:   Karl Bäckström, Ph.D.   •   Marina Papatriantafilou, Ph.D.   •   Philippas Tsigas, Ph.D.

Automated and robust organ segmentation for 3D-based internal dose calculation

2021 EJNMMI Research

In this work, we address image segmentation in the scope of dosimetry using deep learning and make three main contributions: (a) to extend and optimize the architecture of an existing convolutional neural network (CNN) in order to obtain a fast, robust and accurate computed tomography (CT)-based organ segmentation method for kidneys and livers; (b) to train the CNN with an inhomogeneous set of CT scans and validate the CNN for daily dosimetry; and (c) to evaluate dosimetry results obtained using automated organ segmentation in comparison with manual segmentation done by two independent experts.
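
As a rough illustration of the kind of model involved, the snippet below defines a small 3D encoder-decoder in PyTorch that maps a CT volume to per-voxel class logits for background, kidney, and liver. The depth, channel counts, and input resolution are illustrative assumptions, not the extended architecture described in the paper.

```python
# Minimal 3D encoder-decoder for voxel-wise CT segmentation (PyTorch).
# Channel counts, depth, and the 3-class layout (background/kidney/liver) are
# illustrative assumptions, not the architecture from the publication.
import torch
import torch.nn as nn

class TinySeg3D(nn.Module):
    def __init__(self, in_channels=1, num_classes=3):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),  # downsample
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(inplace=True),    # upsample
            nn.Conv3d(16, num_classes, 1),            # per-voxel class logits
        )

    def forward(self, x):
        return self.dec(self.enc(x))

# One low-resolution CT volume: batch x channel x depth x height x width.
ct = torch.randn(1, 1, 32, 64, 64)
logits = TinySeg3D()(ct)
labels = logits.argmax(dim=1)                 # 0 = background, 1 = kidney, 2 = liver
print(logits.shape, labels.shape)             # (1, 3, 32, 64, 64) and (1, 32, 64, 64)
```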

Authors:   Mahmood Nazari, Ph.D.   •   Luis David Jiménez-Franco, Ph.D.   •   Michael Schroeder, M.D. Ph.D.   •   Andreas Kluge, M.D. Ph.D.   •   Marcus Bronzel, M.D. Ph.D.   •   Sharok Kimiaei, M.D. Ph.D.

Data-driven identification of diagnostically useful extrastriatal signal in dopamine transporter SPECT using explainable AI

2021 Nature Scientific Reports

This study used explainable artificial intelligence for data-driven identification of extrastriatal brain regions that can contribute to the interpretation of dopamine transporter SPECT with 123I-FP-CIT in parkinsonian syndromes. A total of 1306 123I-FP-CIT SPECT scans were included retrospectively. Binary classification of striatal 123I-FP-CIT uptake as ‘reduced’ or ‘normal’ by an experienced reader served as the standard of truth.
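
To illustrate the data-driven region analysis, the sketch below aggregates a voxel-wise relevance map over an anatomical label volume and ranks regions by mean relevance; regions with consistently high relevance outside the striatum would be candidates for diagnostically useful extrastriatal signal. The atlas labels, region names, and random data are hypothetical placeholders, not the pipeline used in the study.

```python
# Illustrative aggregation of a voxel-wise relevance map over an anatomical atlas
# to rank brain regions by their contribution to the classification.
# The atlas labels, region names, and data are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

relevance = rng.normal(size=(16, 16, 16))         # stand-in for an LRP relevance map
atlas = rng.integers(0, 4, size=(16, 16, 16))     # stand-in label volume with labels 0..3

region_names = {0: "background", 1: "striatum", 2: "thalamus", 3: "brainstem"}

# Mean relevance per region across the volume.
scores = {
    region_names[int(label)]: float(relevance[atlas == label].mean())
    for label in np.unique(atlas)
}

for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:10s} {score:+.4f}")
```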

Authors:   Mahmood Nazari, Ph.D.   •   Andreas Kluge, M.D. Ph.D.   •   Ivayla Apostolova, M.D. Ph.D.   •   Susanne Klutmann, M.D. Ph.D.   •   Sharok Kimiaei, M.D. Ph.D.   •   Michael Schroeder, M.D. Ph.D.   •   Ralph Buchert, M.D. Ph.D.

Consistent lock-free parallel stochastic gradient descent for fast and stable convergence

2021 IEEE International Parallel and Distributed Processing Symposium   |   Awarded Best Paper Honorable Mention

Stochastic Gradient Descent (SGD) is an essential element in Machine Learning (ML) algorithms. Asynchronous shared-memory parallel SGD (AsyncSGD), including synchronization-free algorithms such as HOGWILD!, has received interest in certain contexts due to reduced overhead compared to synchronous parallelization. Although such algorithms induce staleness and inconsistency, they have shown speedup for problems with smooth, strongly convex targets and gradient sparsity. We propose Leashed-SGD, an extensible algorithmic framework of consistency-preserving implementations of AsyncSGD, employing lock-free synchronization and effectively balancing throughput and latency. Leashed-SGD features a natural contention-regulating mechanism, as well as dynamic memory management, allocating space only when needed. We argue analytically about the dynamics of the algorithms, memory consumption, the threads' progress over time, and the expected contention. We provide a comprehensive empirical evaluation, validating the analytical claims, benchmarking the proposed Leashed-SGD framework, and comparing to baselines for two prominent deep learning (DL) applications: multilayer perceptrons (MLP) and convolutional neural networks (CNN). We observe the crucial impact of contention, staleness, and consistency, and show how, thanks to the aforementioned properties, Leashed-SGD provides significant improvements in stability as well as wall-clock time to convergence (from 20-80% up to 4× improvements) compared to the standard lock-based AsyncSGD algorithm and HOGWILD!, while reducing the overall memory footprint.
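
To convey the consistency-preserving idea, the sketch below publishes each parameter update by swapping a reference to a whole, immutable parameter vector after a compare-and-swap-style check, so readers always see a consistent snapshot and writers retry instead of blocking. It is a single-threaded simulation with hypothetical names; a real lock-free implementation such as the one in the paper relies on hardware compare-and-swap across concurrent threads.

```python
# Sketch of consistency-preserving parameter publication via compare-and-swap (CAS).
# A real lock-free AsyncSGD uses hardware CAS on a shared pointer; here a
# single-threaded simulation with an identity check stands in for it, purely to
# show the read-snapshot / compute / publish-or-retry pattern.
import numpy as np

class ParamBox:
    """Holds a reference to the current (immutable) parameter vector."""
    def __init__(self, params):
        self.ref = params

def cas(box, expected, new):
    """Stand-in for an atomic compare-and-swap on box.ref."""
    if box.ref is expected:
        box.ref = new
        return True
    return False

def grad(w):
    return w                                  # gradient of the toy objective 0.5 * ||w||^2

box = ParamBox(np.ones(4))
lr = 0.1

for step in range(50):
    while True:
        snapshot = box.ref                    # consistent read: a whole, immutable vector
        proposed = snapshot - lr * grad(snapshot)
        if cas(box, snapshot, proposed):      # publish only if no one changed it meanwhile
            break                             # success: the update becomes visible atomically
        # on failure a worker would re-read the state and retry

print(box.ref)
```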

Authors:   Karl Bäckström, Ph.D.   •   Ivan Walulya, Ph.D.   •   Marina Papatriantafilou, Ph.D.   •   Philippas Tsigas, Ph.D.

MindTheStep-AsyncPSGD: Adaptive Asynchronous Parallel Stochastic Gradient Descent

2019 IEEE International Conference on Big Data

Stochastic Gradient Descent (SGD) is very useful in optimization problems with high-dimensional non-convex target functions, and hence constitutes an important component of several Machine Learning and Data Analytics methods. Recently there has been significant work on understanding the parallelism inherent to SGD and its convergence properties. Asynchronous parallel SGD (AsyncPSGD) has received particular attention due to observed performance benefits. On the other hand, asynchrony implies inherent challenges in understanding the execution of the algorithm and its convergence, stemming from the fact that a thread's contribution might be based on an old (stale) view of the state. In this work we aim to deepen the understanding of AsyncPSGD in order to increase statistical efficiency in the presence of stale gradients.
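
As a simple illustration of how staleness arises, the sketch below delays every gradient by a fixed number of updates, so each one is applied to parameters that have already moved on; the measured staleness τ is what an adaptive scheme would feed into its step-size rule. The delay model and names are illustrative assumptions, not the AsyncPSGD setup evaluated in the paper.

```python
# Sketch of how staleness arises in asynchronous parallel SGD: each gradient is
# computed on a snapshot of the parameters, and several other updates land before
# it is applied. The fixed delay model here is an illustrative assumption.
from collections import deque
import numpy as np

def grad(w):
    return w                      # gradient of the toy objective 0.5 * ||w||^2

w = np.ones(4)
lr = 0.05
delay = 3                         # each gradient is applied `delay` updates after its snapshot
pending = deque()
global_step = 0
tau = 0

for step in range(60):
    pending.append((global_step, grad(np.copy(w))))   # a worker reads a snapshot now
    if len(pending) > delay:
        read_step, g = pending.popleft()              # this gradient is now stale
        tau = global_step - read_step                 # measured staleness
        w -= lr * g                                   # plain AsyncPSGD applies it as-is;
                                                      # adaptive variants would scale lr by tau
    global_step += 1

print("final ||w||:", np.linalg.norm(w), "last staleness:", tau)
```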

Authors:   Karl Bäckström, Ph.D.   •   Marina Papatriantafilou, Ph.D.   •   Philippas Tsigas, Ph.D.

An efficient 3D deep convolutional network for Alzheimer's disease diagnosis using MR images

2018 IEEE International Symposium on Biomedical Imaging

Automatic extraction of features from MRI brain scans and diagnosis of Alzheimer's Disease (AD) remain a challenging task. In this paper, we propose an efficient and simple three-dimensional convolutional network (3D ConvNet) architecture that achieves high performance for detection of AD on a relatively large dataset. Experiments conducted on an ADNI dataset containing 340 subjects and 1198 MRI brain scans resulted in good performance (test accuracy of 98.74%, a 100% AD detection rate, and a 2.4% false-alarm rate). Comparisons with 7 existing state-of-the-art methods provide strong support for the robustness of the proposed method.
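
For context, the snippet below shows a minimal 3D convolutional classifier in PyTorch that maps an MRI volume to class probabilities via global pooling and a linear head. The layer sizes and input resolution are illustrative assumptions, not the 3D ConvNet architecture proposed in the paper.

```python
# Minimal 3D ConvNet for binary MRI classification (AD vs. control) in PyTorch.
# Layer sizes and input resolution are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class Tiny3DConvNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),          # global pooling keeps the head input-size agnostic
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Downsampled MRI volumes: batch x channel x depth x height x width.
mri = torch.randn(2, 1, 48, 64, 64)
logits = Tiny3DConvNet()(mri)
probs = torch.softmax(logits, dim=1)          # per-scan class probabilities
print(logits.shape, probs.shape)              # torch.Size([2, 2]) twice
```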

Authors:   Karl Bäckström, Ph.D.   •   Mahmood Nazari, Ph.D.   •   Irene Yu-Hua Gu, Ph.D.   •   Asgeir Store Jakola, M.D. Ph.D.

End-to-end AI-powered support for medical professionals, from diagnosis through medical image analysis and drug discovery to personalized therapy.

Copyright © MIDAX