European Patent (EP4050512) covering the main technical solutions associated with the IP of MIDAX. Patent pending, filed in 2020.
This study employs layer-wise relevance propagation (LRP) to explain CNN-based classification of DAT-SPECT in patients with clinically uncertain parkinsonian syndromes.
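To illustrate the underlying principle, the sketch below implements the LRP-ε redistribution rule for a single dense layer in plain NumPy. The layer shapes, the toy inputs, and the stabilizer eps are illustrative assumptions, not details of the study's CNN or its LRP implementation.

```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the inputs of one
    dense layer (z = a @ W + b) using the LRP-epsilon rule."""
    z = a @ W + b                                # pre-activations
    stab = eps * np.where(z >= 0, 1.0, -1.0)     # sign-matched stabilizer
    s = R_out / (z + stab)                       # relevance per unit of z
    return a * (s @ W.T)                         # input relevance

# Toy usage: 4 input features, relevance taken from class 0.
rng = np.random.default_rng(0)
a, W, b = np.array([0.5, 1.0, 0.0, 2.0]), rng.standard_normal((4, 2)), np.zeros(2)
R_in = lrp_epsilon_dense(a, W, b, R_out=np.array([1.0, 0.0]))
print(R_in, R_in.sum())   # sum approximately conserves the output relevance
```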
Concurrent algorithmic implementations of Stochastic Gradient Descent (SGD) give rise to critical questions for compute-intensive Machine Learning (ML). We introduce (i) ASAP.SGD, an analytical framework capturing necessary and desired properties of staleness-adaptive step size functions, and (ii) tail-τ, a method for utilizing key properties of the execution instance, generating a tailored strategy that not only dampens the impact of stale updates but also leverages fresh ones. We recover convergence bounds for adaptiveness functions satisfying the ASAP.SGD conditions for general, convex and non-convex problems, and establish novel bounds for ones satisfying the Polyak-Łojasiewicz property. We evaluate tail-τ with representative AsyncSGD concurrent algorithms on Deep Learning problems, showing that tail-τ is a vital complement to AsyncSGD, with (i) persistent speedup in wall-clock convergence time across the parallelism spectrum, (ii) considerably lower risk of non-convergence, as well as (iii) precision levels for which original SGD implementations fail.
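The toy sketch below conveys the core idea of a staleness-adaptive step size: dampen updates that are staler than the execution's observed staleness distribution and amplify fresher ones. The function adaptive_step and its particular scaling rule are illustrative assumptions, not the actual tail-τ strategy from the paper.

```python
import numpy as np

def adaptive_step(eta0, tau, tau_hist):
    """Scale the base step size eta0 by how fresh an update of
    staleness tau is relative to the staleness values observed so
    far (tau_hist). Fresher-than-average updates are amplified,
    staler ones damped. Illustrative rule only, not tail-tau."""
    mean_tau = max(np.mean(tau_hist), 1.0)
    return eta0 * mean_tau / (tau + 1.0)

tau_hist = [0, 1, 1, 2, 5, 8]          # staleness values seen so far
for tau in (0, 2, 8):
    print(f"tau={tau}: step={adaptive_step(0.1, tau, tau_hist):.4f}")
```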
In this paper, we discuss our own and related work in the domain of efficient parallel optimization using Stochastic Gradient Descent, for fast and stable convergence in prominent machine learning applications. We outline the results in the context of aspects and challenges regarding synchronization, consistency, staleness, and parallel-aware adaptiveness, focusing on their impact on overall convergence.
In this work, we address image segmentation in the scope of dosimetry using deep learning and make three main contributions: (a) to extend and optimize the architecture of an existing convolutional neural network (CNN) in order to obtain a fast, robust and accurate computed tomography (CT)-based organ segmentation method for kidneys and livers; (b) to train the CNN with an inhomogeneous set of CT scans and validate the CNN for daily dosimetry; and (c) to evaluate dosimetry results obtained using automated organ segmentation in comparison with manual segmentation performed by two independent experts.
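Agreement between automated and manual segmentations is commonly quantified with an overlap metric such as the Dice coefficient. The minimal sketch below (the function dice and the toy masks are illustrative, not tied to the paper's evaluation pipeline) shows how such a comparison can be computed on binary organ masks.

```python
import numpy as np

def dice(auto_mask, manual_mask):
    """Dice overlap between an automated and a manual binary
    segmentation mask (1 = organ voxel, 0 = background)."""
    auto_mask, manual_mask = auto_mask.astype(bool), manual_mask.astype(bool)
    inter = np.logical_and(auto_mask, manual_mask).sum()
    total = auto_mask.sum() + manual_mask.sum()
    return 2.0 * inter / total if total else 1.0

# Toy 3D volumes standing in for kidney masks.
a = np.zeros((4, 4, 4)); a[1:3, 1:3, 1:3] = 1   # automated mask
m = np.zeros((4, 4, 4)); m[1:3, 1:3, :3] = 1    # manual mask
print(f"Dice = {dice(a, m):.3f}")               # 0.800 for these toy masks
```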
This study used explainable artificial intelligence for data-driven identification of extrastriatal brain regions that can contribute to the interpretation of dopamine transporter SPECT with 123I-FP-CIT in parkinsonian syndromes. A total of 1306 123I-FP-CIT SPECT scans were included retrospectively. Binary classification of striatal 123I-FP-CIT uptake as ‘reduced’ or ‘normal’ by an experienced reader served as the standard of truth.
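A data-driven search for contributing regions typically aggregates a voxel-wise relevance map over an anatomical atlas. The sketch below is a hedged illustration of that aggregation step only: the function region_relevance, the random stand-in volumes, and the region labels are hypothetical, not the study's actual analysis.

```python
import numpy as np

def region_relevance(relevance_map, atlas, region_ids):
    """Aggregate a voxel-wise relevance map into per-region scores
    using an integer-labelled atlas (hypothetical region ids)."""
    return {r: float(relevance_map[atlas == r].sum()) for r in region_ids}

rng = np.random.default_rng(0)
relevance = rng.random((8, 8, 8))           # stand-in for an XAI relevance map
atlas = rng.integers(0, 3, size=(8, 8, 8))  # 0=background, 1=striatal, 2=extrastriatal
print(region_relevance(relevance, atlas, region_ids=(1, 2)))
```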
Stochastic Gradient Descent (SGD) is an essential element in Machine Learning (ML) algorithms. Asynchronous shared-memory parallel SGD (AsyncSGD), including synchronization-free algorithms such as HOGWILD!, has received interest in certain contexts due to reduced overhead compared to synchronous parallelization. Although these algorithms induce staleness and inconsistency, they have shown speedup for problems with smooth, strongly convex targets and sparse gradients. We propose Leashed-SGD, an extensible algorithmic framework of consistency-preserving implementations of AsyncSGD, employing lock-free synchronization and effectively balancing throughput and latency. Leashed-SGD features a natural contention-regulating mechanism as well as dynamic memory management, allocating space only when needed. We argue analytically about the dynamics of the algorithms, memory consumption, the threads' progress over time, and the expected contention. We provide a comprehensive empirical evaluation, validating the analytical claims, benchmarking the proposed Leashed-SGD framework, and comparing to baselines for two prominent deep learning (DL) applications: multilayer perceptrons (MLP) and convolutional neural networks (CNN). We observe the crucial impact of contention, staleness and consistency, and show how, thanks to the aforementioned properties, Leashed-SGD provides significant improvements in stability as well as wall-clock time to convergence (from 20-80% up to 4× improvements) compared to the standard lock-based AsyncSGD algorithm and HOGWILD!, while reducing the overall memory footprint.
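For context, the sketch below shows the HOGWILD!-style baseline: several threads update a shared parameter vector with no synchronization at all. It is a minimal illustration on a toy least-squares problem (the names X, y, w, worker are assumptions), not the Leashed-SGD framework; note that Python's GIL means it mimics the memory-access pattern rather than true parallel execution.

```python
import threading
import numpy as np

# Shared parameters for a toy least-squares problem; threads update
# them without locks, HOGWILD!-style. Unsynchronized numpy writes
# are not atomic, which is exactly the inconsistency being studied.
X = np.random.default_rng(0).standard_normal((256, 10))
y = X @ np.arange(10) + 0.1
w = np.zeros(10)

def worker(seed, steps=500, eta=0.001, batch=8):
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i = rng.integers(0, len(y), size=batch)
        g = X[i].T @ (X[i] @ w - y[i]) / batch  # gradient on a possibly stale view of w
        w[:] -= eta * g                         # unsynchronized in-place write

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print("final loss:", float(np.mean((X @ w - y) ** 2)))
```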
Stochastic Gradient Descent (SGD) is very useful in optimization problems with high-dimensional non-convex target functions, and hence constitutes an important component of several Machine Learning and Data Analytics methods. Recently there has been significant work on understanding the parallelism inherent to SGD and its convergence properties. Asynchronous parallel SGD (AsyncPSGD) has received particular attention due to observed performance benefits. On the other hand, asynchrony implies inherent challenges in understanding the execution of the algorithm and its convergence, stemming from the fact that the contribution of a thread might be based on an old (stale) view of the state. In this work we aim to deepen the understanding of AsyncPSGD in order to increase statistical efficiency in the presence of stale gradients.
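The effect of stale views can be simulated sequentially: each update applies the gradient of the parameters as they were τ steps earlier. The sketch below (the function sgd_with_staleness and the quadratic toy objective are hypothetical) shows how growing staleness slows convergence toward the optimum.

```python
import numpy as np

def sgd_with_staleness(tau, eta=0.05, steps=300):
    """Sequentially simulate AsyncPSGD on f(v) = 0.5*||v||^2: each
    update applies the gradient of a view that is tau steps old."""
    history = [np.full(2, 10.0)]                # starting point
    for t in range(steps):
        stale_view = history[max(0, t - tau)]   # old (stale) parameter view
        grad = stale_view                       # gradient of 0.5*||v||^2 is v
        history.append(history[-1] - eta * grad)
    return np.linalg.norm(history[-1])

for tau in (0, 4, 16):
    print(f"tau={tau:2d}  distance to optimum: {sgd_with_staleness(tau):.6f}")
```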
Automatic extraction of features from MRI brain scans and diagnosis of Alzheimer's Disease (AD) remain a challenging task. In this paper, we propose an efficient and simple three-dimensional convolutional network (3D ConvNet) architecture that is able to achieve high performance for detection of AD on a relatively large dataset. Experiments conducted on an ADNI dataset containing 340 subjects and 1198 MRI brain scans resulted in good performance (a test accuracy of 98.74%, a 100% AD detection rate, and a 2.4% false-alarm rate). Comparisons with 7 existing state-of-the-art methods provide strong support for the robustness of the proposed method.
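A minimal 3D ConvNet of the general kind described, written as a hedged PyTorch sketch: the number of layers, channel widths, and the 64³ input size are illustrative assumptions, not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

# Minimal 3D ConvNet for binary AD-vs-control classification of MRI
# volumes. Layer counts and channel sizes are illustrative guesses.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(32, 2),                     # logits: AD vs. cognitively normal
)

scan = torch.randn(1, 1, 64, 64, 64)      # one MRI volume (batch, channel, D, H, W)
print(model(scan).shape)                  # torch.Size([1, 2])
```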