31 March 2019 to 5 April 2019
Academia Sinica
Asia/Taipei timezone

“Faux”-tomography: Changing the Tomographic Paradigm via Deep Learning

4 Apr 2019, 16:00
30m
Conference Room 1 (Academia Sinica)

Oral Presentation: Data Management & Big Data

Speaker

Dr William B. Seales (University of Kentucky)

Description

Rapid advances in machine learning (ML) applied to image-based problems have produced robust solutions to challenges long considered too difficult, or even impossible, for computers to solve: face recognition, 3D object recognition, landmark detection, and image-based geo-location. This project re-imagines the accepted imaging paradigm in the context of computed tomography and a cloud-based platform for machine learning at scale. We focus on x-ray computed tomography as an imaging method, with ML techniques running at scale in the cloud as a standard way to enhance, improve, and transform the details that tomography captures. We call the approach “photomography” (or “faux” tomography): ML becomes a crucial part of the tomographic imaging pipeline, enhancing and amplifying signals in the data that are impossible for a human to perceive or for other algorithms to detect. The specific signals we seek to amplify include changes in topological patterns that give strong clues to material composition and integrity. These signals, which manifest tomographically as small but statistically significant variations in intensity, are present but not readily visible to the unaided eye. This paper presents a systematic paradigm for applying ML (convolutional neural networks (CNNs) and autoencoders) in the context of tomography, and points to results from specific demonstrations of the power of this approach to amplify important signals.

We emphasize four technical areas as primary components:

1. **Automated approaches to massive data acquisition:** Tomographic systems typically discard information that we believe should be kept and used at the ML stage.
2. **Reference libraries from photographs:** Large-scale reference libraries must be constructed and organized into CNNs from longitudinal data and supervised examples. This is computationally very expensive in terms of data sizes and required cycles.
3. **Amplification:** Reference to a pre-computed library delivers a result (an estimate of the degree to which a specific signal is present) using cloud-based architectures. The resulting estimate is pushed back into the original data for subsequent algorithmic analysis or human decision-making.
4. **Multi-modal rendering:** Through cloud-based ML reference libraries it is possible to acquire data with one modality (e.g., tomography) and render a realistic result that simulates a different modality (e.g., a photograph).

We discuss the computational and structural framework for collecting tomographic data, including additional cues, such as multi-power responses, fluorescence measurements, and phase-shift estimates, that are typically never centralized in the capture of tomographic data. Using the acquired data, we construct specific reference libraries, at scale, by training CNNs and informing the process through autoencoders. These libraries let us deliver predictive results in the data for subsequent algorithmic processing and for human visualization, making it possible to see, recognize, and quantify patterns previously thought to be invisible in tomography. Finally, we demonstrate the overall value of this technique in areas where tomography is being used to solve new and difficult problems: the analysis of bone density phenomena and the analysis of antiquities (inks and fibers).
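As a rough illustration of the amplification idea, a convolutional filter bank can score each pixel neighborhood of a CT slice for small intensity variations, and the score map can then be blended back into the original data for downstream analysis. This is a minimal sketch, not the trained networks described in the abstract: the kernel, the blending weight `alpha`, and the function names are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 2D valid convolution (stand-in for one CNN layer)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def amplify_slice(ct_slice, kernels, alpha=0.5):
    """Score local intensity variations with a filter bank (ReLU after
    each filter), then push the pooled score map back into the slice."""
    responses = [np.maximum(conv2d_valid(ct_slice, k), 0.0) for k in kernels]
    score = np.mean(responses, axis=0)
    # Zero-pad the score map back to the slice's shape so it can be blended in.
    pad = (ct_slice.shape[0] - score.shape[0]) // 2
    score_full = np.pad(score, pad)
    return ct_slice + alpha * score_full
```

A Laplacian-like kernel, for example, would emphasize exactly the small intensity deviations the abstract describes; in the real pipeline the filter weights would come from training, not be hand-chosen.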

Summary

We discuss the computational and structural framework for collecting tomographic data, including additional cues, such as multi-power responses, fluorescence measurements, and phase-shift estimates, that are typically never centralized in the capture of tomographic data. Using the acquired data, we construct specific reference libraries, at scale, by training CNNs and informing the process through autoencoders. These libraries let us deliver predictive results in the data for subsequent algorithmic processing and for human visualization, making it possible to see, recognize, and quantify patterns previously thought to be invisible in tomography. Finally, we demonstrate the overall value of this technique in areas where tomography is being used to solve new and difficult problems: the analysis of bone density phenomena and the analysis of antiquities (inks and fibers).
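One way to read the reference-library idea is that an autoencoder trained on a library of tomographic patches yields a reconstruction error that estimates how strongly a new patch deviates from that library. The sketch below is a deliberately tiny, tied-weight linear autoencoder trained by gradient descent; the shapes, learning rate, and function names are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def train_autoencoder(X, k, epochs=500, lr=0.05, seed=0):
    """Tied-weight linear autoencoder: encode with W, decode with W.T.
    Trained by gradient descent on mean squared reconstruction error."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = 0.1 * rng.standard_normal((d, k))
    for _ in range(epochs):
        R = X @ W @ W.T - X                                  # reconstruction residual
        grad = 2.0 / (n * d) * (X.T @ R @ W + R.T @ X @ W)   # d(mean sq. error)/dW
        W -= lr * grad
    return W

def reconstruction_error(X, W):
    """Per-sample squared error: high values flag patches that do not
    match the reference library the autoencoder was trained on."""
    R = X @ W @ W.T - X
    return np.sum(R * R, axis=1)
```

Patches well explained by the library reconstruct with low error, while out-of-library patches score high; that per-patch score is the kind of estimate that could be pushed back into the original data.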

Primary authors

Mr Seth Parker (University of Kentucky); Mr Stephen Parsons (University of Kentucky); Dr William B. Seales (University of Kentucky)

Co-author

Mr Charles Pike (University of Kentucky)

Presentation materials

There are no materials yet.