
The Problem

Transthoracic echocardiogram (TTE) studies are widely used as non-invasive evaluations of cardiac function and structural heart disease. The volume of studies completed at Duke has increased over time, with more than 22,000 studies completed in 2017 alone. Part of this growth reflects multiple TTE studies performed on the same patient during a single admission in the era of bundled care.

The current process for obtaining and evaluating TTE images at Duke, as at many other large centers, consists of a sonographer acquiring images and completing a preliminary report. An attending echocardiographer then evaluates this report along with the corresponding images, edits the report as needed, and uploads the finalized results to the electronic health record, where the ordering clinician can review the information and make clinical decisions based on the results.

Although the average scan-to-result time is currently 4 hours at DUHS, and nearly 100 TTEs are completed on a single weekday, approximately 1 in 5 studies is not completed within 24 hours of being ordered. This can prolong hospital stays, especially over the weekend, for patients who cannot be discharged before an echocardiogram is performed. If automating the TTE preliminary report saved even 5 minutes of sonographer time per study, we estimate this would free an additional 60 hours of usable sonographer time each week, during which additional studies could be completed.

Beyond improved efficiency, automated interpretation would also help reduce interobserver variability. Currently, even among core laboratory-trained echocardiographic readers, there is up to 25% variability in estimated ejection fraction. Because multiple pharmacologic and device therapy decisions hinge on ejection fraction, this variability has significant implications for cardiac care.

In this project, in an effort to expedite results and reduce overall cost, we sought to develop and validate a computer algorithm to correctly estimate left ventricular ejection fraction (LVEF) as an initial step toward a fully automated echocardiogram evaluation. 

Our Solution

Identification of TTE studies

We identified 1,074 TTE studies from the PROMISE study and an initial set of 3,000 Duke TTEs with at least moderate image quality to begin model development. PROMISE TTEs have been core-lab adjudicated, which provides a gold-standard echocardiographic interpretation by two independent cardiologists. Additionally, use of these images, which were obtained from institutions across the United States, improves the potential generalizability of our final product. The Duke TTEs provide a large number of studies from a diverse patient population for training the machine learning algorithm.

Creation of a durable solution for image transfer

Working with individuals from the Heart Center, DHTS, and DCRI IT, we developed a semi-automated process by which Duke TTE studies are transferred from Philips PACS to a secure PACE environment. In a step-wise manner, studies are identified in the Philips PACS server by study ID. Each study is then transferred to the vendor neutral archive (VNA), a secure intermediate step where studies can be stored and easily accessed. Upon availability, TTEs are then transferred into the secure PACE environment, where model development can occur. We intentionally created a system that is not only applicable to our project but could also be used for future research at Duke: the same transfer pipeline can be used for any studies housed in Philips PACS, including TTEs and angiograms from the Duke Cath Lab. During our study period, we successfully transferred 3,000 TTEs to the VNA, with over 1,000 moved into PACE for model development.
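The stepwise PACS-to-VNA-to-PACE flow described above can be sketched as a simple staged pipeline. This is a minimal illustration only: the stage names mirror the report, but the function and its audit-log structure are hypothetical placeholders, not the actual DHTS/DCRI tooling.

```python
# Sketch of the semi-automated transfer flow: Philips PACS -> VNA -> PACE.
# The helper below is a hypothetical placeholder for the real transfer tooling.

STAGES = ["PACS", "VNA", "PACE"]

def transfer_study(study_id, stages=STAGES):
    """Move one TTE study through each stage in order, returning an audit log.

    In the real pipeline, each hop would be: query Philips PACS by study ID,
    push the DICOMs to the vendor neutral archive, then pull them into the
    secure PACE enclave for model development.
    """
    log = []
    for src, dst in zip(stages, stages[1:]):
        log.append((study_id, src, dst))
    return log

log = transfer_study("TTE-000123")
# Each study makes two hops: PACS -> VNA, then VNA -> PACE.
```

Keeping the VNA as an explicit intermediate stage is what makes the pipeline reusable for other Philips PACS content, such as Cath Lab angiograms.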

Echocardiographic view identification

As a first step toward model development, we used machine learning to correctly identify different echocardiographic views for analysis, per previously published methods [1]. The views included for LVEF estimation were parasternal short axis, AP 2 chamber, AP 3 chamber, and AP 4 chamber. Our preliminary image modeling approach achieved 98% classification accuracy at the frame level and 80% accuracy at the DICOM level (Figure 1).

Figure 1: Image view classification accuracy (98% at the frame level).
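The distinction between frame-level and DICOM-level accuracy comes from aggregating per-frame predictions into a single label for each clip. One common way to do this, shown here as an assumed illustration rather than the exact published method, is a majority vote over the frames:

```python
from collections import Counter

# The four views used for LVEF estimation in this project.
VIEWS = ["PSAX", "AP2", "AP3", "AP4"]

def dicom_level_view(frame_predictions):
    """Aggregate per-frame view labels into one DICOM-level label by majority vote."""
    counts = Counter(frame_predictions)
    label, _ = counts.most_common(1)[0]
    return label

# A clip whose frames are mostly classified AP4 is labeled AP4 overall,
# even if a few individual frames are misclassified.
print(dicom_level_view(["AP4", "AP4", "AP2", "AP4"]))  # AP4
```

Because a few misclassified frames can flip a clip's label when votes are close, DICOM-level accuracy (80% here) is typically lower than frame-level accuracy (98%).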

Segmentation of the Left Ventricle and Left Atrium

To build the machine learning algorithm for LVEF estimation, we started with segmentation of the LV and LA. In this process, trained cardiac sonographers trace the endomyocardial border of the left ventricular and left atrial cavities (Figure 2). These segmented images can then be used to train the computer model for LVEF estimation. At the time of this report, we have completed segmentation of 23 PROMISE studies. Each segmented study includes four views (short axis, AP2, AP3, and AP4) with three frames per view (systole, mid, and diastole), for a total of 276 successfully segmented frames.

Figure 2: Echocardiogram showing segmentation of the left ventricular and left atrial cavities.
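The frame count follows directly from the study design described above:

```python
studies = 23          # segmented PROMISE studies completed so far
views_per_study = 4   # short axis, AP2, AP3, AP4
frames_per_view = 3   # systole, mid, diastole

total_frames = studies * views_per_study * frames_per_view
print(total_frames)  # 276
```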

LVEF Estimation

We began LVEF estimation by applying a recently published algorithm from UCSF [1] to PROMISE study echocardiograms. While this algorithm worked well for the view identification above, its LVEF estimation was less robust, producing many LVEF estimates of zero and suggesting a potential issue with algorithm generalizability. We have since begun initial estimation of LVEF using a preliminary machine learning algorithm developed from our segmentation work; this algorithm will continue to be updated as ongoing segmentation work is completed.
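Whatever segmentation-derived volumes the final model produces, the target quantity itself has a standard definition: LVEF is the fraction of the end-diastolic volume ejected each beat. A minimal sketch (the function name and volume inputs are illustrative, not part of the project's pipeline):

```python
def lvef_percent(edv_ml, esv_ml):
    """Left ventricular ejection fraction (%) from end-diastolic and
    end-systolic volumes: LVEF = 100 * (EDV - ESV) / EDV."""
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV and EDV > 0")
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# A typical normal heart: EDV 120 mL, ESV 48 mL -> LVEF 60%.
print(lvef_percent(120, 48))  # 60.0
```

This definition also shows why estimates of zero are a red flag: an LVEF of 0% would require no change in cavity volume between diastole and systole, which points to a model or generalizability failure rather than a plausible measurement.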

Impact

During the time of our DIHI award, we have been able to: 1) create a durable solution for transfer of images from the Philips PACS server to the secure PACE environment to allow for image processing; 2) use machine learning to identify the correct echocardiographic view with 98% frame-level classification accuracy; 3) fully segment the LV and LA for 276 unique frames from the PROMISE study; and 4) begin development of a machine learning algorithm to accurately estimate LVEF.

Our immediate next steps include completing the computer algorithm and validating it using both PROMISE and Duke echocardiographic studies. Following model validation, we plan to deploy the model in the Duke North echo lab through an API in the LUMEDX reporting system and, after successful implementation, to expand to other parts of the Duke Health System. Eventually, the algorithm has the potential to be used at outside institutions. Because the PROMISE studies were obtained at numerous institutions, and the pool of potential Duke studies is large, we believe our model will have a generalizability that is not possible with other data sources.

Beyond implementation, we plan to continue model development focused on additional aspects of the TTE study, including valvular heart disease and pericardial disease, as a step toward our overarching goal: a computer algorithm able to identify a fully normal TTE study.

From an output perspective, we plan to file for intellectual property rights upon model validation. We will also submit our work as an abstract for presentation at a national cardiovascular meeting and plan to submit manuscripts to peer-reviewed journals detailing the model development, validation, and implementation phases of our project.