Edico Genome CEO Pieter van Rooyen recently penned an article for XConomy about the reconfigurable future of health care. As van Rooyen points out, the industry must adopt more powerful computing tools than traditional CPU-based machines if it is to keep pace with the surge of health care data.
“Unless we do so, we will continue to face a data bottleneck that stands in the way of quick answers for healthcare providers and researchers,” he stated.
“Fortunately, an answer exists in a technology that has actually been available for decades – FPGAs, or field-programmable gate arrays – which are now going mainstream.”
As van Rooyen explains, the logic circuit of an FPGA can be replicated many thousands of times, creating a massively parallel computing architecture, in contrast to a CPU that offers only a handful of cores or threads. The result, van Rooyen says, is speed: an output is produced almost instantaneously once an input is applied.
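To make that contrast concrete, here is a rough back-of-envelope throughput model in Python. All of the numbers below are illustrative assumptions (core counts, clock rates, results per cycle), not figures from van Rooyen's article; the point is simply that thousands of replicated pipelines can outpace a few fast cores even at a much lower clock rate.

```python
# Illustrative throughput model: a few fast CPU cores vs. thousands of
# slower, replicated FPGA pipelines. All numbers are assumptions chosen
# for illustration, not measurements.

def throughput(units, results_per_unit_per_second):
    """Aggregate results produced per second across all parallel units."""
    return units * results_per_unit_per_second

# A CPU: a handful of cores, each assumed to finish one result every few hundred cycles.
cpu = throughput(units=8, results_per_unit_per_second=3e9 / 300)

# An FPGA: thousands of replicated pipelines, each assumed to emit one result
# per clock cycle at a much lower clock rate.
fpga = throughput(units=4000, results_per_unit_per_second=250e6)

print(f"CPU:  {cpu:.2e} results/s")
print(f"FPGA: {fpga:.2e} results/s")
print(f"Speedup: {fpga / cpu:.0f}x")
```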
“The use of FPGAs in healthcare is still nascent, but in areas where they are being applied, we are beginning to get a taste of their potential to change how medicine is practiced and health is managed,” he elaborated. “FPGAs provide the ability to analyze a whole human genome with a single computer – down from 80 computers using CPUs – and in a fraction of the time – 22 minutes instead of more than 30 hours.”
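As a quick sanity check on those figures, the time reduction alone works out to roughly an 80-fold speedup, which lines up with the 80-to-1 consolidation in machines:

```python
# Speedup implied by the whole-genome analysis figures van Rooyen cites.
cpu_hours = 30          # "more than 30 hours" with CPU-based machines
fpga_minutes = 22       # 22 minutes with FPGA acceleration

speedup = (cpu_hours * 60) / fpga_minutes
print(f"Speedup: {speedup:.0f}x")   # ~82x, consistent with 80 computers -> 1
```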
According to van Rooyen, Edico Genome created a specialized FPGA-based processor known as DRAGEN for analyzing next-generation sequencing data.
“This chip powers a data analysis platform that is being used to speed turnaround time of genomic tests, resulting in faster diagnoses of critically ill newborns, cancer patients, and expecting parents awaiting prenatal tests,” he continued. “Faster answers also benefit researchers: Scientists at Baylor College of Medicine studying 3-D structures of DNA were able to accelerate by nearly 20-fold the analysis of the massive data sets generated.”
It should also be noted that FPGAs are being used to explore convolutional neural networks. As Nicole Hemsoth of The Next Platform recently reported, Dr. Peter Milder of Stony Brook University and his team have developed an FPGA-based architecture, dubbed Escher, that tackles convolutional neural networks by minimizing redundant data movement and exploiting an FPGA's on-chip buffers and inherent flexibility to speed up inference.
“The big problem is that when you compute a large, modern CNN and are doing inference, you have to bring in a lot of weights—all these pre-trained parameters. They are often hundreds of megabytes each, so you can’t store them on chip—it has to be in off-chip DRAM,” Milder told The Next Platform. “In image recognition, you have to bring that data in by reading 400-500 MB just to get the weights and answer, then move on to the next image and read those same hundreds of megabytes again, which is an obvious inefficiency.”
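The cost Milder describes adds up quickly: re-reading the weights from off-chip DRAM once per image dominates memory traffic. A rough estimate, taking 450 MB as a midpoint of his 400-500 MB figure:

```python
# Rough off-chip traffic estimate for naive per-image inference, using the
# 400-500 MB weight figure Milder cites (450 MB taken as a midpoint).
weights_mb = 450
images = 1000

naive_traffic_gb = weights_mb * images / 1000
print(f"Naive: ~{naive_traffic_gb:.0f} GB of weight reads for {images} images")
```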
The goal, says Milder, is to create an architecture flexible enough to handle whatever type of layer users want to compute.
“What we did with Escher was to produce an accelerator for CNN layers that is flexible enough to work on fully connected and the convolutional layers themselves—and can have batching applied to all of them without the overhead.”
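The batching idea can be sketched in simple terms: read a layer's weights from off-chip memory once, then apply them to a whole batch of inputs before moving on. The following is a hypothetical, software-level Python analogy of that amortization, not Escher's actual hardware design; the function and variable names are illustrative only.

```python
# Simplified sketch of amortizing weight reads across a batch of inputs,
# in the spirit of Escher's batching (hypothetical software analogy; the
# real design is an FPGA accelerator with on-chip buffering).
import numpy as np

def run_layer_batched(inputs, load_weights):
    """Load a layer's weights once, then reuse them for every input in the batch."""
    weights = load_weights()          # one off-chip read per batch, not per input
    return [x @ weights for x in inputs]

def run_layer_naive(inputs, load_weights):
    """Reload the same weights for every input: the inefficiency Milder describes."""
    return [x @ load_weights() for x in inputs]

# Toy example: a fully connected layer with random weights.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))
batch = [rng.standard_normal(512) for _ in range(64)]

out = run_layer_batched(batch, lambda: W)
print(len(out), out[0].shape)         # 64 outputs, each of shape (256,)
# Weight reads: 1 for the batched version vs. 64 for the naive version.
```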
According to Milder, the current interest in FPGAs is staggering.
“Just a few years ago, there would only be a few people working on these problems and presenting at conferences, but now there are many sessions on topics like this. People are seriously looking at FPGA deployments at scale now,” he explained. “The infrastructure is in place for a lot of work to keep scaling this up. The raw parallelism with thousands of arithmetic units that can work in parallel, connected to relatively fast on-chip memory to feed them data, means the potential for a lot of energy-efficient compute at high performance values for deep learning and other workloads.”
Interested in learning more about FPGAs and data acceleration? You can check out our article archive on the subject here.