Big Data Takeaways from GPU Technology Conference 2015
The WWT AI and Analytics team spends the majority of its time working with "traditional" big data technologies such as Hadoop, NoSQL and in-memory databases. We have used these technologies to help our customers optimize their data warehouses and ultimately deliver value by implementing the right analytics use cases. While Hadoop, NoSQL and in-memory databases are changing the game for many companies today, we want to know what the technology of tomorrow will be.
One area in which we see great promise is High Performance Computing (HPC), and how it can be leveraged alongside big data technologies. Since NVIDIA Graphics Processing Units (GPUs) play a huge role in many HPC systems today, we have been researching their applicability to our customers' big data use cases.
To learn more about this technology, members of the WWT Big Data Practice traveled to San Jose in March to attend the 2015 GPU Technology Conference.
Goals for the Conference
Before arriving at the conference, we had a feeling that the majority of the talks and products would be geared toward the scientific community rather than business use cases. We knew there was interesting work being done by companies like Google and Facebook, but how applicable would it be to our customers? And if we did want to bring this capability to our customers, how much specialized training would be needed on both the programming and engineering sides?
We were hoping to see three things at this conference that would convince us that GPUs are an area for us to pursue:
- Use cases that can bring immediate business value to our customers
- Products that abstract away the specialized CUDA coding
- Products that will allow our customers to easily scale out these systems as their data grows
Overall, the conference was a very positive experience, and we found some interesting products that touched on the three characteristics mentioned above.
From a machine learning standpoint, we think there was too much focus on deep learning. Deep learning is computationally expensive and benefits immensely from GPUs, but plenty of other algorithms that are extremely valuable to businesses could also be accelerated (e.g., clustering, regression and random forests). In addition, basic database functions such as sorts, joins and group-bys on very large or fast-moving datasets could also benefit from GPUs and would be extremely valuable to our customers.
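To make that concrete, here is a minimal sketch of how a SQL-style aggregation can be expressed with off-the-shelf GPU primitives, in this case NVIDIA's Thrust library. The toy columns and the "sum by key" query are our own hypothetical example, not any vendor's implementation; the point is simply that sorts and group-bys decompose naturally into data-parallel operations.

```cpp
// Illustrative sketch only: a "SUM(value) GROUP BY key" built from Thrust
// data-parallel primitives. Build with: nvcc -std=c++11 groupby_sketch.cu
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <vector>
#include <iostream>

int main() {
    // Two columns of a toy fact table: a grouping key and a numeric measure.
    std::vector<int>   h_keys = {3, 1, 2, 1, 3, 2, 1};
    std::vector<float> h_vals = {1.f, 2.f, 3.f, 4.f, 5.f, 6.f, 7.f};
    thrust::device_vector<int>   keys(h_keys.begin(), h_keys.end());
    thrust::device_vector<float> values(h_vals.begin(), h_vals.end());

    // GPUs excel at sorting; sorting by key brings equal keys together.
    thrust::sort_by_key(keys.begin(), keys.end(), values.begin());

    // reduce_by_key collapses each run of equal keys into one aggregate,
    // which is exactly a SUM(...) GROUP BY.
    thrust::device_vector<int>   out_keys(keys.size());
    thrust::device_vector<float> out_sums(values.size());
    auto ends = thrust::reduce_by_key(keys.begin(), keys.end(), values.begin(),
                                      out_keys.begin(), out_sums.begin());

    int groups = ends.first - out_keys.begin();
    for (int i = 0; i < groups; ++i)
        std::cout << "key " << out_keys[i] << " -> " << out_sums[i] << "\n";
    return 0;
}
```

Both calls launch kernels that sweep the full columns in parallel, which is why these bread-and-butter operations can speed up so dramatically on wide datasets.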
While we do think there was too much focus on deep learning at the conference, it was clear that the time is right for WWT to get more involved in this community.
Here are some of the products that truly stood out.
GIS-Federal's GPUdb
GPUdb hit on all three points mentioned above. The product has been incubating with the U.S. Army since 2009, and that experience with GPUs shines through. GIS-Federal has created a lightning-fast database that can ingest and process data orders of magnitude faster than anything else out there because it was built from the ground up to leverage GPUs. (It can also run on Xeon Phi coprocessors or plain CPUs, but it was purpose-built for GPUs.) In addition, a native geospatial visualization suite is built right into the product, which makes it the tool for any large real-time mapping or tracking use case.
GPUdb gives an organization queryable, real-time insight into its extremely high-volume, high-variety and high-velocity data flows. GIS-Federal has abstracted away all of the low-level code and built in common database operations such as SELECT, JOIN and GROUP BY. These commands can all be performed on the fly over a wide variety of schemas without any upfront indexing.
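As an illustration of why no upfront indexing is required, here is a hedged sketch (our own example, not GPUdb's code or API) of a WHERE-style filter written as a brute-force parallel scan with Thrust: every row is evaluated concurrently, so a full table scan stays fast enough that pre-built indexes are unnecessary.

```cpp
// Illustrative sketch only: a WHERE-clause filter as a parallel scan.
// Build with: nvcc -std=c++11 filter_sketch.cu
#include <thrust/device_vector.h>
#include <thrust/copy.h>
#include <vector>
#include <iostream>

// Predicate applied to every row in parallel: keep values above a threshold.
struct above_threshold {
    float threshold;
    __host__ __device__
    bool operator()(float x) const { return x > threshold; }
};

int main() {
    std::vector<float> h_col = {0.2f, 5.1f, 3.7f, 9.4f, 1.0f, 7.6f};
    thrust::device_vector<float> col(h_col.begin(), h_col.end());

    // Equivalent in spirit to: SELECT value FROM t WHERE value > 4.0
    thrust::device_vector<float> matches(col.size());
    auto end = thrust::copy_if(col.begin(), col.end(), matches.begin(),
                               above_threshold{4.0f});

    for (auto it = matches.begin(); it != end; ++it)
        std::cout << static_cast<float>(*it) << "\n";
    return 0;
}
```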
From an engineering standpoint, they have also significantly reduced the complexity of deploying and maintaining a scaled-out cluster of nodes. Each node can have any number or type of GPUs, and an organization can swap nodes and/or GPUs in and out without having to worry about reprogramming everything.
More details about their abstraction of the low-level coding and their flexible scale-out capabilities are captured in their patent filing.
NVIDIA's DIGITS DevBox
One of NVIDIA's big announcements was the DIGITS DevBox, its packaged product built around GPUs. The DIGITS DevBox is a purpose-built desktop computer with four Titan X GPUs (roughly seven teraFLOPS of single-precision processing power each), Ubuntu 14.04, the open-source DIGITS software, Caffe, Torch, Theano, BIDMach, cuDNN v2 and CUDA 7.0. It plugs into a normal wall outlet and is very user-friendly for building deep learning models. I was able to build and score a model that recognized handwritten digits in a snap!
While this is probably the most horsepower you can find in any desktop computer in the world, I do think the sole focus on deep learning is a bit limiting. I am sure that, with some work, other machine learning algorithms could be brought into the software, but it would have been nice to see them available right off the bat. Also, this product was clearly built for research purposes, not for implementing use cases across an enterprise.
Overall, I loved the simplicity that the DIGITS DevBox offers for getting your hands on deep learning and GPUs. In addition, the price tag of $15,000 was very reasonable. Our customers can benefit from this product by using it as an entry point into the world of GPUs and deep learning. It is perfect for a data science R&D group: they can easily bring in data, build models and then make predictions to see how deep learning can benefit their business before rolling it out across the enterprise.
SYSTAP's MapGraph
MapGraph stands for "Massively Parallel Graph processing" and is a GPU-accelerated take on GraphLab's Gather-Apply-Scatter (GAS) vertex-centric API. The system has been shown to traverse nearly 32 billion edges per second on a 64-node GPU cluster, roughly 100,000 times faster than graph technologies built on key-value stores such as HBase, Titan and Accumulo.
Similar to GPUdb, MapGraph has abstracted away the low-level programming needed for GPUs, making it easy to write graph programs. Read this technical paper for a more detailed understanding of the product.
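For readers unfamiliar with the vertex-centric style, below is a conceptual sketch of a Gather-Apply-Scatter program, using PageRank as the example. The tiny graph and the sequential loop are our own simplification for clarity, not MapGraph's code; MapGraph's contribution is running the same gather/apply/scatter phases as massively parallel GPU kernels over the active vertices.

```cpp
// Conceptual GAS sketch (PageRank), written sequentially for readability.
#include <cstdio>
#include <vector>

struct Graph {
    int num_vertices;
    std::vector<std::vector<int>> in_edges;  // in_edges[v]: sources of edges into v
    std::vector<int> out_degree;             // out_degree[u]: edges leaving u
};

int main() {
    // Toy 4-vertex graph with edges 0->1, 0->2, 1->2, 2->0, 3->2.
    Graph g;
    g.num_vertices = 4;
    g.in_edges   = {{2}, {0}, {0, 1, 3}, {}};
    g.out_degree = {2, 1, 1, 1};

    const float damping = 0.85f;
    std::vector<float> rank(g.num_vertices, 1.0f / g.num_vertices);

    for (int iter = 0; iter < 20; ++iter) {
        std::vector<float> next(g.num_vertices);
        for (int v = 0; v < g.num_vertices; ++v) {
            // GATHER: each vertex sums a contribution from its in-neighbors.
            float sum = 0.0f;
            for (int u : g.in_edges[v])
                sum += rank[u] / g.out_degree[u];
            // APPLY: fold the gathered value into the vertex's new state.
            next[v] = (1.0f - damping) / g.num_vertices + damping * sum;
            // SCATTER (omitted here): a full GAS program would activate only
            // those out-neighbors whose inputs changed for the next round.
        }
        rank = next;
    }

    for (int v = 0; v < g.num_vertices; ++v)
        std::printf("vertex %d: rank %.3f\n", v, rank[v]);
    return 0;
}
```

The appeal of the model is that the programmer only writes the per-vertex gather, apply and scatter logic; the framework decides how to schedule those phases across thousands of GPU threads.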
Cortexica's image recognition and visual search
Cortexica has taken all of the great work being done with GPUs and computer vision and turned it into a fantastic product for the retail space.
Imagine you are sitting at home and you need some new blankets and pillows for your bed. You take a picture of your carpet, your furniture and your curtains. Next you submit these images to your favorite bedding store's website. Within milliseconds, the website sends back suggestions for blankets and pillow cases that match the patterns and colors of the images you sent. This is Cortexica's Image Recognition and Visual Search technology in a nutshell.
This is all powered by GPUs and deep learning, along with several other patented technologies that handle varied lighting, occlusion by other objects, varying object distances, cluttered backgrounds and different viewing angles. This is a mobile-centric product that has been used by companies such as Macy's and eBay. This video from the 2014 GPU Technology Conference gives a great demonstration of how the product works. Enjoy!
Other highlights from the conference
- Learning Atari – Jeff Dean from Google showed how his team had a computer learn to play Space Invaders on Atari. After about 300 tries, it was better than any human!
- John Canny's BIDMach – Professor John Canny's talk on BIDMach, his machine learning library, was eye-opening. It offers a MATLAB-style syntax and is purpose-built for machine learning on GPUs. This could become the de facto tool for data scientists who want to use GPUs.
- The Elon Musk – At the end of the first Keynote, Elon Musk came out to have a conversation with Jen-Hsun Huang, the CEO of NVIDIA. Musk is an icon and a true innovator who had some interesting thoughts about self-driving cars and the future of machine learning.
Final thoughts
Overall, we believe that HPC has a place in the big data space, especially for more sophisticated customers looking to get into streaming and fast machine learning. GPUs in particular are becoming more mature, and the products being built around them are starting to abstract away much of the complexity. There is still plenty of work to be done, but we will be offering this capability to customers interested in accelerating their analytics processes.