We felt it wasn’t fair that only features in our major releases were getting the limelight, so this will be the first in a series of short blog posts highlighting an interesting feature or improvement in our regular minor releases of MapD’s GPU-accelerated Core database and Immerse visualization software.
Back when we started the current incarnation of the MapD Core database, we wrote our own parser (using flex and GNU bison), semantic analyzer, and optimizer.
Continuing where we left off in our earlier post on MapD 2.0’s Immerse visualization client, today we want to walk you through some of version 2.0’s major improvements to our GPU-accelerated Core database and Iris Rendering Engine.
The taxi dataset is one of the most popular on our site, and for good reason: it is not often that you can get behind the wheel of a supercomputer for free.
While 2016 was the year of the GPU for a number of reasons, the truth of the matter is that outside of a few core disciplines (deep learning, virtual reality, autonomous vehicles), the case for using GPUs in general-purpose computing applications remains somewhat unclear.
2016 was a pretty amazing year for MapD. Not only did we launch our company with the announcement of our Series A funding in late March, but we were able to steadily build on that event throughout the year, culminating in the release of version 2.0 of the product just nine months later.
After many months of hard work, refinement and improvement, we’re very happy to announce the release of version 2.0 of the MapD Core database and Immerse visual analytics platform.
A couple of months back we hosted a BrightTALK webinar with Sam Madden, MIT Professor and Chief Scientist at Cambridge Mobile Telematics.
This morning Google Cloud announced the upcoming availability of powerful, innovative GPU instances. As a beta tester of the new offering, we had the opportunity to take the instances for a spin and test them against the 1.2-billion-row taxi dataset.