When I started supporting enterprise application platforms at the turn of the century, the model was pretty simple: a single, huge server with expensive high-performance storage behind it. If you outgrew the storage, you could add more, and if you outgrew the server, you bought the next larger box. It was a very monolithic design, and if you’ve seen 2001: A Space Odyssey, you’d be forgiven for worrying about monoliths.
In the last couple of years, a lot of mindshare, and a lot of actual production business-critical applications, have moved toward what could be called "web-scale," with Hadoop infrastructures being the poster child for these models. We've disaggregated performance from capacity, and compute from storage, but some applications are still considered too critical to migrate, which keeps them from taking advantage of these new developments.
While a traditional Hadoop platform lets you spread your storage out over less costly, more granularly scalable devices, it has never been easy to access from pre-Hadoop-era applications. There are also performance and availability concerns when you work with HDFS, especially if your application doesn't speak HDFS natively (and most don't).
Enter MapR and the Converged Data Platform.
Visit my guest post on the MapR blog for more details, and to register for the webcast. We'll talk about the unique joint solution from superpowers Cisco, MapR, and SAP, featuring Cisco UCS 40 Gigabit Ethernet infrastructure. You'll hear from experts at Cisco IT and MapR on why this design makes sense, and how and why Cisco is using it internally. There may even be a mention of toaster pastries.