Intel and Cisco are two of the first companies you think of when discussing the Internet of Things – and with good reason. Both companies are at the forefront of bringing the power of connectivity to unexpected places. I had the good fortune of standing on stage today at the Intel Developer Forum with my friend Doug Davis, VP & GM of Intel’s Internet of Things (IoT) Group, to talk about how our companies are working together.
The possibilities for an IoT world are practically endless, so Cisco and Intel are joining forces to focus on a number of areas where IoT can make an immediate impact. Take energy management, a hot-button issue as energy costs rise and corporations try to reduce their environmental footprints. Consumers, communities, and businesses are all starting to realize that energy awareness makes sense from both an economic and an environmental standpoint. Cisco research shows that smart buildings are poised to generate $100 billion by reducing energy consumption through the integration of HVAC and other systems, lowering operating costs.
Using Intel architecture, Cisco EnergyWise, and the IP network, we are creating solutions that get to the root of the problem: identifying where energy is being used excessively. The integration of our technologies exposes both IP and non-IP appliances to greater analytics and control. It also introduces the opportunity to add discrete sensors to these devices, granting even greater visibility into and control of building systems. These efforts will enable building operators to achieve their green, sustainability, and cost-saving objectives while maintaining a safe, secure, and comfortable environment for occupants and tenants.
This is just a small example of what Cisco and Intel can achieve by identifying where IoT can make a big impact and then delivering a solution. We are currently joining forces to focus our efforts on networking, API management, and security to help us scale IoT solutions into multiple segments. However, we realize that Cisco and Intel can’t do this without the help of the developer community. By opening up APIs and providing development tools, we enable developers to create previously unexplored use cases for our technology. The true driving force for the Internet of Things will come directly from developers who create solutions they will actually use, on a platform that lets them share those solutions with others.
To that end, Cisco DevNet is a new and growing developer community that offers developers the tools and resources to integrate their software with Cisco infrastructure. Developers can tap the DevNet ecosystem and use its tools and community to create innovative network-aware applications. The DevNet portal features more than 100 fully documented APIs, with more being added each week. We hope DevNet provides a space where the Internet of Things can grow, and where true value can be discovered.
Cisco and Intel are tackling the challenge of creating, testing, and validating the most relevant use cases for the Internet of Things across multiple verticals, and we are documenting and sharing best practices from practical field experience to broadly promote the development of the market. It is an incredibly exciting time for the Internet of Things – Cisco and Intel are standing on the edge of true innovation, ready to take the plunge.
Data traffic has grown dramatically in recent years, leading to increased deployment of network service appliances and servers in enterprise, data center, and cloud environments. To address the corresponding business needs, network switch and router architecture has evolved to support multi-terabit capacity. However, service appliance and server capacity has remained limited to a few gigabits, far below switch capacity.
ITD (Intelligent Traffic Director) is a hardware-based, multi-terabit-per-second Layer 4 load-balancing, traffic-steering, and clustering solution on the Nexus 7xxx series of switches. It supports IP stickiness, resiliency, NAT (EFT), VIP, health monitoring, sophisticated failure-handling policies, N+M redundancy, IPv4, IPv6, VRF, weighted load balancing, bi-directional flow coherency, and IP SLA probes, including DNS. No service module or external appliance is needed. ITD provides order-of-magnitude CAPEX and OPEX savings for customers. It is available on the Nexus 7000/7700 series in NX-OS 6.2(8) or later, and is available for demo on the Nexus 5000/6000. ITD is far superior to legacy solutions like PBR, WCCP, ECMP, port channels, and Layer 4 load-balancer appliances.
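As a rough illustration of how little configuration ITD requires, here is a minimal sketch of an ITD service on NX-OS (the device-group name, node IP addresses, and ingress interface are hypothetical; see Cisco's ITD configuration guide for the exact command set on your release):

```
feature itd

! Define the cluster of service nodes to load-balance across
itd device-group WEB-SERVERS
  node ip 10.10.10.11
  node ip 10.10.10.12
  probe icmp                     ! health monitoring; failed nodes are taken out of rotation

! Bind the device group to traffic arriving on an interface
itd WEB-SERVICE
  device-group WEB-SERVERS
  ingress interface ethernet 1/1
  load-balance method src ip     ! source-IP hashing gives IP stickiness
  no shutdown
```

With this in place, the switch itself steers flows across the nodes at line rate, which is why no external load-balancer appliance or service module is needed.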
I speak with many business leaders about “the cloud” and how best to use it to improve collaboration. Quite often, discussions end up getting into specific services and technologies, but I always try to ensure that some basic considerations are a primary focus – namely People, Processes, and Culture. This video offers a great overview of how important it is to get the foundations right, and what questions you should ask before you start looking for a specific solution or technology.
The Three Considerations
People are your company’s greatest asset, and you need to enable them fully and effectively. Increasingly, they “vote with their feet”: they use their own solutions, or those provided directly by their departments, instead of official IT options (shadow IT). For many reasons public cloud services are a big hit, but you can’t afford for the virtualized environment you have painstakingly created to be used only for functional or legacy workloads. Nobody can afford a discrete, separate, underutilized platform that goes unappreciated, its value hidden.
In this week’s episode of Engineers Unplugged, Cisco’s CTO, Padmasree Warrior (@padmasree) and Satinder Sethi (VP, UCS Product Management and Data Center Solutions) whiteboard the UCS Grand Slam announcement, and what it means for customers and for the modern data center. Don’t miss this one!
It wouldn’t be Engineers Unplugged without a unicorn challenge, and Padma and Satinder delivered!
This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
Episodes will publish weekly (or as close to it as we can manage)
Rick van der Lans is data virtualization’s leading independent analyst. So when he writes a new white paper, any enterprise struggling to connect all its data (which is pretty much every enterprise) would be wise to check it out.
Rick’s latest is Data Vault and Data Virtualization: Double Agility. In a nutshell, the paper addresses how enterprises can craftily combine the Data Vault approach to modeling enterprise data warehouses with the data virtualization approach for connecting and delivering data. The result is what Rick calls double agility as each approach accelerates time to solution in complex data environments.
Data Vault Pros and Cons
Adding new data sources such as big data and cloud to an existing data warehouse is difficult. The Data Vault approach provides the extensibility required. This is the first agility.
Unfortunately, from a query and reporting point of view, developing reports straight from a Data Vault-based data warehouse results in complex SQL statements that almost always lead to poor reporting performance. The reason is that Data Vault models distribute data across a large number of tables.
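To make that concrete, consider a hypothetical Data Vault model (all table and column names here are illustrative, not from the paper). Even a basic "which customers placed which orders" report must join hubs (business keys), a link (the relationship), and satellites (the descriptive attributes):

```sql
-- Sketch of a Data Vault query: five tables for one simple question.
-- hub_* hold business keys, link_* hold relationships, sat_* hold attributes.
SELECT hc.customer_key,
       sc.customer_name,
       ho.order_key,
       so.order_date,
       so.order_total
FROM   hub_customer        hc
JOIN   sat_customer        sc ON sc.customer_hkey = hc.customer_hkey
JOIN   link_customer_order lo ON lo.customer_hkey = hc.customer_hkey
JOIN   hub_order           ho ON ho.order_hkey    = lo.order_hkey
JOIN   sat_order           so ON so.order_hkey    = ho.order_hkey;
```

Multiply this join pattern across every measure and dimension in a real report, and both the SQL complexity and the performance cost become clear.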
Losing Agility Due to Data Mart Proliferation
To solve the performance problems with Data Vault, many enterprises have built physical data marts that reorganize the data for faster queries.
Unfortunately, valuable time must be spent on designing, optimizing, loading, and managing all these data marts. And any new extensions to the enterprise data warehouse must be re-implemented across the impacted marts.
Data Virtualization Returns the Agility
To avoid the data mart workload, yet retain agile warehouse extensibility, Rick has worked with Netherlands-based system integrator Centennium and Cisco to provide a better, double-agility alternative.
In this new solution, Cisco Data Virtualization, together with a Centennium-defined data modeling technique called SuperNova, replaces all the physical data marts. No valuable time has to be spent designing, optimizing, loading, managing, and updating these derived marts. Data warehouse extensibility is retained, and because reporting is based on virtual rather than physical models, those models are easy to create and maintain.
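In data virtualization terms, each derived mart becomes a virtual view defined over the Data Vault tables. As a sketch (Cisco Data Virtualization uses its own modeling environment rather than plain SQL, and the SuperNova technique itself is not shown; the table names are hypothetical), the idea looks like this:

```sql
-- A "virtual data mart": a view that presents a flat, report-friendly
-- shape to BI tools while the data physically stays in the Data Vault.
CREATE VIEW mart_customer_orders AS
SELECT hc.customer_key,
       sc.customer_name,
       so.order_date,
       so.order_total
FROM   hub_customer        hc
JOIN   sat_customer        sc ON sc.customer_hkey = hc.customer_hkey
JOIN   link_customer_order lo ON lo.customer_hkey = hc.customer_hkey
JOIN   hub_order           ho ON ho.order_hkey    = lo.order_hkey
JOIN   sat_order           so ON so.order_hkey    = ho.order_hkey;
```

Extending the warehouse then means updating view definitions rather than redesigning and reloading physical marts, which is where the second agility comes from.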
Meet Rick van der Lans at Data Virtualization Day
To learn more about this innovative solution, as well as data virtualization in general, come to Data Virtualization Day 2014 in New York City on October 1. Rick, along with the equally sharp Barry Devlin, will join me on stage for the Analyst Roundtable. I hope to see you there.
To learn more about Cisco Data Virtualization, check out our page.