February is here. Winter is in full swing on this side of the equator, while summer has a grip on the other. I know it’s been a warm one so far for our friends in Australia. But the snowfall amounts in the Northeast have our ski areas in Northern California and the Rockies green with envy. Such is Mother Nature, right?
Recently, our team announced some important details for our Switching and Wireless products.
I thought I would take the time to let you all know more on these announcements.
First up, Nasser Tarazi, Wireless Product Manager, announced two new models. We will cover the new WAP351 this week and the new WAP131 next week.
The New Cisco WAP351
The new Cisco WAP351 is perfect for conference rooms, classrooms, hospitality, and other flexible deployments. It offers dual-radio (2.4 GHz and 5 GHz) wireless-N connectivity, a 5-port switch with PoE PD and PSE support, Single Point Setup, Captive Portal, and a limited lifetime warranty.
The WAP351 offers something new to the wireless portfolio. Here is a quick Power over Ethernet (PoE) primer. A PoE Powered Device (PD) is one that can be powered through a PD-capable Ethernet port. PoE Power Sourcing Equipment (PSE) is equipment that can supply power to a device connected to a PSE-capable Ethernet port. In terms of power, a standard PoE port can supply a maximum of 15W, while a PoE+ port supplies up to 30W.
Now, back to the WAP351. As mentioned above, the WAP351 supports both PD and PSE. This means that if the WAP351 is connected to a PoE+ switch like the SG300-10PP, the WAP351 can be powered through its PD-capable Ethernet port while at the same time powering a standard PoE device, such as a phone or another AP like the WAP131, through the WAP351’s designated PSE-capable Ethernet port.
More on Wireless Access Points and PoE:
- PoE: Power over Ethernet. PoE enables power and data to be combined onto a single Ethernet cable to power devices such as access points, IP phones, or IP cameras
- PSE on a WAP is exclusive to the new WAP351
- A WAP with PSE is attractive for verticals such as education, hospitality, and smaller offices and meeting rooms where both wired and wireless access is required
- PoE enables WAPs or other endpoint devices to be installed where power typically is not available, such as on a wall or ceiling. This allows for greater flexibility during deployments.
- All Cisco Small Business WAPs support PoE PD
- Dual-Radio WAPs requiring 802.3af PoE power = WAP131, WAP351, WAP561
- Dual-Radio WAPs requiring 802.3at PoE+ power = WAP371, WAP351 when using the PSE with full power budget
- The WAP351 can be powered by a 48V/1.25A external DC power supply if an 802.3af/at PoE switch is not used or available
- The WAP351 can provide 6W of PSE power when powered via 802.3af
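The power figures above can be sanity-checked with a small sketch. This is back-of-the-envelope math based only on the numbers quoted in this post (15W standard PoE, 30W PoE+, 6W PSE budget on 802.3af input), not an official Cisco power calculator, and the function names are my own:

```python
# Hedged sketch: can a WAP351 power a downstream PoE device?
# Wattage figures are taken from the post, not from a data sheet.

POE_MAX_W = 15.0       # standard 802.3af PoE port output, per the post
POE_PLUS_MAX_W = 30.0  # 802.3at PoE+ port output, per the post

def wap351_pse_budget(input_standard: str) -> float:
    """Approximate PSE power budget of the WAP351's downstream port.

    Per the post: on 802.3af input the WAP351 can source about 6W;
    with 802.3at (PoE+) input it can run the PSE at full power budget.
    """
    if input_standard == "802.3af":
        return 6.0
    if input_standard == "802.3at":
        return POE_MAX_W  # a full standard PoE port for the downstream device
    raise ValueError("unknown PoE standard: " + input_standard)

def can_power(device_draw_w: float, input_standard: str) -> bool:
    """True if the downstream device fits in the WAP351's PSE budget."""
    return device_draw_w <= wap351_pse_budget(input_standard)

# A low-draw IP phone (~5W) works even on 802.3af input; a device needing
# a full standard PoE port requires PoE+ upstream of the WAP351.
print(can_power(5.0, "802.3af"))   # True
print(can_power(12.0, "802.3af"))  # False
print(can_power(12.0, "802.3at"))  # True
```

In other words, what the WAP351 can source downstream depends on what it is fed upstream, which is why the SG300-10PP (a PoE+ switch) is the natural pairing.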
OK, you got it? Make sense?
Cisco 300 Series Switches
In other news:
Switching Product Manager Michael Wynh announced several price reductions on the ever-popular 300 Series Switches. This is good news for our customers and channel partners alike. Businesses can maximize their budgets and take advantage of Cisco’s class-leading PoE switching products. For more information on these important updates, please contact your local Cisco Representative or check out our support community.
That is it for now. Thanks for hanging out with us.
Until next time,
Tags: #wireless, access point, Cisco, Cisco Wireless, ethernet, network, PoE ports, port, router, switch, VLAN, wlan
The MDS 9500 family has supported customers for more than a decade, helping them through FC speed transitions from 1G, 2G, 4G, and 8G to 8G advanced without forklift upgrades. But as we look to the future, the MDS 9700 makes more sense for a lot of data center designs. The top four reasons for customers to upgrade are:
- End of Support Milestones
- Storage Consolidation
- Improved Capabilities
- Foundation for Future Growth
So let’s look at each in some detail.
- End of Support Milestones
MDS 4G parts reach End of Support on Feb 28th, 2015. The impacted part numbers are DS-X9112, DS-X9124, and DS-X9148. You can move to the MDS 9500 Advanced 8G cards or to an MDS 9700-based design. A few advantages the MDS 9700 offers over the other existing options are:
a. Investment Protection – Any new data center design based on the MDS 9700 will have a much longer life than the MDS 9500 product family, avoiding EOL concerns or upgrades in the near future. Thus an MDS 9700-based design provides strong investment protection and ensures that the architecture remains relevant for evolving data center needs for more than a decade.
b. EOL Planning – With an MDS 9700-based design you control when to add additional blades, but with the MDS 9500 you will have to either fill up the chassis within six months (End of Life announcement to End of Sale) or leave the slots empty forever after the End of Sale date.
c. Simplified Design – The MDS 9700 allows a single SKU, a single software version, and a consistent design across the whole fabric, which simplifies management. The MDS 9700’s massive performance allows for consolidation, reducing footprint and management burden.
d. Rich Feature Set – Finally, as we will see later, the MDS 9700 provides a host of features and capabilities above and beyond the MDS 9500, and that list will continue to grow.
- Storage Consolidation
The MDS 9700 provides unprecedented consolidation compared to existing solutions in the industry. As an example, with the MDS 9710 customers can use its 16G line-rate ports to support massively virtualized workloads and consolidate their server install base. Likewise, with the 9148S as a top-of-rack switch and the MDS 9700 at the core, you can design massively scalable networks that support consistent latency and 16G throughput independent of the number of links and the traffic profile, allowing customers to scale up or scale out much more easily than with legacy designs or any other architecture in the industry.
Moreover, as shown in the figure above, for customers with MDS 9500-based designs the MDS 9710 offers a higher number of line-rate ports in a smaller footprint and a much more economical way to design SANs. It also enables consolidation with higher performance as well as much higher availability.
- Improved Capabilities
The MDS 9700 design provides enhanced capabilities above and beyond the MDS 9500, and many more will be added in the future. Some examples that are top of mind are detailed below.
Availability: An MDS 9700-based design improves reliability through enhancements on many fronts, as well as by simplifying the overall architecture and management.
- The MDS 9710 introduced a host of features to improve reliability, like the industry’s first N+1 fabric redundancy, smaller failure domains, and hardware-based slow-drain detection and recovery.
- It is well understood that the reliability of any network comes from proper design, regular maintenance, and support. It is imperative that the data center run recommended releases on supported hardware. A data center outage involving unsupported hardware or an unsupported software version is exponentially more catastrophic, because fixing the issue means new procurement and live insertion with no change-management window. The cost of a data center outage is extremely high, so it is important to keep the fabric upgraded, on the latest release, with all components supported. Thus it makes sense to base new designs on the latest MDS 9700 directors rather than, for example, MDS 9513 Gen-2 line cards, which fall out of support on Feb 28, 2015. A mix of hardware and software versions also adds complexity to maintenance and upkeep, with a direct impact on network availability as well as operational complexity.
With massive amounts of virtualization, the user impact of any downtime or even performance degradation is much higher. Similarly, with data center consolidation and higher speeds in edge-to-core connectivity, more host edge ports are connected through the same core switches, so a greater number of apps depend on consistent end-to-end performance for a reliable user experience. The MDS 9700 provides the industry’s highest performance, with 24 Tbps of switching capability. The director-class switch is based on a crossbar architecture with central arbitration and virtual output queuing, which ensures consistent line-rate 16G throughput independent of the traffic profile, with all 384 ports operating at 16G, and without crutches like local switching (much akin to emulating independent fixed fabric switches within a director), oversubscription (which can cause intermittent performance issues), or bandwidth allocation.
MDS directors are store-and-forward switches. This is needed to ensure that corrupted frames do not traverse the network and that end devices don’t waste precious CPU cycles dealing with corrupted traffic; the additional latency hit is acceptable because it protects end devices and preserves the integrity of the whole fabric. And since all ports are line rate, customers don’t have to use local switching, which again adds a small amount of latency but results in a flexible, scalable design that is resilient and doesn’t break down in the future. These two basic design requirements result in a latency number that is slightly higher, but they yield a scalable design, guarantee predictable performance under any traffic profile, and provide much higher fabric resiliency.
Consistent Latency: For MDS directors, latency is the same for a single 16G flow as it is when 384 16G flows are going through the system. The crossbar-based switch design, central arbitration, and virtual output queuing guarantee that. Latency that varies from a few microseconds to a much higher number is extremely dangerous, so the first thing you need to verify is that the director provides consistent and predictable latency.
End-to-End Latency: The performance of any application or solution depends on end-to-end latency. Focusing on the SAN fabric alone is myopic, as the major portion of the latency is contributed by the end devices. As an example, a spinning target’s latency is on the order of milliseconds; against that, a few microseconds in the fabric is orders of magnitude less and hence not even observable. With SSDs, latency is on the order of 100 to 200 µs. Assuming 150 µs, the SAN fabric’s contribution for edge-core is still less than 10%. The majority (90%) of the latency is in the end devices, so saving a couple of microseconds in the SAN fabric will hardly impact overall application performance, while the architectural advantages of CRC-based error drops and a scalable fabric design provide reliable operations.
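The proportion argument can be made concrete with rough numbers. This sketch uses the figures from the paragraph above (150 µs for an SSD, milliseconds for spinning disk) plus an assumed ~10 µs edge-to-core fabric contribution for illustration:

```python
# Rough sketch of the end-to-end latency argument: a few microseconds
# in the SAN fabric is a small fraction of total I/O latency.
# The 10 us fabric figure is an illustrative assumption, not a spec.

def fabric_share(fabric_us: float, device_us: float) -> float:
    """Fraction of end-to-end latency contributed by the fabric."""
    return fabric_us / (fabric_us + device_us)

# Spinning disk: ~5 ms service time dwarfs a ~10 us fabric contribution.
print(round(fabric_share(10, 5000) * 100, 2), "%")  # prints 0.2 %

# SSD: ~150 us service time; the fabric is still under 10% of the total.
print(round(fabric_share(10, 150) * 100, 1), "%")   # prints 6.2 %
```

Shaving a microsecond or two off the fabric moves these percentages very little, which is why the store-and-forward integrity checks are a worthwhile trade.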
For larger enterprises, scalability has been a challenge due to the massive amount of host virtualization. As more and more VMs log into the fabric, the requirement on the fabric to support more FLOGIs, zones, and domains keeps increasing. The MDS 9700 has the industry’s highest scalability numbers, as it is powered by a supervisor with four times the memory and compute capability of its predecessor. This translates into support for higher scalability while providing room for future growth.
Foundation for Future Growth:
The MDS 9700 provides a strong foundation to meet the performance and scalability needs of the data center, and its massive switching capability, compute, and memory will cover your needs for more than a decade.
It will allow you to move to 32G FC speeds without a forklift upgrade or a change of fabric cards (rather, you will need three more of the same fabric card to get line-rate throughput through all 384 ports on the MDS 9710, and 192 on the MDS 9706).
The MDS 9700 allows customers to deploy a 10G FCoE solution today and later move to 40G FCoE, again without a forklift upgrade.
The MDS 9700 is also unique in that customers can mix and match FC and FCoE line cards any way they want, without limitations or constraints.
Most importantly, customers don’t have to make the FC vs. FCoE decision. Whether you want to continue with FC and have plans for 32G FC or beyond, or you are looking to converge two networks into a single network tomorrow or a few years down the road, the MDS 9700 will provide consistent capabilities in both architectures.
In summary, SAN directors are a critical element of any data center. Going back in time, the basic reason for having a separate SAN was to provide unprecedented performance, reliability, and high availability. Data center design has to keep up with the requirements of a new generation of applications, the virtualization of even the highest-performance apps like databases, new design requirements introduced by solutions like VDI, ever-increasing solid-state drive usage, and device proliferation. At the same time, as networks grow increasingly complex, the basic necessity is to simplify configuration, provisioning, resource management, and upkeep. These are exactly the design paradigms that the MDS 9700 is built to solve more elegantly than any existing solution.
Although I am biased in saying so, it seems that you have voted with your acceptance. Please see some more details here.
Live as if you were to die tomorrow. Learn as if you were to live forever.
Tags: 16 Gigabit, 16Gb, 16Gb Fibre Channel, 9500, 9710, architecture, availability, best practices, Cisco, cloud, Cloud Computing, Consolidation, convergence, data center, Data Mobility Manager, DCNM, design, Director, dmm, FCIP, FCoE, Fibre Channel, Fibre Channel over Ethernet, IO accelerator, it-as-a-service, MDS, MDS design, nexus, NX-OS, reliability, SAN, Storage, storage area networks, switch, switching, Unified Data Center, Unified Fabric, upgrade, virtualization
I hope everyone’s week has been a fruitful one.
Like many product teams, ours sometimes has a tendency to keep a large amount of focus on churning out new products with all of the features and performance characteristics our customers want in quality networking products.
While one of our product traits is #useability, we all know there are features that are perhaps not as straightforward to the layman small business owner, even though these savvy folks understand the need to use these features for their businesses to be successful.
So in that spirit, we have assembled a team of young, bright individuals and challenged them with an aggressive list of topics. The first of these is an informative yet light video-on-demand with Quick VPN configuration tips for some of our RV Series router models. Configuring VPNs, even though it sounds exciting (yes, that is a joke), is not always straightforward, so this video should be very helpful for many.
I would like to introduce you to Ruben:
I also wanted to pass along two Cisco Small Business links that house a set of VoDs, produced by our team in Greenville, on topics such as configuring VLANs on the RV320 and RV325 and setting up multiple types of VPNs on the RV130, RV320, RV325, and others. These are a little more technical in nature, but very informative and helpful in getting your small business network configured as needed.
Here is the Cisco Small Business YouTube Page.
And here is the Cisco Small Business Vimeo Page.
Make it a great day.
Tags: growth, network, router, Scalability, small business, switch, VLAN, vpn, wireless
Hope you are all enjoying a productive week. This week I thought it would be prudent to talk about upgrading router (plus switch and wireless access point) firmware. Firmware is software that is embedded on the router, and it is normally updated to include new features and enhancements to the device. All of our firmware upgrades are FREE.
So take a look at a quick Knowledge Base article (based in our fabulous support forum) on upgrading the firmware on the new RV130 and RV130W: https://supportforums.cisco.com/document/12318721/firmwarelanguage-upgrade-rv130-and-rv130w-using-web-interface.
You will need to download the firmware to your computer and connect an Ethernet cable from your computer to your router.
Side note: We have an option, yes there is another way. Check out this blog on FindIt.
Make it great rest of the week.
Tags: Cisco Small Business, code, Firmware, router, switch, upgrade
This is the final part of the High Performance Data Center Design series. We will look at how high performance, high availability, and flexibility allow customers to scale up or scale out over time without any disruption to the existing infrastructure. The MDS 9710’s capabilities are field proven, with wide adoption and a steep ramp within the first year of its introduction. Some customer use cases for the MDS 9710 are detailed here. Furthermore, Cisco has not only established itself as a strong player in the SAN space, with many industry-first innovations such as VSAN, IVR, FCoE, and Unified Ports introduced over the last 12 years, but also holds the leading market share in SAN.
Before we look at some architecture examples, let’s start with the basic tenets any director-class switch should support when it comes to scalability and future customer needs:
- The design should be flexible enough to scale up (increase performance) or scale out (add more ports)
- The process should not disrupt the current installation through recabling, performance impact, or downtime
- Design principles like oversubscription ratio, latency, and throughput predictability (for example, from host edge to core) shouldn’t be compromised at the port level or the fabric level
Let’s take a scale-out example, where a customer wants to add more 16G ports down the road. For this example I have used a core-edge design with four edge MDS 9710s and two core MDS 9710s. There are 768 hosts at 8 Gbps and 640 hosts at 16 Gbps connected to the four edge MDS 9710s, for a total of 16 Tbps of connectivity. With an 8:1 oversubscription ratio from edge to core, the design requires 2 Tbps of edge-to-core connectivity. The two core systems are connected to the edge and to targets using 128 target ports running at 16 Gbps in each direction. The picture below shows the connectivity.
Down the road, the data center requires 188 more ports running at 16G. These 188 ports are added to a new edge director (or to open slots in the existing directors), which is then connected to the core switches with 24 additional edge-to-core links; the same is repeated with 24 additional 16G target ports. The fact that this scale-out is not disruptive to the existing infrastructure is extremely important. In any of the scale-out or scale-up cases there is minimal impact, if any, on the existing chassis layout, data path, cabling, throughput, or latency. As an example, if the customer doesn’t want to string additional cables between the core and edge directors, they can upgrade to higher-speed cards (32G FC, or 40G FCoE with BiDi) and get double the bandwidth on the existing cable plant.
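The arithmetic behind this scale-out example can be sketched in a few lines. All figures come from the example above; this is back-of-the-envelope math, not a design tool:

```python
# Sketch of the scale-out math: edge bandwidth, required edge-to-core
# bandwidth at 8:1 oversubscription, and the incremental links needed.

def tbps(n_ports: int, gbps_per_port: int) -> float:
    """Aggregate bandwidth of n_ports at gbps_per_port, in Tbps."""
    return n_ports * gbps_per_port / 1000.0

# Initial design: 768 hosts at 8G plus 640 hosts at 16G on the edge.
edge_tbps = tbps(768, 8) + tbps(640, 16)
print(round(edge_tbps, 3))            # prints 16.384 (about 16 Tbps)

# 8:1 oversubscription from edge to core.
core_tbps = edge_tbps / 8
print(round(core_tbps, 3))            # prints 2.048 (about 2 Tbps)

# Served by 128 x 16G target ports on the core.
print(round(tbps(128, 16), 3))        # prints 2.048

# Scale out: 188 more 16G host ports need matching core bandwidth,
# which 24 additional 16G ISLs (0.384 Tbps) comfortably cover.
print(round(tbps(188, 16) / 8, 3))    # prints 0.376
```

The point of running the numbers is that the added ports and ISLs preserve the 8:1 ratio without touching the existing cabling or data path.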
Let’s look at another example, where the customer wants to scale up (i.e., increase the performance of the connections). Let’s use an edge-core-edge design for this example. There are 6144 hosts running at 8 Gbps distributed over ten edge MDS 9710s, resulting in a total of 49 Tbps of edge bandwidth. Let’s assume this data center uses an oversubscription ratio of 16:1 from the edge into the core. To satisfy that requirement, the administrator designed the data center with two core switches of 192 ports each, providing 3 Tbps apiece. Let’s assume that at initial design the customer connected 768 storage ports running at 8G.
A few years down the road, the customer may want to add an additional 6,144 8G ports and keep the same oversubscription ratios. This has to be implemented in a non-disruptive manner, without any performance degradation on the existing infrastructure (in either throughput or latency), and without any constraints regarding protocol, optics, or connectivity. In this scenario the host edge connectivity doubles and the host edge bandwidth increases to 98 Tbps. The data center admin has multiple options for bringing the core bandwidth up to 6 Tbps: add more 16G ports (192 more, to be precise), or preserve the cabling and use 32G connectivity for host-edge-to-core and core-to-target-edge connectivity on the same chassis. The admin could just as easily use 40G FCoE at that time to meet the bandwidth needs in the core of the network without any forklift upgrade.
On the other hand, the customer may want to upgrade the hosts to 16G connectivity and keep the same oversubscription ratios. For 16G connectivity the host edge bandwidth likewise increases to 98 Tbps, and the data center administrator has the same flexibility regarding protocol, cabling, and speeds.
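Both scale-up paths land on the same core requirement, which a quick calculation makes clear. The figures are from the example above; the helper function is my own illustration:

```python
# Sketch of the scale-up math: total host edge bandwidth and the core
# bandwidth required to hold a 16:1 oversubscription ratio.

def edge_to_core_tbps(hosts: int, gbps: int, oversub: int) -> tuple:
    """Return (edge bandwidth, required edge-to-core bandwidth) in Tbps."""
    edge = hosts * gbps / 1000.0
    return edge, edge / oversub

# Initial: 6144 hosts at 8G with 16:1 oversubscription.
edge, core = edge_to_core_tbps(6144, 8, 16)
print(round(edge, 3), round(core, 3))    # prints 49.152 3.072

# Doubling the 8G host count, or moving the same hosts to 16G,
# both double the edge bandwidth and hence the core requirement.
print(edge_to_core_tbps(12288, 8, 16))   # ~98 Tbps edge, ~6 Tbps core
print(edge_to_core_tbps(6144, 16, 16))   # same: ~98 Tbps edge, ~6 Tbps core
```

Whether the admin meets that 6 Tbps with more 16G ports, 32G links on the existing cabling, or 40G FCoE is then purely a deployment choice, not an architectural one.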
For either option the disruption is minimal. In real life there will be a mix of requirements on the same fabric, some scale-out and some scale-up; in those circumstances data center admins have the same flexibility and options. With a chassis life of more than a decade, customers can upgrade to higher speeds when they need to, without disruption and with maximum flexibility. The figure below shows how easily customers can scale up or scale out.
As these examples show, the Cisco MDS solution provides customers the ability to scale up or scale out in a flexible, non-disruptive way.
“Good design doesn’t date. Bad design does.”
Tags: 16 Gigabit, 16Gb, 16Gb Fibre Channel, 9710, architecture, availability, best practices, Cisco, cloud, Cloud Computing, Consolidation, convergence, data center, Data Mobility Manager, DCNM, design, Director, dmm, FCIP, FCoE, Fibre Channel, Fibre Channel over Ethernet, IO accelerator, it-as-a-service, MDS, MDS design, nexus, NX-OS, reliability, SAN, Storage, storage area networks, switch, switching, Unified Data Center, Unified Fabric, virtualization