A few weeks ago, California was rocked by an event small in magnitude, but one that sent a wake-up call that will hopefully be heard around the country. Three fiber-optic cables were cut beneath a manhole cover along a city street. Those three cables disabled every cell phone, landline, and internet connection for over 80,000 residents in the South Bay of Silicon Valley. Even more alarming, emergency public agencies, including fire departments, police departments, and even hospitals, were completely cut off from the outside world and from their sister agencies.

These cities learned first-hand what life is like without communication. Signs were placed on the streets informing residents that if they had a medical emergency, they should drive themselves to the hospital. Police officers were to be flagged down in the street, and in some cases curfews were debated for general safety if communication could not be restored. Police departments could not even run fingerprints or check databases on suspects they did detain. All this from cutting three cables snaking underneath public streets.

When companies talk about building systems, and not just about combining individual technologies, this is hopefully in the back of their minds. When building a system, it is easier to look for single points of failure and for failovers, not within each individual technology but across entire systems. This is why architects take so long with their designs: not because they are slow, but because, hopefully, they are being thorough. The consequences of not being thorough can be devastating.
As I think about application delivery services in the network, I wonder if they are really needed given the increased speeds networks themselves provide today. Does squeezing out a bit more throughput through caching, compression, content distribution, content-based routing, protocol optimization, and XML processing really matter when users are seeing fiber to the home? The answer must be yes, or else there wouldn't be a market for these services. I realize now that the primary need comes from the divide between developers and network administrators. When an application doesn't perform as expected, the developers say the network needs to provide more bandwidth, and the network people say the application code isn't optimized for running over the network. It has always been both sides blaming the other, and the people affected are the users of the application, who are subjected to a lower quality of experience.

Then I realized that while users have expectations of experience, so do developers. Developers are under tight deadlines as companies look to be more agile and more distributed in a global economy. Basic features that would optimize code receive lower priority; the assumption is that these requirements will be handled elsewhere. The "elsewhere" could be the web server or the web client, since most have caching built in, but the network is best positioned to support and provide these services to developers, if they will plan accordingly and work with the network team. There is a convergence of applications and the network. Architects must specify configuration parameters that indicate to developers and provisioners when these services are activated. They may also specify best-practice formatting conventions that have to be observed. This will ensure the application delivery services are available in the network and are actually being used, reducing development time and improving the quality of experience.
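To make the "does it still matter" question concrete, here is a minimal sketch of why compression remains worthwhile even on fast links. The payload below is invented for illustration, but the effect is typical: repetitive text such as XML or HTML often shrinks dramatically, which means fewer bytes on the wire regardless of how fast that wire is.

```python
# Quick illustration of the payoff from compression, one of the
# application delivery services discussed above. The sample payload
# is hypothetical, but markup-heavy text typically compresses well.
import zlib

payload = b"<item><name>widget</name><price>9.99</price></item>\n" * 200
compressed = zlib.compress(payload, level=6)

ratio = len(compressed) / len(payload)
print(f"{len(payload)} bytes -> {len(compressed)} bytes ({ratio:.1%} of original)")
```

On a highly repetitive payload like this one, the compressed size is a small fraction of the original; the same service in the network delivers that saving without any change to the application code.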
Today’s infrastructure architecture is becoming more sophisticated and is regarded by many as one of the main pillars of information technology. The IT infrastructure consists of the foundational building blocks on which applications and business processes run; it provides generic services that can be used by multiple applications. The network is a key infrastructure element that provides such services, called network-based services. In other circles they may also be called infrastructure and/or SOI services. Network-based services may be further decomposed into atomic or composite services; in many cases it is just a matter of taxonomy. For example, "application acceleration" is a composite service that may be comprised of caching, compression, protocol optimization, and content-based routing, which may be further decomposed into very specific functions such as static or dynamic caching. Transparent network-based services require no direct interaction with an application but enable functionality for the application.

Just because these services exist doesn’t mean they are going to be used. Consider not only the aforementioned acceleration services, but also security services such as encryption, day-zero mitigation, intrusion detection and prevention, and anomaly detection. Think of communication services like multimedia bridging, session control, session records, and topology management. There are also virtualization services that provide load balancing, VLANs, VPNs, and VSANs. These examples just start to demonstrate the capabilities provided by the network. The challenge is that while application developers may know these services exist, they still develop the same functionality into their applications, because the governance is not in place, nor is the communication (or dare I say collaboration) between the developers and network administrators. Look at application acceleration: headers must be properly formed and the network made aware for the service to operate properly.
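As a sketch of what "properly formed headers" might mean in practice, the snippet below builds HTTP response headers that allow a shared cache in the network to store and revalidate an object. The helper function and its parameters are invented for illustration; the header names themselves are standard HTTP caching headers.

```python
# Hypothetical illustration: for a network cache (a shared HTTP
# intermediary) to accelerate an application, the application must
# emit well-formed caching headers; missing or malformed headers
# force every request back to the origin server.
from email.utils import formatdate

def cacheable_headers(max_age_seconds, last_modified_ts, etag):
    """Build response headers that let a shared cache store the object."""
    return {
        # "public" permits shared (network) caches, not just the browser
        "Cache-Control": f"public, max-age={max_age_seconds}",
        # Validators let the cache revalidate cheaply instead of refetching
        "Last-Modified": formatdate(last_modified_ts, usegmt=True),
        "ETag": etag,
        # Cache compressed and uncompressed variants separately
        "Vary": "Accept-Encoding",
    }

headers = cacheable_headers(3600, 1_240_000_000, '"v1"')
print(headers["Cache-Control"])  # public, max-age=3600
```

This is exactly the kind of best-practice convention an architect can document once so that developers and the network team are working from the same playbook.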
Network-based services are becoming more sophisticated, and as applications continue to be highly distributed, they will require a library of centralized and standardized services to ensure compatibility with each other. In the end, a solid enterprise architecture practice is needed: one that documents a catalog of network-based services, together with a change in culture that brings the application and infrastructure teams together to best execute the IT strategy that delivers the business vision.
Good enterprise architecture helps ensure that business strategy and IT investments are aligned. IT resources that are documented and/or modeled to contribute to an architectural description become artifacts. As organizations expand and affiliate, their enterprise architecture becomes more complex. Forward-looking enterprises want architectures that will meet the current and future needs of their organization, with a catalog of artifacts that are agile and adaptable in supporting business capabilities.

As technology continues to grow, new resources for architecture artifacts are becoming available on what seems like a daily or even hourly basis, and some come from unexpected sources. The network has gradually become an important provider of architectural services. Classic transport networks have been optimized to increase throughput, availability, and configurability, and are increasingly application-aware. But many enterprise architects still think of the network layer as infrastructure for transport and remain unaware of the new application-related, value-added services that today’s network can provide.

When I think of network-based services that are application-centric, I think of two types. The first type is exposed services: those with a well-defined and well-documented set of APIs that create new functionality in applications through a request/response model. Examples include presence and location information, which the network is able and ready to give in real time and which an application can then use to streamline a business process. The second type is transparent services: those that don’t require an explicit call from an application but are enabled implicitly through best practices and configuration options. Examples include application acceleration and virtualization services.
These are services that, in a properly architected enterprise, perform as expected. By properly cataloging and documenting network-based services, an enterprise architect will have new artifacts to simplify and standardize many of the capabilities IT must provide to support business processes. Explore the network-based services that are offered today; you may be surprised at what Cisco and the network have to offer.
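A minimal sketch of what consuming an exposed service could look like: an application asks the network for a user's presence before deciding where to route work. Every name here, the `route_task` function, the status strings, the stand-in lookup table, is invented for illustration; a real deployment would call the vendor's documented presence API.

```python
# Hypothetical sketch of an "exposed" network-based service: the
# application makes a request/response call for presence information
# and uses the answer to streamline a business process.

def route_task(user, presence_lookup):
    """Deliver the task to the user directly if available, else queue it."""
    status = presence_lookup(user)      # request/response call to the network
    if status == "available":
        return f"deliver:{user}"
    return "queue:helpdesk"             # fall back when busy or unreachable

# Stand-in for the network's presence service, for local experimentation
fake_presence = {"alice": "available", "bob": "in-a-meeting"}.get

print(route_task("alice", fake_presence))   # deliver:alice
print(route_task("bob", fake_presence))     # queue:helpdesk
```

The point of the sketch is the shape of the interaction: the network already holds the real-time answer, so the application only has to ask rather than build and maintain its own presence tracking.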
You would never think we find ourselves in the middle of a downturn when you hear the enthusiasm with which people promote and discuss everything related to collaboration in the enterprise. If you have doubts about it, check out the recap of last week’s VoiceCon in Orlando here.

There you have it: collaboration is en vogue. Only two years ago, the vision was Unified Communications (UC); now it is all about collaboration architecture. Moreover, collaboration is one of those topics that everybody can relate to -- at this point in time, we have all enjoyed a variety of technologies and tools that allow us to reach out to the people we work with in better, richer ways. We have been visually wowed by TelePresence, we have connected with people we know on social networks, we have contributed to wikis, we have written blogs, we have laughed at the latest YouTube video a colleague distributed, we have joined web conferences, we have shared online workspaces -- you name it. So the collaboration discussion is often about what worked and what didn’t work for us, and about consuming the next collaboration service.

And there is merit in discussing the human communication aspect of collaboration -- of course it is a key part of the discussion. But let us not kid ourselves -- enterprises are not looking into collaboration because they want employees to have more fun and richer experiences. You can rest assured that CEOs, CFOs, and CIOs are sitting together in one room, thoughtfully scratching their chins, going, “Look here, if I align all this stuff optimally with my business processes, I can squeeze massive productivity gains out of my resources (aka employees).” Because yes, by extending the reach of the enterprise with innovative tools, employees can more effectively reach other employees, customers, or partners and drive business processes forward from anywhere, and do so much faster.