Each year I pledge to telework along with thousands of others for the annual Telework Week. Today, I worked from my rpod parked in front of my house. My rpod is my personal smart workspace: it provides everything I need to work at home or on the road, with secure mobility capabilities that let me access all my meetings, applications, and collaboration tools to do my job.
In fact, I could have worked from anywhere; I have been teleworking for the past several years.
The Telework Enhancement Act of 2010 (the Act) was signed into law on December 9, 2010. Its objective is to achieve greater flexibility in managing the workforce through telework. Telework programs and best practices give agencies a valuable tool to meet mission objectives while helping employees enhance work/life effectiveness, enabling them to:
- Improve Continuity of Operations, helping ensure that essential Federal functions continue during emergencies such as snowstorms
- Promote management effectiveness by using telework to reduce management costs, environmental impact, and transit costs
- Enhance work-life balance, allowing employees to better manage their work and family obligations
This year's Telework Week 2014 was the fourth annual global effort to encourage agencies, organizations, and individuals to pledge to telework. In total, 163,495 pledges collectively saved $14,003,872 in commuting costs and spared the environment 9,066 tons of pollutants.
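To put those Telework Week totals in perspective, a quick back-of-the-envelope calculation gives the average impact of a single pledge. The totals come from the figures above; the per-pledge averages are derived, not part of the official report.

```python
# Per-pledge averages derived from the Telework Week 2014 totals above.
pledges = 163_495
commute_savings_usd = 14_003_872
pollutants_tons = 9_066

savings_per_pledge = commute_savings_usd / pledges  # dollars saved per pledge
lbs_per_pledge = pollutants_tons / pledges * 2000   # pollutants avoided, in pounds

print(f"Average commuting savings per pledge: ${savings_per_pledge:.2f}")
print(f"Average pollutants avoided per pledge: {lbs_per_pledge:.0f} lb")
```

That works out to roughly $85 saved and about a hundred pounds of pollutants avoided for each participant's week of telework.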
Tags: Bring your Own Device (BYOD), Mobile Government, telework, unified access
Server virtualization has become mainstream and has changed the way resources are provisioned and accessed within the data center. (Did you know that virtual machine shipments now exceed physical server shipments?) Effective measurement and characterization of complex applications in virtual environments is critical to both vendors and customers.
The Transaction Processing Performance Council (TPC) today announced a new industry-standard benchmark suite, TPC-VMS (Virtual Measurement of Single-system), that enables comparison of performance, price-performance, and energy efficiency of database applications in a virtualized environment.
The benchmark suite is built upon existing TPC standards: TPC-C, TPC-E, TPC-H, and TPC-DS. The benchmark test sponsor chooses one of these workloads and runs three equally sized instances of the same workload on three virtual machines on the system under test. The primary performance metric is the result of the slowest of the three instances and is reported as VMStpmC (for TPC-C), VMStpsE (for TPC-E), VMSQphH@Size (for TPC-H), or VMSQphDS@Size (for TPC-DS).
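The "slowest of three" rule means a single lagging VM caps the reported number, which rewards consistent performance across all instances. A minimal sketch of that metric selection follows; the function and variable names are illustrative, not part of the TPC-VMS specification.

```python
def vms_metric(instance_results):
    """Return the TPC-VMS primary metric for the three per-VM workload
    results (e.g. tpmC values for TPC-C): the slowest, i.e. the minimum."""
    if len(instance_results) != 3:
        raise ValueError("TPC-VMS runs exactly three workload instances")
    return min(instance_results)

# Example: three TPC-C instances measured at 41,000, 39,500, and 40,200 tpmC.
print(vms_metric([41_000, 39_500, 40_200]))  # reported VMStpmC is 39500
```

Note how the two faster instances contribute nothing to the headline metric; only raising the slowest VM's throughput improves the reported result.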
(1) TPC-VMS Press Release
(2) TPC-VMS Specification
(3) W. Smith, "Characterizing Cloud Performance with TPC Benchmarks," LNCS vol. 7755, Springer, 2013
(4) P. Sethuraman and R. Taheri, "TPC-V: A Benchmark for Evaluating the Performance of Database Applications in Virtual Environments," LNCS vol. 6417, Springer, 2010
Tags: benchmark, TPC-VMS, virtualization
Physical servers lend the comfort of knowing where your data is located and having control over access to and protection of that data. But from a business perspective, virtualization has a lot to offer. So what is the security trade-off, and is it worth the switch to a cloud environment?
While the cloud is an "open environment," with no physical equipment holding data in a hard-and-fast location, there are security measures that can be taken. Understanding how your technology is being used, and who might be interested in accessing the stored information, is an important step in protecting against security threats. It is also important to consider what type of cloud you are using: public, private, or hybrid. Once you have analyzed these factors thoroughly, you can integrate security controls into your architecture to view, manage, and control vulnerabilities and threats.
Finally, you must consider trust. How the technology is used depends on users, devices, applications, and data. Security policies and controls can be determined and put in place after establishing how and why the data may be accessed. Intel's Vice President and Chief Information Security Officer explains in more detail the significance of trust and of avoiding security breaches. Read what he has to say.
You may also want to take advantage of our upcoming webcast to see how industry peers are solving the very challenges cloud adopters face. Tune in on December 6 at 9:00 a.m. PST to hear from Cisco UCS customers Xerox and FICO Corporation about how and why they use Cisco UCS in their cloud environments.
Tags: cloud, cloud security, cloud_computing
When building a cloud, scale it out.
Cisco Intelligent Automation for Cloud architecture and topology options enable scalability, availability, and geographic distribution. This white paper discusses several options, their strengths and uses, and the technical details underlying these options.
The Cisco IAC Availability, Scalability, and Geographic Distribution white paper is available in the Cisco support community (login required).
Here’s an excerpt:
Cisco Intelligent Automation for Cloud (IAC) is a software-based solution for managing hardware infrastructure tasked with delivering various IT services as a service (XaaS). Cisco IAC provides configuration "content" to help customers rapidly deploy self-service-enabled IT services on certain hardware architectures. Consulting services from Cisco Advanced Services or Cisco delivery partners can use the IAC infrastructure to create custom services for customers. This white paper discusses the software underpinnings of these services and deployment options that provide scalability and resilience for large enterprises or service providers.
The major platform products that make up IAC and are relevant to a discussion of scaling and resiliency are:
- The Cisco Cloud Portal – The dynamic, tailored end-user web site where customers and administrators can browse available services and options, and order new services or changes to existing services. This element consists of a web tier which interacts with the browser to expose the Portal UI and an application tier which includes the Portal and Service Catalog. The Service Catalog provides the menu of available services, including new-service and update-service requests, as well as definitions and configurations for roles, business rules, dynamic form rules, and entitlement.
- Cisco Process Orchestrator – The delivery engine that makes the Move/Add/Change/Delete (MACD) changes to the steady-state configuration of the computing, network, storage, and application infrastructure (“Infrastructure”) needed to deliver the requested new service or service change. Orchestrator processes automate workflows which interact with applications, systems, and devices in the environment.
- Database – stores configuration, state, and runtime information from the above systems.
- Cisco Network Services Manager (NSM) Server – a specialized engine for network provisioning. Cisco Network Services Manager’s policy-driven approach allows clouds to be created within single or multiple network Points of Delivery (PoDs), each with potentially different and unique offerings and operational behaviors.
- Cisco NSM Controller – a local element near network devices within a network PoD which performs direct device interactions to achieve network provisioning at the direction of the NSM Server.
- Cisco Server Provisioner – provides bare metal provisioning (remote installation) of an OS or hypervisor on a physical or virtual server, as well as bare metal imaging for system cloning and backup.
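To make the division of labor among these components concrete, here is an illustrative sketch of how a self-service order might flow from the Cloud Portal through the Process Orchestrator to the network and server provisioning layers. This is not Cisco code; all class and method names are hypothetical, for orientation only.

```python
# Hypothetical sketch of an IAC-style provisioning flow (not Cisco code).

class NSMServer:
    """Policy-driven network provisioning; in the real product, a local
    NSM Controller inside the network PoD performs the device work."""
    def provision_network(self, service):
        print(f"NSM: configuring network PoD for '{service}'")

class ServerProvisioner:
    """Bare-metal installation of an OS or hypervisor on the target server."""
    def install_os(self, service):
        print(f"Provisioner: installing OS image for '{service}'")

class ProcessOrchestrator:
    """Runs the Move/Add/Change/Delete (MACD) workflow against the
    infrastructure to deliver the requested service."""
    def run_workflow(self, service):
        print(f"Orchestrator: provisioning workflow for '{service}'")
        NSMServer().provision_network(service)
        ServerProvisioner().install_os(service)
        return "service delivered"

class CloudPortal:
    """Web/app tier: end users browse the Service Catalog and place orders."""
    def submit_order(self, service, orchestrator):
        print(f"Portal: order received for '{service}'")
        return orchestrator.run_workflow(service)

status = CloudPortal().submit_order("vm-small", ProcessOrchestrator())
print(status)  # service delivered
```

The point of the sketch is the layering: the Portal only captures the order, the Orchestrator owns the workflow, and the NSM and Server Provisioner components each handle one slice of the infrastructure, which is what lets each tier be scaled or made resilient independently.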
Tags: Cisco Intelligent Automation for Cloud, cloud, Cloud Computing, Cloud Management, data center, intelligent automation, orchestration, unified management
When Cisco announced the CRS (Carrier Routing System) in 2004, many analysts and other observers thought it overkill. Some said that Cisco would not sell more than 50.
To date, the number is greater than 8000.
That would seem to fall into the category of "exceeding expectations."
And just how did Cisco do this? In part, by continually staying ahead of the game with enhancements – never waiting for traffic loads, customer demands or other circumstances to force it into catch-up mode.
Today, Cisco continued that practice with further enhancements to the industry-leading CRS platform.
Cisco announced that GTS Central Europe (GTS CE), a leading provider of integrated telecommunications solutions and data center services in Central and Eastern Europe, has deployed the CRS for its next-generation Internet core. Cisco's new elastic core networking capabilities enable service providers such as GTS CE to cost-effectively launch and scale revenue-generating services in minutes instead of months. The solution includes the industry's first integrated coherent 100 Gbps IP over DWDM and Cisco's nLight™ technology for the CRS.
Cisco’s nLight technology converges IP and optical transport networks by introducing programmability to minimize network complexity while maximizing service intelligence and monetization opportunities. This capability significantly reduces network total cost of ownership and is a key element of the Cisco Open Network Environment (ONE) framework.
In related news, Cisco and BT also recently conducted a landmark 100G DWDM trial.
Tags: Carrier_Routing_System, Cisco, core_routing, CRS, DWDM, ip, ONE, Optical, service_provider, SP, tco, total_cost_of_ownership