We invited William Moore, CTO at CareCore National, to share his thoughts on how cloud and big data are impacting the healthcare industry. Read the related blog, “It’s a Boy!”
Now that the initial frenzy of the cloud revolution is settling, solid applications are providing a glimpse of the potential of cloud computing to change daily life for the better. In my industry, healthcare, the cloud is not simply transforming existing processes, but actually enabling new decision-making models that simply weren’t possible before.
Why Electronic Medical Records Fell Short
The healthcare industry made an earlier attempt at transformation with electronic medical records (EMRs). The original notion was that individual physician practices could justify the investment in servers, software, and maintenance based on efficiency gains. Then we’d bubble up the health records data from multiple organizations, and it would be a Shangri-La moment for chronic disease models, coordinated care, care duplication, and more.
But reality fell short of the mark. Many physicians’ offices are really small businesses at heart. They were hard pressed to afford EMR infrastructure and all that went with it. Efficiency gains are minuscule at best if you simply print out patient charts each morning, place them on that same old clipboard, mark them up with a ballpoint pen, and then have the office manager enter the new information into the EMR system to print out next time.
Without a critical mass of EMR infrastructure, developers lacked the incentive to create standards and unifying protocols. And the lack of protocols prevented meaningful sharing of data.
Even if some of your healthcare providers do use EMRs, it’s rare that all of your providers can see your records. Connecting EMRs among more than a handful of physician practices is neither technically feasible nor appropriate.
After chatting with other cohorts in the industry (namely Greg Ferro, aka Ethereal Mind, of PacketPushers fame; Stephen Foskett of Tech Field Day fame, who also writes his own blog; and Ivan Pepelnjak, who, amongst other things, writes an excellent blog on networking topics), it’s clear that trying to stay up to speed with everything going on in the data center is pretty near impossible because of the sheer volume of information and a signal-to-noise ratio that sometimes gets out of hand. While all of us, in our own little ways, are doing our part to help address this through blogging, white papers, seminars and the like, we wanted to try something a little different to really dig into some of the more complex topics, and came up with the concept of Virtual Symposiums:
Focused discussion on a single topic
Panel discussion with industry experts (or at least folks that believe themselves to be experts)
Focus on helping attendees understand the “how” and “why” of different technologies as opposed to advocating a particular perspective
Interactive and open to audience participation--the goal is to get your questions answered
Our first symposium, next Tuesday, is going to cover storage convergence. We are lucky to have J Metz (@drjmetz) joining our panel to lend his FCoE and storage expertise. We will also be joined by special guest Stu Miniman (@stu), Principal Analyst from Wikibon, who brings perspectives shaped by over a decade in the storage market. We are going to spend approximately the first half of the symposium discussing the storage options out there--FC, FCoE, and iSCSI--and when we think each one makes sense (or not). As I noted above, the goal is not to push a particular technology agenda, but to educate you and let you make your own decisions. The balance of the session is open for Q&A--we will cover some of the common questions that we see all the time, but we expect you, the audience, to drive a lot of the discussion.
So, mark your calendar and join us--if you are familiar with this crew, you know it will be both educational and entertaining.
Some people say that in the next few years, Infrastructure as a Service cloud deployments will be focused mostly on private clouds, and that enterprises will migrate to public clouds after they have become “experienced” in running a cloud. About a year ago I could really see this story playing out. Now, fifteen months after we introduced Cisco Intelligent Automation for Cloud, I have some different points of view. I would have thought that by now private cloud architectures would have begun to converge to a few standard patterns. This has not happened. The world is still diverging when it comes to both private and public cloud architectures.
I do see patterns arising in successful cloud deployments, and here are some of the key ones:
#5: Pragmatic Approach: IT shops that come with a long list of RFP requirements and questions take a long time to source a technology provider and to achieve production success. Others that are pragmatic (can I say Agile?) in their approach get to cloud more quickly and learn from their successes and missteps alike.
#4: They Have a Cloud Instance Roadmap: After a cloud deployment, some IT organizations think that’s it, they are done, next project, my move to cloud is complete. Hold it right there: did you know that cloud is not a single step where you throw a switch, but a succession of deployments of greater scope from one step to the next? A roadmap is needed that covers: hardware, network, storage infrastructure, virtualization technology and release version, management and orchestration software version, and finally the services that you are offering to end users and how the service catalog is changing over time. Those that have a roadmap roughed out are generally more successful than those that have a big-bang perspective.
#3: Appreciation for the Challenge of Change Management: Moving to cloud is a big change in an operating model; careers are created and new roles are defined. How does an organization move to the new model with different technology, processes, and people? When a team proactively manages the non-technical aspects of the change, they ensure long-term success. It is not just about self-service, cloud catalogs, orchestration, domain management, and virtualization. It is more about service designers, automation authors, and changes in operational processes.
#2: Rise of the Cloud Architect: Since cloud is a new operating model, a new position and role is needed. If you have a cloud project and do not have a cloud architect tying it all together--from cost models, to hypervisors, to orchestration and orderable service definitions--you need an organizational role tune-up ASAP.
#1: A Service-Centric Approach: Most people get this one right away. Service-centric projects are the key focus for ITaaS. However, I can’t tell you how many times, when I am talking to an IT team, the opening bell results in a speeds-and-feeds conversation around provisioning this piece of infrastructure and that virtualization API. If you ask what services they want to offer their end users for self-service ordering, you will get a request for more time to answer that question. Service-centric IT shops take the time to start first with the business requirements and the end user’s point of view. Transform your cloud project into a service-centric, agile project and you will go far.
We’re in the sporting and cultural capital of Australia this week for Cisco Live! Did you know that Melbourne is the only city in the world that has five international standard sporting facilities surrounding its central business district?
Cisco Intelligent Automation for Cloud is a cloud management and orchestration software solution that complements Cisco UCS and Nexus to provide self-service on-demand provisioning of IT resources. This new solution is becoming as ubiquitous as the sporting facilities in Melbourne. Cisco partners including Alphawest / Optus, CSC, and VCE are also showcasing our Intelligent Automation for Cloud software in action at their booths.
Essentially, this solution will help you tackle the challenge of deploying infrastructure-as-a-service – and adopt an IT-as-a-Service (ITaaS) strategy. Here’s a short analyst video on delivering ITaaS with Cisco Intelligent Automation:
A key component of Cisco’s Unified Data Center and our virtual networking portfolio is the Nexus 1010 virtual services appliance. We were excited last month when we announced a more scalable version, the Nexus 1010-X. As I pointed out before, the idea of a virtual services appliance is to provide a dedicated hardware platform for running a wide range of network services, monitoring, and security virtual machines, rather than having them share server resources with key business applications. From an administrative point of view, these network services VMs can be managed by the networking team, rather than the teams running VMs on the application servers, which is the right division of labor. The Nexus 1010 platform runs NX-OS and basically looks like a network device rather than a VM host, helping the network admins manage the service policies.
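Because the appliance presents itself as an NX-OS device, hosting a service VM looks like familiar switch configuration rather than hypervisor administration. As an illustrative sketch only--the blade name, ISO filename, and VLAN IDs below are placeholders, not details from this post--defining a virtual service blade follows this general shape:

```text
! Illustrative NX-OS-style configuration; names, ISO, and VLANs are assumptions
switch# configure terminal
switch(config)# virtual-service-blade VSB-EXAMPLE
switch(config-vsb-config)# virtual-service-blade-type new nexus-1000v.iso
switch(config-vsb-config)# interface control vlan 100
switch(config-vsb-config)# interface packet vlan 101
switch(config-vsb-config)# enable
```

The point is the operational model: the network team stays in the CLI and policy world it already knows, while the service VM itself is provisioned underneath.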
Now we are releasing a case study of an Italian service provider, FASTWEB, who is using the Nexus 1010 to simplify the management of their virtual network, and network service policies. As part of a sustained and forward looking strategy, the Italian service provider has built a next-generation network for delivering converged voice, video, data and mobile services. This investment has enabled FASTWEB to accelerate the creation of new, differentiated offers for business and residential customers, while reducing operational complexity and overhead.
The Nexus 1010 supports network analysis down to the VM layer, giving FASTWEB’s network administrators granular visibility to virtual workloads, without having to trouble the storage and virtualization operations teams.