I was chatting with a customer the other day who was struggling with some of the implications of “cloud computing”. The analogy that finally made sense to them is what I will call “cloud dining”. I am the cook in the house and I am tasked with feeding the family. If my 10-year-old is lobbying for Italian, I can cook at home or order out. The decision may also vary from day to day. For instance, I might not have all the ingredients and have to order out, or, like this weekend, it may be 103 outside and cooking at home is not all that appealing.

Now, the same can be said for supporting a given application in a cloud computing environment. In a fully implemented Data Center 3.0 environment, you can decide if an app is run locally (cook at home) or in someone else’s data center (take-out), and you can change your mind on the fly if you are short on data center resources (pantry is empty) or you are having environmental/facilities issues (too hot to cook). In fact, with automation, a lot of this can be done with policy and real-time triggers. For example, during month-end processing, you might always shift non-critical apps offsite, or if you pass a certain cooling threshold, you might ship certain processing offsite.

James Gardner had an interesting post about this, which got me thinking. What if you could compare the cost of running a workload across locations and handle it wherever it is most cost-effective? If energy costs are spiking in California today because of a heatwave, ship the workload somewhere cooler. James talks about a futures market for MIPS. I think he might be on to something.

Somewhere in this data center arbitrage model, there is also a business opportunity, since someone is going to have to help customers find the best cost for data center resources and intermediate the transaction. Hmmm…..
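To make the policy-and-triggers idea a bit more concrete, here is a minimal sketch of what a policy-driven placement decision might look like. Everything here is an illustrative assumption: the `Site` type, the cooling threshold, the spot energy prices, and the `place_workload` function are all hypothetical, not a real scheduler API.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    energy_cost_per_kwh: float  # assumed spot energy price
    temperature_f: float        # assumed current facility temperature

MAX_SAFE_TEMP_F = 95.0  # hypothetical cooling threshold that triggers offload

def place_workload(local: Site, remote_sites: list[Site]) -> Site:
    """Run locally unless a trigger fires; otherwise pick the cheapest
    remote site that is still below the cooling threshold."""
    candidates = [s for s in remote_sites if s.temperature_f < MAX_SAFE_TEMP_F]
    if not candidates:
        return local  # nowhere cooler to go; stay home
    cheapest = min(candidates, key=lambda s: s.energy_cost_per_kwh)
    if local.temperature_f >= MAX_SAFE_TEMP_F:
        return cheapest  # trigger fired: too hot to "cook at home"
    # No trigger: move the workload only if a remote site is actually cheaper
    if cheapest.energy_cost_per_kwh < local.energy_cost_per_kwh:
        return cheapest
    return local
```

With made-up numbers, a California heatwave (103°F, expensive power) would shift the workload to the cheapest cool site, while on a mild day with cheap local power the app would stay put. The arbitrage/intermediary idea in the post is essentially this comparison done continuously, across many buyers and sellers.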