Anwar Ghuloum recently posted to the Intel Research blog about the conversations they are having with developers around developing for multi-core and terascale environments. To quote Anwar: "Ultimately, the advice I'll offer is that these developers should start thinking about tens, hundreds, and thousands of cores now in their algorithmic development and deployment pipeline."

For a while now, our perspective has been that multi-core processors and the complex workloads they enable will be a driver for the migration to 10GbE. The sheer volume of information these multi-core-capable apps will be able to consume, process, share, and produce will accelerate the uptake of 10GbE. That being said, there is also a qualitative aspect to all this. Raw bandwidth is not enough--we need granular, dynamic control of that bandwidth. At any given point in time, a workload might be reading data from one disk, writing out to a cache on a second disk, sending information to another server to be processed, and finally sending information to a user--and moment by moment the bandwidth requirements for each of these operations are going to shift.

We need infrastructure that can handle both the quantitative and qualitative aspects of these demands. In the long run, I believe this is one of the areas where unified fabric will prove its worth--it gives you a way to cost-effectively handle the aggregate bandwidth requirements of these multi-core environments while giving you the granularity to make sure that bandwidth is being used effectively.
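To make the "granular, dynamic control" point concrete, here is a toy sketch of how a fabric scheduler might divide a link among competing operations using max-min fairness. The operation names, the link capacity, and the demand figures are all hypothetical illustrations, not a description of any actual product's allocation algorithm:

```python
def max_min_fair(capacity, demands):
    """Split link capacity across flows with max-min fairness:
    flows whose demand fits within an equal share are fully satisfied,
    and the capacity they leave unused is split among the rest."""
    alloc = {name: 0.0 for name in demands}
    remaining = capacity
    unsatisfied = dict(demands)
    while unsatisfied and remaining > 1e-9:
        share = remaining / len(unsatisfied)  # equal share of what's left
        done = []
        for name, demand in unsatisfied.items():
            grant = min(share, demand - alloc[name])
            alloc[name] += grant
            remaining -= grant
            if alloc[name] >= demand - 1e-9:
                done.append(name)  # demand fully met; free up its share
        for name in done:
            del unsatisfied[name]
    return alloc

# Hypothetical snapshot of one moment in the workload described above,
# on a 10 Gb/s link (all numbers in Gb/s):
snapshot = {"disk_read": 2, "cache_write": 3, "peer_send": 8, "user_send": 8}
alloc = max_min_fair(10, snapshot)
```

Rerunning the allocator each time the demand snapshot changes is what "moment by moment" control looks like in this sketch: the small disk-read flow keeps its full 2 Gb/s, while the larger flows split the remainder evenly.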