“We Can’t Afford to Measure PUE”

One of the more interesting phenomena I experience as I travel and talk with customers and industry peers is that a significant number of folks out there believe they cannot measure PUE because they cannot afford, or lack the funding for, the type of equipment and systems needed to properly measure their infrastructure.  As a rule I believe this to be complete hogwash, as there are ways to measure PUE without any additional equipment (I call it SneakerNet, or Manual SCADA).   One can easily look at the power draw off the UPS and compare that to the information in their utility bills.  It's not perfect, but it gives you a measure that you can use to improve your efficiency.  As long as you are consistent in your measurement rigor (regular intervals, same time of day, etc.) you can definitely achieve better and better efficiency within your facility.
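
To make that SneakerNet approach concrete, here is a minimal sketch of the arithmetic, assuming a monthly utility bill and a handful of manual UPS output readings (all values are made up for illustration):

```python
# Rough "SneakerNet" PUE estimate: total energy from the utility bill divided
# by IT energy estimated from manual UPS output readings. All numbers are
# illustrative assumptions, not readings from any real facility.
HOURS_IN_MONTH = 730

utility_bill_kwh = 339_000                  # total facility energy on the monthly bill

# UPS output readings (kW) taken by hand at the same time each week
ups_output_kw = [310, 305, 315, 308]

avg_it_load_kw = sum(ups_output_kw) / len(ups_output_kw)
it_energy_kwh = avg_it_load_kw * HOURS_IN_MONTH

pue_estimate = utility_bill_kwh / it_energy_kwh
print(f"Estimated monthly PUE: {pue_estimate:.2f}")   # ~1.50 with these made-up numbers
```

The absolute number will be off by whatever your UPS losses and metering error happen to be, but if you collect the readings the same way every month, the trend is what matters.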

Many people have pushed back on me, saying that measurement closer to the PDU or rack is more important, and that for that one needs a full-blown branch circuit monitoring solution.   While to me increased efficiency is more about vigilance in understanding your environment, I have struggled to find an affordable solution for folks who want better granularity.

Now that I have been in management for the better part of my career, I have had to confine the inner geek in me to home and weekend efforts.   Most of my friends laugh when they find out I essentially have a home data center comprised of a hodgepodge of equipment I have collected over the years.  This includes racks of different sizes (it has been at least a decade since I have seen a half-height rack in a facility, but I have two!), my personal Cisco certification lab, Linux servers, Microsoft servers, and a host of other odds and ends.  It's a personal playground for me to try and remain technical despite my management responsibilities.  

It's also a great place for me to try out new gear from time to time, and I have to say I found something that might fit the bill for those folks who want a deeper understanding of power consumption in their facilities.   I rarely give product endorsements, but this is something that budget-minded facilities folks might really like to take a look at. 

I received a CL-AMP IT package from the Noble Vision Group to review and give them some feedback on their kit.   The first thing that struck me was that this kit is essentially power metering for dummies.    There were a couple of really neat characteristics out of the box that took many of the arguments I usually hear right off the table.


First, the “clamp” itself is a non-intrusive, non-invasive way to get accurate power metering and results.   This means that, contrary to other solutions, I did not have to unplug existing servers and gear to get readings, or try to install the device inline.  I simply clamped the power coming into the rack (or a server) and POOF! I had power information. It was amazingly simple. Next up: I had heard that clamp-like devices were not especially accurate, so I did some initial tests using an older IP-addressable power strip which allowed me to get power readings for my gear.   I then used the CL-AMP device to compare, and the two were consistently within +/- 2% of each other.  As far as accuracy goes, I am calling it a draw, because to be honest it's a garage-based data center and I am not really sure how accurate my old power strips are.   Regardless, the CL-AMPs gave me a very easy way to get my power readings without disrupting the network.  Additionally, it's mobile, so if I wanted to move it around I could.  This is important for those who might be budget challenged, as the price point for this kit is considerably cheaper than a full-blown branch circuit solution. 
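
For what it's worth, the accuracy check itself was nothing fancy; a sketch along these lines (with hypothetical paired readings) is all it takes to compare one meter against another:

```python
# Compare clamp readings against a reference meter (in my case an old
# IP-addressable power strip) and flag anything outside a +/- 2% band.
# The paired wattage readings below are hypothetical.
paired_readings_w = [(412, 405), (388, 391), (276, 280), (515, 508)]  # (clamp, reference)

for clamp_w, ref_w in paired_readings_w:
    deviation_pct = (clamp_w - ref_w) / ref_w * 100
    status = "OK" if abs(deviation_pct) <= 2.0 else "out of band"
    print(f"clamp={clamp_w} W  ref={ref_w} W  deviation={deviation_pct:+.1f}%  ({status})")
```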

While my experiment was far from completely scientific, and I am the last person to call myself a full-blown facilities engineer, one thing was clear: this solution can easily fill a role as a mobile, quick-hit way to measure power usage and PUE in your facility that doesn't break the bank or force you to disrupt operations or install devices inline.   

\Mm

At the Intersection of Marketing and Metrics, the traffic lights don’t work.

First let me start with the fact that the need for data center measurement is paramount if this industry is to manage itself effectively.   When I give a talk I usually begin by asking the crowd three basic questions:

1) How many in attendance are monitoring and tracking electrical usage?

2) How many in attendance measure datacenter efficiency?

3) How many work for organizations in which the CIO looks at the power bills?

The response to these questions has been abysmally low for years, but I have been delighted by the fact that, slowly but surely, the numbers have been rising.  Not in great numbers, mind you, but incrementally.  We are approaching a critical time in the development of the data center industry and where it (and the technologies involved) will go.  

To that end, there is no doubt that the PUE metric has been instrumental in driving awareness and visibility for the space.  The Green Grid really did a great job in pulling this metric together and evangelizing it to the industry.  Despite a host of other potential metrics out there, PUE has captured the industry given its relatively straightforward approach.   But PUE is poised to be a victim of its own success, in my opinion, unless the industry takes steps to standardize its use in marketing material and how it is talked about. 

Don’t get me wrong, I am rabidly committed to PUE as a metric and as a guiding tool in our industry.   In fact I have publicly defended this metric against its detractors for years.  So this post is a small plea for sanity. 

These days, I view each and every public statement of PUE with a full heaping shovelful of skepticism, regardless of company or perceived leadership position.   In my mind, measurement of your company's environment and energy efficiency is a pretty personal experience.   I don't care which metric you use (even if it's not PUE) as long as you take a base measurement and consistently measure over time, making changes to achieve greater and greater efficiency.   There is no magic pill, no technology, no approach that gives you efficiency nirvana. It is a process that involves technology (both high tech and low tech), process, procedure, and old-fashioned roll-up-your-sleeves operational best practices over time that gets you there. 

With mounting efforts around regulation, internal scrutiny of capital spending, a lack of general market inventory, and a host of other reasons, the push for efficiency has never been greater, and the spotlight on efficiency as a function of the “data center product” is in full swing.  Increasingly, PUE is moving from the data center professionals and facilities groups to the marketing department.  I view this as bad. 

Enter the Marketing Department

In my new role I get visibility into all sorts of interesting things I never got to see in my role managing the infrastructure for a globally ubiquitous cloud rollout.  One of the more interesting items was an RFP issued by a local regional government for a data center requirement.   The RFP had all the normal things you would expect to find, but there was a caveat that this facility must have a PUE of 1.2.  When questioned about this PUE target, the person in charge stated that if Google and Microsoft are achieving this level, they wanted the same thing, and that this was becoming the industry standard.   Of course the realities of differences in application makeup, legacy systems, the fact that it would also have to house massive tape libraries (read: low power density), and a host of other factors made it impossible for them to really achieve this.   It was then that I started to get an inkling that PUE was starting to get away from its original intention.  

You don’t have to look far to read about the latest company that has broken the new PUE barrier of 1.5 or 1.4 or 1.3 or 1.2 or even 1.1.  It's like the space race, except that the claims of achieving those milestones are never really backed up with real data to prove or disprove them.  It's all a bunch of nonsensical bunk.  And it's with this nonsensical bunk that we will damn ourselves with those who have absolutely no clue about how this stuff actually works.  Marketing wants the quick bullet points and a vehicle to allow them to show some kind of technological superiority or green badges of honor.  When someone walks up to me at a tradeshow or emails me braggadocious claims of PUE, they are unconsciously picking a fight with me, and I am always up for the task.

WHICH PUE DO YOU USE?

Let's have an honest, open, and frank conversation around this topic, shall we?  When someone tells me of the latest, greatest PUE they have achieved or have heard about, my first question is ‘Oh yeah?  Which PUE are they/you using?’  I love the response I typically get when I ask the question. Eyebrows twist up and a perplexed look takes over their face.   Which PUE?

If you think about it, it's a valid question.  Are they looking at annual average PUE?  Are they looking at average peak PUE?  Are they looking at design-point PUE?  Are they looking at annual average calculated PUE? Are they looking at commissioning-state PUE?  What is the interval at which they are measuring? Is this the PUE rating they achieved one time at 1:30 AM on the coldest night in January?

I sound like an engineer here, but there is a vast territory of values between these numbers, and all of them, or none of them, may have anything to do with reality.   If you will allow me a bit of role-playing, let's walk through a scenario where we (you and I, dear reader) are about to build and commission our first facility.  

We are building out a 1MW facility with a targeted PUE of 1.5.   After a successful build-out with no problems or hiccups (we are role-playing, remember) we begin the commissioning with load banks to simulate load.   During the process of commissioning we measure a PUE of 1.40, better than our target.  Congratulations, we have beaten our design goal, right? We have crossed the 1.5 barrier! Well, maybe not.  Let's ask the questions: How long did we run the Level 5 commissioning for?  Some vendors burn it in over the course of 12 hours, some a full day.  Does that 1.40 represent the average of the values collected?  Does it measure the averaged peak?  Was it the lowest value?  What month are we in?  Will it be significantly different in July? January?  May?  Where is the facility located?   The numbers over the life of the facility will vary significantly from those at commissioning.  
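
To show just how much wiggle room there is, here is a minimal sketch of how the same commissioning run can yield several different “PUEs” depending on which statistic you quote; the hourly samples are invented purely for illustration:

```python
# One commissioning run, several quotable "PUEs". Hourly samples are invented;
# load banks are assumed to hold IT load flat at 1 MW.
total_kw = [1400, 1380, 1420, 1390, 1410, 1385, 1405, 1430]   # facility input power
it_kw = 1000.0                                                # load banks at 1 MW

pue_series = [t / it_kw for t in total_kw]

average_pue = sum(pue_series) / len(pue_series)   # the 1.40 everyone wants to quote
lowest_pue = min(pue_series)                      # the "one reading at 1:30 AM" number
peak_pue = max(pue_series)                        # what operations has to plan around

print(f"average={average_pue:.2f}  lowest={lowest_pue:.2f}  peak={peak_pue:.2f}")
# -> average=1.40  lowest=1.38  peak=1.43
```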

A few years back when I was at Microsoft, we publicly released the data below for a mature facility at capacity that had been operating and collecting information for four years.  We had been tracking PUE, or at least the variables used in PUE, for that long.  You can see in the chart the variations of PUE.  Keep in mind this chart represents a very concentrated effort to drive efficiency over time.   Even in a mature facility where the load remains mostly constant, the PUE has variation and fluctuation.   Add to that the deltas between average, peak, and average peak.  Which numbers are you using?

[Chart: four years of PUE data for a mature facility (source: GreenM3 blog)]

OK, let's say we settle on using just the average (it's always the lowest PUE number, with the exception of a one-time measurement).  We want to look good to management, right?  If you are a colo company or data center wholesaler, you may even give marketing a look-see to see if there is any value in that regard.    We are very proud of ourselves.  There is much back-slapping and glad-handing as we send our production model out the door.

Just like an automobile, our data center depreciates quickly as soon as the wheels hit the street.  Except that with data centers it's the PUE that is negatively affected.

The Importance of Use

Our brand new facility is now empty.  The load banks have been removed; we have pristine white floor space ready to go.   With little to no IT load in our facility we currently have a PUE somewhere between 7 and 500.  It's just math (refer back to how PUE is actually calculated).  So now our PUE will be a function of how quickly we consume the capacity.  But wait, how can our PUE be so high?  We have proof from commissioning that we have created an extremely efficient facility.   It's all in the math.  It's math marketing people don't like.  It screws with the message.   Small revelation here: data centers become more “efficient” the more energy they consume!  Regulations that take PUE into account will need to worry about this troublesome side effect. 
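
If you want to see just how ugly the math gets, here is a minimal sketch; the fixed overhead figure is an assumption, but the shape of the curve is not:

```python
# Why a nearly empty facility posts an enormous PUE: much of the mechanical
# and electrical overhead runs regardless of IT load, while the IT denominator
# is tiny. The 250 kW overhead figure is assumed purely for illustration.
fixed_overhead_kw = 250   # CRAHs, pumps, UPS losses, lighting, etc.

for it_load_kw in (0.5, 1, 5, 50, 200, 1000):
    pue = (it_load_kw + fixed_overhead_kw) / it_load_kw
    print(f"IT load {it_load_kw:>6} kW -> PUE {pue:,.1f}")
```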

There are lots of interesting things you can do to minimize this extremely high PUE at launch, like shutting down CRAH units or swapping perforated tiles for solid tiles, but ultimately your PUE is going to be much, much higher regardless.  

Now let's look at how the deployment of IT actually ramps in new data centers.  In many cases enterprises build data centers to last them over a long period of time. This means there is little likelihood that your facility will look anything close to your commissioning numbers (with load banks installed).  Add to that the fact that traditional data center construction has you building out all of the capacity from the start.  This essentially means that your PUE is not going to have a great story for quite a bit of time.   It's also why I am high on the modularized approach.  Smaller, more modular units allow you to grow your facility out more efficiently (from a cost as well as an energy efficiency perspective).

So if we go back to our marketing friends, our PUE looks nothing like the announcement any more.  Future external audits might highlight this, and we may fall under scrutiny for falsely advertising our numbers.  So let's pretend we are trying to do everything correctly and have projected that we will completely fill our facility in 5 years.  

The first year we successfully fill 200kW of load in our facility.  We are right on track.   Except that the 200kW was likely not deployed all at once; it was deployed over the course of the year.   Which means my end-of-year PUE number may be something like 3.5, but it was much higher earlier in the year.  If I take my annual average, it certainly won't be 3.5.  It will be much higher.   In fact, if I equally distribute that first year's 200kW over 12 months, my PUE looks like this:

[Chart: month-by-month PUE during the first year of ramp]
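
A chart like that is easy to reproduce for yourself; here is a minimal sketch that assumes a linear fill to 200kW over twelve months and a simple overhead model (a fixed component plus a proportional one). Both assumptions are mine and purely illustrative:

```python
# End-of-month PUE during a first-year ramp to 200 kW in a 1 MW facility.
# Overhead model is an assumption: 300 kW of fixed overhead plus 20% of IT
# load, which conveniently lands on the 1.5 design PUE at the full 1 MW load.
FIXED_OVERHEAD_KW = 300
PROPORTIONAL_OVERHEAD = 0.20

monthly_pue = []
for month in range(1, 13):
    it_kw = 200 * month / 12                               # linear fill to 200 kW
    total_kw = it_kw * (1 + PROPORTIONAL_OVERHEAD) + FIXED_OVERHEAD_KW
    monthly_pue.append(total_kw / it_kw)
    print(f"Month {month:>2}: IT={it_kw:6.1f} kW  PUE={monthly_pue[-1]:5.2f}")

annual_average = sum(monthly_pue) / len(monthly_pue)
print(f"Annual average PUE: {annual_average:.2f}")   # far worse than December alone
```

Even with this tidy toy model, the December number is nearly double the commissioned figure and the annual average is far worse.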

That looks nothing like the PUE we advertised, does it? And I am not even counting the variability introduced by time; these are just end-of-month numbers, so the measurement frequency will have an impact on this number as well.   In the second year of operation our numbers are still quite poor when compared to our initial numbers.

[Chart: month-by-month PUE during the second year of operation]

Again, if I take the annual average PUE for the second year of operation, I am not at my design target, nor am I at our commissioned PUE rating.  So how can firms unequivocally state such wondrous PUEs?  They can't.  Even this extremely simplistic example doesn't take into account that load in the data center moves around based upon utilization, and that equipment almost never draws the power you think it will.  There are lots of variables here. 

Let's be clear: this is how PUE is supposed to work!  There is nothing wrong with these calculations.  There is nothing wrong with the high values.   It is what it is.  The goal is to drive efficiency in your usage.   Falsely focusing on extremely low numbers that are the result of highly optimized integration between software and infrastructure, and making them the desirable targets, will do nothing more than place barriers and obstacles in our way later on.  Outsiders looking in want to find simplicity.  They want to find the quick and dirty numbers by which to manage the industry.   As engineers, you know this is a bit more complex.   Marketing efforts and focusing on low PUEs will only damn us later on.

Additionally, if you allow me to put my manager/businessman hat on, there is a law of diminishing returns in focusing on lower and lower PUE.  The cost of continued integration and optimization starts losing its overall business value, and gains in efficiency are offset by the costs to achieve those gains.  I speak as someone who drove numbers down into that range.   The larger industry would be better served by focusing more on application architecture, machine utilization, virtualization, and like technologies before pushing closer to 1.0. 

So what to do?

I fundamentally believe that this would be an easy thing to correct.   But it's completely dependent upon how strong a role the Green Grid wants to play in this.   I feel that the Green Grid has the authority and responsibility to establish guidelines for the formal usage of PUE ratings.  I would posit the following ratings, with apologies in advance as I am not a marketing guy who could come up with more clever names:

Design Target PUE (DTP) – This is the PUE rating that the design should theoretically be able to achieve.   I see too many designs that have never manifested physically.  This would be the least “trustworthy” rating until the facility or approach has been built.

Commissioned Witnessed PUE (CWP) – This is the actual PUE witnessed at the time of commissioning of the facility.  There is a certainty about this rating as it has actually been achieved and witnessed.  This would be the rating that most colo providers and wholesalers would need to use, as they have little impact on or visibility into customer usage.

Annual Average PUE (AAP) – This is what it says it is.  However, I think that the Green Grid needs to come up with a minimum standard of frequency (my recommendation is data collection at least three times a day) to establish this rating.  You also couldn't publish this number without a full year's worth of data.

Annual Average Peak PUE (APP) – My preference would be to use this, as it's a value that actually matters to the ongoing operation of the facility.  When you combine it with the operations challenge of managing power within a facility, you need to account for peaks more carefully, especially as you approach the end capacity of the space you are deploying.    Again, hard frequencies need to be established here, along with a full year's worth of data.
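
The two annual ratings above are trivial to compute once the sampling frequency is pinned down. A minimal sketch, with a hypothetical log format and interpreting APP as the average of daily peak readings (the Green Grid would need to nail that definition down too):

```python
# Sketch of Annual Average PUE (AAP) and Annual Average Peak PUE (APP) from a
# year of samples taken at least three times a day. The log entries and their
# format are hypothetical; APP is interpreted here as the mean of daily peaks.
from collections import defaultdict
from datetime import date

# Each entry: (sample date, total facility kW, IT kW)
samples = [
    (date(2009, 1, 1), 1480.0, 1000.0),
    (date(2009, 1, 1), 1455.0, 990.0),
    (date(2009, 1, 1), 1530.0, 1005.0),
    # ... and so on, three or more samples per day for a full year ...
]

daily_pue = defaultdict(list)
for day, total_kw, it_kw in samples:
    daily_pue[day].append(total_kw / it_kw)

all_readings = [p for readings in daily_pue.values() for p in readings]
aap = sum(all_readings) / len(all_readings)               # Annual Average PUE

daily_peaks = [max(readings) for readings in daily_pue.values()]
app = sum(daily_peaks) / len(daily_peaks)                 # Annual Average Peak PUE

print(f"AAP={aap:.2f}  APP={app:.2f}")
```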

I think this would greatly cut back on ridiculous claims, or at least get us closer to a “truth in advertising” position.  It would also allow outside agencies to come in and audit those claims over time.  You could easily see extensions to the ISO14k and ISO27k and other audit certifications to test for it.   Additionally, it gives outsiders a peek at the complexity of the space and allows for smarter mechanics that drive for greater efficiency (how about a 10% APP reduction target per year instead?). 

As the Green Grid is a consortium of different companies (some of whom are likely to want to keep the fuzziness around PUE for their own gain), it will be interesting to see if they step into better controlling the monster we have unleashed.  

Let's reclaim PUE and metrics from the marketing people. 

\Mm

Dinner and fireworks

Last night I attended my first Digital Realty Round Table Discussion in London, and it was a fantastic treat and topper for my trip to the UK.   For those of you not familiar with these events, it's an opportunity to discuss the challenges and issues facing the industry in an informal setting.  The events are hosted by Bernard Geoghegan, the General Manager for the European region, who does a great job of MCing the dinner and ensuring that conversation flows.    As the dinner begins, attendees introduce themselves but are not required to mention the firms they work for.   The purpose of this meeting is real, unfiltered conversation.  Selling and product positioning is strictly not allowed, most especially from Digital attendees. 

As I sat around the largest round table in London (literally!) and scanned across the 25 or so attendees, I really did not know what to expect.  I was pretty confident that if no one bothered to offer any conversation points up, Jim Smith (who also attended) and I could probably find some aspect of technology to argue and debate about.  But it didn't take long for the fireworks to come out.   In fact the first person to introduce himself also listed out some of the things concerning him, and that process flowed on to each participant.   By the time we got around the table of introductions we had a healthy list of issues, challenges, and topics to talk about.  So much so that there was absolutely no hope of getting to all of them.

After introductions I kicked off the conversation by diving into data center management measurements.   Currency per kilowatt.  It was a great conversation between those who agreed that this was a good metric and those who did not.  I am not going to go into the details of the topics we discussed in this post. They ranged from data center metrics, data center industry challenges, PUE, data center tiering, cloud services, and managed services, to a host of others. There is way too much to cover, and each will likely end up being its own post.   Let's just say there was no lack of opinion or fervor behind most topics.    Most interesting to me was the variety and representation of the firms around the table.  While many did not identify their specific firms, they did mention that they worked for a bank, a hosting provider, a large retail chain, etc.   It really highlighted to me how diverse our industry is and the range of technology applications we need to solve for.   The pervading thought as I left was that the current regulatory attempts to govern this space are going to be downright disastrous or ineffectual unless those agencies begin to reach out to our industry specifically.   I have a whole post in mind on this, but fair warning – IT IS COMING (it's already here), IT WILL AFFECT YOU – and YOU CANNOT IGNORE IT ANY MORE.

More on that to come.   I would strongly suggest that if you haven't attended one of these events, you think about doing so.   Quite a few of the attendees shared that they learned a great deal through this kind of group therapy.  It was a blast.

 

/Mm

Forecast Cloudy with Continued Enterprise


This post is a portion of my previous post that I broke into two.   It more clearly defines where I think the market is evolving to and why companies like Digital Realty Trust will be at the heart of the change in our industry.

 

The Birth of Utilities, and Why a Century Ago Matters Today and Going Forward…

I normally kick off my vision-of-the-future talk with a mention of history first (my former Microsoft folks are probably groaning at this moment if they are reading this).   I am a huge history buff.  In December of 1879, Thomas Edison harnessed the power of electricity for the first time to light a light bulb.  What's not apparent is that this “invention” was in itself not complete.  To get this invention from that point to large-scale commercial application required a host of other things to be invented as well.   While much ado is made about the successful kind of filament used to ensure a consistent light source, there were no fewer than seven other inventions needed to make electric light (and ultimately the electric utility) practical for everyone: things like the parallel circuit, a durable light bulb, an improved dynamo, underground conductor networks, devices to maintain constant voltage, insulating materials and safety fuses, the light socket, the on/off switch, and a bunch of other minor things.   Once all of these were solved, the first public electric utility was created.  In September of 1882, the first commercial power station, located on Pearl Street in lower Manhattan, opened its doors and began providing light and electrical power to all customers within the “massive” area of one square mile.   This substation was a marvel of technology, staffed with tens of technicians maintaining the complex machinery to exacting standards.   The ensuing battle between direct current and alternating current then began, and in some areas it still continues today. More on this in a bit.

A few years earlier, a host of people were working on what would eventually become known as the telephone.   In the United States this work is attributed to Alexander Graham Bell, and it's that story I will focus on here for a second.  Through trial and error, Bell and his compatriot Watson accidentally stumbled across a system to transfer sound in June of 1875.  After considerable work on refinement the product launched (there is an incredibly interesting history of this at SCRIBD), and after ever more trial and error the first telephonic public utility was created, with the very first central office coming online in January of 1878 in New Haven, Connecticut.  This first central office was a marvel to behold: again, extremely high-tech equipment with a host of people ensuring that the telephonic utility was always available and calls were transferred appropriately.  Interestingly, by 1881 only nine cities with populations above 10,000 were without access to the telephone utility, and only one above 15,000!  That is an adoption rate that remains mind-boggling even by today's standards.  

These are significant moments in time that truly changed the world and the way we live every day.   Today we are at the birth of another such utility: the Information Utility.   Many people I have spoken to claim this “Information Utility” is something different, that it's more of a product because it uses existing utility services. Some maintain that it's truly not revolutionary because it's not leveraging new concepts.   But the same can be said of those earlier utilities as well.   The communications infrastructure we use today, whether telephone or data, has its very roots in the telegraph.  The power utilities have a lot to thank the gas-lamp utilities of the past for solving early issues as well.   Everything old is new again, and everything gets refined into something newer and better.  Some call this new Information Utility the “cloud”, others the Information-sphere, others just call it the Internet.  Regardless of what you call it, access to information is going to be at your fingertips more today and tomorrow than it has ever been before. 

Even though this utility is built upon existing services, it too will have its infrastructure.  Just as the electric utility has its substations and distribution yards, and the communication utilities have central offices, so too will data centers become the distribution mechanism for the Information Utility.   We still have a lot of progress to make as well.   Not everything is invented or understood yet.  Just as Edison had to invent a host of other items to make electricity practical, and Bell and Watson had to develop the telephone and the telephone ringer (or more correctly, thumper or buzzer), so too does our Information Utility have a long way to go.  In some respects it is even more complicated than its predecessors, as they were not burdened with legislation and government involvement affecting their early development.  The “Cloud” is.

And that innovation does not always come from a select few.  Westinghouse and his alternating current eventually won out over direct current because it found its killer app and business case.   Alternating current was clearly technically superior and better for distribution. Westinghouse had even demonstrated generating power at Niagara Falls and successfully transferred that power all the way to Buffalo, New York! Something direct current was unable to do.  In the end, Westinghouse worked with appliance manufacturers to create devices that used alternating current.  By driving those killer apps (things like refrigerators), he prevailed and Edison eventually lost out.  So too will the cloud have its killer apps.  The pending software and services battle will be interesting to watch.  What is interesting to me is that it was the business case that drove adoption and evolution here.  It also modified how the utility was used and designed.   DC substations gave way to AC substations, and what used to take scores of people to support has dwindled to occasional visits and pre-scheduled preventative maintenance.   At the data center level, we cannot afford to think that these killer applications will not change our world.    Our killer applications are coming, and they will forever change how our world does business.  Data centers and their evolution are at the heart of our future. 

On Fogs, Mist, and the Clouds Ahead . . .

After living in Seattle for close to 10 years, you become an expert in three things: clouds, rain, and more clouds.   Unlike the utilities of the past, this new Information Utility is going to be made up of lots of independent cloudlets full of services.  The Microsofts, Googles, and Amazons of the world will certainly provide a large part of the common platforms used by everyone, but the applications, products, content, customer information, and key development components will continue to have a life in facilities and infrastructure owned or controlled by the companies providing those services.    In addition, external factors are already beginning to have a huge influence on cloud infrastructure.  Despite the growing political trend of trans-nationalism, where countries give up some of their sovereign rights to participate in more regionally aware economics and like-minded political agendas, that same effect does not seem to be taking place in the area of taxation and regulation of cloud and information infrastructure, specifically as it relates to electronic or intellectual property entities that derive revenue from infrastructure housed in those particular countries, or that derive revenue from the online activity of citizens of those nations.

There are numerous countries today that have been, or are, seriously engaged in establishing and managing their national boundaries digitally online.  What do I mean by that?  There is a host of legislation across the globe that is beginning to govern the protection and online management of citizens, through legislation and mandates in accordance with each country's own laws.   This is having (and will continue to have) a dramatic impact on how infrastructure and electronic products and services will be deployed, where that data is stored, and how revenue from that activity can and will be taxed by the local country.  This level of state-exercised control can be economically, politically, or socially motivated, and cloud service providers need to pay attention to it.   A great example of this is Canada, which has passed legislation in response to the U.S. Patriot Act.   This legislation forbids personally identifiable information (PII) of Canadian citizens to be housed outside of the boundaries of Canada, or perhaps more correctly, forbids its storage in the United States.    There are numerous laws and pieces of legislation making their way across Europe and Asia as well.   That puts an interesting kink in the idea of a worldwide, federated cloud user base where information will be stored “in the cloud”.  From an infrastructure perspective it will mandate that there are facilities in each country to house that data.  While data storage and retention is an interesting software problem to solve, the physical fact that the data will need to remain in a local geography will require data centers and components of cloud infrastructure to be present.  I expect this to continue as governments become more technically savvy and understand the impact of the rate of change being caused by this technology evolution.

Given that data centers are extremely capital intensive, only a few players will be able to deploy private global infrastructures.  This means that the “information substation” providers will have an even more significant role in driving the future standards of this new Information Utility. One might think that this could be a service ultimately provided by the large cloud providers.   That could be a valid assumption; however, there is an interesting wrinkle developing around taxation, or more correctly exposure to double taxation or multiple-country taxation, that those large providers will face.   In my opinion a federation of “information substation” providers will provide the best balance of offsetting taxation issues while still providing a very granular and regionally acceptable way to service customers. That is where companies like Digital Realty Trust are going to come in and drive significant value and business protection.

I watch a lot of these geo-political and economic developments pretty closely as they relate to data center and infrastructure legislation, and will continue to do so.  But even outside of these issues, the “cloud”, or whatever term you like, will continue to evolve, and the “channels” created by this paradigm will continue to drive innovation at the products and services level.  It's at this level where the data center story will continue to evolve as well.   To start, we need to think about the business version of the IT “server-hugging” phenomenon. For the uninitiated, “server huggers” are those folks in an IT department who believe that the servers have to be geographically close in order to work on them. In some cases it's the right mentality; in others, where the server is located truly doesn't matter.   It's as much a psychological experiment as a technical one.   At a business level, there is a general reluctance to house the company jewels outside of corporate-controlled space.  Sometimes this is regulated (as with banks and financial institutions); most often it's because those resources (proprietary applications, data sets, information stores, etc.) are crucial to the success of the company, and in many instances ARE the company. Not something you necessarily want to outsource control of to others.  Therefore wholesale adoption of cloud resources is still a very, very long way off.  That is not to say that this infrastructure won't get adopted into solutions that companies ultimately use to grow their own businesses.  This is going to drive tons of innovation as businesses evolve their applications, create new business models, and join together in mutually beneficial alliances that will change the shape, color, and feel of the cloud.  In fact, the cloud or “Information Utility” becomes the ultimate channel distribution mechanism.

The first grouping I can see evolving is fraternal operating groups, or FOGs.  This is essentially a conglomeration of like-minded or related industry players coming together to build shared electronic compute exchanges or product and service exchanges.  These applications and services will be highly customized to that particular industry. They will never be sated by the solutions that the big players will be putting into play; they are too specialized.  This infrastructure will likely not sit within individual company data centers, but is likely to be located in common-ground facilities or leased facilities with some structure for joint ownership.   Whether large or small, business-to-business or business-to-consumer, I see this as an evolving sector.  There will definitely be companies looking to do this on their behalf, but given the general capital requirements to get into this type of business, these FOG agreements may be just the answer to finding a good trade-off between capital investment and return on the compute/service.

The next grouping builds off of the “company jewels” mindset and how it could blend with cloud infrastructure.  To continue the overly used metaphor of clouds, I will call them Managed Instances Stationed Territorially, or MISTs.   There will likely be a host of companies that want to take advantage of the potential savings of cloud-managed infrastructure, but want the warm and fuzzy feeling of knowing it's literally right in their backyard.  Imagine servers and infrastructure deployed at each customer data center, but centrally managed by cloud service providers.   Perhaps it's owned by the cloud provider; perhaps the infrastructure has been purchased by the end-user company.   One can imagine container-based server solutions being dropped into container-ready facilities or jury-rigged in the parking lot of a corporate owned or leased facility.  This gives companies the ability to structure their use of cloud technologies and map them into their own use-case scenarios, whatever makes the most sense for them.  The recent McKinsey paper talked about how certain elements of the cloud are more expensive than managing the resources through traditional means.  This is potentially a great hybrid scenario where companies can integrate those services as they need to.  One could even see Misty FOGs or Foggy MISTs.  I know the analogy is getting old at this point, but hopefully you can see that the future isn't as static as some would have you believe.  This ability to channelize the technologies of the cloud will have a huge impact on business costs, operations, and technology.   It also suggests that mission-critical infrastructure is not going to go away but will become even more important and potentially more varied.  This is why I think that the biggest infrastructure impact will occur at the “information substation provider” level.  Data centers aren't going away; they might actually be growing in terms of demand, and the one thing definitely for sure is that they are evolving today and will continue to evolve as this space matures.  Does your current facility allow for this level of interconnectivity?  Do you have the ability to host a mix of solution and management providers in your facility?  Lots of questions, lots of opportunities to develop answers.

The last grouping is potentially an evolution of modern content delivery infrastructure or edge computing capabilities.  I will quit with the cutesy cloud names and call this one, generically, Near Cloud Content Objects.   Given that products, services, and data will become the domain of the entities owning them, and given a general reluctance to wholesale store them in someone else's infrastructure, one could see this proprietary content leveraging the global cloud infrastructures through regional gateways where the owners will be able to maintain ownership and control of their assets.  This becomes even more important when you factor in the economic and geo-political aspects emerging in cloud compute.

In the end, the cloud approach is going to significantly drive data center demand and cause it to evolve even further.  It will not, as some would like to project, end the need for corporate data centers.  Then there is that not-so-little issue of the IT applications and internal company services we use every day.  This leads me into my next point . . .

The Continued and Increasing Importance of Enterprise Data Centers

This post has concentrated a lot on the future of cloud computing, so I will probably tick off a bunch of cloud-fan-folk with this next bit, but the need for corporate data centers is not going away.  They may change in size, shape, efficiency, and the like, but there is a need to continue to maintain a home for those company jewels and to serve internal business communities.  The value of any company is the information and intellectual property developed, maintained, and driven by its employees.   Concepts like FOGs and MISTs still require an ultimate home or location for that work to be terminated into or for results to be sent to.  Additionally, look at the suite of software each company may have in its facilities today supporting its business.  We are at least a decade or more away from being able to migrate those to a distributed cloud-based infrastructure.  Think about the migration costs of any particular application you have, then compound that with the complexity of having your data stored in those cloud environments as well.  Are you then locked into a single cloud provider forever? It obviously requires cloud interoperability, which doesn't exist today with the exception of half-hearted, non-binding efforts that don't actually include any of the existing cloud providers.   If you believe as I do that the “cloud” will actually be many little and large channelized solution cloudlets, you have to believe that the corporate data center is here to stay.  The mix of applications and products in your facilities may differ in the future, but you will still have them.  That's not to say the facilities themselves will not have to evolve.  They will.  With changing requirements around energy efficiency and green reporting, along with the geo-political and other regulations coming through the pipeline, the enterprise data center will still be an area full of innovation as well.  

/Mm

Starting something new….


This post was an interesting struggle for me.  What should my first post since my departure from Microsoft be about?  I have a great number of topics that I definitely want to talk about regarding the distance and gap between the executive suite, Information Technology, and the data center floor, and why there continues to be a challenge in this space across the industry.  In fact I probably have a whole series of them.  I am thinking of calling them “Chiller-side Chats”, aimed at priming both sides in conversations with the other.   There are some industry-wide, metric-related topics that I want to take on, interesting trends I see developing, and a host of other things ranging from technology to virtualization.  While at Microsoft I maintained Loosebolts and an internal Microsoft blog, which as it turns out was quite a bit of work.   I now have time to focus my energies in one place here at Loosebolts, and unfortunately I may subject everyone reading this to even more of my wild ramblings.  But to talk about any of these technical issues, business issues, or industry issues would be ignoring the gigantic, purple-spotted white elephant in the room.   In fact, by the time I finished the original version of this post it was six pages long and ran far afield on what I think is fundamentally changing in the data center space.  Instead of subjecting you to one giant blog post, I was counseled by close friends to cut it down a bit into different sections.  So I will chop it up into two separate posts.  The first question of course is – why did I leave Microsoft for Digital Realty Trust?

I accomplished a great deal at Microsoft and I am extremely proud of my work there.  I have an immense amount of pride in the team that I developed there and the knowledge that it continues to drive that vision within the company.  Rest assured, Microsoft has a great vision for where things are going in that space and the program is on rails, as they say.  My final goodbye post talks more about my feelings there.  Within it, however, are some of the seeds (to continue that farming analogy even further!) of my departure.  First we need to pull our heads out of the tactical world of data centers and look at the larger emerging landscape in which data centers sit.  Microsoft, along with Google, Amazon, and a few others, is taking aim at cloud computing and is designing, building, and operating a different kind of infrastructure with different kinds of requirements, specifically building ubiquitous services around the globe.  In my previous role, I was tasked with thinking about and building this unique infrastructure in concert with hundreds of development groups taking aim at building a core set of services for the cloud.   A wonderful blend of application and infrastructure.  It's a great thing.  But as my personal thought processes matured and deepened on this topic, flavored with what I was seeing as emerging trends in business, technology, and data center requirements, I had a personal epiphany.  The concept of large monolithic clouds ruling the Information-sphere was not really complete.  Don't get me wrong, they will play a large and significant role in how we compute tomorrow, but instead of an oligarchy of the few, I realized that enterprise data centers are here to stay, and additionally we are likely to see an explosion of different cloud types on the horizon.

In my opinion it is here, in this new emerging space, where the Information Utility will ultimately be born and defined, and where true innovation in our industry (data center-wise) will take place.   This may seem rather counterintuitive given the significant investments being made by the big cloud players, but it is really not.   We have to remember that today, any technology must sate basic key requirements.  First and foremost amongst these is that it must solve the particular business problems.  Technology for technology's sake will never result in significant adoption, and the big players are working to perfect platforms that will work across a predominance of applications being specifically developed for their infrastructure.   In effect they are solving for their own issues.  Issues that most of those looking to leverage cloud or shared compute will not necessarily match in either scale or standardization of server and IT environments.    There will definitely be great advances in technology, process, and a host of other areas as a result of this work, but their leveragability is ultimately limited because those environments, while they look like each other's, will not easily map into the enterprise, near-enterprise, or near-cloud space.   The NASA space program has produced thousands of great solutions, and some of them have been commercialized for the greater good.  I see similar things happening in the data center space.  Not everyone can get sub-1.3 average PUE numbers, but they can definitely use those learnings to better their own efficiency in some way.  While these large platforms, in conjunction with enterprise data centers, will provide key and required services, the innovation and primary requirement drivers in the future will come from the channel. 

So Why Digital Realty Trust?

Innovation can happen anywhere, in any situation, but it is most often born under the pressure of constraints.  While there are definitely some constraints that the big players have in evolving their programs, the real focus and attention in the industry will be at the enterprise and information substation provider layer.   This is the part of the industry that is going to feel the biggest pinch as the requirements evolve.  Whether they be political, economic, social, or otherwise, this layer will define what most of the data center industry looks like.   It is at this layer that a majority of companies around the world will be.  It is this layer that will be the most exciting for me personally.  The Moon missions were great, but they were not about bringing space travel to the masses.  Definitely some great learnings there that can be leveraged, but commercializing and solving the problem for the masses is different, perhaps bigger, and in my opinion more challenging.   At the end of the day it has to be economical and worthwhile.  We have to solve that basic business need and use case, or it will remain an interesting scientific curiosity, much like electricity was viewed before the light bulb. 

In Digital Realty Trust I found the great qualities I was looking for in any company.   First, they are positioned to provide either “information substation” or “enterprise” solutions and will need to solve for both.  They are effectively right in the middle of solving these issues, and they are big enough to have a dramatic impact on the industry.  Secondly, and perhaps more importantly, they have a passionate, forward-looking management team whom I have interacted with in the industry for quite some time.  Let me reiterate that passionate point for a moment: this is not some real estate company looking to make a quick buck on mission-critical space.  I have seen enough of those in my career.  This is a firm focused on educating the market, driving innovation in the application of technology, and a near-zealot commitment to driving efficiencies for their customers.  Whether it's their frequent webinars, their industry speaking engagements, or personal conversations, they are dedicated to this space and dedicated to informing their customers.  Even when we have disagreed on topics or issues in the past, it's always been a great, respectful conversation.  In a nutshell, they GET IT.  Another key piece that probably needs some addressing is that bit about the application of technology.  We are living in interesting times, with data center technologies in a wonderful and terrible period of evolution.   The challenge for any enterprise is making heads or tails of which technologies will be great for them, what works, what doesn't, and what's vaporware versus what is truly going to drive value.   The understanding and application of that technology is an area that Digital knows very well, and the scale of their deployments allows them to learn the hard lessons before their clients have to.   Moreover, they are implementing these technologies and building solutions that will fit for everyone, today! 

Another area where there is significant alignment between my own personal beliefs and those of Digital Realty Trust is around speed of execution and bringing capacity online just in time.   It's no secret that I have been an active advocate of moving from big build and construction to a just-in-time production model.  These beliefs have long been espoused by Chris Crosby, Jim Smith, and the rest of the Digital team, and they are very clearly articulated in the POD ARCHITECTURE approach that they have been developing for quite a few years.  Digital has done a great job of bringing this approach to the market for enterprise users and wants to drive it even faster!  One of my primary missions will be to develop the ability to deliver data center capacity, start to finish, in 16 weeks.   You cannot get there without a move to standardizing the supply chain and driving your program toward production rather than pure construction.   Data center planning and capacity planning is the single largest challenge in this industry.  The typical business realizes too late that it needs to add data center capacity, and these efforts typically result in significant impacts to its own business needs through project delays or cost.  As we all know, data center capacity is not ubiquitous, and getting capacity just in time is either very expensive or impossible in most markets.  You can solve this problem by trying to force companies to do a better job of IT and capacity planning (i.e. boiling the ocean), or you can change how that capacity is developed, procured, and delivered.   This is one of my major goals and something I am looking forward to delivering.

In the end, my belief is that it will be companies like Digital Realty Trust at the spearhead of driving the design, physical technology application, and requirements for the global Information Utility infrastructure.  They will clearly be situated the closest to those changing requirements for the largest number of affected groups.  It is going to be a huge challenge. A challenge I, for one, am extremely excited about and can't wait to dig in and get started on.

\Mm

Out of the Box Paradox – Manifested (aka Chicago Area Data Center begins its journey)

Comment on Microsoft’s official launch of the Chicago facility and the announcement of another Microsoft Data Center Experience Conference in the Chicago facility.


With modern conventional thinking and untold management consultants coaching people to think outside the box, I find it humorous that we have actually physically manifested an “Out of the Box Paradox” in Chicago.  

What is an Out of the Box Paradox you ask?  Well I will refer to Wikipedia on this one for a great example:

“The encouragement of thinking outside the box, however, has possibly become so popular that thinking inside the box is starting to become more unconventional.  This kind of “going against the grain means going with the grain” mentality causes a paradox in that there may be no such thing as conventionality when unconventionality becomes convention.”

The funny part here is that we are actually doing this with….you guessed it…..boxes. Today we finished the first phase of construction, and we are rolling into the testing of container-based deployments.  Our facility in Chicago is our first purpose-built data center to accommodate containers on a large scale.  It has been an incredibly interesting journey.  The challenges of solving things that have never been done before are many.  We even had to create our own container specification, one written specifically with the end user in mind to ensure we maximized the cost and efficiency gains possible, not to mention standard blocking-and-tackling issues like standardizing power, water, network, and other interfaces.  All sorts of interesting things have been discovered, corrected, and perfected, from electrical harmonics issues, to streamlining materials movement, to whole new operational procedures.

[Photo: Chicago container spaces with load banks]

The facility is already simply amazing, and it's a wonder to behold. Construction kicked off only one year ago, and when completed it will have the capacity to scale to hundreds of thousands of servers which can be deployed (and de-commissioned as needed) very quickly.  The joke we use internally is that this is not your mother's data center.  You get that impression from the first moment you step into the “hangar bay” on the first floor. The hangar's first floor will house the container deployments, and I can assure you it is like no data center you have ever seen.  It's one more step toward the industrialization of the IT world, or at least the cloud-scale operations space.  To be fair, and it's important to note, only one half of the total facility is ready at this point, but even half of this facility is significant in terms of total capacity.

That “Industrialization of IT” is one of the core tenets of my mission at Microsoft. Throwing smart bodies at dumb problems is not really smart at all. The real quest is how to drive innovation and automation into everything that you do, to reduce the amount of work that needs to be performed by humans.  Dedicate your smart people to solving hard problems.  It's more than a mission; it's a philosophy deeply rooted in our organization.  Besides, industry numbers tell us that humans are the leading cause of outages in data center facilities. 🙂 Our Chicago facility is a huge step in driving that industrialization forward.  It truly represents an evolution and demonstrates what can happen when you blend the power of software with breakthrough, innovative design and engineering. Even for buildings!

[Photo: Chicago container spines being constructed]

I have watched with much interest the back and forth on containers in the media, in the industry, and the interesting uses being proposed. The fact of the matter is that containers are a great “Out of the Box Paradox” that really should not be terribly shocking to the industry at large. 

The idea of “containment” is almost as old as mechanical engineering and thermodynamics itself. Containment gives you the ability to manage the heat, or lack thereof, more effectively in individual ecosystems. Forward-looking designers have been doing “containment” for a long time. So going back to the paradox of the “out of the box is in the box” shift in thinking, the concept is not terribly new.  It's the application at our scale, and specifically to the data center world, which is most interesting.  

It allows us to get out of the traditional decision points common to the data center industry, in that certain infrastructure decisions actually reside in the container itself, which allows for a much quicker refresh cycle of key components and the ability to swap out for the next greatest technology rapidly.  Therefore, by default, it allows us to deploy our capital infrastructure costs much more closely aligned with actual need, versus the large step functions one normally sees in data center construction (build a large expensive facility and fill it up over time, versus building capacity out as you need it).   This allows you to better manage costs, better manage your business, and gives you the best possible ramp for technology refresh.  You don't particularly care if it's AC or DC, if it's water cooled or air cooled.  Our metrics are simple: give us the best performing, most efficient, lowest-TCO technology to meet our needs. If today that's AC, great.  Tomorrow DC?  Fantastic.  Do I want to be able to do a bake-off between the two?  Sure. I don't have to reinvest huge funds in my facilities to make those changes. 

For those of you who have real lives and have not been following the whole container debate, here is a quick recap:

  1. Microsoft is using standard 40-foot shipping containers to deploy servers in support of our Software + Services strategy and our cloud services infrastructure initiatives.
  2. The containers can house as many as 2,500 servers, achieving roughly 10 times the compute density of the equivalent space in a traditional data center.
  3. We believe containers offer huge advantages at scale in terms of both initial capital and ongoing operating costs.
  4. This idea has met some resistance in the industry, as highlighted by my interesting back and forth with Eric Lai from Computerworld magazine. The original article can be found here, with my “Anthills” response here.
  5. Chicago represents one of the first purpose-built container data center facilities ever.

To be clear, as I have said in the past, containers are not for everyone, but they are great for us.

The other important thing is the energy efficiency of the containers. Now, I want to be careful here, as reporting efficiency numbers can be a dangerous exercise in the blogosphere, but our testing shows that our containers in Chicago can deliver an average PUE of 1.22 with an AVERAGE ANNUAL PEAK PUE of 1.36. I break these two numbers out separately because there is still some debate (at least in the circles I travel in) about which of these metrics is more meaningful.  Regardless of your position on that question, you have to admit those numbers are pretty darn compelling.

For the purists and math-heads out there, Microsoft includes house lighting and office loads in our PUE calculation. They are required to run the facility, so we count them as overhead.
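
For anyone who wants to see the arithmetic behind that statement, here is a minimal sketch of a single PUE reading with house lighting and office loads counted in the overhead. Every kW figure below is invented purely for illustration; these are not actual Chicago loads.

    # Minimal PUE sketch -- all kW figures are invented for illustration,
    # not actual Chicago loads.
    it_load_kw  = 10_000   # servers, storage, network gear
    cooling_kw  =  2_000   # mechanical plant
    losses_kw   =    350   # UPS and distribution losses
    lighting_kw =     50   # house lighting (counted as overhead)
    office_kw   =    100   # office loads (counted as overhead)

    total_facility_kw = it_load_kw + cooling_kw + losses_kw + lighting_kw + office_kw

    pue = total_facility_kw / it_load_kw
    print(f"PUE = {pue:.2f}")   # 1.25

Leaving lighting and office loads out of the numerator would flatter the result slightly, which is exactly why we state up front that we count them.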

On the “sustainability” side of containers, it’s also interesting to note that shipping 2,500 servers in one big container meaningfully reduces transportation-related CO2, to say nothing of the packaging material it eliminates.

So in my mind, containers are driving huge cost and efficiency gains (read: cost benefits in addition to “green” benefits) for the business.  This is an extremely important point: as Microsoft expands its data center infrastructure, we must follow an established smart-growth methodology for our facilities, one designed to prevent overbuilding and thus avoid the associated costs to the environment and to our shareholders.  We are a business, after all.  And we must do all of this while also meeting the rapidly growing demand for Microsoft’s Online and Live services.

Containers and this new approach are definitely a change in how facilities have traditionally been developed, and as a result many people in our industry are intimidated by them.  But they shouldn’t be. Data centers have not changed in fundamental design for decades, and sometimes change is good. Any new idea is met with resistance at first, but with a little education things change over time.

In that vein, we are looking at holding our second Microsoft Data Center Experience (MDX) event in Chicago in Spring/Summer 2009.  Our first event, held in San Antonio, was basically an opportunity for a couple hundred Microsoft enterprise customers to tour our facilities, ask all the questions they wanted, interact with our data center experts (mechanical, electrical, operations, facilities management, etc.), and generally get a feel for our approach. It’s not that ours is the right way or the wrong way… just our way.  Think of it as an operations event for operations people, by operations people.

It’s not glamorous: there are no product pitches, no slick brochures, no hardware hunks or booth babes, but hopefully it’s interesting.  That first event was hugely successful, with incredible feedback from our customers. As a result, we decided to do the same thing in Chicago with the very first container data center, which of course makes things a bit tricky.  While the facility will be going through a rigorous testing phase from effectively now onward, we thought it better to ensure that any and all construction activity is formally complete before we move large groups of people through the facility, for safety’s sake.  Plus, I don’t think I have enough hard hats and safety gear for you all.

So if you attended MDX-San Antonio and really want to drill deeper into containers, in a facility custom built for them, or would just like to attend and ask questions, look for details from your Microsoft account management team or your local Microsoft sales office next spring. (Although it’s not a sales event, you are likely to reach someone there faster than by calling into Global Foundation Services directly; after all, we have a global infrastructure to run.)

/Mm

Data Center Think Tanks Sheepish on Weight Loss

Matt Stansbury over at Data Center Facilities Pro published an interesting post about a panel featuring Uptime’s Ken Brill.  The note warns folks about using PUE as a benchmarking standard between data centers.

I can’t say I really disagree with what he says. In my mind, self-measurement is always an intensely personal thing.  To me, PUE is a great self-measurement tool for driving toward power efficiency in your data center.  Do you include lighting?  Do you include your mechanical systems?  To me those questions are not all that dissimilar to the statement, “I am losing weight.”  Are you weighing yourself nude? In your underwear? With your shoes on?

I do think the overall PUE metric could go a little farther and fully define what *MUST* be in the calculation, especially if you are going to use it comparatively.  But those who want to use this metric in some kind of competitive game are completely missing the point.   This is ultimately about using the power resources you have at the highest possible efficiency.    As I have stated over and over, most recently at the Data Center Dynamics conference in Seattle: every data center is different.  If I compared the efficiency of one of our latest-generation facilities in San Antonio or Dublin to a facility built ten years ago, even taking care to compare apples to apples with like systems included, of course the latest-generation facility would come out ahead.   A loss of 5 pounds by an Olympic runner with 4% body fat and a loss of 5 pounds by a professional sumo wrestler have dramatically different effects (or non-effects).

Everyone knows I am a proponent of PUE/DCiE, so when you read this, understand where my baggage is carried.   To me, the use of either or both of these is a matter of audience: engineers love efficiency, business managers understand overhead.   Regardless, what matters is that the measurement is consistent and, more importantly, that it is happening with some regularity. That is more important than anything.
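
To make the “same measurement, two audiences” point concrete, here is a tiny sketch with one invented reading expressed both ways:

    # One invented reading, two presentations of the same measurement.
    total_kw = 13_000   # everything the utility meter sees
    it_kw    = 10_000   # what actually reaches the IT gear

    pue  = total_kw / it_kw    # 1.30 -- "30% overhead" for the business manager
    dcie = it_kw / total_kw    # 0.77 -- "77% efficient" for the engineer

    print(f"PUE = {pue:.2f}, DCiE = {dcie:.0%}")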

If we are going to attempt to use PUE for full-scale facility comparison, a couple of things have to happen.   At Microsoft we measure PUE aggressively and often, which speaks to the time element that Ken mentions in the talk covered in the post.   It would be great for the Green Grid or Uptime or anyone else to produce the “Imperial Standard.”  One could even imagine these groups earning some extra revenue by certifying facilities to the “Imperial PUE standard,” which would include minimum measurement cycles (once a day, twice a day, average for a year, peak for a year, etc.).  Heaven knows it would be a far more useful metric for measuring data centers than the current LEED certifications, but that’s another post for another time.  Seriously, the time element is hugely important.  Measuring your data center once at midnight in January, during the coldest winter on record, might make great marketing, but it doesn’t mean much.
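
To show why the cadence matters, here is a rough sketch of the kind of calculation such a standard might mandate. The once-a-day schedule and the readings themselves are my own placeholder assumptions, not anything Uptime or the Green Grid has defined.

    # Hypothetical cadence: one PUE reading per day for a year.
    # The values below are invented; in practice the list would hold 365
    # entries pulled from your metering.
    daily_pue_readings = [1.18, 1.22, 1.31, 1.36, 1.27]

    annual_average_pue = sum(daily_pue_readings) / len(daily_pue_readings)
    annual_peak_pue    = max(daily_pue_readings)

    print(f"Annual average PUE: {annual_average_pue:.2f}")   # 1.27
    print(f"Annual peak PUE:    {annual_peak_pue:.2f}")      # 1.36

A single midnight reading in January reports only the best-case snapshot; the average and the peak together tell the real story.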

As an industry we have to face the fact that there are morons among us.  This, of course, is made worse when people try to advertise PUE as a competitive advantage, mostly because it means they have engaged marketing people to “enhance” the message.    Ken’s mention of someone announcing a PUE of 0.8 should instantly flag that person as an idiot (a PUE below 1.0 would mean the facility draws less total power than its IT equipment alone, which is physically impossible), and you should hand them an Engvallian sign.    But even barring these easy-to-identify examples, we must remember that any measurement can be gamed.  In fact, I would go so far as to say that gaming measurements is the national pastime of all businesses.

Ultimately I just chalk this up to another element of the “green-washing” our industry is floating in.

Ken also talks about the word “power” being incorrect because it describes a point-in-time measurement rather than a measurement over time, and argues that we should be focused on “energy” instead. According to Ken, this could ultimately doom the metric as a whole.  I think this misses the point entirely on two fronts.   First, whether you call it power or energy, the naming semantics don’t really matter; they matter to English professors and people writing white papers, but in terms of actually doing something, they have no effect.   Second, the simple act of measuring is the most critical concept here: measure something, get better.   Whether you like PUE or DCiE, or whether you want to adopt “energy,” call it EUE, and embrace a picture of a sheep with power-monitoring apparatus attached to its back, the name doesn’t really matter. (Though I must admit, a snappy mascot might actually drive more people to measure.) Just do something!
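
For what it’s worth, the “energy” framing changes very little in practice: instead of averaging snapshots, you weight each reading by the time it covers. A sketch, with invented numbers:

    # Power-based vs energy-based view of the same facility (invented numbers).
    # Each tuple: (hours the reading covers, total facility kW, IT kW)
    readings = [
        (8, 12_500, 10_000),   # overnight
        (8, 13_400, 10_500),   # daytime peak
        (8, 12_900, 10_200),   # evening
    ]

    # The "power" view: a point-in-time PUE for each reading
    spot_pues = [total / it for _, total, it in readings]

    # The "energy" view: one ratio of kWh over the whole period
    total_kwh = sum(hours * total for hours, total, _ in readings)
    it_kwh    = sum(hours * it for hours, _, it in readings)
    energy_ratio = total_kwh / it_kwh

    print([f"{p:.2f}" for p in spot_pues])                # ['1.25', '1.28', '1.26']
    print(f"Energy-weighted ratio: {energy_ratio:.2f}")   # 1.26

Either way you are measuring, and that is the part that actually moves the needle.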

My informal polling at speaking engagements on the state of the industry continues, and I am sad to say the share of people actively measuring power consumption remains below 10% (let alone measuring for efficiency!), and if anything the number seems to be declining.

In my mind, as an end user, the standards bodies and think-tank organizations like Uptime, the Green Grid, and others should really stop bickering over whose method of calculation is better or has the best name.  We have enough of a challenge getting the industry to adopt ANY KIND of measurement.  Confusing matters further by arguing the finer points of absurdity will only magnify the thrash and ensure we keep confusing most data center operators into more inaction.  As an industry we are heading down a path with our gun squarely aimed at our own foot.   If we are not careful, the resulting wound is going to end in amputation.

– MM