Through an idea and force of will, he created an industry…

This week the Data Center Industry got the terrible news it knew might be coming for some time: Ken Brill, founder of the Uptime Institute, had passed away. Many of us knew that Ken had been ill for some time and, although it may sound silly, were hoping he could somehow pull through it. Even as ill as he was, Ken was still sending and receiving emails and staying in touch with this industry that, quite frankly, he helped give birth to.

I was recently asked about Ken and his legacy for a Computerworld article, and it really caused me to stop and re-think his overall legacy and gift to the rest of us in the industry. Ken Brill was a pioneering, courageous, tenacious visionary who, through his own force of will, saw the inefficiencies in a nascent industry and helped craft it into what it is today.

Throughout his early career experience, Ken was able to see the absolute siloing of information, best practices, and approaches that different enterprises were developing around managing their mission-critical IT spaces. While certainly not alone in the effort, he became the strongest voice and champion to break down those walls, help others through the process, and build a network of people who would share these ideas amongst each other. Before long an industry was born, sewn together through his sometimes delicate, sometimes not-so-delicate cajoling and, through it all, his absolute passion for the Data Center industry at large.

One of the last times Ken and I got to speak in person.

In that effort he also created and permeated the language that the industry uses as commonplace. Seeing a huge gap in terms of how people communicated and compared mission-critical capabilities, he became the klaxon of the Tiering system, which essentially normalized those conversations across the Data Center Industry. While some (including myself) have come to think it's time to re-define how we classify our mission-critical spaces, we all have to pay homage to the fact that Ken's insistence and drive for the Tiering system created a place and a platform to even have such conversations.

One of Ken's greatest strengths was his adaptability. For example, Ken and I did not always agree. I remember an Uptime Fellows meeting back in 2005 or 2006 or so in Arizona. In that meeting I started talking about the benefits of modularization and reduced infrastructure requirements augmented by better software. Ken was incredulous, and we had significant conversations around the feasibility of such an approach. At another meeting we discussed the relative importance or non-importance of a new organization called 'The Green Grid' and whether Uptime should closely align itself with those efforts. Through it all Ken was ultimately adaptable. Whether it was giving those ideas light for conversation amongst the rest of the Uptime community via audio blogs or other means, Ken was there to have a conversation.

In an industry where complacency has become commonplace, where people rarely question established norms, it was always comforting to know that Ken was there acting the firebrand, causing the conversation to happen.   This week we lost one of the ‘Great Ones’ and I for one will truly miss him.  To his family my deepest sympathies, to our industry I ask, “Who will take his place?”


\Mm

The Cloud Cat and Mouse Papers – Site Selection Roulette and the Insurance Policies of Mobile Infrastructure


It's always hard to pick exactly where to start in a conversation like this, especially since this entire process really represents a changing life-cycle. It's more of a circular spiral that moves out (or evolves) as new data is introduced than a traditional life-cycle, because new data can fundamentally shift the technology or approach. That being said, I thought I would start our conversations at a logical starting point: where does one place one's infrastructure? Even in its embryonic "idea phase," the intersection of government and technology begins its delicate dance to a significant degree. These decisions will ultimately have an impact on more than just where a company's capital investments are located. They affect the products and services it offers and, as I propose, ultimately have an impact on the customers that use the services at those locations.

As I think back to the early days of building out a global infrastructure, the Site Selection phase started at a very interesting place. In some ways we approached it with a level of sophistication that has yet to be matched today, and in other ways we were children playing a game whose rules had not yet been defined.

I remember sitting across numerous tables from government officials talking about making an investment (largely just land purchase decisions) in their local community. Our Site Selection methodology had brought us to these areas: a Site Selection process which continued to evolve as we got smarter and as we started to truly understand the dynamics of the system we were being introduced to. In these meetings we always sat stealthily behind a third-party real estate partner. We never divulged who we were, nor were they allowed to ask us that directly. We would pepper them with questions, and they in turn would return the favor. It was all cloak and dagger, with the Real Estate entity taking all action items to follow up with both parties.

Invariably during these early days, these locales would always walk away with the firm belief that we were a bank or financial institution. When they delved into our financial viability (for things like power loads, commitment to capital build-out, etc.) we always stated that any capital commitments and longer-term operational cost commitments were not a problem. In large part the cloak and dagger aspect was to keep land costs down (as we matured, we discovered this was quite literally the last thing we needed to worry about), as we feared that once our name became attached to the deal our costs would go up. These were the early days of seeding global infrastructure, and it was not just us. I still laugh at the fact that one of our competitors bound a locality up so much in secrecy that the community referred to the data center as Voldemort – he who shall not be named – in deference to the Harry Potter book series.

This of course was not the only criterion that we used. We had over 56 criteria, with various levels of importance and weighting, by the time I left that particular effort. Some Internet companies today use fewer, some about the same, and some don't use any; they ride on the backs of others who have trail-blazed a certain market or locale. I have long called this effect Data Center Clustering. The rewards for being a first mover are big; they are smaller if you follow, but ultimately still positive.
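
To make the weighting idea concrete, here is a minimal sketch of how a weighted multi-criteria site score might be computed. The criteria names, weights, and candidate scores below are purely illustrative assumptions; the actual list of 56-plus weighted criteria was never made public.

```python
# Hypothetical weighted site-selection scoring -- criteria and weights
# are illustrative only, not the actual methodology described above.
WEIGHTS = {
    "power_cost": 0.30,          # $/kWh today and projected
    "network_proximity": 0.20,   # latency/distance to fiber backbones
    "tax_climate": 0.20,         # incentives, sales/property tax treatment
    "natural_hazard_risk": 0.15, # seismic, flood, severe weather exposure
    "labor_availability": 0.10,
    "land_cost": 0.05,           # as noted above, far less important than feared
}

def score_site(site):
    """Weighted sum of normalized criterion scores (each 0.0 to 1.0)."""
    return sum(weight * site.get(criterion, 0.0)
               for criterion, weight in WEIGHTS.items())

candidates = {
    "Site A": {"power_cost": 0.9, "network_proximity": 0.6, "tax_climate": 0.8,
               "natural_hazard_risk": 0.7, "labor_availability": 0.6, "land_cost": 0.5},
    "Site B": {"power_cost": 0.5, "network_proximity": 0.9, "tax_climate": 0.6,
               "natural_hazard_risk": 0.9, "labor_availability": 0.8, "land_cost": 0.3},
}

for name in sorted(candidates, key=lambda n: -score_site(candidates[n])):
    print(f"{name}: {score_site(candidates[name]):.2f}")
```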

If you think about most of the criteria used to find a location, they almost always focus on current conditions, with only some of the criteria acknowledging the look forward. This is true, for example, when looking at power costs. Power costs today are important to siting a data center, but so is understanding the generation mix of that power and the corresponding price volatility, and modeling that ahead to predict (as best as possible) longer-term power costs.
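
For illustration only, here is one crude way to model that forward look: a Monte Carlo projection of a power rate under an assumed annual drift (driven by, say, the generation mix) and volatility. Every number here is a hypothetical placeholder, not data from any real siting effort.

```python
import random

def project_power_cost(base_rate, years, drift=0.03, volatility=0.10, trials=10_000):
    """Simulate compounding annual rate changes; return (median, 95th pct) $/kWh."""
    outcomes = []
    for _ in range(trials):
        rate = base_rate
        for _ in range(years):
            rate *= 1 + random.gauss(drift, volatility)  # one year's change
        outcomes.append(rate)
    outcomes.sort()
    return outcomes[len(outcomes) // 2], outcomes[int(len(outcomes) * 0.95)]

median, p95 = project_power_cost(base_rate=0.055, years=20)
print(f"median 20-year rate: ${median:.3f}/kWh; 95th percentile: ${p95:.3f}/kWh")
```

A site whose cheap power rides on a volatile fuel mix can look worse at the 95th percentile than a site that is slightly more expensive today, which is exactly the look-forward such criteria try to capture.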

What many miss is understanding the more subtle political layer that emerges once a data center has been placed or a cluster has developed: specifically, that the political and regulatory landscape can change very quickly relative to the life of a data center facility, which is typically measured in 20-, 30-, or 40-year lifetimes. It's a risk that places a large amount of capital assets potentially in play and vulnerable to these kinds of changes, and it's something that is very hard to plan or model against. That being said, there are indicators and clues that one can use to at least weigh the risk factors, or, as some are doing, one can ensure that the technology deployed limits one's exposure. In cloud environments the question remains open: how exposed are companies using cloud infrastructure in these at-risk facilities? We will explore this a little later.

That's not to say that this process is all downside, either. As we matured in our approach, we came to realize that the governments (local or otherwise) were strongly incented to work with us on getting us a great deal, and in fact competed over this kind of business. Soon you started to see the offers changing materially. It was less about the land or location and quickly evolved into what types of tax incentives, power deals, and other mechanisms could be put in play. You saw (and continue to see) deals structured around sales tax breaks, real estate and real estate tax deals, economic incentives around breaks in power rates, specialized rate structures for Internet and Cloud companies, and the like. The goal here, of course, was to create the public equivalent of "golden handcuffs" for the Tech companies and try to marry them to a particular region, state, or country – in many cases, all three. The benefits here are self-apparent. But can they (or, more specifically, will they) be passed on in some way to small companies who make use of cloud infrastructure in these facilities? While definitely not part of the package deals done today, I could easily see site selection negotiations evolving to incent local adoption of cloud technology in these facilities, or provisions being put in place tying adoption and hosting to tax breaks and other deal structures, in the mid to longer timeframe for hosting and cloud companies.

There is still a learning curve out there, as most governments mistakenly try to tie these investments to jobs creation. Data Centers, Operations, and the like represent the cost of goods sold (COGS) to the cloud business. Therefore there is a constant drive towards efficiency and reduction of the highest-cost components to deliver those products and services. Generally speaking, people are the primary targets in these environments. Driving automation in these environments is job one for any global infrastructure player. One of the big drivers for us investing in and developing a 100% lights-out data center at AOL was eliminating those kinds of costs. Those governments that highlight job creation targets over other types typically don't get the site selection. Having commissioned an economic study after a few of my previous big data center builds, I can tell you that the value to a region or a state does not come from the up-front jobs the data center employs. After a local radio station called into question the value of having such a facility in their backyard, we used an internationally recognized university to perform a third-party "neutral" assessment of the economic benefits (sans direct people), and the numbers were telling. We turned over all construction costs and other related material to them, and they investigated over the course of a year, through regional interviews and the like, what the direct impacts of a data center were on the local community, and the overall impacts of the addition. The results of that study are owned by a previous employer, but I can tell you with certainty: these facilities can be beneficial to local regions.

No one likes constraints, and as such you are beginning to see Technology companies use their primary weapon – technology – to mitigate their risks even in these scenarios. One cannot deny, for example, that while container-based data centers offer some interesting benefits in terms of energy and cost efficiencies, there is also a certain mobility to that kind of infrastructure that has never been available before. Historically, data centers are viewed as large capital anchors to a location. Once in place, hundreds of millions to billions (depending on the size of the company) of dollars of capital investment are tied to that region for the facility's lifespan. It's as close to permanent in the Tech Industry as building a factory was during the industrial revolution.

In some ways the modularization of the data center industry is having – and will have – the same effect the shipping container had on manufacturing. All puns intended. If you are unaware of how the shipping container revolutionized the world, I would highly recommend the book "The Box" by Marc Levinson; it's a quick read and very interesting if you read it through the lens of IT infrastructure and the parallels of modularization in the Data Center Industry at large.

It gives the infrastructure companies more exit options and mobility in the future than they would have had in the past under large capital build-outs. It's an insurance policy, if you will, for potential changes in legislation or regulation that might negatively impact the Technology companies over time – just another move in the cat and mouse games that we will see evolving here over the next decade or so in terms of the interactions between governments and global infrastructure.

So what about the consumers of cloud services? How much of a concern should this represent for them? You don't have to be a big infrastructure player to understand that there are potential risks in where your products and services live. Whether you are building a data center or hosting inside a real estate or co-location provider, these are issues that will affect you. Even in cases where you only use the cloud provisioning capabilities of your chosen provider, you will typically be given options for which region or area you would like your gear hosted in. Typically this is done for performance reasons – reaching your customers – but perhaps this information might cause you to think of the larger ramifications to your business. It might even drive requirements into the infrastructure providers to make this more transparent in the future.

These evolutions in the relationship between governments and Technology, and the technology options available, will continue to shape site selection policy for years to come. How they will ultimately affect those that use this infrastructure, whether directly or indirectly, remains to be seen. In the next paper we will explore this interaction more deeply as it relates to the customers of cloud services and the risks and challenges specifically for them in this environment.

\Mm

Olivier Sanche, My Dear Friend, Adieu

The data center industry has suffered a huge loss this holiday weekend with the passing of Olivier Sanche, head of Apple's Data Center program. He was an incredibly thoughtful man, a great father and husband, and very sincerely a great friend. As I got off the phone with his brother and wife in France, who gave me this devastating news, I could not help but remember my first encounter with Olivier. At the time he worked for eBay, and we were both invited to speak and debate at an industry event in Las Vegas. As we sat in a room full of 'experts' to discuss the future of our industry, the conversation quickly turned controversial. Passions were raised, and I found myself standing side by side with this enigmatic French giant on numerous topics. His passion for the space, coupled with his cool logic, were qualities that greatly endeared the man to me. We were comrades in ideas, and soon became fast friends.

Olivier was the type of person who could light up a room with his mere presence. It was as if he embraced the entire room in one giant hug, even if they were strangers. He could sit quietly mulling a topic, pensively going through his calculations, then explode into the conversation and rigorously debate everyone. That passion never belied his ability to learn, to adapt, to incorporate new thinking into his persona. Through the years we knew each other I saw him forge his ideas through debate, always evolving. Many people know the public Olivier, the Olivier they saw at press conferences, speaking engagements, and the like. Some of us got to know Olivier much better. The data center industry is small indeed, and those of us who have had the pleasure and terror of working in the world's largest infrastructures share a special kind of bond. We routinely meet off-hours and have dinner and drinks. It's a small cadre of names you probably know, or have heard about, joined in the fact that we have all dealt with or are dealing with challenges most data center environments will never see. In these less formal affairs, company positions melted away, technological challenges came to the fore, and, most importantly, the real people behind these companies emerged. In these forums, you could always count on Olivier to be a warm and calming force. He was incredibly intelligent, and although he might disagree, you could count on him to champion the free discussion of ideas.

It was in those types of forums where I truly met Olivier: the man who was so dedicated to his family, and to the light of his life, little Emilie. His honesty and direct-to-the-point style made it easy to understand where you stood, and where he was coming from.

More information about memorial services and the like will be coming out shortly and they are trying to get the word out to all of his friends.

The world has lost a great mind, Apple has lost a visionary, his family has lost their world, and I have lost a good friend.

Adieu, Dear Olivier, You and your family will be in my thoughts and prayers.

Your friend,

Mike Manos

\Mm

Opinion Polls and the End of Times

I recently had an interesting e-mail exchange with Olivier Sanche, the chief DC architect at Apple. As you probably know, this is a very small industry, and Olivier and I have enjoyed a long professional working relationship. He remarked that we are approaching the end of times, as we were both nominated for a Data Center Dream Team in an industry magazine. I agreed with him wholeheartedly.

We were referring to the poll being conducted by the Web Hosting Industry Review (WHIR) to see who would represent the industry's best Data Center Dream Team. While it's a definite honor to be mentioned, it definitely signals the end of times. 🙂

To me the phrase "Dream Team" conjures images of people with a long list of accomplishments. It's a bit strange to think of the Data Center Industry at large as having made significant movement forward. There has been a tremendous amount of innovation in the last few years, and I definitely believe we are at the start of something truly revolutionary in our industry, but I think it's probably way too early in our steps forward to start defining success like this.

For those of you interested, the poll is located below. Please keep in mind that you cannot see the results without actually taking the poll itself.

http://www.thewhir.com/Poll/vote

\Mm

Stirring Anthills – My Response to the Recent Computer World Article


** THIS IS A RE-POST From my former BLOG Site, saving here for continuity and posterity **

When one inserts the stick of challenge and change into the anthill of conventional and dogmatic thinking they are bound to stir up a commotion.

That is exactly what I thought when I read the recent Computerworld article by Eric Lai on containers as a data center technology. The article, found here, outlines six reasons why containers won't work and asks if Microsoft is listening. Personally, I found it an intensely humorous article, albeit not really unexpected. My first response was "only six"? You only found six reasons why it won't work? Internally we thought of a whole lot more than that when the concept first appeared on our drawing boards.

My Research and Engineering team is challenged with vetting technologies for applicability, efficiency, flexibility, longevity, and, perhaps most importantly, fiscal viability. You see, as a business, we are not into investing in solutions that are going to have a net effect of adding cost for cost's sake. Every idea is painstakingly researched, prototyped, and piloted. I can tell you one thing: the internal push-backs on the idea numbered much more than six, and the biggest opponent (my team will tell you) was me!

The true value of any engineering organization is to give different ideas a chance to mature and materialize. The Research and Engineering teams were tasked with making sure this solution had solid legs, saved money, gave us the scale, and ultimately was something we felt would add significant value to our program. I can assure you the amount of math, modeling, and research that went into this effort was pretty significant. The article contends we are bringing a programmer's approach to a mechanical engineer's problem. I am fairly certain that my team of professional and certified engineers took some offense to that, as would Christian Belady, who has conducted extensive research and metrics work for the data center industry. Regardless, I think through the various keynote addresses we've participated in over the last few months we tried to make the point that containers are not for everyone. They are addressing a very specific requirement for properties that can afford a different operating environment. We are using them for rapid and standard deployment at a level the average IT shop does not need, or have the tools, to address.

Those who know me best know that I enjoy a good tussle, and it probably has to do with growing up on the south side of Chicago. My team calls me ornery; I prefer "critical thought combatant." So I decided I would try to take on the "experts" and the points in the article myself with a small rebuttal posted here:

Challenge 1: Russian-doll-like nesting of servers on racks in containers leads to more "moreness."

Huh? This challenge has to do with the perceived challenges on the infrastructure side of the house, and the complexity of managing such infrastructure in this configuration. The primary technical challenge in this part is harmonics. Harmonics can be addressed in a multitude of ways and, as accurately quoted, is solvable. Many manufacturers have solutions to fix harmonics issues, and I can assure you this got a pretty heavy degree of technical review. Most of these solutions are not very expensive, and in some cases are included at no cost. We have several large facilities, and I would like to think we have built up quite a store of understanding and knowledge in running these types of facilities. From an ROI perspective, we have that covered as well: the savings from containers (depending upon application, size, etc.) can be as high as 20% over conventional data centers. These same metrics and savings have been discovered by others in the industry. The larger question is whether containers are a right fit for you. Some can answer yes, others no. After intensive research and investigation, the answer was yes for Microsoft.

Challenge 2: Containers are not as Plug and Play as they may seem.

The first real challenge in this section is about shipment of gear: the claim that it would be a monumental task for us to determine or provide verification of functionality. We deploy tens of thousands of servers per month. As I have publicly talked about, we moved from individual servers as a base unit, to entire racks as a scale unit, to a container of racks. The process of determining functionality is incredibly simple. You can ask any network, Unix, or Microsoft professional just how easy this is, but let's just say it's a very small step in our "container commissioning and startup" process.

The next challenge in this section is truly off base. The expert is quoted saying that the "plug and play" aspect of containers is itself put in jeopardy due to the single connection to the wall for power, network, etc. One can envision a container with a long electrical extension cord. I won't disclose some of our "secret sauce" here, but a standard 110V extension cord just won't cut it. You would need a mighty big shoe size to trip over and unplug one of these containers. The bottom line is that connections this large require electricians for installation or removal. I am confident we are in no danger of falling prey to this hazard.

However, I can say that regardless of the infrastructure technology, the point made about thousands of machines going dark at one time could happen. Although our facilities have been designed around our "Fail Small Design" created by my Research and Engineering group, outages can always happen. As a result, and being a software company, we have been able to build our applications in such a way that the loss of server/compute capacity never takes the application completely offline. It's called application geo-diversity. Our applications live in and across our data center footprint. By putting redundancy in the applications, physical redundancy is not needed. This is an important point, and one that scares many "experts." Today, there is a huge need for experts who understand the interplay of electrical and mechanical systems – folks who make a good living by driving Business Continuity and Disaster Recovery efforts at the infrastructure level. If your applications could survive whole-facility outages, would you invest in that kind of redundancy? If your applications were naturally geo-diversified, would you need a specific DR/BCP plan? Now, not all of our properties are there yet, but you can rest assured we have achieved that across a majority of our footprint. This kind of thing is bound to make some people nervous. But fear not, IT and DC warriors: these challenges are being tested and worked out in the cloud computing space, and it still has some time before it makes its way into the applications present in a traditional enterprise data center.
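
The mechanics behind geo-diversity vary by application, but as a toy illustration of the basic idea – lose a facility, shift to the next healthy region – here is a minimal client-side sketch. The endpoints are hypothetical, and a real deployment would do this with global load balancing, DNS, or anycast rather than client-side iteration.

```python
import urllib.request

# Hypothetical health-check endpoints, one per facility/region.
REGIONS = [
    "https://us-east.example.com/health",
    "https://us-west.example.com/health",
    "https://eu-west.example.com/health",
]

def first_healthy_region(timeout=2.0):
    """Return the first region whose health check answers 200.
    A whole-facility outage simply shifts traffic to the next entry."""
    for url in REGIONS:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            continue  # facility dark or unreachable -- try the next region
    raise RuntimeError("no healthy region available")
```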

As a result we don't need to put many of our applications and infrastructure on generator backup. To quote the article:

"Few data centers dare to make that choice, said Jeff Biggs, senior vice president of operations and engineering for data center operator Peak 10 Inc., despite the average North American power uptime of 99.98%. "That works out to be about 17 seconds a day," said Biggs, who oversees 12 data centers in southeastern states. "The problem is that you don’t get to pick those 17 seconds."

He is exactly right. I guess the two points I would highlight here are: first, the industry has some interesting technologies called battery and rotary UPSs that can easily ride through 17 seconds if required; and second, the larger point is that we truly do not care. Look, many industries, like the financial sector and others, have some very specific guidelines around redundancy and reliability. This drives tens of millions to hundreds of millions of dollars of extra cost per facility. The cloud approach eliminates this requirement and draws it up to the application.
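
For what it's worth, the quoted 17 seconds checks out:

```python
SECONDS_PER_DAY = 24 * 60 * 60             # 86,400 seconds
uptime = 0.9998                            # 99.98% average utility uptime
downtime = (1 - uptime) * SECONDS_PER_DAY
print(f"{downtime:.1f} seconds of expected outage per day")  # ~17.3 s
```

A battery or rotary UPS rated for even a minute of ride-through covers that expected daily outage many times over; the real issue, as Biggs notes, is that the outage is not scheduled.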

Challenge 3: Containers leave you less, not more, agile.

I have to be honest; this argument is one that threw me for a loop at first. My initial thought upon reading the challenge was, "Sure, building out large raised-floor areas to a very specific power density is ultimately more flexible than dropping a container in a building, where density and server performance could be interchanged within a total power consumption level." NOT! I can't tell you how many data centers I have walked through with eight-foot, 12-foot, or greater aisles between rack rows because the power densities per rack were consuming more floor space. The fact is, at the end of the day, your total power consumption level is what matters. But as I read on, the actual hurdles listed had nothing to do with this aspect of the facility.

The hurdles revolved around people, the opportunity cost of lost servers, and some strange notion about server refresh being tied to the price of diesel. A couple of key facts:

· We have invested in huge amounts of automation in how we run and operate. The fact is that even at 35 people across seven days a week, I believe we are still fat and we could drive this down even more. This isn't running thin; it's running smart.

· With the proper maintenance program in place, with professionals running your facility, with a host of tools to automate many of the tasks in the facility itself, and with complete ownership of both the IT and the Facilities space, you can do wonders. This is not some recent magic that we cooked up in our witches' brew; this is how we have been running for almost four years!

In my first address internally at Microsoft I put forth my own challenge to the team. In effect, I outlined how data centers were the factories of the 21st century and that, like it or not, we were all modern-day equivalents of those who experienced the industrial revolution. Much like factories (bit factories, I called them), our goal was to automate everything we do – in effect, to continue the analogy, bring in the robots. If the assembled team felt their value was in wrench-turning, they would have limited career growth within the group; if they up-leveled themselves and put an eye towards automating the tasks, their value would be compounded. Since that time some people have left for precisely that reason. Deploying tens of thousands of machines per month is not sustainable to do with humans in the traditional way, both in the front of the house (servers, network gear, etc.) and the back of the house (facilities). It's a tough message but one I won't shy away from. I have one of the finest teams on the planet running our facilities. It's a fact: automation is key.

Around the opportunity cost of failed machines in a container from a power perspective, there are ultimately two scenarios here. One is that the server has failed hard and is dead in the container. In that scenario, the server is not drawing power anyway, and while the container itself may be drawing less power than it could, there is not necessarily an "efficiency" hit. The other scenario is that the machine dies in some half-state or loses a drive or similar component. In this scenario you may be drawing energy that is not producing "work." That's a far more serious problem as we think about overall work efficiency in our data centers. We have ways through our tools to mitigate this by either killing the machine remotely, or ensuring that we prune that server's power by killing it at an infrastructure level. I won't go into the details here, but we believe efficiency is the high-order bit. Do we potentially strand power in this scenario? Perhaps. But as mentioned in the article, if the failure rate is too high, or the economics of the stranding begin to impact the overall performance of the facility, we can always swap the container out with a new one and instantly regain that power. We can do this significantly more easily than a traditional data center could, because I don't have to move servers or racks of equipment around in the data center (i.e., it is more flexible). One thing to keep in mind is that all of our data center professionals are measured by the overall uptime of their facility, the overall utilization of the facility (as measured by power), and the overall efficiency of their facility (again, as measured by power). There is no data center manager in my organization who wants to be viewed as lacking in these areas, and they give these areas intense scrutiny. Why? When your annual commitments are tied to these metrics, you tend to pay attention to them.
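
Since I won't detail our actual tools, here is only a generic sketch of the "prune the half-dead server" idea. Every name, threshold, and hook below is a hypothetical placeholder, not our implementation.

```python
from dataclasses import dataclass

@dataclass
class Server:
    outlet_id: str           # which infrastructure-level outlet feeds it
    watts: float             # measured draw
    requests_per_sec: float  # useful work actually being produced
    healthy: bool

def is_zombie(s):
    """Drawing meaningful power while producing no useful work."""
    return s.watts > 50 and s.requests_per_sec == 0 and not s.healthy

def reclaim_stranded_power(servers, power_off):
    """power_off is whatever hook the facility exposes (e.g., a managed
    rack PDU); here it is just a callable taking an outlet id."""
    for s in servers:
        if is_zombie(s):
            power_off(s.outlet_id)  # kill it at the infrastructure level

reclaim_stranded_power(
    [Server("r12-u07", watts=180.0, requests_per_sec=0.0, healthy=False)],
    power_off=lambda outlet: print(f"powering off {outlet}"),
)
```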

The last hurdle here revolves around the life expectancy of a server, technology refresh rates, and somehow the price of diesel and green-ness.

"Intel is trying to get more and more power efficient with their chips," Biggs said. "And we’ll be switching to solid-state drives for servers in a couple of years. That’s going to change the power paradigm altogether." But replacing a container after a year or two when a fraction of the servers are actually broken "doesn’t seem to be a real green approach, when diesel costs $3.70 a gallon," Svenkeson said.

Clear as mud to me. I am pretty sure the "price of diesel" for getting the containers to me is included in the price of the containers; I don't see a separate diesel charge. In fact, I would argue that shipping around 2,000 servers individually would ultimately be less green, or (at least in travel costs alone) a push. In fact, if we dwell a moment longer on the "green-ness" factor, there is something to be said for the container in that the box it arrives in is the box I connect to my infrastructure. What happens to all the foam product and cardboard with 2,000 individual servers? Regardless, we recycle all of our servers. We don't just "throw them away."

On the technology refresh side of the hurdle, I will put on my business hat for a second. Frankly, I don't know too many people who depreciate server equipment over less than three years. Those who do typically depreciate over one year. But having given talks at Uptime and AFCOM in the last month, the common lament across the industry was that people were keeping servers (albeit power-inefficient servers) well past their useful life because they were "free." Technology refresh IS a real factor for us, and if anything this approach allows us to adopt new technologies faster. I get to upgrade a whole container's worth of equipment to the best performance and highest efficiency when I do refresh, and best of all there is minimal "labor" to accomplish it. I would also like to point out that containers are not the only technology direction we have. We solve problems with the best solution; containers are just one tool in our tool belt. In my personal experience, the Data Center industry often falls prey to the old "if your only tool is a hammer, then every problem is a nail" syndrome.

Challenge 4: Containers are temporary, not a long-term solution.

Well, I still won't talk about who is in the running for our container builds, but I will talk to the challenges put forth here. Please keep in mind that Microsoft is not a traditional "hoster." We are an end user. We control all aspects of construction, server deployments, and applications that go into our facilities. Hosting companies do not. This section challenges that while we are in a growth mode now, it won't last forever, therefore making containers temporary. The main point that everyone seems to overlook is that the container is a scale unit for us, not a technology solution for incremental capacity or for providing capacity in remote regions. If I deploy 10 containers in a data center, and each container holds 2,000 servers, that's 20,000 servers. When those servers are end of life, I remove 10 containers and replace them with 10 more. Maybe those new models have 3,000 servers per container due to continuing energy efficiency gains. What's the alternative? How people-intensive do you think un-racking 20,000 servers would be, followed by racking 20,000 more? The bottom line here is that containers are our scale unit, not an end technology solution. It's a very important distinction that seems lost in multiple conversations. Hosting companies don't own the gear inside their facilities; users do. It's unlikely they will ever experience this kind of challenge or need. The rest of my points are accurately reflected in the article.

Challenge 5: Containers don’t make a data center Greener

This section has nothing to do with containers; it has to do with facility design. While containers may be able to take advantage of the various cooling mechanisms available in the facility, the statement is effectively correct that "containers" don't make a data center greener. There are some minor aspects of "greener" that I mentioned previously around shipping materials, etc., but the real "green data center" is in the overall energy use efficiency of the building.

I was frankly shocked at some of the statements in this section:

An airside economizer, explained Svenkeson, is a fancy term for "cutting a hole in the wall and putting in a big fan to suck in the cold air." Ninety percent more efficient than air conditioning, airside economizers sound like a miracle of Mother Nature, right?  Except that they aren’t. For one, they don’t work — or work well, anyway — during the winter, when air temperature is below freezing. Letting that cold, dry air simply blow in would immediately lead to a huge buildup of static electricity, which is lethal to servers, Svenkeson said.

Say what? Airside economization is a bit more than that. I am fairly certain that they do work, and there are working examples across the planet. Do you need to have a facility-level understanding of when to use and when not to use them? Sure. Regardless, all the challenges listed here can be easily overcome. Site selection also plays a big role: our site selection and localization of design decides which packages we deploy. To some degree, I feel this whole argument falls into another one of the religious wars ongoing in the data center industry: AC vs. DC, liquid-cooled vs. air-cooled, etc. Is water-side economization effective? Yes. Is it as energy efficient? No – not, at least, when compared to airside economization in a location tailor-made for it. If you can get away with cooling from the outside air and you don't have to chill any water (which takes energy), then it is inherently more efficient in its use of energy. Look, the fact of the matter is we have both horses in the race. It's about being pragmatic and intelligent about when and where to use which technology.
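
As a rough sketch of that "facility-level understanding" point: deciding when to open the dampers is essentially checking whether outside air falls inside an acceptable temperature and humidity envelope. The envelope numbers below are illustrative assumptions, not a recommendation.

```python
def can_use_airside(outside_temp_c, outside_rh_pct,
                    temp_range=(7.0, 24.0), rh_range=(20.0, 80.0)):
    """True when outside air can cool the facility directly.
    Below the envelope (cold, dry air), mixing and humidification
    address the static-electricity concern raised in the quote."""
    return (temp_range[0] <= outside_temp_c <= temp_range[1]
            and rh_range[0] <= outside_rh_pct <= rh_range[1])

print(can_use_airside(15.0, 45.0))   # True  -> open the dampers, free cooling
print(can_use_airside(-10.0, 15.0))  # False -> recirculate/condition first
```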

Some other interesting bits for me to comment on:

Even with cutting-edge cooling systems, it still takes a watt of electricity to a cool a server for every watt spent to power it, estimated Svenkeson. "It’s quite astonishing the amount of energy you need," Svenkeson said. Or as Emcor’s Baker put it, "With every 19-inch rack, you’re running something like 40,000 watts. How hot is that? Go and turn your oven on."

I would strongly suggest a quick look into the data that the Green Grid and Uptime have on this subject. Worldwide PUE metrics (or DCiE, if you like efficiency numbers better) show significant variation from that one-for-one claim. Some facilities reach a PUE of 1.2 (a DCiE of roughly 83%) at certain times of the year or in certain locations. Additionally, the comment that every 19-inch rack draws 40 kW is outright wrong. Worldwide averages show that racks are somewhere between 4 kW and 6 kW. In special circumstances densities approach this number, but as an average it is fantastically high.
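
The arithmetic behind those efficiency numbers is simple: PUE is total facility power over IT power, and DCiE is its reciprocal.

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power / IT power."""
    return total_facility_kw / it_kw

def dcie(total_facility_kw, it_kw):
    """Data Center infrastructure Efficiency: the reciprocal of PUE."""
    return it_kw / total_facility_kw

# A facility drawing 1,200 kW in total to support 1,000 kW of IT load:
print(pue(1200, 1000))            # 1.2
print(f"{dcie(1200, 1000):.0%}")  # 83% of the power reaches IT gear
```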

But with Microsoft building three electrical substations on-site sucking down a total of 198 megawatts, or enough to power almost 200,000 homes, green becomes a relative term, others say. "People talk about making data centers green. There’s nothing green about them. They drink electricity and belch heat," Biggs said. "Doing this in pods is not going to turn this into a miracle."

I won't publicly comment on the specific size of the substation, but would kindly point anyone interested in the subject to substation design best practices and sizing. How you design and accommodate a substation for things like maintenance, configuration, and much more is an interesting topic in itself. I won't argue that the facility isn't large by any standard; I'm just saying there is complexity one needs to look into there. Yes, data centers consume energy; being "green" assumes you are doing everything you can to ensure every last watt is being used for some useful product of work. That's our mission.

Challenge 6: Containers are a programmer's approach to a mechanical engineer's problem.

As I mentioned before, a host of professional engineers who work for me just sat up and coughed. I especially liked:

"I think IT guys look at how much faster we can move data and think this can also happen in the real world of electromechanics," Baker said. Another is that techies, unfamiliar with and perhaps even a little afraid of electricity and cooling issues, want something that will make those factors easier to control, or if possible a nonproblem. Containers seem to offer that. "These guys understand computing, of course, as well as communications," Svenkeson said. "But they just don’t seem to be able to maintain a staff that is competent in electrical and mechanical infrastructure. They don’t know how that stuff works."

I can assure you that, outside of my metrics and reporting tool developers, I have absolutely no software developers working for me. I own IT and facilities operations. We understand the problems, we understand the physics; we understand quite a bit. Our staff has expertise ranging from running facilities on nuclear submarines to facilities systems for spacegoing systems. We have more than a bit of expertise here. With regard to the comment that we are unable to maintain a competent staff, the folks responsible for managing the facility have had a zero percent attrition rate over the last four years. I would easily put my team up against anyone in the industry.

I get quite touchy when people start talking negatively about my team and their skill-sets, especially when they make blind assumptions. The fact of the matter is that, due to the increasing visibility around data centers, the IT and Facilities sides of the house had better start working together to solve the larger challenges in this space. I see it and hear it at every industry event: the us-vs.-them between IT and facilities, with neither side realizing that this approach spells doom for them both. It's about time somebody challenged something in this industry. We have already seen that, left to its own devices, technological advancement in data centers has by and large stood still for the last two decades. As Einstein said, "We can't solve problems by using the same kind of thinking we used when we created them."

Ultimately, containers are but the first step in a journey with which we intend to shake up the industry. If the thought process around containers scares you, then the innovations, technology advances, and challenges currently in various states of thought, pilot, and implementation will be downright terrifying. I guess, in short, you should prepare for a vigorous stirring of the anthill.