Well-Deserved Congratulations on the Microsoft Dublin DC Launch

Today Microsoft announced the launch of their premier flagship data center facility in Dublin, Ireland.  This is a huge achievement in many ways and from many angles.  While there are those who will try to compare this facility to other ‘chiller-less’ facilities, I can assure you this facility is unique in so many ways.  But that is a story for others to tell over time.

I wanted to personally congratulate the teams responsible for delivering this marvel and acknowledge the incredible amount of work in design, engineering, and construction to make this a reality.  To Arne, and the rest of my old team at Microsoft in DCS – Way to go! 

\Mm

PS – I bet there is much crying and gnashing of teeth as the unofficial Limerick collection will now come to a close.  But here is a final one from me:

 

A Data Centre from a charming green field did grow,

With energy and server lights did it glow

Through the lifting morning fog,

An electrical Tir Na Nog,

To its valiant team – Way to Go!

Modular Evolution, Uptime Resolution, and Software Revolution

It’s a little-known fact, but software developers are costing enterprises millions of dollars, and I don’t think in many cases either party realizes it.  I am not referring to the actual purchase cost of the programs and applications, or even the resulting support costs.  Those are easily calculated and can be hard-bounded by budgets.  But what of the resulting costs of the facility in which the software resides?

The Tier System introduced by the Uptime Institute was an important step for our industry in that it gave us a common language, or nomenclature, with which to begin having a dialog on the characteristics of the facilities being built.  It created formal definitions and classifications from a technical perspective that grouped redundancy and resiliency targets, and ultimately defined a hierarchy in which to talk about the facilities designed to those targets.  For its time it was revolutionary, and to a large degree the body of work is still relevant today.

There is a lot of criticism that its relevancy is fading fast due to the model’s greatest weakness: its lack of significant treatment of the application.  The basic premise of the Tier System is essentially to take your most restrictive and constrained application requirements (i.e., the application that’s least robust) and augment that resiliency with infrastructure and what I call big iron.  If only 5% of your applications are this restrictive, then the other 95% of your applications, which might be able to live with less resiliency, will still reside in the castle built for the minority of needs.  But before you call out an indictment of the Uptime Institute or this “most restrictive” design approach, you must first look at your own organization.  The Uptime Institute was coming at this from a purely facilities perspective.  The mysterious workload and wizardry of the application is a world mostly foreign to them.  Ask yourself this question: ‘In my organization, how often do IT and facilities talk to one another about end-to-end requirements?’  My guess, based on asking this question hundreds of times of customers and colleagues, is that the answer ranges from ‘not often’ to ‘not at all.’  But the winds of change are starting to blow.

In fact, I think the general assault on the Tier System really represents a maturing of the industry, a willingness to look at our problem space with more combined wisdom.  I often laughed at the fact that human nature (or at least management human nature) used to hold the belief that a Tier 4 data center was better than a Tier 2 data center, effectively because the number was higher and it was built with more redundancy.  More redundancy essentially equaled a better facility.  A company might not have had the need for that level of physical systems redundancy (if one were to look at it from an application perspective), but Tier 4 was better than Tier 3, therefore we should build the best.  It’s not better, just different.

By the way, that’s not a myth that the design firms and construction firms were all that interested in dispelling either.  Besides Tier 4 having the higher number and more redundancy, it also cost more to build, required significantly more engineering, and took longer to work out the kinks.  So the myth of Tier 4 being the best has propagated for quite a long time.  I’ll say it again: it’s not better, it’s just different.

One of the benefits of the recent economic downturn (there are not many, I know) is that the definition of ‘better’ is starting to change.  With capital budgets frozen or shrinking, the willingness of enterprises to redefine ‘better’ is also changing significantly.  Better today means a smarter, more economical approach.  This has given rise to the boom in the modular data center approach, and it’s not surprising that this approach begins with what I call an application-level inventory.

This application-level inventory looks specifically at the makeup and resiliency of the software and applications within the data center environments.  Does this application need the level of physical fault tolerance that my enterprise CRM needs?  Do servers that support testing or internal labs need the same level of redundancy?  This is the right behavior, and the one I would argue should have been used from the beginning.  The data center doesn’t drive the software; it’s the software that drives the data center.

One interesting and good side effect of this is that enterprise firms are now pushing harder on the software development firms.  They are beginning to ask some very interesting questions that the software providers have never been asked before.  For example, I sat in one meeting where an end customer asked their financial systems application provider a series of questions on the inter-server latency requirements and transaction timeout lengths for database access in their solution suite.  This line of questioning was a setup for the next series of questions.  Once the numbers were provided, it became abundantly clear that this application would only truly work from one location, from one data center, and could not be made redundant across multiple facilities.  This led to questions about the provider’s intentions to build more geo-diverse and extra-facility capabilities into their product.  I am now even seeing these questions in official Requests for Information (RFIs) and Requests for Proposal (RFPs).  The market is maturing and is starting to ask an important question: why should your sub-million-dollar (or euro) software application drive tens of millions in capital investment by me?  Why aren’t you architecting your software to solve this issue?  The power of software can be brought to bear to easily solve it, and my money is on this becoming a real battlefield in software development in the coming years.
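The back-of-the-envelope reasoning in that meeting can be sketched in a few lines. This is a hypothetical illustration of the math, not the vendor's actual numbers: given a transaction timeout budget and the round-trip latency between two facilities, you can quickly see whether a chatty application could ever span sites.

```python
# Hypothetical sketch of the feasibility check described above.
# All numbers are illustrative, not from the post.

def can_span_sites(rtt_ms: float, round_trips_per_txn: int,
                   txn_timeout_ms: float) -> bool:
    """True if inter-site network latency alone leaves room inside
    the transaction timeout; False means the app is pinned to one site."""
    return rtt_ms * round_trips_per_txn < txn_timeout_ms

# A chatty app doing 200 database round trips per transaction, with a
# 50 ms cross-country round-trip time, against a 5-second timeout:
print(can_span_sites(rtt_ms=50.0, round_trips_per_txn=200,
                     txn_timeout_ms=5000.0))  # False: 10,000 ms of latency

# The same app between two rooms in one facility (1 ms RTT):
print(can_span_sites(rtt_ms=1.0, round_trips_per_txn=200,
                     txn_timeout_ms=5000.0))  # True
```

Once a customer works this arithmetic out loud, the follow-up question about geo-diverse architecture answers itself.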

Blending software expertise with operational and facility knowledge will, in my opinion, be at the center of a whole new line of software development, one that really doesn’t exist today.  Given the dollar amounts involved, I believe it will be a very impactful and fruitful line of development as well.  But it has a long way to go.  Most programmers coming out of universities today rarely question the impact of their code outside of the functions they are providing, and the number of colleges and universities that teach a holistic approach can be counted on less than one hand’s worth of fingers worldwide.  But that’s up a finger or two from last year, so I am hopeful.

Regardless, while work will continue on data center technologies at the physical layer, there is a looming body of work facing the development community.  Companies like Oracle, Microsoft, SAP, and hosts of others will be thrust into the fray to solve these issues as well.  If they fail to adapt to the changing face and economics of the data center, they may just find themselves an interesting footnote in the data center texts of the future.

 

\Mm

Must Have Swag…..

I try not to post much business-related stuff (à la Digital Realty Trust) on Loosebolts, as it’s my own place to rant and rave.  To be clear: none of the things I say on here represent the views of the company whatsoever.  But sometimes things come along that really make me smile, and I have to comment on them.

As you know, I am a huge fan of modularization in the data center.  Modularization in construction, modularization in operation; modularization is just all-around goodness, from the technical perspective through the business side of things.  That’s why the newest marketing campaign from Digital has me smiling ear to ear.  The new Data Center Construction Kit brings back memories from when I was a kid and built giant structures for my little people to generally live, die, and party in.  It was, of course, a modular approach that led to endless hours of fun and imagination.  Combining those fond remembrances of youth with the modular data center movement, and general fun, will make this the MUST-HAVE piece of swag in the industry.  Data Center Knowledge posted a video about the toys a few weeks ago.  I can definitely tell you it will lead to hours of fun and wasted time at work putting it together.  I should know; my completed “data center” sits proudly in my office!

After all we are all just kids at heart, aren’t we?

\Mm

Speaking on Container Solutions and Applications at Interop New York

I have been invited to speak and chair a panel at Interop New York (November 16-20, 2009), giving a talk that explores the hype and reality surrounding data-center-based containers and Green IT in general.


The goal of the panel discussion is to help data center managers evaluate and approach containers by understanding their economics, key considerations, and real-life customer examples.  It’s going to be a great conversation.  If you are attending Interop this year, I would love to see you there!

 

\Mm

More Chiller Side Chat Redux….

I have been getting continued feedback on the Chiller Side Chat that we did live on Monday, September 14th.  I wanted to take a quick moment to discuss one of the recurring themes in the emails I have been receiving: data center site selection, and the decisions that result at the intersection of data center technology, process, and cost.  One of the key things we technical people often forget is that the data center is first and foremost a business decision.  The business (whatever kind of business it is) has a requirement to improve efficiency through automation, store information, or whatever else represents the core function of that business.  The data center is at the heart of those technology decisions and the ultimate place where those solutions will reside.

As the primary technical folks in an organization, whether we represent IT or facilities, we can find ourselves getting deeply involved with the technical aspects of the facility: the design, the construction or retrofit, the amount of power or cooling required, the amount of redundancy we need, and the like.  Those in upper management, however, view this in a substantially different way.  It’s all about business.  As I have gotten a slew of these mails recently, I decided to post my own response.  As I thought about how to go about this, I kept going back to Chris Crosby’s discussion at Data Center Dynamics about two years ago.  As you know, I was at Microsoft at the time, and I felt he did an excellent job of outlining the way the business person views data center decisions.  So I went digging around and found this video of Chris talking about it.  Hopefully this helps!  If not, let me know and I am happy to discuss further or more specifically.

\Mm

Miss the “Live” Chiller Side Chat? Hear it here!

The folks who were recording the “Live” Chiller Side Chat have sent me a link to the recording.    If you were not able to make the event live, but are still interested in hearing how it went feel free to have a listen at the following link:

 

LIVE CHILLER SIDE CHAT

 

\Mm

Live Chiller Side Chat Redux

I wanted to take a moment to thank Rich Miller of Data Center Knowledge, and all of those folks that called in and asked and submitted questions today in the Live Chiller Side Chat.   It was incredible fun for me to get a chance to answer questions directly from everyone.   My only regret is that we did not have enough time!

When you have a couple of hundred people logged in, it’s unrealistic and all but impossible to answer all of the questions.  However, I think Rich did a great job bouncing around to the key themes he saw emerging from the questions.  One thing is for sure: we will try to do another one of these, given the number of unanswered questions.  I have already been receiving some great ideas on how to structure these moving forward.  Hopefully everyone got some value or insight out of the exercise.  As I warned before the meeting, you may not get the right answer, but you will definitely get my answer.

One of the topics that we touched on briefly during the call, and that went a bit under-discussed, was regulation associated with data centers, or more correctly, regulation and legislation that will affect our industry.  For those of you who are interested, I recently completed an executive primer video on the subject of data center regulation.  The link can be found here:


Data Center Regulation Video.

Thanks again for spending your valuable time with me today and hope we can do it again!

\Mm

Presenting at the EPA/Energy Star Event on 9/24

Just a quick note that I will be presenting at the Environmental Protection Agency’s Energy Star Event on September 24th at the Hotel Sax in Chicago.  If you plan on attending (unfortunately, for latecomers, the event has reached its maximum attendance), please feel free to hunt me down and chat.  My talk will of course focus on power efficiency, specifically drilling into emerging technologies, approaches, and industry best practices.

Hope to see you there!

\Mm

“We Can’t Afford to measure PUE”

One of the more interesting phenomena I experience as I travel and talk with customers and industry peers is the significant number of folks out there who believe they cannot measure PUE because they cannot afford, or lack the funding for, the type of equipment and systems needed to properly meter their infrastructure.  As a rule I believe this to be complete hogwash, as there are ways to measure PUE without any additional equipment (I call it SneakerNet, or manual SCADA).  One can easily look at the power draw off the UPS and compare that to the information in the utility bills.  It’s not perfect, but it gives you a measure you can use to improve your efficiency.  As long as you are consistent in your measurement rigor (regular intervals, same time of day, etc.), you can definitely achieve better and better efficiency within your facility.
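For anyone who wants to see just how little tooling this "SneakerNet" method requires, here is a minimal sketch. The formula is simply total facility power divided by IT load; the specific kW numbers below are illustrative, not from any real facility.

```python
# Minimal "SneakerNet" PUE estimate: total facility power (from the
# utility bill) divided by IT load (read off the UPS output).
# The numbers used here are hypothetical examples.

def estimate_pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Example: the utility bill implies an average draw of 900 kW,
# while the UPS output panels read 500 kW of IT load.
pue = estimate_pue(total_facility_kw=900.0, it_load_kw=500.0)
print(f"Estimated PUE: {pue:.2f}")  # Estimated PUE: 1.80
```

A spreadsheet works just as well; the point is that consistent manual readings, taken the same way at the same time of day, are enough to trend your efficiency.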

Many people have pushed back on me, saying that measurement closer to the PDU or rack is more important, and that for that one needs a full-blown branch circuit monitoring solution.  While to me increased efficiency is more about vigilance in understanding your environment, I have struggled to find an affordable solution for folks who desire better granularity.

Now that I have been in management for the better part of my career, I have had to closet the inner geek in me to home and weekend efforts.  Most of my friends laugh when they find out I essentially have a home data center comprised of a hodgepodge of equipment I have collected over the years.  This includes racks of different sizes (it has been at least a decade since I have seen a half-height rack in a facility, but I have two!), my personal Cisco certification lab, Linux servers, Microsoft servers, and a host of other odds and ends.  It’s a personal playground for me to try to remain technical despite my management responsibilities.

It’s also a great place for me to try out new gear from time to time, and I have to say I found something that might fit the bill for those folks who want a deeper understanding of power consumption in their facilities.  I rarely give product endorsements, but this is something that budget-minded facilities folks might really like to take a look at.

I received a CL-AMP IT package from the Noble Vision Group to review and give them some feedback on their kit.  The first thing that struck me was that this kit seemed to be, essentially, power metering for dummies.  There were a couple of really neat characteristics out of the box that took many of the arguments I usually hear right off the table.


First, the “clamp” itself is a non-intrusive, non-invasive way to get accurate power metering and results.  This means that, contrary to other solutions, I did not have to unplug existing servers and gear to get readings, or try to install the device inline.  I simply clamped the power coming into the rack (or a server) and POOF! I had power information.  It was amazingly simple.  Next up: I had heard that clamp-like devices were not as accurate, so I did some initial tests using an older IP-addressable power strip, which allowed me to get power readings for my gear.  I then used the CL-AMP device to compare, and the two were consistently within +/- 2% of each other.  As far as accuracy, I am calling it a draw, because to be honest it’s a garage-based data center and I am not really sure how accurate my old power strips are.  Regardless, the CL-AMPs allowed me a very easy way to get my power readings without disrupting the network.  Additionally, it’s mobile, so I can move it around as needed.  This is important for those who might be budget-challenged, as the price point for this kit is considerably cheaper than a full-blown branch circuit solution.
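The comparison I ran amounts to a simple tolerance check: does each clamp reading fall within +/- 2% of the reference meter? A sketch of that check, with made-up watt readings standing in for my actual garage measurements:

```python
# Sketch of the accuracy comparison described above: flag whether a
# clamp reading agrees with a reference meter within a fractional
# tolerance. The readings below are hypothetical, not my test data.

def within_tolerance(clamp_w: float, reference_w: float,
                     tol: float = 0.02) -> bool:
    """True if the clamp reading is within tol (e.g. 0.02 = 2%)
    of the reference meter's reading."""
    return abs(clamp_w - reference_w) <= tol * reference_w

# Hypothetical (clamp, reference) pairs in watts:
readings = [(412.0, 408.0), (1190.0, 1200.0), (305.0, 300.0)]
for clamp, ref in readings:
    pct = 100.0 * abs(clamp - ref) / ref
    print(f"clamp={clamp} ref={ref} diff={pct:.1f}% "
          f"ok={within_tolerance(clamp, ref)}")
```

Nothing fancy, but running a handful of spot checks like this against any meter you already trust is a quick way to decide whether a clamp is accurate enough for your purposes.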

While my experiment was far from completely scientific, and I am the last person to call myself a full-blown facilities engineer, one thing was clear: this solution can easily fill a role as a mobile, quick-hit way to measure PUE and power usage in your facility that doesn’t break the bank or force you to disrupt operations or install devices inline.

\Mm

Live Chiller Side Chat

I am extremely excited to be participating in a live (webcast) Chiller-Side Chat hosted by none other than Rich Miller of Data Center Knowledge.   The event is scheduled for Monday, September 14th from noon to 1pm Central Standard Time.  You can register for the online event at this link.

I think perhaps the most interesting aspect of this to me is that it will be a live event, focused on answering questions that come in from the audience.  As you know, I usually use my ‘Chiller Side Chat’ posts to discuss some topic or other that interests or frustrates me.  Sometimes even others think they may be interesting or relevant too.  I am planning on meeting up with Rich and doing the webcast from Las Vegas, where I am speaking at the Tier One Hosting Transformation Summit.

I am incredibly excited about this event and hope that if you have time you will join us.  While I will endeavor to give you the right answers – one thing you can be sure of is that you will get MY answers.  :)

See you then!

 

\Mm