Open Source Data Center Initiative

There are many in the data center industry who have repeatedly called for change in this community of ours.  Change in technology, change in priorities, change for the future.  Over the years we have seen those changes come very slowly, and while they are starting to move a little faster now (primarily due to economic conditions and scrutiny over budgets more so than a desire to evolve our space), our industry still faces challenges and resistance to forward progress.  There are lots of great ideas and lots of forward thinking, but moving this work to execution, and educating business leaders as well as data center professionals to break away from those old, long-accepted norms, has not gone well.

That is why I am extremely happy to announce my involvement with the University of Missouri in the launch of a not-for-profit, data center-specific organization.  You might have read the formal announcement by Dave Ohara, who launched the news via his industry website, GreenM3.  Dave is another of those industry insiders who has long been perplexed by the lack of movement and initiative we have had on some great ideas, despite standouts doing great work.  More importantly, it doesn't stop there.  We have been able to put together quite a team of industry heavyweights to get involved in this effort.  Those announcements are forthcoming, and when they arrive, I think you will get a sense of the kind of sea change this effort could potentially have.

One of the largest challenges we have with regard to data centers is education.  Those of you who follow my blog know that I believe some engineering and construction firms are incented not to change or implement new approaches.  The cover of complexity allows customers to remain in the dark while innovation is stifled.  The forces who desire to maintain an aura of black-box complexity around this space, and who repeatedly speak to the arcane arts of building out data center facilities, have been at this a long time.  To them, the interplay of systems requiring a one-off monumental temple to technology on every single build is the norm.  It's how you maximize profit and keep yourself in a profitable position.

When I discussed this idea briefly with a close industry friend, his first question naturally revolved around how this work would compete with that of the Green Grid, the Uptime Institute, Data Center Pulse, or the other industry groups.  Essentially, was this going to be yet another competing thought-leadership organization?  The very specific answer to this is no, absolutely not.

These groups have been out espousing best practices for years.  They have embraced different technologies, they have tried to educate the industry, and they have (for the most part) been pushing for change.  They do a great job of highlighting the challenges we face, but they have largely waited around for universal good will and monetary pressures to make those changes happen.  It dawned on us that there was another way.  You need to build something that gains mindshare, that gets the attention of business leadership, that causes a paradigm shift.  As we put the pieces together we realized that the solution had to be credible, technical, and above all have a business case around it.  It seemed to us the parallels to the Open Source movement and the applicability of the approach were a perfect match.

To be clear, this Open Source Data Center Initiative is focused on execution.  It's focused on putting together an open and free engineering framework upon which data center designs, technologies, and the like can be quickly assembled, and moreover on standardizing how both end users and engineering firms approach the data center industry.

Imagine, if you will, a base framework upon which engineering firms, or even individual engineers, can propose technologies and designs, and specific solution vendors can pitch technologies for inclusion and highlight their effectiveness.  More than all of that, it will remove much of the mystery behind the work that happens in designing facilities and normalize the conversation.

If you think of the Linux movement, and all of those who actively participate in submitting enhancements and features, or even pulling together specific build packages for distribution, one could see the same things emerging in the data center engineering realm.  In fact, with the myriad of emerging technologies assisting in greater energy efficiency, greater densities, differences in approach to economization (air or water), and the use or non-use of containers, it's easy to see the potential for this component-based design.

One might think that we are effectively trying to put formal engineering firms out of business with this kind of work.  I would argue that this is definitely not the case.  While it may have the effect of removing some of the extra profit that results from the current 'complexity' factor, this initiative should specifically drive common requirements, lead to better-educated customers, drive specific standards, and result in real-world testing and data from the manufacturing community.  Besides, as anyone knows who has ever actually built a data center, the devil is in the localization and the details.  And as this is an open-source initiative, we will not be formally signing the drawings from a professional engineering perspective.

Manufacturers could submit their technologies and sample applications of their solutions, and have those designs plugged into a 'package' or 'RPM', if I can steal a term from the Red Hat Linux nomenclature.  Moreover, we will be able to start driving true visibility of both upfront and operating costs, and associate those costs with the set designs, with differences and trending from regions around the world.  If it's successful, it could be a very good thing.
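To make that package analogy a little more concrete, here is a purely hypothetical sketch (in Python, simply because it is easy to read) of what a design 'package' descriptor might contain.  Every field name and value below is invented for illustration; nothing here is part of the actual initiative.

```python
# Purely illustrative: a hypothetical descriptor for a data center design "package".
# All field names and numbers are invented for the sake of the example.
air_economizer_package = {
    "name": "airside-economizer-module",
    "version": "0.1",
    "category": "cooling",
    "climate_zones": ["4C", "5B"],            # where the design has been validated
    "claimed_pue_reduction": 0.08,            # vendor-supplied overhead reduction
    "capex_estimate_usd_per_kw": 950,         # upfront cost, with regional trending attached
    "opex_estimate_usd_per_kw_year": 120,     # operating cost at reference utility rates
    "dependencies": ["hot-aisle-containment", "bms-integration"],
    "test_data": "link-to-manufacturer-field-results",
}
```

The point is not the format; it's that cost, dependency, and test data would travel with the design the same way metadata travels with a software package.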

We are not naive about this, however.  We certainly expect there to be some resistance to this approach, and in fact some outright negativity from those firms that profit most from the black-box complexity components.

We will have more information on the approach and what it is we are trying to accomplish very soon.  

 

\Mm

Rolling Clouds – My Move into the Mobile Cloud

As many of you saw in my last note, I have officially left Digital Realty Trust to address some personal things.  While I get those things in order I am not sitting idly by.  I am extremely happy to announce that I have taken a role at Nokia as their VP of Service Operations.  In this role I will have global responsibility for the strategy, operation, and run of the infrastructure aspects of Nokia's new cloud and mobile services platforms.

It's an incredibly exciting role, especially when you consider that mobile handhelds around the world are increasingly becoming the interface by which people consume information.  Whether through navigation-based applications or other content-related platforms, your phone is becoming your gateway to the world.

I am also very excited by the fact that there are some fierce competitors in this space as well.  Once again I will be donning my armor and doing battle with my friends at Google.  Their Android platform is definitely interesting and it will be fascinating to see how it develops.  I have a great amount of respect for Urs Hölzle, and their cloud platform is something I am fairly familiar with.  I will also be doing battle with the folks from Apple (and, interestingly, my good friend Olivier Sanche).  Apple definitely has the high-end handheld market here in the US, but its experience in cloud platforms and operations is not very sophisticated just yet.  On some levels I guess I am even competing against the infrastructure and facilities I built out at Microsoft, at least as it relates to the mobile world.  Those are some meaty competitors and, as you have seen before, I love a good fight.

In my opinion, Nokia has some very interesting characteristics that position it extremely well, if not atop the fray, in this space.  First, there is no arguing about Nokia's penetration of handheld devices across the world, especially in markets like India, China, South America, and other emerging Internet-using populations.  Additionally, these emerging economies are skipping past ground-based wired technologies to wireless connectivity.  As a result, Nokia already has an incredible presence in those markets.  Their OVI platform today already has a significant population of users (measured at least in the tens of millions), so scale at the outset is definitely there.  When I think about the challenge that Google has in getting device penetration out there, or Apple's high-end (and mostly US-only) approach, you can see the opportunity.  I am extremely excited to get going.

Hope you will join me for an incredible ride!

\Mm

Presenting at the EPA/Energy Star Event on 9/24

Just a quick note that I will be presenting at the Environmental Protection Agency's Energy Star event on September 24th at the Hotel Sax in Chicago.  If you plan on attending (unfortunately, for latecomers, the event has reached its maximum attendance), please feel free to hunt me down and chat.  My talk will of course focus on power efficiency, specifically drilling into emerging technologies, approaches, and industry best practices.

Hope to see you there!

\Mm

It's the Law of Unintended Consequences – Some Clarity around My Thoughts on Data Center Regulation

I have gotten a lot of response to my post on Data Center regulation.  Many of the comments in response to an InfoWorld article focused on disbelief that regulations would particularly target data centers.  A Greener Computing article felt that because the current administration is very tech-savvy, it wouldn't do anything to hurt data centers.  In fact the exact quote was:

I can understand Manos’ concerns, but I think he’s on the wrong track. The federal government is very unlikely to issue strict green regulations related to data centers. And if they do regulate them in some way, the regulations will no doubt be reasonable. The current administration is very technology-savvy — after all, the current Secretary of Energy Steven Chu was recently the director of the Lawrence Berkeley National Laboratory, whose work was heavily dependent on its data center. Chu did some great work related to Green IT when at the labs. He knows what can and can’t be done — and will make sure that data centers aren’t hamstrung with unnecessary regulation.

I guess for clarity's sake I should state unequivocally that I do not believe data centers will specifically be targeted or singled out for regulation.  Domestically, here in the United States, the EPA has kicked off its Energy Star data center evaluation, which looks to study data centers as a sector, and something may come out of that, but in all honesty that won't be for some time.  I think the more immediate threat is in the efforts around carbon cap and trade.  As the Greener Computing article calls out, it was front and center at the G8 meetings.  With the UK leading the charge, and the only real legislation on the books in this space, it would be hard for other countries not to use it as the base for their programs.  My previous post focuses specifically on the fact that data centers will end up being significant contributing factors to carbon metrics for companies.  Data center managers just aren't thinking about it, and won't be until it's far too late.

While I am hopeful that leaders like Steven Chu and the Obama administration will weigh all possible aspects in a carbon cap-and-trade program, the fact remains that they will need to legislate to the least common denominator, and data centers are unlikely to be called out unless there is a group specifically calling attention to them.  Hence my call for an industry-wide group lobbying on the industry's behalf.  I have doubts they will altruistically incorporate all possible sub-cases into the mix without that kind of pressure.  President Obama frankly has bigger problems to be thinking about, in my opinion.

I am reminded of a quote from another excellent communicator and activist president, Ronald Reagan:

"The nine most terrifying words in the English language are: ‘I’m from the government and I’m here to help.’"

It's in those times more than any other that you should put your guard up even higher.  I guess only time will tell, but one thing is certain: data centers and IT departments will have a role to play in carbon reporting.

\Mm

At the Intersection of Marketing and Metrics, the traffic lights don’t work.

First let me start out with the fact that the need for data center measurement is paramount if this industry is to be able to manage itself effectively.  When I give a talk I usually begin by asking the crowd three basic questions:

1) How many in attendance are monitoring and tracking electrical usage?

2) How many in attendance measure datacenter efficiency?

3) How many work for organizations in which the CIO looks at the power bills?

The response to these questions has been abysmally low for years, but I have been delighted by the fact that, slowly but surely, the numbers have been rising.  Not in great numbers, mind you, but incrementally.  We are approaching a critical time in the development of the data center industry and where it (and the technologies involved) will go.

To that end there is no doubt that the PUE metric has been instrumental in driving awareness and visibility in the space.  The Green Grid really did a great job in pulling this metric together and evangelizing it to the industry.  Despite a host of other potential metrics out there, PUE has captured the industry given its relatively straightforward approach.  But PUE is poised to be a victim of its own success, in my opinion, unless the industry takes steps to standardize its use in marketing material and how it is talked about.

Don't get me wrong, I am rabidly committed to PUE as a metric and as a guiding tool in our industry.  In fact I have publicly defended this metric against its detractors for years.  So this post is a small plea for sanity.

These days, I view each and every public statement of PUE with a full heaping shovelful of skepticism, regardless of company or perceived leadership position.  In my mind, measurement of your company's environment and energy efficiency is a pretty personal experience.  I don't care which metric you use (even if it's not PUE) as long as you take a base measurement and consistently measure over time, making changes to achieve greater and greater efficiency.  There is no magic pill, no technology, no approach that gives you efficiency nirvana.  It is a process involving technology (both high tech and low tech), process, procedure, and old-fashioned roll-up-your-sleeves operational best practices over time that gets you there.

With mounting efforts around regulation, internal scrutiny of capital spending, a lack of general market inventory, and a host of other factors, the push for efficiency has never been greater, and the spotlight on efficiency as a function of “data center product” is in full swing.  Increasingly, PUE is moving from the data center professional and facilities groups to the marketing department.  I view this as a bad thing.

Enter the Marketing Department

In my new role I get visibility into all sorts of interesting things I never got to see in my role managing the infrastructure for a globally ubiquitous cloud rollout.  One of the more interesting items was an RFP issued by a local regional government for a data center requirement.  The RFP had all the normal things you would expect to find for that kind of procurement, but with a caveat that the facility must have a PUE of 1.2.  When questioned about this PUE target, the person in charge stated that if Google and Microsoft are achieving this level, they wanted the same thing, and that this was becoming the industry standard.  Of course, differences in application makeup, legacy systems, the fact that it would also have to house massive tape libraries (read: low power density), and a host of other factors made it impossible for them to really achieve this.  It was then that I started to get an inkling that PUE was getting away from its original intention.

You don't have to look far to read about the latest company that has broken the new PUE barrier of 1.5 or 1.4 or 1.3 or 1.2 or even 1.1.  It's like the space race, except that the claims of achieving those milestones are never really backed up with real data to prove or disprove them.  It's all a bunch of nonsensical bunk.  And it's with this nonsensical bunk that we will damn ourselves with those who have absolutely no clue about how this stuff actually works.  Marketing wants the quick bullet points and a vehicle to show some kind of technological superiority or green badges of honor.  When someone walks up to me at a tradeshow or emails me braggadocious claims of PUE, they are unconsciously picking a fight with me, and I am always up for the task.

WHICH PUE DO YOU USE?

Let's have an honest, open, and frank conversation around this topic, shall we?  When someone tells me of the latest and greatest PUE they have achieved, or have heard about, my first question is 'Oh yeah?  Which PUE are they/you using?'.  I love the response I typically get when I ask the question.  Eyebrows twist up and a perplexed look takes over their face.  Which PUE?

If you think about it, it's a valid question.  Are they looking at annual average PUE?  Are they looking at average peak PUE?  Are they looking at design-point PUE?  Are they looking at annual average calculated PUE?  Are they looking at commissioning-state PUE?  What is the interval at which they are measuring?  Is this the PUE rating they achieved one time at 1:30 AM on the coldest night in January?
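To make the question concrete, here is a minimal sketch with completely invented readings showing how one and the same facility can honestly report several different 'PUE' numbers depending on which definition and sampling window you pick.

```python
# Minimal sketch with invented readings: IT power and total facility power,
# sampled three times a day over a year, give different "PUE" values
# depending on which definition you choose.
import random

random.seed(1)
pues = []
for day in range(365):
    for hour in (2, 10, 18):                              # three readings a day
        it_kw = 800 + random.uniform(-50, 50)             # IT load
        seasonal = 1.5 - 0.15 * abs(day - 182) / 182      # overhead peaks in summer
        total_kw = it_kw * (seasonal + random.uniform(0.0, 0.1))
        pues.append(total_kw / it_kw)

annual_average_pue = sum(pues) / len(pues)
daily_peaks = [max(pues[i:i + 3]) for i in range(0, len(pues), 3)]
average_peak_pue = sum(daily_peaks) / len(daily_peaks)
best_single_reading = min(pues)      # the number that tends to end up in the press release

print(f"Annual average PUE : {annual_average_pue:.2f}")
print(f"Average peak PUE   : {average_peak_pue:.2f}")
print(f"Best single reading: {best_single_reading:.2f}")
```

Run it and the best single reading will always look a good deal better than the annual average; that gap is exactly where most of the marketing claims live.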

I sound like an engineer here, but there is a vast territory of values between these numbers, and all of them or none of them may have anything to do with reality.  If you will allow me a bit of role-playing, let's walk through a scenario where we (you and I, dear reader) are about to build and commission our first facility.

We are building out a 1MW facility with a targeted PUE of 1.5.  After a successful build-out with no problems or hiccups (we are role-playing, remember) we begin the commissioning with load banks to simulate load.  During the process of commissioning we measure a PUE of 1.40, better than our target.  Congratulations, we have beaten our design goal, right?  We have crossed the 1.5 barrier!  Well, maybe not.  Let's ask the question: how long did we run the Level 5 commissioning for?  Some vendors burn it in over the course of 12 hours, some a full day.  Does that 1.40 represent the average of the values collected?  Does it measure the averaged peak?  Was it the lowest value?  What month are we in?  Will it be significantly different in July?  January?  May?  Where is the facility located?  The scores in ongoing operation will vary significantly from what was measured at commissioning.

A few years back, when I was at Microsoft, we publicly released the data below for a mature facility at capacity that had been operating and collecting information for four years.  We had been tracking PUE, or at least the variables used in PUE, for that long.  You can see in the chart the variations of PUE.  Keep in mind this chart shows a very concentrated effort to drive efficiency over time.  Even in a mature facility where the load remains mostly constant, the PUE has variation and fluctuation.  Add to that the deltas between average, peak, and average peak.  Which numbers are you using?

[Chart: measured PUE over time for a mature facility at capacity, showing seasonal variation and fluctuation]

(source: GreenM3 blog)

OK, let's say we settle on using just the average (it's always the lowest PUE number, with the exception of a one-time measurement).  We want to look good to management, right?  If you are a colo company or data center wholesaler you may even give marketing a look-see to see if there is any value in that regard.  We are very proud of ourselves.  There is much back-slapping and glad-handing as we send our production model out the door.

Just like an automobile, our data center depreciates quickly as soon as the wheels hit the street.  Except that with data centers it's the PUE that is negatively affected.

THE IMPORTANCE OF USE

Our brand new facility is now empty.  The load banks have been removed and we have pristine white floor space ready to go.  With little to no IT load in our facility we currently have a PUE somewhere between 7 and 500.  It's just math (refer back to how PUE is actually calculated).  So now our PUE will be a function of how quickly we consume the capacity.  But wait, how can our PUE be so high?  We have proof from commissioning that we have created an extremely efficient facility.  It's all in the math.  It's math marketing people don't like.  It screws with the message.  Small revelation here – data centers become more “efficient” the more energy they consume!  Regulations that take PUE into account will need to worry about this troublesome side effect.
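Here is the math in its simplest possible form.  The overhead split below is invented (a chunk of fixed mechanical and electrical load plus a portion that scales with IT draw), but it shows why a nearly empty hall posts ugly numbers no matter how well it was designed.

```python
# Back-of-the-envelope: a facility designed for 1,000 kW of IT load.
# Assume (invented numbers) roughly 150 kW of overhead that runs regardless
# of load, plus overhead that scales at 25% of the IT draw.
fixed_overhead_kw = 150.0
proportional_overhead = 0.25

for it_load_kw in (5, 50, 200, 1000):
    total_kw = it_load_kw + fixed_overhead_kw + proportional_overhead * it_load_kw
    print(f"IT load {it_load_kw:>5} kW -> PUE {total_kw / it_load_kw:.2f}")

# IT load     5 kW -> PUE 31.25
# IT load    50 kW -> PUE 4.25
# IT load   200 kW -> PUE 2.00
# IT load  1000 kW -> PUE 1.40
```

Same building, same design, same 1.40 at full load; the only thing that changed was how much IT load was sitting on the floor.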

There are lots of interesting things you can do to minimize this extremely high PUE at launch, like shutting down CRAH units or removing perf tiles and replacing them with solid tiles, but ultimately your PUE is going to be much, much higher regardless.

Now let's look at how actual IT deployment ramps in new data centers.  In many cases enterprises build data centers to last them over a long period of time.  This means there is little likelihood that your facility will look anything close to your commissioning numbers (with load banks installed).  Add to that the fact that traditional data center construction has you building out all of the capacity from the start.  This essentially means that your PUE is not going to have a great story for quite a bit of time.  It's also why I am high on the modularized approach.  Smaller, more modular units allow you to grow your facility out more efficiently (from a cost as well as an energy-efficiency perspective).

So if we go back to our marketing friends, our PUE looks nothing like the announcement any more.  Future external audits might highlight this, and we may fall under scrutiny for falsely advertising our numbers.  So let's pretend we are trying to do everything correctly and have projected that we will completely fill our facility in 5 years.

The first year we successfully fill 200kW of load in our facility.  We are right on track.  Except that the 200kW was likely not deployed all at once; it was deployed over the course of the year.  Which means my end-of-year PUE number may be something like 3.5, but it was much higher earlier in the year.  If I take my annual average, it certainly won't be 3.5.  It will be much higher.  In fact, if I equally distribute the 200kW that first year over 12 months, my PUE looks like this:

[Chart: monthly PUE during the first year of fill]

That looks nothing like the PUE we advertised, does it?  Additionally, I am not even counting the variability introduced by time.  These are just end-of-month numbers, so measurement frequency will have an impact on this number as well.  In the second year of operation our numbers are still quite poor when compared to our initial numbers.

[Chart: monthly PUE during the second year of operation]

Again, if I take annual average PUE for the second year of operation, I am not at my design target, nor am I at our commissioned PUE rating.  So how can firms unequivocally state such wonder PUEs?  They can't.  Even this extremely simplistic example doesn't take into account that load in the data center moves around based upon utilization, and that IT gear almost never draws the power you think it will.  There are lots of variables here.
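For those who want the ramp arithmetic spelled out, here is a crude sketch.  The flat 500 kW of facility overhead is an invented simplification chosen so the end-of-year figure lands near the ~3.5 mentioned above; a real facility's overhead behaves very differently, especially at part load, which only makes the point stronger.

```python
# Crude sketch of the fill-ramp math: 200 kW of IT load landed evenly over
# twelve months. Facility overhead is modeled as a flat 500 kW, an invented
# figure; the point is the shape of the curve, not the absolute values.
overhead_kw = 500.0

monthly_pue = []
for month in range(1, 13):
    it_load_kw = 200.0 * month / 12                 # cumulative IT load at month end
    pue = (it_load_kw + overhead_kw) / it_load_kw
    monthly_pue.append(pue)
    print(f"Month {month:2d}: IT {it_load_kw:6.1f} kW, PUE {pue:5.2f}")

print(f"End-of-year PUE   : {monthly_pue[-1]:.2f}")                      # 3.50
print(f"Annual average PUE: {sum(monthly_pue) / len(monthly_pue):.2f}")  # ~8.8
```

The end-of-year snapshot and the annual average are not even in the same neighborhood, and neither one looks anything like the commissioned number.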

Let's be clear – this is how PUE is supposed to work!  There is nothing wrong with these calcs.  There is nothing wrong with the high values.  It is what it is.  The goal is to drive efficiency in your usage.  Falsely focusing on extremely low numbers that are the result of highly optimized integration between software and infrastructure, and making them the desirable targets, will do nothing more than place barriers and obstacles in our way later on.  Outsiders looking in want to find simplicity.  They want the quick-and-dirty numbers by which to manage the industry.  As engineers, you know this is a bit more complex.  Marketing efforts and a focus on low PUEs will only damn us later on.

Additionally, if you allow me to put my manager/businessman hat on, there is a law of diminishing returns in focusing on lower and lower PUE.  The cost of continued integration and optimization starts losing its overall business value, and gains in efficiency are offset by the costs to achieve those gains.  I speak as someone who drove numbers down into that range.  The larger industry would be better served by focusing more on application architecture, machine utilization, virtualization, and like technologies before pushing closer to 1.0.

So what to do?

I fundamentally believe that this would be an easy thing to correct.  But it's completely dependent upon how strong a role the Green Grid wants to play in this.  I feel that the Green Grid has the authority and responsibility to establish guidelines for the formal usage of PUE ratings.  I would posit the following ratings, with apologies in advance as I am not a marketing guy who could come up with more clever names:

Design Target PUE (DTP) – This is the PUE rating that theoretically the design should be able to achieve.   I see too many designs that have never manifested physically.  This would be the least “trustworthy” rating until the facility or approach has been built.

Commissioned Witnessed PUE (CWP) – This is the actual PUE witnessed at the time of commissioning of the facility.  There is a certainty about this rating as it has actually been achieved and witnessed.  This would be the rating that most colo providers and wholesalers would need to use, as they have little impact on or visibility into customer usage.

Annual Average PUE (AAP) – This is what it says it is.  However, I think the Green Grid needs to come up with a minimum standard of frequency (my recommendation is data collection at least three times a day) to establish this rating.  You also couldn't publish this number without a full year's worth of data.

Annual Average Peak PUE (APP) – My preference would be to use this, as it's a value that actually matters to the ongoing operation of the facility.  When you combine this with the operations challenge of managing power within a facility, you need to account for peaks more carefully, especially as you approach the end capacity of the space you are deploying.  Again, hard frequencies need to be established, along with a full year's worth of data here as well.
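As a thought experiment, here is how the last two ratings might be computed and sanity-checked in practice.  The acronyms are the ones proposed above; the data-sufficiency checks (a full year of readings, at least three per day) follow my recommendation, and everything else here is illustrative only.

```python
# Illustrative only: computing the proposed AAP and APP ratings from raw
# readings, and refusing to publish them without enough data.
from collections import defaultdict

def annual_ratings(readings):
    """readings: list of (timestamp, total_kw, it_kw) tuples covering one year,
    where timestamp is a datetime.datetime."""
    by_day = defaultdict(list)
    for ts, total_kw, it_kw in readings:
        by_day[ts.date()].append(total_kw / it_kw)

    # Publication rules proposed above: a full year, at least three readings a day.
    if len(by_day) < 365 or any(len(v) < 3 for v in by_day.values()):
        raise ValueError("AAP/APP require a full year of data with >= 3 readings per day")

    daily_averages = [sum(v) / len(v) for v in by_day.values()]
    daily_peaks = [max(v) for v in by_day.values()]
    return {
        "AAP": sum(daily_averages) / len(daily_averages),  # Annual Average PUE
        "APP": sum(daily_peaks) / len(daily_peaks),         # Annual Average Peak PUE
    }

# DTP and CWP, by contrast, are single declared figures rather than computed series:
declared = {"DTP": 1.5, "CWP": 1.40}  # design target and commissioned/witnessed values
```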

I think this would greatly cut back on ridiculous claims, or at least get us closer to a "truth in advertising" position.  It would also allow outside agencies to come in and audit those claims over time.  You could easily see extensions to the ISO 14000 and ISO 27000 families and other audit certifications to test for it.  Additionally, it gives outsiders a peek at the complexity of the space and allows for smarter mechanics that drive for greater efficiency (how about a 10% APP reduction target per year instead?).

As the Green Grid is a consortium of different companies (some of whom are likely to want to keep the fuzziness around PUE for their own gain), it will be interesting to see if they step in to better control the monster we have unleashed.

Let's reclaim PUE and metrics from the marketing people.

\Mm

Coming Soon to a Data Center near you, Regulation.

As an industry, we have been talking about it for some time.  Some claimed it would never come and that it was just a bunch of fear-mongering.  Others, like me, said it was the inevitable outcome of the intensifying focus on energy consumption.  Whether you view this as a good thing or a bad thing, it's something that you and your company are going to have to start planning for very shortly.  This is no longer a drill.

CRC – it's not just a cyclic redundancy check

I have been tracking the energy efficiency work being done in the United Kingdom, and developments in the Carbon Reduction Commitment (CRC), for quite some time.  My recent trip to London afforded me the opportunity to dig significantly deeper into the draft and discuss it with a user community (at the Digital Realty roundtable event) who will likely be among the first impacted by such legislation.  For those of you unfamiliar with the initiative, let me give a quick overview of the CRC and how it will work.

The CRC is a mandatory carbon reduction and energy efficiency scheme aimed at changing energy-use behaviors and further incenting the adoption of efficient technology and infrastructure.  While not specifically aimed at data centers (it's aimed at everyone), you can see that by its definition data centers will be significantly affected.  It was introduced as part of the Climate Change Act 2008.

In effect it is an auction-based carbon emissions trading scheme designed to operate under a cap-and-trade mechanism.  While its base claim is that it will be revenue-neutral to the government (except of course for penalties resulting from non-compliance), it provides a very handy vehicle for future taxation and revenue.  This is important, because as data center managers you are now placed in a position where you have primary regulatory reporting responsibilities for your company.  No more hiding under the radar; your roles will now be front and center.

All organizations, including governmental agencies, that consumed more than 6,000 MWh in 2008 are required to participate.  The mechanism is expected to go live in April 2010.  Please keep in mind that this consumption requirement is called out as MWh and not megawatts.  What's the difference?  It's energy use over time for your whole company.  If you as a data center manager run a 500 kilowatt facility, you account for almost 11% of the total energy consumption.  You can bet you will be front and center on that issue, especially when the proposed introductory price is £12/tCO2 (or $19.48/tCO2).  It's real money.  Again, while not specifically focused on data centers, you can see that they will be an active contributor and participant in the process.  For those firms with larger facilities, let's say 5MW of data center space – don't forget to add in your annual average PUE – the data centers will qualify all by themselves.
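To put a rough number on it, here is a hedged back-of-the-envelope sketch.  The £12/tCO2 allowance price comes from the draft scheme as noted above; the 0.5 kg CO2 per kWh grid factor is my own assumption for UK grid electricity of that era, so treat the output as purely illustrative.

```python
# Rough illustration only: annual energy and CRC allowance cost for a data
# center of a given IT load and assumed annual average PUE. The 0.5 kg CO2/kWh
# grid factor is an assumption, not part of the CRC scheme itself.
HOURS_PER_YEAR = 8760
CO2_PER_KWH_KG = 0.5                 # assumed UK grid emission factor
ALLOWANCE_PRICE_GBP_PER_TONNE = 12.0

def crc_estimate(it_load_kw, annual_average_pue):
    energy_mwh = it_load_kw * annual_average_pue * HOURS_PER_YEAR / 1000.0
    tonnes_co2 = energy_mwh * 1000.0 * CO2_PER_KWH_KG / 1000.0
    cost_gbp = tonnes_co2 * ALLOWANCE_PRICE_GBP_PER_TONNE
    return energy_mwh, tonnes_co2, cost_gbp

for it_kw, pue in ((500, 2.0), (5000, 2.0)):
    mwh, tco2, gbp = crc_estimate(it_kw, pue)
    print(f"{it_kw} kW IT @ PUE {pue}: {mwh:,.0f} MWh/yr, "
          f"{tco2:,.0f} tCO2, ~GBP {gbp:,.0f}/yr in allowances")
```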


 

For more information on the CRC you can check out the links below:

While many of you may be reading this, feeling bad for your brothers and sisters in Great Britain while sighing in relief that it's not you, keep in mind that there are already other mechanisms being put in place.  The EU has the ETS, and the Obama administration has been very public about a similar cap-and-trade program here in the United States.  You can bet that the US and other countries will be closely watching the success and performance of the CRC initiative in the UK.  They are likely to model their own versions after the CRC (why reinvent the wheel when you can just localize it to your country or region).  So it might be a good idea to read through it and start preparing how you and your organization will respond and/or collect.

I would bet that you as a data center manager have not been thinking of this, that your CIO has not thought about this, and that the head of your facilities group has not thought about this.  First, you need to start driving awareness of this issue.  Next, we should heed a call to arms.

One of the items that came out during the roundtable discussions was how generally disconnected government regulators are from the complexities of the data center.  They want to view data centers as big, bad, energy-using boxes that are all the same, when the differences in what is achievable between small data centers and mega-scale facilities are great.  PUEs of 1.2x might be achievable for large-scale Internet firms who control the entire stack from physical cabling to application development, while banks and financial institutions are bound by mandated redundancy requirements which force them to maintain scores closer to 2.0.

Someone once argued to me that data centers are actually extremely efficient: they have to integrate themselves into the grid, they generally purchase and procure the most energy-efficient technologies, and they are incented from an operating-budget perspective to keep costs low.  Why would the government go after them before going after the end users, who typically do not have the most energy-efficient servers, or perhaps the OEMs that manufacture them?  The simple answer is that data centers are easy, high-energy-concentration targets.  Politically, going after users is a dicey affair, and as such DCs will bear the initial brunt.

As an industry we need to start involving ourselves in educating the government and regulatory agencies about our space and representing the industry to them.  While the Green Grid charter specifically forbids this kind of activity, having a data center industry lobby group to ensure dumb things won't happen is a must in my opinion.

Would love to get your thoughts on that.

/Mm

Forecast Cloudy with Continued Enterprise


This post is a portion of my previous post that I broke into two.  It more clearly defines where I think the market is evolving and why companies like Digital Realty Trust will be at the heart of the change in our industry.

 

The Birth of Utilities, and Why a Century Ago Matters Today and Going Forward…

I normally kick off my vision-of-the-future talk with a mention of history first (my former Microsoft folks are probably groaning at this moment if they are reading this).  I am a huge history buff.  In December of 1879, Thomas Edison harnessed the power of electricity for the first time to light a light bulb.  What's not apparent is that this "invention" was in itself not complete.  To get it from that point to large-scale, commercial application required a host of other things to be invented as well.  While much ado is made about the successful kind of filament used to ensure a consistent light source, there were no fewer than seven other inventions needed to make electric light (and ultimately the electric utility) practical for everyone: the parallel circuit, an actually durable light bulb, an improved dynamo, underground conductor networks, devices to maintain constant voltage, insulating materials and safety fuses, the light socket, the on/off switch, and a bunch of other minor things.  Once all these problems were solved, the first public electric utility was created.  In September of 1882, the first commercial power station, located on Pearl Street in lower Manhattan, opened its doors and began providing light and electrical power to all customers within the "massive" area of one square mile.  This station was a marvel of technology, staffed with tens of technicians maintaining the complex machinery to exacting standards.  The ensuing battle between direct current and alternating current then began, and in some areas still continues today.  More on this in a bit.

A few years earlier, a host of people were working on what would eventually become known as the telephone.  In the United States this work is attributed to Alexander Graham Bell, and it's that story I will focus on here for a second.  Through trial and error, Bell and his compatriot Watson accidentally stumbled across a system to transfer sound in June of 1875.  After considerable work on refinement the product launched (there is an incredibly interesting history of this at SCRIBD), and after ever more trial and error the first telephonic public utility was created, with the very first central office coming online in January of 1878 in New Haven, Connecticut.  This first central office was a marvel to behold: again, extremely high-tech equipment with a host of people ensuring that the telephone utility was always available and calls were connected appropriately.  Interestingly, by 1881 only nine cities with populations above 10,000 were without access to the telephone utility, and only one above 15,000!  That is an adoption rate that remains mind-boggling even by today's standards.

These are significant moments in time that truly changed the way we live every day.  Today we are at the birth of another such utility: the Information Utility.  Many people I have spoken to claim this "Information Utility" is something different.  It's more of a product, they say, because it uses existing utility services.  Some maintain that it's truly not revolutionary because it's not leveraging new concepts.  But the same can be said of those earlier utilities as well.  The communications infrastructure we use today, whether telephone or data, has its very roots in the telegraph.  The power utilities have a lot to thank the gas-lamp utilities of the past for solving early issues as well.  Everything old is new again, and everything gets refined into something newer and better.  Some call this new Information Utility the "cloud", others the Information-sphere, others just call it the Internet.  Regardless of what you call it, access to information is going to be at your fingertips more today and tomorrow than it has ever been before.

Even though this utility is built upon existing services, it too will have its infrastructure.  Just as the electric utility has its substations and distribution yards, and the communication utilities have central offices, so too will data centers become the distribution mechanism for the Information Utility.  We still have a lot of progress to make as well.  Not everything is invented or understood yet.  Just as Edison had to invent a host of other items to make electricity practical, and Bell and Watson had to develop the telephone and the telephone ringer (or more correctly, thumper or buzzer), so too does our Information Utility have a long way to go.  In some respects it's even more complicated than its predecessors, whose early development was not burdened by the legislation and government involvement that the "Cloud" must contend with.

And that innovation does not always come from a select few.  Westinghouse and his alternating current eventually won out over direct current because it found its killer app and business case.  Alternating current was clearly the technically superior option and better for distribution; Westinghouse had even demonstrated generating power at Niagara Falls and successfully transferring that power all the way to Buffalo, New York, something direct current was unable to do.  In the end, Westinghouse worked with appliance manufacturers to create devices that used alternating current.  By driving those killer apps (things like refrigerators), he ensured Edison eventually lost out.  So too will the cloud have its killer apps.  The pending software and services battle will be interesting to watch.  What is interesting to me, however, is that it was the business case that drove adoption and evolution here.  This also modified how the utility was used and designed.  DC substations gave way to AC substations, and what used to take scores of people to support has dwindled to occasional visits and pre-scheduled preventative maintenance.  At the data center level, we cannot afford to think that these killer applications will not change our world.  Our killer applications are coming, and they will forever change how our world does business.  Data centers and their evolution are at the heart of our future.

On Fogs, Mist, and the Clouds Ahead . . .

After living in Seattle for close to 10 years, you become an expert in three things: clouds, rain, and more clouds.  Unlike the utilities of the past, this new Information Utility is going to be made up of lots of independent cloudlets full of services.  The Microsofts, Googles, and Amazons of the world will certainly provide a large part of the common platforms used by everyone, but the applications, products, content, customer information, and key development components will continue to have a life in facilities and infrastructure owned or controlled by the companies providing those services.  In addition, external factors are already beginning to have a huge influence on cloud infrastructure.  Despite the growing political trend of trans-nationalism, where countries give up some of their sovereign rights to participate in more regionally aware economics and like-minded political agendas, that same effect does not seem to be taking place in the area of taxation and regulation of cloud and information infrastructure, specifically as it relates to electronic or intellectual-property entities that derive revenue from infrastructure housed in particular countries, or that derive revenue from the online activity of citizens of those nations.

There are numerous countries today that have engaged, or are seriously engaging, in establishing and managing their national boundaries digitally online.  What do I mean by that?  There is a host of legislation across the globe that is beginning to govern the protection and online management of citizens through mandates in accordance with each country's own laws.  This is having (and will continue to have) a dramatic impact on how infrastructure and electronic products and services will be deployed, where that data is stored, and how revenue from that activity can and will be taxed by the local country.  This level of state-exercised control can be economically, politically, or socially motivated, and cloud services providers need to pay attention to it.  A great example of this is Canada, which has passed legislation in response to the U.S. Patriot Act.  This legislation forbids personally identifiable information (PII) of Canadian citizens from being housed outside of the boundaries of Canada or, perhaps more correctly, forbids its storage in the United States.  There are numerous laws and pieces of legislation making their way across Europe and Asia as well.  That puts an interesting kink in the idea of a worldwide federated cloud user base where information will be stored "in the cloud".  From an infrastructure perspective it will mandate that there are facilities in each country to house that data.  While the data storage and retention challenge is an interesting software problem to solve, the physical fact that the data will need to remain in a local geography will require data centers and components of cloud infrastructure to be present there.  I expect this to continue as governments become more technically savvy and understand the impact of the rate of change being caused by this technology evolution.  Given that data centers are extremely capital intensive, only a few players will be able to deploy private global infrastructures.  This means that the "information substation" providers will have an even more significant role in driving the future standards of this new Information Utility.  One might think that this could ultimately be provided by the large cloud providers as a service.  That could be a valid assumption; however, there is an interesting wrinkle developing around taxation, or more correctly exposure to double taxation or multiple-country taxation, that those large providers will face.  In my opinion a federation of "information substation" providers will provide the best balance of offsetting taxation issues while still providing a very granular and regionally acceptable way to service customers.  That is where companies like Digital Realty Trust are going to come in and drive significant value and business protection.

I watch a lot of these geo-political and economic developments pretty closely as they relate to data center and infrastructure legislation, and will continue to do so.  But even outside of these issues, the "cloud" or whatever term you like will continue to evolve, and the "channels" created by this paradigm will continue to drive innovation at the products and services level.  It's at this level where the data center story will continue to evolve as well.  To start, we need to think about the business version of the IT "server-hugging" phenomenon.  For the uninitiated, "server huggers" are those folks in an IT department who believe that the servers have to be geographically close in order to work on them.  In some cases it's the right mentality; in others, where the server is located truly doesn't matter.  It's as much a psychological experiment as a technical one.  At a business level, there is a general reluctance to house the company jewels outside of corporate-controlled space.  Sometimes this is regulated (as with banks and financial institutions); most often it's because those resources (proprietary applications, data sets, information stores, etc.) are crucial to the success of the company, and in many instances ARE the company.  Not something you necessarily want to outsource to others for control.  Therefore wholesale adoption of cloud resources is still a very, very long way off.  That is not to say that this infrastructure won't get adopted into solutions that companies ultimately use to grow their own businesses.  This is going to drive tons of innovation as businesses evolve their applications, create new business models, and join together in mutually beneficial alliances that will change the shape, color, and feel of the cloud.  In fact, the cloud or "Information Utility" becomes the ultimate channel distribution mechanism.

The first grouping I can see evolving is fraternal operating groups, or FOGs.  This is essentially a conglomeration of like-minded or related industry players coming together to build shared electronic compute exchanges or product and service exchanges.  These applications and services will be highly customized to a particular industry.  Those players will never be sated by the solutions the big providers will be putting into play; they are too specialized.  This infrastructure will likely not sit within individual company data centers, but is likely to be located in common-ground facilities or leased facilities with some structure for joint ownership.  Whether large or small, business-to-business or business-to-consumer, I see this as an evolving sector.  There will definitely be companies looking to do this on their behalf, but given the general capital requirements to get into this type of business, these FOG agreements may be just the answer to finding a good trade-off between capital investment and return on the compute/service.

The next grouping builds off of the "company jewels" mindset and how it could blend with cloud infrastructure.  To continue the overly used metaphor of clouds, I will call them Managed Instances Stationed Territorially, or MISTs.  There will likely be a host of companies that want to take advantage of the potential savings of cloud-managed infrastructure, but want the warm and fuzzy feeling of knowing it's literally right in their backyard.  Imagine servers and infrastructure deployed at each customer data center, but centrally managed by cloud service providers.  Perhaps it's owned by the cloud provider; perhaps the infrastructure has been purchased by the end-user company.  One can imagine container-based server solutions being dropped into container-ready facilities or jury-rigged in the parking lot of a corporate-owned or leased facility.  This gives companies the ability to structure their use of cloud technologies and map them into their own use-case scenarios, whatever makes the most sense for them.  The recent McKinsey paper talked about how certain elements of the cloud are more expensive than managing the resources through traditional means.  This is potentially a great hybrid scenario where companies can integrate as they need to using those services.  One could even see Misty FOGs or Foggy MISTs.  I know the analogy is getting old at this point, but hopefully you can see that the future isn't as static as some would have you believe.  This ability to channelize the technologies of the cloud will have a huge impact on business costs, operations, and technology.  It also suggests that mission-critical infrastructure is not going to go away but will become even more important and potentially more varied.  This is why I think that the biggest infrastructure impact will occur at the "information substation provider" level.  Data centers aren't going away; they might actually be growing in terms of demand, and the one thing definitely for sure is that they are evolving today and will continue to evolve as this space matures.  Does your current facility allow for this level of interconnectivity?  Do you have the ability to have mixed solution and management providers in your facility?  Lots of questions, lots of opportunities to develop answers.

The last grouping is potentially an evolution of modern content delivery infrastructure or edge computing capabilities.  I will quit with the cutesy cloud names and call this generically Near Cloud Content Objects.  Given that products, services, and data will remain the domain of the entities owning them, and given a general reluctance to store them wholesale in someone else's infrastructure, one could see this proprietary content leveraging the global cloud infrastructures through regional gateways where owners will be able to maintain ownership and control of their assets.  This becomes even more important when you factor in the economic and geo-political aspects emerging in cloud compute.

In the end, the cloud approach is going to significantly drive data center demand and cause it to evolve even further.  It will not, as some would like to project, end the need for corporate data centers.  Then there is that not-so-little issue of the IT applications and internal company services we use every day.  This leads me into my next point . . .

The Continued and Increasing Importance of Enterprise Data Centers

This post has concentrated a lot on the future of cloud computing, so I will probably tick off a bunch of cloud-fan-folk with this next bit, but the need for corporate data centers is not going away.  They may change in size, shape, efficiency, and the like, but there is a need to continue to maintain a home for those company jewels and to serve internal business communities.  The value of any company is the information and intellectual property developed, maintained, and driven by its employees.  Concepts like FOGs and MISTs still require ultimate homes, locations into which that work terminates or to which results are sent.  Additionally, look at the suite of software each company may have in its facilities today supporting its business.  We are at least a decade or more away from being able to migrate those to a distributed cloud-based infrastructure.  Think about the migration costs of any particular application you have, then compound that with the complexity of having your data stored in those cloud environments as well.  Are you then locked into a single cloud provider forever?  It obviously requires cloud interoperability, which doesn't exist today with the exception of half-hearted, non-binding efforts that don't actually include any of the existing cloud providers.  If you believe as I do that the "cloud" will actually be many little and large channelized solution cloudlets, you have to believe that the corporate data center is here to stay.  The mix of applications and products in your facilities may differ in the future, but you will still have them.  That's not to say the facilities themselves will not have to evolve.  They will.  With changing requirements around energy efficiency and green reporting, along with the geo-political and other regulations coming through the pipeline, the enterprise data center will still be an area full of innovation as well.

/Mm

Starting something new….


This post was an interesting struggle for me.  What should my first post since my departure from Microsoft be about?  I have a great number of topics that I definitely want to talk about regarding the distance and gap between the executive suite, Information Technology, and the data center floor, and why there continue to be challenges in this space across the industry.  In fact I probably have a whole series of them.  I am thinking of calling them "Chiller-side Chats", aimed at priming both sides in conversations with the other.  There are some industry-wide metric-related topics that I want to take on, interesting trends I see developing, and literally a host of other things ranging from technology to virtualization.  While at Microsoft I maintained Loosebolts and an internal Microsoft blog, which as it turns out was quite a bit of work.  I now have time to focus my energies in one place here at Loosebolts, and unfortunately I may subject everyone reading this to even more of my wild ramblings.  But to talk about any of these technical issues, business issues, or industry issues would be ignoring the gigantic, purple-spotted, white elephant in the room.  In fact, by the time I finished the original version of this post it was six pages long and ran far afield on what I think is fundamentally changing in the data center space.  Instead of subjecting you to one giant blog post, I was counseled by close friends to cut it down a bit into different sections.  So I will chop it up into two separate posts.  The first question of course is – why did I leave Microsoft for Digital Realty Trust?

I accomplished a great deal at Microsoft and I am extremely proud of my work there.  I have an immense amount of pride in the team that I developed there and the knowledge that it continues to drive that vision within the company.  Rest assured Microsoft has a great vision for where things are going in that space, and the program is on rails, as they say.  My final goodbye post talks more about my feelings there.  Within it, however, are some of the seeds (to continue that farming analogy even further!) of my departure.  First we need to pull our heads out of the tactical world of data centers and look at the larger emerging landscape in which data centers sit.  Microsoft, along with Google, Amazon, and a few others, is taking aim at cloud computing and is designing, building, and operating a different kind of infrastructure with different kinds of requirements, specifically building ubiquitous services around the globe.  In my previous role, I was tasked with thinking about and building this unique infrastructure in concert with hundreds of development groups taking aim at building a core set of services for the cloud.  A wonderful blend of application and infrastructure.  It's a great thing.  But as my personal thought processes matured and deepened on this topic, flavored with what I was seeing as emerging trends in business, technology, and data center requirements, I had a personal epiphany.  The concept of large monolithic clouds ruling the Information-sphere was not really complete.  Don't get me wrong, they will play a large and significant role in how we compute tomorrow, but instead of an oligarchy of the few, I realized that enterprise data centers are here to stay and, additionally, that an explosion of different cloud types is on the horizon.

In my opinion it is here, in this new emerging space, where the Information Utility will ultimately be born and defined, and where true innovation in our industry (data center-wise) will take place.  This may seem rather counterintuitive given the significant investments being made by the big cloud players, but it is really not.  We have to remember that, today, any technology must satisfy basic key requirements.  First and foremost among these is that it must solve a particular business problem.  Technology for technology's sake will never result in significant adoption, and the big players are working to perfect platforms that will work across a predominance of applications being specifically developed for their infrastructure.  In effect they are solving for their own issues, issues that most of those looking to leverage cloud or shared compute will not necessarily match in either scale or standardization of server and IT environments.  There will definitely be great advances in technology, process, and a host of other areas as a result of this work, but their leveragability is ultimately minimized, as those environments, while they look like each other's, will not easily map into the enterprise, near-enterprise, or near-cloud space.  The NASA space program has had thousands of great solutions, and some of them have been commercialized for the greater good.  I see similar things happening in the data center space.  Not everyone can get sub-1.3 average PUE numbers, but they can definitely use those learnings to better their own efficiency in some way.  While these large platforms, in conjunction with enterprise data centers, will provide key and required services, the innovation and primary requirement drivers in the future will come from the channel.

So Why Digital Realty Trust?

Innovation can happen everywhere in any situation, but it is most often born under the pressure of constraints.  While there are definitely some constraints that the big players face in evolving their programs, the real focus and attention in the industry will be at the enterprise and "information substation" provider layer.  This is the part of the industry that is going to feel the biggest pinch as the requirements evolve.  Whether they be political, economic, social, or otherwise, this layer will define what most of the data center industry looks like.  It is at this layer that a majority of companies around the world will be.  It is this layer that will be the most exciting for me personally.  The Moon missions were great, but they were not about bringing space travel to the masses.  There are definitely some great learnings there that can be leveraged, but commercializing a solution for the masses is a different, perhaps bigger, and in my opinion more challenging problem.  At the end of the day it has to be economical and worthwhile.  We have to solve that basic business need and use case, or it will remain an interesting scientific curiosity, much like electricity was viewed before the light bulb.

In Digital Realty Trust I found the qualities I would look for in any company.  First, they are positioned to provide either "information substation" or "enterprise" solutions and will need to solve for both.  They are effectively right in the middle of solving these issues, and they are big enough to have a dramatic impact on the industry.  Secondly, and perhaps more importantly, they have a passionate, forward-looking management team whom I have interacted with in the industry for quite some time.  Let me reiterate that passionate point for a moment: this is not some real estate company looking to make a quick buck on mission-critical space.  I have seen enough of those in my career.  This is a firm focused on educating the market, driving innovation in the application of technology, and a near-zealot commitment to driving efficiencies for their customers.  Whether it's their frequent webinars, their industry speaking engagements, or personal conversations, they are dedicated to this space and dedicated to informing their customers.  Even when we have disagreed on topics or issues in the past, it has always been a great, respectful conversation.  In a nutshell, they GET IT.  Another key piece that probably needs some addressing is that bit about the application of technology.  We are living in interesting times, with data center technologies in a wonderful and terrible period of evolution.  The challenge for any enterprise is making heads or tails of which technologies will be great for them, what works, what doesn't, and what's vaporware versus what is truly going to drive value.  The understanding and application of that technology is an area that Digital knows very well, and the scale of their deployments allows them to learn the hard lessons before their clients have to.  Moreover, they are implementing these technologies and building solutions that will fit for everyone, today!

Another area of significant alignment between my own personal beliefs and those of Digital Realty Trust is around speed of execution and bringing capacity online just in time.  It's no secret that I have been an active advocate of moving from big build-and-construction projects to a just-in-time production model.  These beliefs have long been espoused by Chris Crosby, Jim Smith, and the rest of the Digital team, and are very clearly articulated in the POD ARCHITECTURE approach that they have been developing for quite a few years.  Digital has done a great job of bringing this approach to the market for enterprise users and wants to drive it even faster!  One of my primary missions will be to develop the ability to deliver data center capacity, start to finish, in 16 weeks.  You cannot get there without a move to standardizing the supply chain and driving your program toward production rather than pure construction.  Data center planning and capacity planning is the single largest challenge in this industry.  The typical business realizes too late that it needs to add data center capacity, and these efforts typically result in significant impacts to its own business needs through project delays or cost.  As we all know, data center capacity is not ubiquitous, and getting capacity just in time is either very expensive or impossible in most markets.  You can solve this problem by trying to force companies to do a better job of IT and capacity planning (i.e. boiling the ocean), or you can change how that capacity is developed, procured, and delivered.  This is one of my major goals and something I am looking forward to delivering.

In the end, my belief is that it will be companies like Digital Realty Trust at the spearhead of driving the design, physical technology application, and requirements for the global Information Utility infrastructure.  They will clearly be situated the closest to those changing requirements for the largest number of affected groups.  It is going to be a huge challenge.  A challenge I, for one, am extremely excited about and can't wait to dig in and get started on.

\Mm