Sites and Sounds of DataCentre2012: Thoughts and My Personal Favorite Presentations, Day 1

We wrapped our first full day of talks here at DataCentre2012 and I have to say the content was incredibly good.  A couple of key highlights really stuck out in my mind.  The first was the talk given by Christian Belady, who covered some interesting bits of the Microsoft data center strategy moving forward.  Of course I have a personal interest in that program, having been there for Generation 1 through Generation 4 of its evolution.  Christian covered some of the technology trends that they are incorporating into their Generation 5 facilities.  It was some very interesting stuff, and he went into deeper detail than I have heard so far around the concept of co-generation of power at data center locations.  While I personally have some doubts about the all-in costs and the immediacy of its applicability, it was great to see some deep, meaningful thought and differentiation out of the Microsoft program.  He also went into some interesting “future” visions which talked about data being the next energy source.  While he took this concept to an entirely new level, I do feel he is directionally correct.  His correlations between the delivery of “data” in a utility model rang very true to me, as I have preached for over five years that we are at the dawning of the Information Utility.

Another fascinating talk came from Oliver J. Jones of a company called Chayora.  Few people and companies really understand the complexities and idiosyncrasies of doing business in China, let alone the development and deployment of large scale infrastructure there.  The presentation by Mr. Jones was incredibly well done.  Articulating the size, opportunity, and challenges of working in China through the lens of the data center market, he nimbly worked in the benefits of partnering with a company with this kind of expertise.  It was a great way to quietly sell Chayora’s value proposition, and looking around I could tell the room was enthralled.  His thoughts and data points had me thinking and running through scenarios all day long.  Having been to many infrastructure conferences and seen hundreds if not thousands of presentations, I can say that anyone who can capture that much of my mindshare for the day is a clear winner.

Tom Furlong and Jay Park of Facebook gave a great talk on OCP with a focus on their new facility in Sweden.  They also talked a bit about their other facilities in Prineville and North Carolina.  With Furlong taking the mechanical innovations and Park going through the electrical, it was a great talk that generated lots of interesting questions.  An incredibly captivating portion of the talk was around calculating data center availability.  In all honesty it was the first time I had ever seen this topic taken head on at a data center conference.  In my experience, like PUE, availability calculations can fall under the spell of marketing departments who truly don’t understand that there SHOULD be real math behind the calculation.  There were two interesting takeaways for me.  The first was just how much impact this portion of the talk had on the room in general.  There was an incredible number of people taking notes as Jay Park went through the equation and the way to think about it.  It led me to my second revelation: there are large parts of our industry who don’t know how to do this.  In private conversations after their talk, some people confided that they had never truly understood how to calculate this.  It was an interesting wake-up call for me to ensure I cover the basics even in my own talks.
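For anyone who, like some in that room, has never worked through the math: I am not reproducing Jay Park's exact equation here, but a minimal sketch of the standard building blocks (steady-state availability from MTBF/MTTR, plus series and parallel composition) might look like the following.  All of the MTBF/MTTR figures are made up purely for illustration.

```python
# Availability building blocks. Figures below are illustrative only,
# not taken from the Facebook talk.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series(*avails: float) -> float:
    """Components in series: all must be up, so multiply availabilities."""
    result = 1.0
    for a in avails:
        result *= a
    return result

def parallel(*avails: float) -> float:
    """Redundant components: the system is down only if all are down."""
    down = 1.0
    for a in avails:
        down *= (1.0 - a)
    return 1.0 - down

# Hypothetical topology: two redundant UPS strings feeding from one
# utility connection.
ups = availability(mtbf_hours=50_000, mttr_hours=8)
utility = availability(mtbf_hours=2_000, mttr_hours=4)
system = series(parallel(ups, ups), utility)
print(f"System availability: {system:.5f}")
```

The point of even a toy model like this is that availability claims become auditable: every "five nines" number should decompose into component figures you can check.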

After the Facebook talk it was time for me to mount the stage for the Global Thought Leadership Panel.  I was joined on stage by some great industry thinkers including Christian Belady of Microsoft; Len Bosack (founder of Cisco Systems), now CEO of XKL Systems; Jack Tison, CTO of Panduit; Kfir Godrich, VP and Chief Technologist at HP; John Corcoran, Executive Chairman of Global Switch; and Paul-Francois Cattier, Global VP of Data Centers at Schneider Electric.  That’s a lot of people and brainpower to fit on a single stage.  We really needed three times the amount of time allotted for this panel, but that is the way these things go.  Perhaps the most interesting recurring theme from question to question was the general agreement that at the end of the day, great technology means nothing without the will to do something different.  There was an interesting debate on the differences between enterprise users and large scale users like Microsoft, Google, Facebook, Amazon, and AOL.  I was quite chagrined and a little proud to hear AOL named in that list of luminaries (it wasn’t me who brought it up).  But I was quick to point out that AOL is a bit different in that it has been around for nearly 30 years and our challenges are EXACTLY like enterprise data center environments.  More on that tomorrow in my keynote, I guess.

All in all, it was a good day.  There were lots of moments of brilliance in the panel discussions throughout the day.  One regret I have was on the panel regarding DCIM, which ran out of time for questions from the audience.  That was unfortunate.  People continue to confuse DCIM with BMS version 2.0 and really miss capturing the work and soft costs, let alone the ongoing commitment to the effort once started.  Additionally there is the question of what you do with the mountains of data once you have collected them.  I had a bunch of questions on this topic for the panel, including whether any of the major manufacturers were thinking about building a decision engine on top of the data collection.  To me it’s a natural outgrowth and the next phase of DCIM.  The one case study they discussed was InterXion.  It was a great effort, but I think in the end it maintained the confusion between a BMS with a web interface and true Facilities and IT integration.  Another panel, on Modularization, got some really lively discussion on feature/functionality, differentiation, and lack of adoption.  To a real degree it highlighted an interesting gulf between the manufacturers (mostly represented by the panel), who need to differentiate their products, and the users, who require vendor interoperability across the solution space.  It probably doesn’t help to have Microsoft or myself in the audience when it comes to discussions around modular capacity.  On to tomorrow!

\Mm

Uptime, Cowgirls, and Success in California

This week my teams have descended upon the Uptime Institute Symposium in Santa Clara.  The moment is extremely bittersweet for me, as this is the first Symposium in quite some time that I have been unable to attend.  With my responsibilities at AOL expanding beginning this week, there was simply too much going on for me to make the trip out.  It’s a downright shame too.  Why?

We (AOL) will be featured in two key parts of Symposium this time around for some incredibly groundbreaking work happening at the company.  The first is a recap of the incredible work going on in the development of our own cloud platforms.  Last year you may recall that we were asked to talk about some of the wins and achievements we were able to accomplish with the development of our cloud platform.  The session was extremely well received, and we were asked to come back, one year on, to discuss how that work has progressed even further.  Aaron Lake, the primary developer of our cloud platforms, and my Infrastructure Development Team will be talking about the continued success, features, and functionality, and the launch of our ATC Cloud-Only Data Center.  It’s been an incredible breakneck pace for Aaron and his team, and they have delivered world-class capabilities for us internally.

Much of Aaron’s work has also enabled us to win the Uptime Institute’s first annual Server Roundup Award.  I am especially proud of this particular honor, as it is the result of an amazing amount of hard work within the organization on a problem faced by companies all over the planet.  Essentially this is operations hygiene at a huge scale: getting rid of old servers, driving consolidation, moving platforms to our cloud environments, and more.  This talk will be led by Julie Edwards, our Director of Business Operations, and Christy Abramson, our Director of Service Management.  Together their two teams led the effort to drive out “Operational Absurdities” through our “Power Hog” and related programs.  We have also sent along Lee Ann Macerelli and Rachel Paiva, the primary project managers instrumental in making this initiative such a huge success.  These “Cowgirls” drove an insane amount of work across the company, resulting in over five million dollars of unforecasted operational savings and proving that there is always room for good operational practices.  They even starred in a funny internal video to celebrate their win, which can be found here using the AOL Studio Now service.

If you happen to be attending Symposium this year feel free to stop by and say hello to these amazing individuals.   I am incredibly proud of the work that they have driven within the company.

 

\Mm

IPvSexy: Yes, You Too Can Get There

Today we celebrated going live with IPv6 versions of many of our top-rated sites.  The work was done in advance of our participation in IPv6 Launch Day.  For the uninitiated, IPv6 Launch Day, currently set for June 6, 2012, is the date when major sites will begin to make their websites publicly available via the new Internet numbering scheme.  As many of you likely know, the IPv4 space which has served the Internet so well since its inception is running out of unique addresses.  I am especially proud of the fact that we are the first of the largest Internet players to achieve this unique feat.  In fact three of our sites occupy slots in the Top 25 Sites ranking, including www.aol.com, www.engadget.com, and www.mapquest.com.  As with all things there are some interesting caveats.  For example, Google is IPv6-enabled for some ISPs, but not all.  I am specifically highlighting global availability.

 


 

The journey has been an interesting one, and there are many lessons to be learned going through these exercises.  In many conversations on this topic with other large firms exploring this move, I often hear how difficult the process appears to be and a general reluctance to even begin.  As the old saying goes, even the longest journey begins with a single step.

This work was far from easy, and our internal team learned a great deal in our efforts to take our sites live beginning with World IPv6 Day in 2011.  Although our overall IPv6 traffic levels remain pretty tiny (sustained at about 4-5 Mb/s), they are likely to grow as more ISPs convert their infrastructure to IPv6.

Perhaps the most significant thing I would like to share is that, in migrating to the new numbering system, I was very pleased to find a number of options available to companies in staging their moves to IPv6.  Companies have a host of options outside of an outright full-scale renumbering of their networks.  There are IPv4-to-IPv6 gateways, capabilities already built into your existing routing and switching equipment that could ease the way, and even some capabilities in external service providers like Akamai that could help ease your adoption of the new space.  Eventually everything will need to be migrated and you will need a comprehensive plan to get you there, but it’s nice to know that firms have a bunch of options available to assist in this technical journey.
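One zero-cost first step is simply checking, from your own machine, whether a given site already publishes IPv6 (AAAA) addresses.  A small sketch using Python's standard socket library follows; the results will of course vary with your resolver and the sites' DNS at the time you run it.

```python
# Check which sites publish IPv6 (AAAA) addresses, using only the
# standard library. Results depend on your local resolver.
import socket

def ipv6_addresses(hostname: str) -> list:
    """Return the sorted IPv6 addresses published for a hostname, or []."""
    try:
        infos = socket.getaddrinfo(hostname, None, socket.AF_INET6)
    except socket.gaierror:
        # No AAAA records (or no resolution at all) for this name.
        return []
    return sorted({info[4][0] for info in infos})

for site in ("www.aol.com", "www.engadget.com", "www.mapquest.com"):
    addrs = ipv6_addresses(site)
    status = ", ".join(addrs) if addrs else "no AAAA records found"
    print(f"{site}: {status}")
```

Running checks like this against your own properties, and your dependencies, is a cheap way to build the inventory a comprehensive migration plan needs.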

 

\Mm

Attacking the Cruft

Today the Uptime Institute announced that AOL won the Server Roundup Award.  The achievement has gotten some press already (at Computerworld, PCWorld, and related sites) and I cannot begin to tell you how proud I am of my teams.  One of the more personal transitions and journeys I have made since my experience scaling the Microsoft environments from tens of thousands of servers to hundreds of thousands of servers has been truly understanding the complexity of a problem most larger, established IT departments have been dealing with for years.  In some respects, scaling infrastructure, while incredibly challenging and hard, is in large part a uni-directional problem space.  You are faced with growth and more growth, followed by even more growth.  All sorts of interesting things break when you get to big scale.  Processes, methodologies, and technologies all quickly fall by the wayside as you climb ever up the ladder of scale.

At AOL I faced a multi-directional problem space in that, as a company and as a technology platform, we were still growing.  Added to that, there were 27 years of what I call “Cruft”.  I define “Cruft” as years of build-up of technology, processes, politics, fiscal orphaning, and poor operational hygiene.  This cruft can act as a huge boat anchor and a barrier to an organization driving agility in its online and IT operations.  On top of this cruft, a layer of what can best be described as lethargy, or perhaps apathy, can sometimes develop and add even more difficulty to the problem space.

One of the first things I encountered at AOL was the cruft.  In any organization, everyone always wants to work on the new, cool, interesting things, mainly because they are new and interesting, out of the norm.  Essentially the fun stuff!  But the organization's ability to really drive the adoption of new technologies and methods was always slowed, gated, or in some cases altogether prevented by years of interconnected systems, lost owners, servers of unknown purpose lost to distant historical memory, and the like.  All of this I found in healthy populations at AOL.

We initially set about building a plan to attack this cruft: to earnestly remove as much of it as possible and drive the organization towards agility.  Initially we called this list of properties, servers, equipment, and the like the Operations $/-\!+ list.  As this name was not very user-friendly, it migrated into a series of initiatives grouped under the name Ops-Surdities.  These programs attacked different types of cruft and at a high level were grouped into three main categories:

The Absurdity List – A list of projects/properties/applications that had questionable value, no owner, no direction, or the like, but were still drawing load and resources from our data centers.  The plan here was to develop action plans for each of the items that appeared on this list.

Power Hog – An effort to audit our data center facilities, equipment, and the like, looking for inefficient servers, installations, and/or technology and migrating them to newer, more efficient platforms or our AOL Cloud infrastructure.  You knew you were in trouble when a trophy of a bronze pig appeared on your desk or in your office: you had been marked.

Ops Hygiene – The sometimes tedious task of tracking down older machines and systems that may have been decommissioned in the past, marked for removal, or fully depreciated, but were never truly removed.  Pure vampiric load.  You may or may not be surprised how much of this exists in modern data centers.  It’s a common issue I have discussed with most data center management professionals in the industry.

So here we are, on a timeline measured in under a year, having been told all along the way by “crufty old-timers” that we would never make any progress, and my teams have decommissioned almost 10,000 servers from our environments.  (This number is actually greater now, but the submission deadline for the award was earlier in the year.)  What an amazing accomplishment.  What an amazing team!

So how did we do it?

As we will be presenting this in a lot more detail at the Uptime Symposium, I am not going to give away all of our secrets in a blog post.  That gives you a good reason to head to the Uptime event and ask the primary leaders of this effort, in person, how they did it.  It may be a good use of that travel budget your company has been sitting on this year.

What I will share is some guidelines on approach and some things to be wary of if you are facing similar challenges in your organization.

FOCUS AND ATTENTION

I cannot tell you how many people I have spoken with who have tried to go after “cruft” like this time and time again and failed.  One of the key drivers for success in my mind is ensuring that there is focus and attention on this kind of project at all levels, across all organizations, and most importantly from the TOP.  Too often executives give out blind directives with little to no follow-through and assume this kind of thing gets done.  They are generally unaware of the natural resistance to this kind of work in most IT organizations.  Having motivated, engaged, and focused leadership on these types of efforts goes an extraordinarily long way to making headway here.

BEWARE of ORGANIZATIONAL APATHY

The human factors that stack up against a project like this are impressive.  While people may not be openly in revolt over such projects, there is a natural resistance to getting this work done.  This work is not sexy.  This work is hard.  This work is tedious.  It likely means going back and touching equipment and kit that has not been messed with for a long time.  You may have competing organizational priorities which place this kind of work at the bottom of the workload priority list.  In addition to having executive buy-in and focus, make sure you have some really driven people running these programs.  You are looking for CAN DO people, not MAKE DO people.

TECHNOLOGY CAN HELP, BUT IT'S NOT YOUR HEAVY LIFTER

Probably a bit strange for a technology blog to say, but it’s true.  We have an incredible CMDB and asset system at AOL.  This was hugely helpful to the effort in really getting to the bottom of the list.  However, no amount of technology will be able to perform the myriad tasks required to actually make material movement on this kind of work.  Some of it requires negotiation, some of it requires strength of will, and some of it takes pure persistence in running these issues down and working with the people.  Understanding what is still required and what can be moved takes people.  We had great technologies in place from the perspective of knowing where our stuff was, what it did, and what it was connected to.  We had great technologies, like our Cloud, to ultimately move some of these platforms to.  However, you need to make sure you don’t go too far down the people trap.  I have a saying in my organization: there is a perfect number of project managers and security people in any organization, the number at which work output and value delivered are highest.  What is that number?  It depends, but you definitely know when you have one too many of each.

MAKE IT FUN IF YOU CAN

From the bronze pigs to the minor celebrations each month as we worked through the process, we ensured that the attention given to the effort was not negative.  Sure, it can be tough work, but at the end of the day you are substantially investing in the overall agility of your organization.  It’s something to be celebrated.  In fact, at the completion of our aggressive goals, the primary project leads involved made a great video (which you can see here) to highlight and celebrate the win.  Everyone had a great laugh and a ton of fun doing what was ultimately a tough grind of work.  If you are headed to Symposium I strongly encourage you to reach out to my incredible project leads.  You will be able to recognize them from the video… without the mustaches, of course!

\Mm

Chaos Monkeys, Donkeys and the Innovation of Action

Last week I once again had the pleasure of speaking at the Uptime Institute’s Symposium.  As one of the premier events in the data center industry, it is definitely one of those must-attend conferences for a view into what’s new, what’s changing, and where we are going as an industry.  Having attended the event numerous times in the past, this year I set out on my adventure with a slightly different agenda.

Oh sure, I would definitely attend the various sessions on technology, process, and approach.  But this time I was also going with the intent to listen equally to the presenters and to the scuttlebutt, side conversations, and hushed whispers of the attendees.  Think of it as a cultural experiment in being a professional busybody.  As I wove my way from session to session, I grew increasingly anxious that while the topics were of great quality and discussed much-needed areas of improvement in our technology sector, most of them were issues we have covered, talked about, and been dealing with as an industry for many years.  In fact I was hard-pressed to find anything of real significance in the “new” category.  These thoughts were mirrored in those side conversations and hushed whispers I heard around the various rooms as well.

One of the new features of Symposium is that the 451 Group has opted to expand the scope of the event to be more far-reaching, covering all aspects of the issues facing our industry.  It has brought in speakers from Tier 1 Research and other groups that have added incredible depth to the conference.  With that depth came some really good data.  In many respects the data reflected (in my interpretation) that while technology and processes are improving in small pockets, our industry ranges from stagnant to largely slow to act.  Despite mountains of data showing energy efficiency benefits, resulting cost benefits, and the like, we just are not moving the proverbial ball down the field.

In a purely unscientific poll I was astounded to find out that some of the most popular sessions were directly related to those folks who have actually done something.  Those who took new technologies (or old technologies) and put them into practice were roundly more interesting than the more generic technology conversations, giving very specific attention to detail on how they accomplished the tasks at hand, what they learned, and what they would do differently.  Most of these “favorites” were not in the “bleeding edge” thought leadership topics but specifically in the implementation of technologies and approaches we have talked about at the event for many years.  If I am honest, one of the sessions that surprised me the most was our own.  AOL had the honor of winning an IT Innovation Award from Uptime, and as a result the teams responsible for driving our cloud and virtualization platforms were able to give a talk about what we did, what the impact was, and how it all worked out.  I was surprised because I was not sure how many people would come to this side session, listen to the presentation, or find it relevant.  Of course I thought it was relevant (we were, after all, going to get a nifty plaque for the achievement), but to my surprise the room was packed full, ran out of chairs, and had numerous people standing for the presentation.  During the talk we had a good interaction of questions from the audience, and afterwards we were inundated with people coming up to dig into more details.  We had many comments on the usefulness of the talk because we were sharing real-life experiences in making the kinds of changes that we as an industry have been talking about for years.  Our talk and adoption of technology even got a little conversation in some of the industry press, such as Data Center Dynamics.

Another session that got incredible reviews was the presentation by Andrew Stokes of Deutsche Bank, who guided the audience through their adoption of a 100% free-air-cooled data center in the middle of New York City.  Again, the technology here was not new (I had built large scale facilities using this in 2007), but the point was that Andrew and the folks at Deutsche Bank actually went out and did something.  Not someone building large-scale cloud facilities, not some new experimental type of server infrastructure: someone who used this technology to service IT equipment that everyone uses, in a fairly standard facility, and who actually went ahead and did something innovative.  They put into practice something that others have not.  Backed by facts, data, and real-life experiences, the presentation went over incredibly well and was roundly applauded by those I spoke with as one of the most eye-opening presentations of the event.

By listening to the audiences, the hallway conversations, and the multitude of networking opportunities throughout the event, a pattern started to emerge, a pattern that reinforced the belief I was already coming to in my mind.  Despite a myriad of talks on very cool technology, applications, and evolving thought-leadership innovations, the most popular and most impactful sessions seemed to center on those folks who actually did something, not with the new bleeding edge technologies, but with those recurring themes that have carried from Symposium to Symposium over the years.  Air-side economization?  Not new.  Someone (outside Google, Microsoft, Yahoo, etc.) doing it?  Very new, very exciting.  It was what I am calling the Innovation of ACTION: actually doing those things we have talked about for so long.

While this Innovation of Action had really gotten many people buzzing at the conference, there was still a healthy population of people downplaying those technologies, downplaying their own ability to do those things, and re-stating the perennial dogmatic chant that these types of things (essentially any new ideas post-2001, in my mind) would never work for their companies.

This got me thinking (and a little upset) about our industry.  If you listen to those general complaints and combine them with the data showing that we have been mostly stagnant in adopting these new technologies, we really only have ourselves to blame.  There is a pervasive defeatist attitude amongst a large population of our industry who view anything new with suspicion, or surround it with the fear that it will ultimately take their jobs away, even when the technologies or “new things” aren’t very new any more.  This phenomenon is clearly visible in any conversation around “The Cloud” and its impact on our industry.  The data center professional should be front and center in any conversation on this topic, but more often than not self-selects out of the conversation because they view it as more of an application thing, or more an IT than a data center thing.  Which is of course complete bunk.  Listening to those in attendance complain that the Cloud is going to take their jobs away, or that only big companies like Google, Amazon, Rackspace, or Microsoft would ever need them in the future, was driving me mad.  As my keynote at Uptime was to be centered around a Cloud survival guide, I had to change my presentation to account for what I was hearing at the conference.

In my talk I tried to focus on what I felt to be the emerging camps at the conference.  For the first, I put up a slide prominently featuring Eeyore (of Winnie the Pooh fame) and captured many of the quotes I had heard at the conference referring to how the Cloud and new technologies were something to be mistrusted rather than an opportunity to help drive the conversation.  I then stated that we as an industry were an industry of donkeys, a fact that seems to be backed up by the data.  I have to admit, I was a bit nervous calling a room full of perhaps the most dedicated professionals in our industry a bunch of donkeys, but I always call it like I see it.

I contrasted this with those willing to evolve their thinking forward and embrace that Innovation of Action, by highlighting the Cloud example of Netflix.  When Netflix moved heavily into the cloud, they clearly wanted to evolve past the normal IT environment and build real resiliency into their product.  They did so by creating a rogue process (on purpose) called the Chaos Monkey, which randomly shut down processes and wreaked havoc in their environment.  At first the Chaos Monkey was painful, but as they architected around those impacts, their environments got stronger.  This was no ordinary IT environment.  This was something similar, but new.  The Chaos Monkey creates action, results in action, and on the whole moves the ball forward.
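To make the idea concrete, here is a toy sketch of the pattern.  This is not Netflix's actual tool, and every name in it is invented for illustration; it only captures the core idea that you deliberately and randomly kill healthy workers so that your recovery logic, not luck, is what keeps the service up.

```python
# A toy Chaos-Monkey-style loop: randomly terminate workers, then let
# the supervisor's recovery logic restore them.
import random

class Worker:
    def __init__(self, name: str):
        self.name = name
        self.alive = True

class Cluster:
    def __init__(self, size: int):
        self.workers = [Worker(f"worker-{i}") for i in range(size)]

    def healthy(self) -> list:
        return [w for w in self.workers if w.alive]

    def heal(self) -> None:
        """Supervisor logic: restart anything the monkey killed."""
        for w in self.workers:
            w.alive = True

def chaos_monkey(cluster: Cluster, kill_probability: float,
                 rng: random.Random) -> None:
    """Kill each healthy worker with the given probability."""
    for w in cluster.healthy():
        if rng.random() < kill_probability:
            w.alive = False

rng = random.Random(42)  # seeded so the run is repeatable
cluster = Cluster(size=10)
for day in range(3):
    chaos_monkey(cluster, kill_probability=0.3, rng=rng)
    print(f"day {day}: {len(cluster.healthy())}/10 workers survived")
    cluster.heal()  # architecture, not luck, brings the service back
```

The exercise only makes the environment stronger if `heal` (whatever its real-world equivalent is) actually works; the monkey's job is to prove that, every day, in production-like conditions.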

Interestingly, after my talk I literally had dozens of people come up and admit they had been donkeys, and offer to reconnect next year to demonstrate what they had done to evolve their operations.

My challenge to the audience at Uptime, and ultimately my challenge to you, the industry, is to stop being donkeys.  Let’s embrace the Innovation of Action and evolve into our own versions of Chaos Monkeys.  Let’s do more to put the technologies and approaches we have talked about for so long into action.  Next year at Uptime (and across a host of other conferences) let’s highlight those things that we are doing.  Let’s put our Chaos Monkeys on display.

As you contemplate your own job, whether IT or data center professional: are you a Donkey or a Chaos Monkey?

\Mm

CO2K Doubter? Watch the Presidential Address Today

Are you a data center professional who doubts that carbon legislation is going to happen, or who thinks this initiative will never get off the ground?  This afternoon President Obama plans to outline his intention to assess a cost for carbon consumption at a conference highlighting his economic accomplishments to date.  The backdrop for this, of course, is the massive oil rig disaster in the Gulf.

As my talk at the Uptime Institute Symposium highlighted, this type of legislation will have a big impact on data center and mission-critical professionals.  Whether you know it or not, you will be front and center in assisting with the response, collection, and reporting required to react to this kind of potential legislation.  When I questioned the audience in attendance during my talk, it was quite clear that most of those in the room were vastly ill-prepared and ill-equipped for this kind of effort.

If passed, this type of legislation is going to cause a severe reaction inside organizations to ensure that they are in compliance, and will likely lead to a huge increase in spending on collecting and reporting energy information.  For many organizations this spending will be significant.

The US House of Representatives has already passed a version of this known as the Waxman-Markey bill.  You can bet that there will be a huge amount of pressure to get a Senate version passed and out the door in the coming weeks and months.

This should be a clarion call for data center managers to step up, raise awareness within their organizations about this pending legislation, and take a proactive role in establishing a plan for a corporate response.  Take an inventory of your infrastructure and assess what you will need in order to begin collecting this information.  It might even be wise to get a few quotes for a ballpark cost of what it might take to bring your organization up to the task.  It’s probably better to start doing this now than to be told by the business to get it done.

\Mm

Open Source Data Center Initiative

There are many in the data center industry who have repeatedly called for change in this community of ours: change in technology, change in priorities, change for the future.  Over the years we have seen those changes come very slowly, and while they are starting to move a little faster now (primarily due to economic conditions and scrutiny over budgets more so than a desire to evolve our space), our industry still faces challenges and resistance to forward progress.  There are lots of great ideas and lots of forward thinking, but moving this work to execution, and educating business leaders as well as data center professionals to break away from those old stand-by accepted norms, has not gone well.

That is why I am extremely happy to announce my involvement with the University of Missouri in the launch of a Not-For-Profit Data Center specific organization.   You might have read the formal announcement by Dave Ohara, who launched the news via his industry website, GreenM3.   Dave is another of those industry insiders who has long been perplexed by the lack of movement and initiative we have had on some great ideas and stand-outs doing great work.  More importantly, it doesn't stop there.  We have been able to put together quite a team of industry heavy-weights to get involved in this effort.  Those announcements are forthcoming, and when they arrive, I think you will get a sense of the type of sea-change this effort could potentially bring.

One of the largest challenges we have with regards to data centers is education.   Those of you who follow my blog know that I believe some engineering and construction firms are incented not to change or implement new approaches.  The cover of complexity allows customers to remain in the dark while innovation is stifled.  Those forces who desire to maintain an aura of black-box complexity around this space, and who repeatedly speak to the arcane arts of building out data center facilities, have been at this a long time.  To them, the interplay of systems requiring one-off monumental temples to technology on every single build is the norm.  It's how you maximize profit and keep yourself in a profitable position. 

When I discussed this idea briefly with a close industry friend, his first question naturally revolved around how this work would compete with that of the Green Grid, the Uptime Institute, Data Center Pulse, or the other competing industry groups.  Essentially, was this going to be yet another competing thought-leadership organization?  The very specific answer to this is no, absolutely not.   

These groups have been out espousing best practices for years.  They have embraced different technologies, they have tried to educate the industry, and they have been pushing for change (for the most part).  They do a great job of highlighting the challenges we face, but for the most part they have waited around for universal good will and monetary pressures to make those changes happen.  It dawned on us that there was another way.   You need to build something that gains mindshare, that gets the business leadership's attention, that causes a paradigm shift.   As we put the pieces together, we realized that the solution had to be credible, technical, and above all have a business case around it.   The parallels to the Open Source movement, and the applicability of that approach, seemed to us a perfect match.

To be clear, this Open Source Data Center Initiative is focused on execution.   It's focused on putting together an open and free engineering framework upon which data center designs, technologies, and the like can be quickly assembled, and moreover on standardizing the way both end-users and engineering firms approach the data center industry. 

Imagine, if you will, a base framework upon which engineering firms, or even individual engineers, can propose technologies and designs, and specific solution vendors can pitch technologies for inclusion and highlight their effectiveness.  Beyond all of that, it will remove much of the mystery behind the work that happens in designing facilities and normalize those conversations.    

If you think of the Linux movement, and all of those who actively participate in submitting enhancements and features, or even pulling together specific build packages for distribution, one could see such things emerging in the data center engineering realm.   In fact, with the myriad of emerging technologies assisting in greater energy efficiency, greater densities, differences in approach to economization (air or water), and the use or non-use of containers, it's easy to see the potential for this component-based design.  

One might think that we are effectively trying to put formal engineering firms out of business with this kind of work.  I would argue that this is definitely not the case.  While it may have the effect of removing some of the extra profit that results from the current 'complexity' factor, this initiative should drive common requirements, lead to better educated customers, drive specific standards, and result in real-world testing and data from the manufacturing community.  Besides, as anyone who has ever actually built a data center knows, the devil is in the localization and details.  And since this is an open-source initiative, we will not be formally signing the drawings from a professional engineering perspective. 

Manufacturers could submit their technologies and sample applications of their solutions, and have those designs plugged into a 'package' or 'RPM', if I can steal a term from the Red Hat Linux nomenclature.  Moreover, we will be able to start driving true visibility of both upfront and operating costs, and associate those costs with the set designs, with differences and trending from regions around the world.  If it's successful, it could be a very good thing.  
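To make the 'package' idea a little more concrete, here is a minimal sketch in Python of the kind of metadata such a design package might carry so that competing approaches could be compared on a common cost basis.  Every name and number below is purely illustrative and my own invention, not a real specification from the initiative:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a design "package" in an open framework.
# Fields and figures are illustrative only, not a real spec.
@dataclass
class DesignPackage:
    name: str                  # e.g. "air-economizer-v1"
    vendor: str                # submitting manufacturer or engineer
    region: str                # localization matters: codes, climate, labor
    capex_per_mw: float        # upfront cost, USD per MW of IT load
    opex_per_mw_year: float    # annual operating cost, USD per MW
    dependencies: list = field(default_factory=list)  # other packages needed

    def total_cost(self, mw: float, years: int) -> float:
        """Lifetime cost of deploying this package at a given capacity."""
        return mw * (self.capex_per_mw + self.opex_per_mw_year * years)

# Comparing two made-up economization packages over a 10-year horizon:
air = DesignPackage("air-economizer", "VendorA", "US-NW", 8_000_000, 400_000)
water = DesignPackage("water-economizer", "VendorB", "US-NW", 9_500_000, 300_000)
print(air.total_cost(10, 10))    # 10 MW, 10 years -> 120000000
print(water.total_cost(10, 10))  # 10 MW, 10 years -> 125000000
```

The point of a structure like this is that once costs and dependencies are normalized, end-users can compare vendor submissions the way package managers compare builds, rather than starting every conversation from scratch.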

We are not naive about this, however.  We certainly expect there to be some resistance to this approach, and in fact some outright negativity from those firms that make the most of the black-box complexity. 

We will have more information on the approach and what it is we are trying to accomplish very soon.  

 

\Mm

Kickin’ Dirt


I recently got an interesting note from Joel Stone, the Global Operations Chief at Global Switch.  As some of you might know, Joel used to run North American Operations for me at Microsoft.  I guess he was digging through some old pictures and found this old photo of our initial site selection trip to Quincy, Washington.

As you can see, the open expanse of farmland behind me ultimately became Microsoft's showcase facilities in the Northwest.  In fact, you can even see some farm equipment just behind me.   It got me reminiscing about that time and how exciting and horrifying that experience can be.

At the time, Quincy, Washington was not much more than a small agricultural town whose leaders did some very good things (infrastructurally speaking) and which benefitted from the presence of large amounts of hydro-power.  When we went there, there were no other active data centers for hundreds of miles, there were no other technology firms present, and discussions around locating a giant industrial-technology complex there seemed as foreign as landing on the moon might have sounded during World War Two.

Yet if you fast forward to today, companies like Microsoft, Yahoo, Sabey, Intuit, and others have all located technology parks in this one-time agricultural hub.   Data Center Knowledge recently did an article on the impacts to Quincy. 

Many people I speak to at conferences generally think that the site selection process is largely academic: find the right intersection of a few key criteria and locate areas on a map that seem to fit those requirements.   In fact, the site selection strategy we employed took many different factors into consideration, each with its own weight, leading ultimately to a 'heat map' of possible locations to investigate. 

Even with some of the brightest minds and substantial research being done, it's interesting to me that ultimately the process breaks down into something I call 'Kickin' Dirt'.   Those ivory tower exercises help you narrow down your decisions to a few locations, but the true value of the process comes when you get out to the location itself and 'kick the dirt around'.   You get a feel for the infrastructure, the local culture, and those hard-to-quantify factors that no modeling software can tell you.  

Once you have gone out and kicked the dirt, it's decision time.  The decision you make, backed by all the data and process in the world, backed by personal experience of the locations in question, ultimately nets out to someone making a call.   My experience is that this rarely works well if left up to a committee.  At some point someone needs the courage and conviction, and in some cases outright insanity, to make the call. 

If you are someone with this responsibility in your job today – Do your homework, Kick the Dirt, and make the best call you can.  

To my friends in Quincy – You have come a long way, baby!  Merry Christmas!

 

\Mm

Data Center Regulation Awareness Increasing, Prepare for CO2K

This week I had the pleasure of presenting at the Gartner Data Center Conference in Las Vegas, NV.  This was my first time presenting at the Gartner event, and it represented an interesting departure from my usual conference experience in a few ways; I came away with some new observations and thoughts.   As always, the greatest benefit I personally get from these events is the networking opportunity with some of the smartest people across the industry.  I was surprised by both the number of attendees (especially given the economic drag and the almost universal slow-down on corporate travel) and the quality of questions I heard in almost every session.

My talk centered around the coming Carbon Cap and Trade Regulation and its specific impact on IT organizations and the data center industry.  I started my talk with a joke about how excited I was to be addressing a room of tomorrow’s eco-terrorists.  The joke went flat and the audience definitely had a fairly serious demeanor.   This was reinforced when I asked how many people in the audience thought that regulation was a real and coming concern for IT organizations.  Their response startled me.

I was surprised because nearly 85% of the audience raised their hands.  If I contrast that with the response to the exact same question asked three months earlier at the Tier One Research Data Center Conference, where only about 5% of the audience raised their hands, it's clear that this is a message that is beginning to resonate, especially in the larger organizations.  

In my talk, I went through the Carbon Reduction Commitment legislation passed in the UK and the negative effects it is having upon the data center and IT industry there, as well as its negative impact on site selection activity, which is by and large causing firms to skip data center capital investment in the UK.   I also went through the specifics of the Waxman-Markey bill in the US House of Representatives and the most recent thinking on the various Senate-based initiatives on this topic.   I have talked about these topics here before, so I will not rehash those issues in this post.  Most specifically, I talked about the potential cost impacts to IT organizations and data center operations, and the complexity of managing carbon reporting along with both the direct and indirect costs resulting from these efforts.  

While I was pleasantly surprised by the increased awareness among senior IT and business managers and data center operators around the coming regulation impacts, I was not surprised by the responses I received with regards to their level of preparedness for reacting to these initiatives.   Less than 10% of the room had the technology in place to even begin to collect the needed base information for such reporting, and roughly 5% had begun a search for software or initiated development efforts to aggregate and report this information.

With this broad lack of infrastructural systems in place, let alone software for reporting, I predict we are going to see a phenomenon similar to the Y2K craziness in the next 2-3 years.  As the regulatory efforts here in the United States and across the EU begin to crystallize, organizations will need to scramble to get the proper systems and infrastructure in place to ensure compliance.   I call this coming phenomenon CO2K.  Regardless of what you call it, I suspect the coming years will be good for those firms with power management infrastructure and reporting capabilities.
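To give a sense of the arithmetic behind that exposure, here is a back-of-the-envelope sketch in Python of how a carbon cost might be estimated from facility energy draw.  The grid emissions factor and allowance price vary widely by region and regulation; the figures below are placeholder assumptions of mine, not numbers from any bill:

```python
# Rough annual carbon-cost estimate for a data center.
# All inputs are hypothetical; emissions factors and carbon prices
# differ greatly by grid region and by the regulation in question.
def annual_carbon_cost(it_load_kw, pue, emissions_kg_per_kwh, price_per_tonne):
    """Estimate yearly carbon cost in dollars from facility energy use."""
    hours_per_year = 8760
    total_kwh = it_load_kw * pue * hours_per_year   # total facility energy
    tonnes_co2 = total_kwh * emissions_kg_per_kwh / 1000.0
    return tonnes_co2 * price_per_tonne

# 5 MW of IT load at PUE 1.6, 0.5 kg CO2/kWh, $20/tonne (all assumed):
print(round(annual_carbon_cost(5000, 1.6, 0.5, 20)))  # -> 700800
```

Even with these modest placeholder numbers, a mid-sized facility lands in the high six figures per year, which is exactly why the reporting and collection infrastructure gap matters.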

\Mm

A look back and a look forward…

For those of you who are not on the Digital Realty Trust email distribution for such things, I recently did a video for them reflecting on the past and looking ahead with regards to the data center industry, technologies, and such.  You can find the video link here if you're interested.   

I for one would never trust some data center dork in a video.

\Mm