IPv6 to IPv4 Translation Made Business Beautiful. Think of an easier, less painful way for your business to transition your Data Center.


I am a lover of simple, efficient, and beautiful things.  Ivan Pepelnjak of ipSpace gets The Loosebolt’s Oscar Award for Elegance and Simplicity in a Complex Network Application.  There may not be a little statue holding up a giant router or anything, but his solution to IPv4-to-IPv6 translation on the Internet is pretty compelling and allows application developers and IT folks to “outsource” all concerns about this issue to the network.

At some point your Data Centers and network are going to have to tackle the interface between the commercial IPv4 Internet and the IPv6 Internet.  If you are pretty aggressive on the IPv6 conversion in your data center, that pesky IPv4 Internet is going to prove to be a problem.  Some think this can be handled by straight Network Address Translation, or by dual-homing the servers in your data center on both networks.  But that approach creates cascading challenges for your organization: it creates work for your System Admins, your developers, your Web admins, and so on.  In short, you may have to figure out solutions at every level of the stack.  I think Ivan’s approach makes it pretty simple and compelling, if a bit unorthodox.  His use of Stateless IP/ICMP Translation (SIIT), which was originally intended as a part of NAT64 rather than something to run on its own, solves an interesting problem and lets businesses begin the conversion one layer at a time while still allowing those non-adopting IPv4 folks access to all the goodness within your data center.
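
To make the idea concrete, here is a minimal sketch of the stateless address mapping that SIIT-style translation relies on (the IPv4-embedded IPv6 addresses defined in RFC 6052). This is not Ivan’s implementation, just an illustration of why the translation can be stateless: the IPv4 address is carried in the low 32 bits of a /96 prefix, so either direction can be computed without a session table. The well-known prefix 64:ff9b::/96 is assumed; a real deployment might use an operator-assigned prefix instead.

```python
# Minimal sketch of RFC 6052 address mapping, the stateless core of SIIT/NAT64.
# Assumes the well-known prefix 64:ff9b::/96; this is illustrative only.
import ipaddress

PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def ipv4_to_ipv6(v4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of the translation prefix."""
    return ipaddress.IPv6Address(int(PREFIX.network_address) | int(ipaddress.IPv4Address(v4)))

def ipv6_to_ipv4(v6: str) -> ipaddress.IPv4Address:
    """Recover the embedded IPv4 address from a prefix-mapped IPv6 address."""
    return ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) & 0xFFFFFFFF)

print(ipv4_to_ipv6("192.0.2.10"))         # 64:ff9b::c000:20a
print(ipv6_to_ipv4("64:ff9b::c000:20a"))  # 192.0.2.10
```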

His webcast on his approach can be found here.

\Mm

Through an idea and force of will, he created an industry…

This week the Data Center Industry got the terrible news it knew might be coming for some time: Ken Brill, founder of the Uptime Institute, had passed away.  Many of us knew that Ken had been ill for some time and, although it may sound silly, were hoping he could somehow pull through it.  Even as ill as he was, Ken was still sending and receiving emails and staying in touch with this industry that, quite frankly, he helped give birth to.

I was recently asked about Ken and his legacy for a Computerworld article and it really caused me to stop and re-think his overall legacy and gift to the rest of us in the industry.  Ken Brill was a pioneering, courageous, tenacious visionary who, through his own force of will, saw the inefficiencies in a nascent industry and helped craft it into what it is today.

Throughout his early career Ken was able to see the absolute silo’ing of information, best practices, and approaches that different enterprises were developing around managing their mission critical IT spaces.  While certainly not alone in the effort, he became the strongest voice and champion to break down those walls, help others through the process, and build a network of people who would share these ideas amongst each other.  Before long an industry was born, sewn together through his sometimes delicate, sometimes not-so-delicate cajoling and, through it all, his absolute passion for the Data Center industry at large.

One of the last times Ken and I got to speak in person.

In that effort he also created and permeated the language that the industry uses as commonplace.  Seeing a huge gap in terms of how people communicated and compared mission critical capabilities, he became the klaxon of the Tiering system, which essentially normalized those conversations across the Data Center Industry.  While some (including myself) have come to think it’s time to re-define how we classify our mission critical spaces, we all have to pay homage to the fact that Ken’s insistence on and drive for the Tiering system created a place and a platform to even have such conversations.

One of Ken’s greatest strengths was his adaptability.  For example, Ken and I did not always agree.  I remember an Uptime Fellows meeting back in 2005 or 2006 or so in Arizona.  In that meeting I started talking about the benefits of modularization and reduced infrastructure requirements augmented by better software.  Ken was incredulous and we had significant conversations around the feasibility of such an approach.  At another meeting we discussed the relative importance or non-importance of a new organization called ‘The Green Grid’ 🙂 and whether Uptime should closely align itself with those efforts.  Through it all Ken was ultimately adaptable.  Whether it was giving those ideas light for conversation amongst the rest of the Uptime community via audio blogs or other means, Ken was there to have a conversation.

In an industry where complacency has become commonplace, where people rarely question established norms, it was always comforting to know that Ken was there acting the firebrand, causing the conversation to happen.   This week we lost one of the ‘Great Ones’ and I for one will truly miss him.  To his family my deepest sympathies, to our industry I ask, “Who will take his place?”

 

\Mm

The AOL Micro-DC adds new capability

Back in July, I announced AOL’s Data Center Independence Day with the release of our new ‘Micro Data Center’ approach.   In that post we highlighted the terrific work that the teams put in to revolutionize our data center approach and align it completely to not only technology goals but business goals as well.   It was an incredible amount of engineering and work to get to that point and it would be foolish to think that the work represented a ‘One and Done’ type of effort.  

So today I am happy to announce the rollout of a new capability for our Micro-DC – an indoor version of the Micro-DC.


While the first instantiations of our new capability were focused on outdoor environments, we were also hard at work on an indoor version with the same set of goals.  Why work on an indoor version as well?  Well, you might recall that in the original post I stated:

We are no longer tied to traditional data center facilities or colocation markets.   That doesn’t mean we won’t use them, it means we now have a choice.  Of course this is only possible because of the internally developed cloud infrastructure, but we have freed ourselves from having to be bolted onto or into existing big infrastructure.   It allows us to have an incredible amount of geo-distributed capacity at a very low cost point in terms of upfront capital and ongoing operational expense.

We need to maintain a portfolio of options for our products and services.  In this case, having an indoor version of our capabilities ensures that our solution can live absolutely anywhere.  This will allow our footprint, automation and all, to live inside any data center co-location environment or the interior of any office building anywhere around the planet, and retain the extremely low maintenance profile that we were targeting from an operational cost perspective.  In a sense you can think of it as “productizing” our infrastructure.  Could we have just deployed racks of servers, network kit, etc. like we have always done?  Sure.  But by continuing to productize our infrastructure we continue to drive down our short-term and long-term infrastructure costs.  In my mind, productizing your infrastructure is actually the next evolution in standardization of your infrastructure.  You can have infrastructure standards in place – Server Model, RAM, HD space, Access switches, Core switches, and the like.  But until you get to that next phase of standardizing, automating, and ‘productizing’ it into a discrete set of capabilities – you only get a partial win.
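
As a rough illustration of the distinction being drawn here, the sketch below contrasts a bare hardware standard with a “productized” capability unit. The names, fields, and numbers are hypothetical, invented purely for illustration; they are not AOL’s actual SKUs or automation.

```python
# Hypothetical sketch contrasting a hardware standard with a "productized"
# capability unit. All names, fields, and numbers are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class HardwareStandard:
    """Traditional standardization: a parts list."""
    server_model: str
    ram_gb: int
    disk_tb: int
    access_switch: str
    core_switch: str

@dataclass
class CapabilityUnit:
    """Productized: a discrete, deployable unit described by what it delivers."""
    hardware: HardwareStandard
    compute_cores: int                 # what products and services actually consume
    storage_tb: int
    network_gbps: int
    power_envelope_kw: float           # must fit an ordinary office-building feed
    automation: List[str] = field(
        default_factory=lambda: ["provision", "monitor", "remote-administer"]
    )

micro_dc = CapabilityUnit(
    hardware=HardwareStandard("dense-2U", 256, 48, "ToR-10G", "core-40G"),
    compute_cores=4096,
    storage_tb=1500,
    network_gbps=80,
    power_envelope_kw=300,
)
print(micro_dc.compute_cores, micro_dc.automation)
```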

Some people have asked me, “Why didn’t you begin with the interior version to start with?  It seems like it would be the easier one to accomplish.”  Indeed I cannot argue with them; it would probably have been easier, as there were far fewer challenges to solve.  You can make basic assumptions around where this kind of indoor solution would live and reduce much of the complexity.  I guess it all nets out to a philosophy of solving the harder problems first.  Once you prove the more complicated use case, the easier ones come much faster.  That is definitely the situation here.

While this new capability continues the success we are seeing in re-defining the cost and operations of our particular engineering environments, the real challenge here (as with all sorts of infrastructure and cloud automation) is whether we can get our applications and services to work just as well in that space.  On that note, I should have more to post soon.  Stay tuned!  🙂

 

\Mm

AOL’s Data Center Independence Day

Yesterday we celebrated Independence Day here in the United States.   It’s a day where we embrace the freedoms we enjoy as a country, look back on where we have come, and celebrate the promise of the future.   Yesterday was also a different kind of Independence Day for my teams at AOL.  A Data Center Independence Day, if you will. 

You may or may not have been following the progress of the work that we have been doing here at AOL over the last 14 or so months, but the pace of change has been simply breathtaking.  One of the first things I did when I entered the company was deeply review all aspects of Operations: from Data Centers to Network Engineering, to the engineering teams supporting the products and services, and everything in between.  The net of the exercise was that AOL was probably similar to most companies out there in terms of technology mix, from the CRUFT that I mentioned in a previous post to the latest technologies.  There were some incredible technologies built over the last three decades, some outdated processes and procedures, and, if I am honest, traces of a culture where the past had more meaning than the present or future.

In a very short period of time all of that changed.  We aggressively made changes to the organization, re-aligned priorities, and perhaps most of all we created and defined a powerful collection of changes and evolutions we would need to bring about with very aggressive timelines.  These changes were part of a defined Technology Roadmap that broke the work we needed to accomplish into three categories.  The first category focused on the internal technical challenges and tools we needed to build to enhance our own internal efficiencies.  The second category focused on the technical challenges and aggressive things we could do to enhance and bring greater scalability to our products and services.  This would include things like additional services and technology suites for our internally developed cloud infrastructure, and other items that would allow for more rapid delivery of our products and services.  The last category of work was for the incredibly aggressive “wish list” types of changes – items that could be so disruptive, so incredibly game-changing for us, that they could redefine our work on the whole.  In fact we named this group of work “Nibiru” after a mythical planet that is said to cross into our solar system, wreak havoc, and bring about great change.

On July 4, 2012, one of our Nibiru items arrived and I am extremely ecstatic to state that we achieved our “Data Center Independence Day”.  Our primary “Nibiru” goal was to develop and deliver a data center environment without the need for a physical building.  The environment needed to require as little physical “touch” as possible and allow us the ultimate flexibility in how we deliver capacity for our products and services.  We called this effort the Micro Data Center.  If you think about the number of things that need to change to evolve to this type of strategy, it’s a bit mind-boggling.


Here are just a few of the things we had to look at, change, and automate to even make this kind of achievement possible:

  • Developing an entirely new Technology Suite and the ability to deliver that capacity anywhere in the world with minimal to no staffing
  • Delivering extremely dense compute capacity (think the latest technology) to give us the longest possible use of these assets once deployed into the field
  • The ability to deliver a Micro Data Center anywhere on the planet regardless of temperature and humidity conditions
  • The ability to support, maintain, and administer remotely
  • The ability to fit into the power envelope of a normal office building
  • Participation in our cloud environment and capabilities
  • The processes by which these facilities are maintained and serviced
  • and much, much more…

In my mind, it’s one thing to claim a technical achievement; it’s quite another to operationalize that achievement and make the process of supporting it repeatable.  That’s my measure of when you can REALLY declare victory.  Science experiments don’t count.  It has to just plain work.  To that end our first “beta” site for the technology was the AOL campus in Dulles, Virginia.  Out on a lonely slab of concrete in the back of one of the buildings, our future has taken shape.

Thanks in part to a lot of the work going on in the data center containerization space, we were able to jump-start much of the work in a relatively quick fashion.  In fact the pace set by the Data Center and Technology Operations teams to deliver this achievement is more than a bit astounding.  Most, if not all, of the existing AOL Data Centers would fall somewhere around a traditional Tier III / Tier II Uptime Institute definition.  The teams really pushed way outside their comfort zones to deliver some incredible evolutions in a very short period of time.  Of course there were steps along the way to get here, but those steps now seem to be in double time.  A few months back we announced the launch of ATC, our first completely automated facility.  The work that went into ATC was foundational to our achievement yesterday.  It allowed us to really start working on the hard stuff first, that is to say the ‘operationalization’ of these kinds of environments, and it set the stage for how we could evolve to this next tier of evolution.  Below is a summary of some of the achievements of our ATC launch, but if you are curious about the specifics of our work there, feel free to click the ‘Breaking the Chrysalis’ post I did at that time.  You can see how the work that we have been driving in our own internal cloud environments, the changes in operational procedure, and the change in thought are additive and fundamental to our latest achievement.  It’s especially interesting to note that, with all of the blips and hiccups occurring in the ‘cloud industry’, like the leap second and the terrible storms on the East Coast this week which affected many data centers, ATC, our completely unmanned facility, just kept humming along with no issues (to be fair, so did our traditional facilities), despite the fact that much of the initial negative feedback we received was based solely around the reliability of such moves.  It goes to show how important engineering FOR Operation is.  For AOL we have built this in from the start.

What does this actually buy AOL?

Ok, we stuck some computers in a box and we made sure it requires very little care and feeding – what does this buy us?  Quite a bit, actually.  Jay Moran, the Distinguished Engineer who was in charge of driving this effort, is always quick to point out that the problem space here is not just about the technology; it has to be a marriage with the business side as well.  Obviously the inherent flexibility of the design allows us a greater number of places around the planet where we can deploy capacity, and that in and of itself is pretty revolutionary.  We are no longer tied to traditional data center facilities or colocation markets.  That doesn’t mean we won’t use them, it means we now have a choice.  Of course this is only possible because of the internally developed cloud infrastructure, but we have freed ourselves from having to be bolted onto or into existing big infrastructure.  It allows us to have an incredible amount of geo-distributed capacity at a very low cost point in terms of upfront capital and ongoing operational expense.  This is a huge game changer.  So much so, allow me to do a bit of ‘back of the napkin math’ with you.  Let’s call the global capacity, in terms of compute, storage, etc., that we have today in our traditional environments the Total Compute Capability, or TCC.  It’s essentially the bandwidth for the work that we can get done.  Inside the cost for TCC you have operating costs such as power, lease costs, data center facility maintenance costs, support staff, etc.  You additionally have the depreciation for the facilities themselves (or the specific buildouts, if colocating), the server and other equipment depreciation, and the rest.  Let’s call that baseline X.  The Micro Data Center strategy, built out with our latest, most dense server standards and infrastructure, would allow us to have 5X the amount of total TCC in less than 10% of the cost and physical footprint (a quick sketch of that napkin math follows the list below).  If you think about how this will allow us to aggregate and grow over time, it ultimately drives us to a VERY LOW operational cost structure for delivering our products and services.  Additionally it positions us for the future in very significant ways.

  • It redefines software architecture for greater resiliency.
  • It allows us an incredibly flexible platform for driving and addressing privacy laws, regulatory oversight, and other such concerns, allowing us to respond rapidly.
  • It further reduces energy consumption and carbon footprint emissions (important as taxation evolves around the world, as well as for ongoing operational costs).
  • It gives us the ability to drive Edge Computing delivery to potentially bypass CDNs for certain content.
  • It gives us the capability to drive ‘Community-in-a-box’, whereby we can quickly launch new products in markets, quickly expand existing footprints like Patch on a low-cost but still hyper-local platform, and allow the Huffington Post a platform to rapidly partner and enter new markets with minimal-cost turn-ups.
  • The fact that the technology mix in our SKUs comprises compute, storage, and network capacity maximizes the number of products and services we can deploy to it.
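
To make the back-of-the-napkin math above concrete, here is a tiny worked example.  Only the ratios (5X the TCC at under 10% of the cost and footprint) come from the post; the baseline numbers are placeholders, not AOL’s actual figures.

```python
# Back-of-the-napkin sketch of the TCC comparison described above.
# Baseline values are placeholders; only the 5x / <10% ratios come from the post.
baseline_tcc  = 1.0     # Total Compute Capability of the traditional footprint
baseline_cost = 1.0     # baseline X: power, leases, maintenance, staff, depreciation

micro_dc_tcc  = 5.0 * baseline_tcc     # 5X the total compute capability
micro_dc_cost = 0.10 * baseline_cost   # delivered in under 10% of the cost

cost_per_tcc_before = baseline_cost / baseline_tcc    # 1.00
cost_per_tcc_after  = micro_dc_cost / micro_dc_tcc    # 0.02

print(f"Cost per unit of TCC drops by {cost_per_tcc_before / cost_per_tcc_after:.0f}x")  # 50x
```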

As Always, it’s really about the People

I cannot let a post about this huge win for us go by without mentioning the teams involved in delivering this capability.  This is not just a win for AOL, or, to a lesser degree, for the industry at large as another proof point that it can evolve if it puts its mind to changing; above all it is a win for the Technology Teams at AOL.  When I was first approached about joining AOL, my slightly sarcastic and comedic response was probably much like yours – ‘Are they still around?’  But the fact of the matter is that AOL has a vision of where it wants to go, and what it wants to be.  That was compelling for me personally, compelling enough for me to make the move.  What has truly amazed me, however, is the dedication and tenacity of its employees.  These achievements would not be possible without the outright aggressiveness the organization has taken in moving the company forward.  It’s always hard to assess from the outside just how hard an effort is internally to achieve.  In the case of our Micro Data Center strategy, the teams had just about every kind of barrier to delivering this capacity, every kind of excuse to not make it, or even not to try.  They put all of those things aside and just plain executed.  If you allow me a small moment of bravado – not only did my teams simply kick ass, they did it in a way that moved the needle for the company, and in my mind once again catapulted themselves to the forefront of operations and technology at scale.  We still have a bunch of Nibiru projects to deliver, so my guess is we haven’t heard the last of these big wins.

\Mm

Patent Wars may Chill Data Center Innovation

Yahoo may have just sent a cold chill across the data center industry at large and begun a stifling of data center innovation.  In a May 3, 2012 article, Forbes did a quick and dirty analysis of the patent wars between Facebook and Yahoo.  It’s a quick read but shines an interesting light on the potential impact something like this can have across the industry.  The article, found here, highlights that:

In a new disclosure, Facebook added in the latest version of the filing that on April 23 Yahoo sent a letter to Facebook indicating that Yahoo believes it holds 16 patents that “may be relevant” to open source technology Yahoo asserts is being used in Facebook’s data centers and servers.

While these types of patent infringement cases happen all the time in the corporate world, this one could have far greater ramifications on an industry that has only recently emerged into the light of sharing ideas.  While details remain sketchy at the time of this writing, it’s clear that the specific call-out of data centers and servers is an allusion to more than just server technology or applications running in their facilities.  In fact, there is a specific call-out of data centers and infrastructure.

With this revelation one has to wonder about its impact on the Open Compute Project, which is being led by Facebook.  It leads to some interesting questions.  Has their effort to be more open in their designs and approaches to data center operations and design led them to a position of legal risk and exposure?  Will this open the floodgates for design firms to become more aggressive around functionality designed into their buildings?  Could companies use their patents to freeze competitors out of colocation facilities in certain markets by threatening colo providers with these types of lawsuits?  Perhaps I am reaching a bit, but I never underestimate litigious fervor once the proverbial blood gets in the water.

In my own estimation, there is a ton of “prior art”, to use an intellectual property term, out there to settle this down long term, but the question remains – will firms go through that lengthy process to prove it out or opt to re-enter their shells of secrecy?  

After almost a decade of fighting to open up the collective industry to share technologies, designs, and techniques, this is a very disheartening move.  The general glasnost that has descended over the industry has led to real and material change.

We have seen the mental shift of companies moving from measuring facilities purely around “Up Time” to measurements that are focused on efficiency as well.  We have seen more willingness to share best practices and to find like-minded firms to share in innovation.  One has to wonder: will this impact the larger “greening” of data centers in general?  Without that kind of pressure, will people move back to what is comfortable?

Time will certainly tell.  I was going to make a joke about the fact that, until time proves it out, I may have to “lawyer up” just to be safe.  It’s not really a joke, however, because I’m going to bet other firms do something similar and that, my dear friends, is how the innovation will start to freeze.

 

\Mm

Chaos Monkeys, Donkeys and the Innovation of Action

Last week I once again had the pleasure of speaking at the Uptime Institute’s Symposium.  As one of the premier events in the Data Center industry it is definitely one of those conferences that is a must-attend to get a view into what’s new, what’s changing, and where we are going as an industry.  Having attended the event numerous times in the past, this year I set out on my adventure with a slightly different agenda.

Oh sure I would definitely attend the various sessions on technology, process, and approach.  But this time I was also going with the intent to listen equally to the presenters as well as the scuttlebutt, side conversations, and hushed whispers of the attendees.   Think of it as a cultural experiment in being a professional busy body.  As I wove my way around from session to session I was growing increasingly anxious that while the topics were of great quality, and discussed much needed areas of improvement in our technology sector – most of them were issues we have covered, talked about and have been dealing with as an industry for many years.   In fact I was hard pressed to find anything of real significance in the new category.   These thoughts were mirrored in those side conversations and hushed whispers I heard around the various rooms as well.

One of the new features of Symposium is that the 451 Group has opted to expand the scope of the event to be more far reaching covering all aspects of the issues facing our industry.   It has brought in speakers from Tier 1 Research and other groups that have added an incredible depth to the conference.    With that depth came some really good data.   In many respects the data reflected (in my interpretation) that while technology and processes are improving in small pockets, our industry ranges from stagnant to largely slow to act.  Despite mountains of data showing energy efficiency benefits, resulting cost benefits, and the like we just are not moving the proverbial ball down the field.

In a purely unscientific poll I was astounded to find out that some of the most popular sessions were directly related to those folks who have actually done something.  Those who took the new technologies (or old technologies) and put them into practice were roundly more interesting than more generic technology conversations, giving very specific attention to detail on how they accomplished the tasks at hand, what they learned, and what they would do differently.  Most of these “favorites” were not necessarily in those topics of “bleeding edge” thought leadership but specifically in the implementation of technologies and approaches we have talked about at the event for many years.  If I am honest, one of the sessions that surprised me the most was our own.  AOL had the honor of winning an IT Innovation Award from Uptime, and as a result the teams responsible for driving our cloud and virtualization platforms were able to give a talk about what we did, what the impact was, and how it all worked out.  I was surprised because I was not sure how many people would come to this side session, listen to the presentation, or find it relevant.  Of course I thought it was relevant (we were, after all, going to get a nifty plaque for the achievement), but to my surprise the room was packed full, ran out of chairs, and had numerous people standing for the presentation.  During the talk we had a good interaction of questions from the audience, and after the talk we were inundated with people coming up specifically to dig into more details.  We had many comments on the usefulness of the talk because we were giving real-life experiences in making the kinds of changes that we as an industry have been talking about for years.  Our talk and adoption of technology even got a little conversation in some of the industry press such as Data Center Dynamics.

Another session that got incredible reviews was the presentation by Andrew Stokes of Deutsche Bank, who guided the audience through their adoption of a 100% free-air-cooled data center in the middle of New York City.  Again, the technology here was not new (I had built large-scale facilities using this in 2007) – but it was the fact that Andrew and the folks at Deutsche Bank actually went out and did something.  Not someone building large-scale cloud facilities, not some new experimental type of server infrastructure.  Someone who used this technology servicing IT equipment that everyone uses, in a fairly standard facility, and who actually went ahead and did something innovative.  They put into practice something that others have not.  Backed by facts, data, and real-life experiences, the presentation went over incredibly well and was roundly applauded by those I spoke with as one of the most eye-opening presentations of the event.

By listening to the audiences, the hallway conversations, and the multitude of networking opportunities throughout the event, a pattern started to emerge, a pattern that reinforced the belief I was already coming to in my mind.  Despite a myriad of talk on very cool technology, application, and evolving thought-leadership innovations, the most popular and most impactful sessions seemed to center on those folks who actually did something, not with the new bleeding-edge technologies, but utilizing those recurring themes that have carried from Symposium to Symposium over the years.  Air-side economization?  Not new.  Someone (outside Google, Microsoft, Yahoo, etc.) doing it?  Very new, very exciting.  It was what I am calling the Innovation of ACTION.  Actually doing those things we have talked about for so long.

While this Innovation of Action had really gotten many people buzzing at the conference there was still a healthy population of people who were downplaying those technologies.  Downplaying their own ability to do those things.    Re-stating the perennial dogmatic chant that these types of things (essentially any new ideas post 2001 in my mind) would never work for their companies.

This got me thinking (and a little upset) about our industry.  If you listen to those general complaints, and combine them with the data showing that we have been mostly stagnant in adopting these new technologies, we really only have ourselves to blame.  There is a pervasive defeatist attitude amongst a large population of our industry who view anything new with suspicion, or surround it with the fear that it will ultimately take their jobs away – even when the technologies or “new things” aren’t very new any more.  This phenomenon is clearly visible in any conversation around ‘The Cloud’ and its impact on our industry.  The data center professional should be front and center in any conversation on this topic but more often than not self-selects out of the conversation because they view it as more of an application thing, or more IT than data center thing.  Which is of course complete bunk.  Listening to those in attendance complain that the ‘Cloud’ is going to take their jobs away, or that only big companies like Google, Amazon, Rackspace, or Microsoft would ever need them in the future, was driving me mad.  As my keynote at Uptime was to be centered around a Cloud survival guide, I had to change my presentation to account for what I was hearing at the conference.

In my talk I tried to focus on what I felt to be the emerging camps at the conference.  For the first, I placed a slide prominently featuring Eeyore (of Winnie the Pooh fame) and captured many of the quotes I had heard at the conference about how the Cloud and new technologies were something to be mistrusted rather than an opportunity to help drive the conversation.  I then stated that we as an industry were an industry of donkeys.  That fact seems to be backed up by the data.  I have to admit, I was a bit nervous calling a room full of perhaps the most dedicated professionals in our industry a bunch of donkeys – but I always call it like I see it.

I contrasted this with those willing to evolve their thinking forward and embrace that Innovation of Action by highlighting the Cloud example of Netflix.  When Netflix moved heavily into the cloud they clearly wanted to evolve past the normal IT environment and build real resiliency into their product.  They did so by creating a rogue process (on purpose) called the Chaos Monkey, which randomly shut down processes and wreaked havoc in their environment.  At first the Chaos Monkey was painful, but as they architected around those impacts their environments got stronger.  This was no ordinary IT environment.  This was something similar, but new.  The Chaos Monkey creates action, results in action, and on the whole moves the ball forward.
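
For readers who want to see the idea rather than just hear about it, here is a toy sketch of the chaos-monkey concept: periodically pick a running component at random and kill it, so the surrounding system is forced to tolerate failure.  Netflix’s actual tool terminates cloud instances; this simplified, hypothetical version just terminates local worker processes.

```python
# A toy, hypothetical chaos-monkey sketch: randomly terminate workers so the
# system around them has to be designed to survive failure.
import random
import time
from multiprocessing import Process

def worker(worker_id: int) -> None:
    # Stand-in for a real service; a resilient design would detect this worker
    # disappearing and restart or re-route around it.
    while True:
        time.sleep(1)

if __name__ == "__main__":
    workers = [Process(target=worker, args=(i,)) for i in range(5)]
    for w in workers:
        w.start()

    for _ in range(3):                       # three rounds of induced failure
        time.sleep(5)
        alive = [w for w in workers if w.is_alive()]
        victim = random.choice(alive)
        print(f"chaos: terminating worker pid {victim.pid}")
        victim.terminate()                   # the "havoc" the post describes

    for w in workers:                        # clean up the survivors
        if w.is_alive():
            w.terminate()
```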

Interestingly, after my talk I literally had dozens of people come up and admit they had been donkeys and offer to reconnect next year to demonstrate what they had done to evolve their operations.

My challenge to the audience at Uptime, and ultimately my challenge to you, the industry, is to stop being donkeys.  Let’s embrace the Innovation of Action and evolve into our own versions of Chaos Monkeys.  Let’s do more to put the technologies and approaches we have talked about for so long into action.  Next year at Uptime (and across a host of other conferences) let’s highlight those things that we are doing.  Let’s put our Chaos Monkeys on display.

As you contemplate your own job – whether you are an IT or Data Center professional – are you a Donkey or a Chaos Monkey?

\Mm

Preparing for the Cloud: A Data Center and Operations Survival Guide


This May, I once again have the distinct honor of presenting at the Uptime Institute’s Symposium, which will be held in Santa Clara, CA from May 9 through the 12th.  My primary topic this year is entitled ‘Preparing for the Cloud: A Data Center Survival Guide.’  I am really looking forward to this presentation on two fronts.

First, it will allow me to share some of the challenges, observations, and opportunities I have seen over the last few years and package them up for Data Center Operators and IT professionals in a way that’s truly relevant to how to start preparing for the impact on their production environments.  The whole ‘cloud’ industry is now rife with competing definitions, confusing marketing, and a broad spectrum of products and services meant to cure all ills.  To your organization’s business leaders the cloud means lower costs, quicker time to market, and an opportunity to streamline IT Operations and reduce or eliminate the need for home-run data center environments.  But what is the true impact on the operational environments?  What plans do you need to have in place to ensure this kind of move can be successful?  Is your organization even ready to make this kind of move?  Is the nature of your applications and environments ‘Cloud-Ready’?  There are some very significant things to keep in mind when looking into this approach and many companies have not thought them all through.  My hope is that this talk will help prepare the professional with the necessary background and questions to ensure they are armed with the correct information to be an asset to the conversation within their organizations.

The second front is to really dig into the types of services available in the market and how to build an internal scorecard to ensure that your organization is approaching the analysis in a true apples-to-apples kind of comparison.  So often I have heard horror stories of companies caught up in the buzz of the Cloud and pursuing devastating cloud strategies that end up far more expensive than what they had to begin with.  The cloud can be a powerful tool and approach to serve the business, but you definitely need to go in with both eyes wide open.

I will try to post some material in the weeks ahead of the event to set the stage for the talk.  As always, if you are planning on attending Symposium this year, feel free to reach out to me if you see me walking the halls.

\Mm

Reflections on Uptime Symposium 2010 in New York

This week I had the honor of being a keynote speaker at the Uptime Institute’s Symposium event in New York City.  I also participated in some industry panels, which is always tons of fun.  Having been a keynote at the first Symposium a few years back, it was an interesting experience to come back and see how it has changed and evolved over the intervening years.  This year my talk was about the coming energy regulation and its impact on data centers, and more specifically what data center managers and mission critical facilities professionals could and should be doing to get their companies ready for what I call CO2K.  I know I will get a lot of pushback on the CO2K title, but I think my analogy makes sense.  First, companies are generally not aware of the impact that their data centers and energy consumption have.  Second, most companies are dramatically unprepared and do not have the appropriate tools in place to collect the information, which will of course lead to the third item: lots of reactionary spending to get this technology and software in place.  While Y2K was generally a flop and a lot of noise, if legislation is passed (and let’s be clear about the very direct statements the Obama administration has made on this topic) this work will lead to a significant change in reporting and management responsibilities for our industry.

Think we are ready for this legislation?

Which brings me back to my first reflection on Symposium this year.  I was joking with Pitt Turner just before I went on stage that I was NOT going to ask the standard three questions I ask of every data center audience.  Let’s face it, I thought, that “shtick” had gotten old, and I have been asking those same three questions for at least the last three years at every conference I have spoken at (which is a lot).  However, as I got on stage talking about the topic of regulation, I had to ask; it was like a hidden burning desire I could not quench.  So there I went: “How many people are measuring for energy consumption and efficiency today?”  “Raise your hand if, in your organization, the CIO sees the power bill.”  And then finally, “How many people in here today have the appropriate tooling in place to collect and report energy usage in their data centers?”  It had to come out.  I saw Pitt shaking his head.  What was more surprising was the number of people who raised their hands on those questions.  Why?  About 10% of the audience had raised their hands.  Don’t get me wrong, 10% is about the highest I have seen that number at any event.  But for those of you who are uninitiated into the UI Symposium lore, you need to understand something important: Symposium represents the hardest of the hard-core data center people.  This is where all of us propeller heads geek it out in mechanical and electrical splendor; we dance and raise the “floor” (data center humor).  Yet this amazing collection of the best of the best had only a 10% penetration of monitoring in their environments.  When this regulation comes, it’s going to hurt.  I think I will do a post at a later time on my talk at Symposium and what you as a professional can do to start raising awareness.  But for now, that was my first big startle point.
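
As a concrete illustration of the kind of basic tooling those questions are getting at, here is a minimal sketch that computes PUE (The Green Grid’s Power Usage Effectiveness metric: total facility energy divided by IT equipment energy) and an annualized total from meter readings.  The numbers are hypothetical placeholders, not from any real facility.

```python
# Minimal sketch of basic energy reporting: PUE plus an annualized total.
# All meter readings below are hypothetical placeholders.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (1.0 is the ideal)."""
    return total_facility_kwh / it_equipment_kwh

# One month of readings, in kWh.
total_kwh = 1_460_000   # utility feed: IT load plus cooling, power losses, lighting
it_kwh    = 1_000_000   # measured at the PDU / rack level

print(f"PUE: {pue(total_kwh, it_kwh):.2f}")            # 1.46
print(f"Annualized energy: {total_kwh * 12:,} kWh")    # the figure a CO2 report would start from
```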

My second key observation this year was the number of people.  Symposium is truly an international event and there were over 900 attendees for the talks and, if memory serves, about 1300 for the exhibition hall.  I had heard that 20 of the world’s 30 time zones had representatives at the conference.  It was especially good for one of the key recurring benefits of this event: networking.  The networking opportunities were first rate and, by the looks of the impromptu meetings and hallway conversations, this continued to be a key driver of the event’s success.  As fun as making new friends is, it was also refreshing to spend some time on quick catch-ups with old friends like Dan Costello and Sean Farney from Microsoft, Andrew Fanara, Dr. Bob Sullivan, and a host of others.

My third observation, and perhaps the one I was most pleased with, was the diversity of thought in the presentations.  It’s fair to say that I have been critical of Uptime for some time for a seemingly dogmatic, recurring set of themes and a particular bent of thinking.  While those topics were covered, so too were a myriad of what I will call counter-culture topics.  Sure, there were still a couple of the salesy presentations you find at all of these kinds of events, but the diversity of thought and approach this time around was striking.  Many of the presentations addressed larger business issues; the impact, myths, and approach to cloud computing; virtualization; and decidedly non-facilities-related material affecting our worlds.  This might have something to do with the purchase by the 451 Group and its related data center think tank Tier 1, but it was amazingly refreshing and they knocked the ball out of the park.

My fourth observation was that the amount of time allotted to the presentations was too short.  While I have been known to completely abuse any allotted timeslot in my own talks due to my desire to hear myself talk, I found that many presentations had to end due to time just as things were getting interesting.  Many of the hallway conversations were continuations of those presentations and it would have been better to keep the groups in the presentation halls.

 

My fifth observation revolved around the quantity, penetration, and maturation of container and containment products, presentations, and services.  When we first went public with the approach while I was at Microsoft, the topic was so avant-garde and against the grain of common practices that it got quite a reception (mostly negative).  This was followed by quite a few posts (like Stirring Anthills) which got lots of press attention and resulted in industry experts stating that containers and containment were never going to work for most people.  If the presentations, products, and services represented at Uptime are any indication of industry adoption and embrace, I guess I would have to make a childish gesture with thumb to my nose, wiggle my fingers and say… Nah Nah.  🙂

 

I have to say the event this year was great and I enjoyed my time thoroughly.  A great time and a great job by all. 

\Mm

Rolling Clouds – My Move into the Mobile Cloud

As many of you saw in my last note, I have officially left Digital Realty Trust to address some personal things.  While I get those things in order I am not sitting idly by.  I am extremely happy to announce that I have taken a role at Nokia as their VP of Service Operations.  In this role I will have global responsibility for the strategy, operation, and run of the infrastructure aspects of Nokia’s new cloud and mobile services platforms.

It’s an incredibly exciting role, especially when you consider that mobile hand-helds around the world are increasingly becoming the interface by which people consume information.  Whether that be navigation-based applications or other content-related platforms, your phone is becoming your gateway to the world.

I am also very excited by the fact that there are some fierce competitors in this space as well.  Once again I will be donning my armor and doing battle with my friends at Google.  Their Droid platform is definitely interesting and it will be interesting to see how it develops.  I have a great amount of respect for Urs Hoelzle, and their cloud platform is something I am fairly familiar with.  I will also be doing battle with the folks from Apple (and, interestingly, my good friend Olivier Sanche).  Apple definitely has the high-end hand-held market here in the US, but its experience in cloud platforms and operations is not very sophisticated just yet.  On some level I guess I am even competing against the infrastructure and facilities I built out at Microsoft, at least as it relates to the mobile world.  Those are some meaty competitors and, as you have seen before, I love a good fight.

In my opinion, Nokia has some very interesting characteristics that position it extremely well, if not atop the fray, in this space.  First, there is no arguing about Nokia’s penetration of hand-held devices across the world, especially in markets like India, China, South America, and other emerging Internet-using populations.  Additionally, these emerging economies are skipping past ground-based wired technologies to wireless connectivity.  As a result, Nokia already has an incredible presence in those markets.  Their OVI platform today already has a significant population of users (measured at least in the tens of millions), so scale at the outset is definitely there.  When I think about the challenge that Google has in getting device penetration out there, or Apple’s high-end (and mostly US-only) approach, you can see the opportunity.  I am extremely excited to get going.

Hope you will join me for an incredible ride!

\Mm

Private Clouds – Not just a Cost and Technology issue, It’s all about trust, the family jewels, corporate value, and identity

I recently read a post by my good friend James Hamilton at Amazon regarding Private Clouds.  James and I worked closely together at Microsoft and he was always a good source of out-of-the-box thinking and challenging the status quo.  While James’ post, found here, speaks to the Private Cloud initiative being what amounts to an evolutionary dead end, I would have to respectfully disagree.

James’ post starts out by correctly pointing out that at scale the large cloud players have the resources and incentive to achieve some pretty incredible cost savings.  From an infrastructure perspective he is dead on.  But I don’t necessarily agree that this innovation will never reach the little guy.  In my role at Digital Realty Trust I think I have a pretty unique perspective on the infrastructure developments both at the “big” guys and in what most corporate enterprises have available to them from a leasing or commercial perspective.

Companies like Digital Realty Trust, Equinix, Terremark, DuPont Fabros, and a host of others in the commercial data center space are making huge advancements in this space as well.  The free market economy has now placed an importance on low-PUE, highly efficient buildings.  You are starting to see these firms commission buildings with PUEs sub-1.4.  Compared to most existing data center facilities this is a huge improvement.  Likewise these firms are incented to hire mechanical and electrical experts.  This means that this same expertise is available to the enterprise through leasing arrangements.  Where James is potentially correct is at that next layer of IT-specific equipment.

This is an area where there is an amazing amount of innovation happening at Amazon, Google, and Microsoft.  But even here there are firms stepping up to provide solutions that bring extensive virtualization and cloud-like capabilities to bear.  Companies like Hexagrid have software solution offerings that are being marketed to typical co-location and hosting firms to do the same thing.  Hexagrid and others are focusing on the software and hardware combinations to deliver full-service solutions for the companies in this space.  In fact (as some comments on James’ blog mention) there is a lack of standards and a fear of vendor lock-in in choosing one of the big firms.  It’s an interesting thought to ponder whether a software-plus-hardware solution offered to the hundreds of co-location players and hosting firms might be more of a universal solution without that fear of lock-in.  Time will tell.

But this brings up one of the key criticisms: this is not just about cost and technology.  I believe what is really at stake here is much more than that.  James makes great points about the greater resource utilization of the big cloud players and how much more efficient they are at utilizing their infrastructure.  To which I will snarkily (and somewhat tongue-in-cheek) say, “SO WHAT!”  🙂   Do enterprises really care about this?  Do they really optimize for this?  I mean, if you pull back that fine veneer of politically correct answers and “green-suitable” responses, is that what their behavior in REAL LIFE is indicative of?  NO.

This was a huge revelation for me when I moved into my role at Digital.  When I was at Microsoft, I optimized for all of the things that James mentions because it made sense to do so when you owned the whole pie.  In my role at Digital I have visibility into tens of data centers, across hundreds of customers, that span just about every industry.  There is not, nor has there been, a massive move (or any move for that matter) to become more efficient in the utilization of their resources.  We have had years of people bantering about how wonderful, cool, and revolutionary a lot of this stuff is, but worldwide data center utilization levels have remained abysmally low.  Some providers bank on this; over-subscription of their facilities is part of their business plan.  They know companies will lease and take down what they think they need, and never actually use it all in REALITY.

So if this technology issue is not a motivating factor, what is?  Well, cost is always part of the equation.  The big cloud providers will definitely deliver cost savings, but private clouds could deliver cost savings as well.  More importantly, however, Private Clouds will allow companies to retain their identity and uniqueness, and keep what makes them competitively them – Them.

I don’t so much see it as a Private Cloud or Public Cloud kind of thing but more of a Private Cloud AND Public Cloud kind of thing.  To me it looks more like an exercise in data abstraction.  The public offerings will clearly offer infrastructure benefits in terms of cost, but will undoubtedly lock a company into that single solution.  The IT world has been bitten before by putting all its eggs in a single basket, and the need for flexibility will remain key.  Therefore you might begin to see systems integrators, co-location and hosting firms, and others build their own platforms or, much more likely, build platforms that umbrella over the big cloud players to give enterprises the best of both worlds.

Additionally, we must keep in mind that the biggest resistance to the adoption of the cloud is not technology or cost but RISK and TRUST.  Do you, Mr. CIO, trust Google to run all of your infrastructure?  Your applications?  Do you, Mrs. CIO, trust Microsoft or Amazon to do the same for you?  The answer is not a blind yes or no.  It’s a complicated set of minor yes responses and no responses.  They might feel comfortable outsourcing mail operations, but not the data warehouse holding decades of customer information.  The Private Cloud approach will allow you to spread your risk.  It will allow you to maintain those aspects of the business that are core to the company.

The cloud is an interesting place today.  It is dominated by technologists: extremely smart engineering people who like to optimize and solve for technological challenges.  The actual business adoption of this technology set has yet to be fully explored.  Just wait until the “business” side of these companies gets its hooks into this technology set and starts placing other artificial constraints, or optimizations around other factors.  There are thousands of different motivators out in the world.  Once that starts to happen in earnest, I think what you will find is a solution that looks more like a hybrid than the pure plays we dream about today.

Even if you think my ideas and thoughts on this topic are complete BS, I would remind you of something that I have told my teams for a very long time: “There is no such thing as a temporary data center.”  This same mantra will hold true for the cloud.  If you believe that the Private Cloud will be a passing and temporary thing, just keep in mind that there will be systems and solutions built to this technology approach, thus imbuing it with a very, very long life.

\Mm