Through an idea and force of will, he created an industry…

This week the Data Center Industry got the terrible news it knew might be coming for some time: Ken Brill, founder of the Uptime Institute, had passed away. Many of us knew that Ken had been ill for some time and, although it may sound silly, were hoping he could somehow pull through it. Even as ill as he was, Ken was still sending and receiving emails and staying in touch with this industry that, quite frankly, he helped give birth to.

I was recently asked about Ken and his legacy for a Computerworld article, and it really caused me to stop and re-think his overall legacy and gift to the rest of us in the industry. Ken Brill was a pioneering, courageous, tenacious visionary who, through his own force of will, saw the inefficiencies in a nascent industry and helped craft it into what it is today.

Throughout his early career Ken saw the absolute silo'ing of information, best practices, and approaches that different enterprises were developing around managing their mission critical IT spaces. While certainly not alone in the effort, he became the strongest voice and champion to break down those walls, help others through the process, and build a network of people who would share these ideas amongst each other. Before long an industry was born, sewn together through his sometimes delicate, sometimes not-so-delicate cajoling and, through it all, his absolute passion for the Data Center industry at large.

In that effort he also created and propagated the language that the industry now uses as commonplace. Seeing a huge gap in how people communicated and compared mission critical capabilities, he became the klaxon of the Tiering system, which essentially normalized those conversations across the Data Center Industry. While some (including myself) have come to think it's time to re-define how we classify our mission critical spaces, we all have to pay homage to the fact that Ken's insistence and drive for the Tiering system created a place and a platform to even have such conversations.

One of Ken's greatest strengths was his adaptability. For example, Ken and I did not always agree. I remember an Uptime Fellows meeting back in 2005 or 2006 or so in Arizona. In this meeting I started talking about the benefits of modularization and reduced infrastructure requirements augmented by better software. Ken was incredulous, and we had significant conversations around the feasibility of such an approach. At another meeting we discussed the relative importance or non-importance of a new organization called 'The Green Grid' and whether Uptime should closely align itself with those efforts. Through it all Ken was ultimately adaptable. Whether it was giving those ideas light for conversation amongst the rest of the Uptime community via audio blogs or other means, Ken was there to have a conversation.

In an industry where complacency has become commonplace, where people rarely question established norms, it was always comforting to know that Ken was there acting the firebrand, causing the conversation to happen.   This week we lost one of the ‘Great Ones’ and I for one will truly miss him.  To his family my deepest sympathies, to our industry I ask, “Who will take his place?”

 

\Mm

Pointy Elbows, Bags of Beans, and a little anthill excavation…A response to the New York Times Data Center Articles

I have been following with some interest the series of articles in the New York Times by Jim Glanz. The series premiered on Sunday with an article entitled Power, Pollution and the Internet, and was followed up today with a deeper dive into some specific examples. Today's examples (Data Barns in a Farm Town, Gobbling Power and Flexing Muscle) focused on the Microsoft program, a program with which I have more than some familiarity, since I ran it for many years. After just two articles, reading the feedback in the comments, and seeing some of the reaction in the blogosphere, it is very clear that there is a significant amount of misunderstanding and over-simplification, along with a lack of detail I think is probably important. In responding I want to be very clear that I am not representing AOL, Microsoft, or any other organization; these are my own personal observations and opinions.

As mentioned in both of the articles, I was one of hundreds of people interviewed by the New York Times for this series. In those conversations with Jim Glanz a few things became very apparent. First, he has been on this story for a very long time, at least a year. As far as journalists go, he was incredibly deeply engaged and armed with tons of facts. In fact, he had a trove of internal emails, meeting minutes, and a mountain of data from government filings that must have taken him months to collect. Secondly, he had the very hard job of turning this very complex space into a format where the uneducated masses can begin to understand it. Therein lies much of the problem: this is an incredibly complex space to communicate to those not tackling it day to day, or who do not understand the technological and regulatory forces involved. This is not an area or topic that can be sifted down to a sound bite. If this were easy, there really wouldn't be a story, would there?

At issue for me is that the complexity of the forces involved gets scant attention, the articles aiming instead for the larger "Data Centers are big bad energy vampires hurting the environment" story. That is clearly evident reading through the comments on both of the articles so far, which claim the sources and causes of the problem range from poor web page design to conspiracies by governments or multi-national companies to corner the market on energy.

So I thought I would take a crack, article by article, at shedding some light (the kind that doesn't burn energy) on some of the topics and just call out where I disagree completely. In full transparency, the "Data Barns" article doesn't necessarily paint me as a "nice guy." Sometimes I am. Sometimes I am not. I am not an apologist, nor do I intend to be one in this post. I am paid to get stuff done. To execute. To deliver. Quite frankly the PUD missed deadlines (the progenitor event to my email quoted in the piece), and sometimes people (even utility companies) have to live in the real world of consequences. I think my industry reputation, work, and fundamental stances around driving energy efficiency and environmental conservancy in this industry can stand on their own, both publicly and for those that have worked for me.

There is an inherent irony here: these articles were published both in print and electronically to maximize audience and readership. To do that, the articles made "multiple trips" through a data center, and ultimately reside in one (or more). They seem to suggest that keeping things online is bad, which goes against the availability needs of the articles themselves. Doesn't the New York Times expect to make these articles available online for people to read? They are posted online already. Perhaps they expect that their microfiche experts will be able to serve the demand for these articles in the future? I do not think so.

This is a complex eco-system of users, suppliers, technology, software, platforms, content creators, data (both BIG and small), regulatory forces, utilities, governments, financials, energy consumption, people, personalities, politics, company operating tenets, and community outreach, to name just a few. On top of managing all these variables, they also have to keep things running with no downtime.

\Mm

Sites and Sounds of DataCentre2012: My Presentation, Day 2, and Final Observations


Today marked the closing block of sessions for DataCentres2012 and my keynote session to the attendees. After sitting through a series of product, technology, and industry trend presentations over the last two days, I felt my conversation would at the very least be something different. Before I get to that, I wanted to share some observations from the morning.

It all began with an interesting run-down of data center and infrastructure industry trends across Europe from Steve Wallage of The BroadGroup. It contained some really compelling information and highlighted some interesting divergence between the European and US markets in terms of infrastructure adoption and trends. It looks like they have a method for those interested to get their hands on the detailed data (for purchase). The part I found particularly interesting was the significant slowdown of the wholesale data center market across Europe while colocation providers continued to do well. Additionally, the percentage changes within those providers' customer bases by category were compelling and demonstrated a fundamental shift of content-related customers across the board.

This presentation was followed by a panel of European thought leaders made up mostly of those same colocation providers. Given the presentation by Wallage, I was expecting some interesting data points to emerge. While there was a range of ideas and perspectives represented on the panel, I have to say it really got me worked up, and not in a good way. In many ways I felt the responses from many (not all) on the panel highlighted a continued resistance to change in thinking around everything from efficiency to technology approach. It represented the things I despise most about our industry at large: the slow adoption of change, the warm embrace of the familiar, the outright resistance to new ideas. At one point, a woman in the front row, whom I believe was from Germany, got up to ask whether the panelists had any plans to move their facilities outside of the major metros. She referenced Christian Belady's presentation around the idea of Data as Energy and remote locations like Quincy, Washington or Lulea, Sweden, and she referred to that overall approach and different thinking as quite visionary. Now the panel could easily have referred to the fact that companies like Microsoft, Google, Facebook and the like have much greater software-level control than a colo provider could offer. Or perhaps they could have referenced that most of their customers are limited by distance to existing infrastructure deployments due to inefficiencies in commercial or custom internally deployed applications: databases with response times architected for in-rack or in-facility levels of latency. They did at least reference that most customers tend to be server huggers and want their infrastructure close by.

Instead, the initial response was quite strange to my mind. It was to go after the idea of calling these moves "innovative," implying that nothing was really innovative about what Microsoft had done, and that the fact that they built a "mega data center" in Dublin shows that nothing innovative is really happening. Really? The adoption of 100% air-side economization is something everyone does? The deployment of containerized compute capacity is run of the mill? The concept of the industrialization of compute is old hat? I had to do a mental double-take and question whether they were even listening during ANY of the earlier sessions. Don't get me wrong, I am not trying to be an apologist for the Microsoft program; in fact there are some tenets of the program I find myself not in agreement with. However, you cannot deny that they are doing VERY different things. It illustrated an interesting undercurrent I felt during the entire event (and maybe even across our industry): I definitely got the sensation of a growing gap between users' requirements, forward roadmaps, and desires, and what manufacturers and service providers are providing. This panel, and a previous panel on modularization, highlighted these gulfs pretty demonstrably. At a minimum I walked away with an interesting new perspective on some of the companies represented.

It was then time for me to give my talk. Every discussion up until this point had focused on technology or industry trends. I was going to talk about something else. Something more important. The one thing seemingly missing from the entire event: the people attending. All the technology in the world, all the understanding of the trends in our industry, are nothing unless the people in the room are willing to act. Willing to step up and take active roles in their companies to drive strategy. As I have said before: to get out of the basement and into the penthouse. The pressures on our industry and our job roles have never been more complicated. So I walked through regulations, technologies, and cloud discussions. Using the work we did at AOL as a backdrop and example, I tried to drive my main point: that our industry, specifically the people doing all the work, is moving to a role of managing a complex portfolio of technologies, contracts, and a continuum of solutions. Gone are the days when we could hide sheltered in our data center facilities. We need to embrace change and evolve with it, or it will evolve around us. I walked through specific examples of how AOL has had to broaden its own perspective and approach to this widening view of our work roles at all levels. I even pre-announced something we are calling Data Center Independence Day: an aggressive adoption of modularized compute capacity, which we call MicroData Centers, to help solve many of the issues we are facing as a business, along with the rough business case for why it makes sense for us to move to this model. I will speak more of that in the weeks to come with a greater degree of specificity, but I stressed again the need for a wider perspective in managing a large portfolio of technologies and approaches to be successful in the future.

In closing – the event was fantastic.   The ability this event provides to network with leaders and professionals across the industry was first rate.   If I had any real constructive feedback it would be to either lengthen sessions, or reduce panel sizes to encourage more active and lively conversations.  Or both!

Perhaps, at the end of the day, it's truly the best measure of a good conference if you walk away wishing more time could be spent on the topics. As for me, I am headed back Stateside to dig into the challenges of my day job. To the wonderful host city of Nice, I say adieu!

 

\Mm

Patent Wars may Chill Data Center Innovation

Yahoo may have just sent a cold chill across the data center industry at large and begun a stifling of data center innovation. In a May 3, 2012 article, Forbes did a quick and dirty analysis of the patent wars between Facebook and Yahoo. It's a quick read, but it shines an interesting light on the potential impact something like this can have across the industry. The article, found here, highlights that:

In a new disclosure, Facebook added in the latest version of the filing that on April 23 Yahoo sent a letter to Facebook indicating that Yahoo believes it holds 16 patents that “may be relevant” to open source technology Yahoo asserts is being used in Facebook’s data centers and servers.

While these types of patent infringement cases happen all the time in the corporate world, this one could have far greater ramifications for an industry that has only recently emerged into the light of sharing ideas. While details remain sketchy at the time of this writing, it's clear that the specific call-out of data centers and servers alludes to more than just server technology or applications running in their facilities. In fact, there is a specific call-out of data centers and infrastructure.

With this revelation one has to wonder about its impact on the Open Compute Project, which is being led by Facebook. It leads to some interesting questions. Has their effort to be more open in their designs and approaches to data center operations and design led them to a position of legal risk and exposure? Will this open the floodgates for design firms to become more aggressive around functionality designed into their buildings? Could companies use their patents to freeze competitors out of colocation facilities in certain markets by threatening colo providers with these types of lawsuits? Perhaps I am reaching a bit, but I never underestimate litigious fervor once the proverbial blood gets in the water.

In my own estimation, there is a ton of "prior art," to use an intellectual property term, out there to settle this down long term. But the question remains: will firms go through that lengthy process to prove it out, or opt to re-enter their shells of secrecy?

After almost a decade of fighting to open up the collective industry to share technologies, designs, and techniques, this is a very disheartening move. The general Glasnost that has descended over the industry has led to real and material change for the industry.

We have seen companies make the mental shift from measuring facilities purely around "uptime" to focusing on efficiency as well. We have seen more willingness to share best practices and find like-minded firms to share in innovation. One has to wonder: will this impact the larger "greening" of data centers in general? Without that kind of pressure, will people move back to what is comfortable?

Time will certainly tell. I was going to make a joke that, until time proves it out, I may have to "lawyer up" just to be safe. It's not really a joke, however, because I'm going to bet other firms do something similar, and that, my dear friends, is how the innovation will start to freeze.

 

\Mm

Chaos Monkeys, Donkeys and the Innovation of Action

Last week I once again had the pleasure of speaking at the Uptime Institute's Symposium. As one of the premier events in the data center industry, it is definitely a must-attend conference for a view into what's new, what's changing, and where we are going as an industry. Having attended the event numerous times in the past, this year I set out on my adventure with a slightly different agenda.

Oh sure, I would definitely attend the various sessions on technology, process, and approach. But this time I was also going with the intent to listen equally to the presenters and to the scuttlebutt, side conversations, and hushed whispers of the attendees. Think of it as a cultural experiment in being a professional busybody. As I wove my way from session to session, I grew increasingly anxious that while the topics were of great quality and discussed much-needed areas of improvement in our technology sector, most of them were issues we have covered, talked about, and been dealing with as an industry for many years. In fact, I was hard pressed to find anything of real significance in the "new" category. These thoughts were mirrored in the side conversations and hushed whispers I heard around the various rooms as well.

One of the new features of Symposium is that the 451 Group has opted to expand the scope of the event to cover more aspects of the issues facing our industry. It has brought in speakers from Tier 1 Research and other groups who have added incredible depth to the conference. With that depth came some really good data. In many respects the data reflected (in my interpretation) that while technology and processes are improving in small pockets, our industry ranges from stagnant to slow to act. Despite mountains of data showing energy efficiency benefits, resulting cost benefits, and the like, we are just not moving the proverbial ball down the field.

In a purely unscientific poll, I was astounded to find that some of the most popular sessions were directly related to those folks who had actually done something. Those who took new technologies (or old technologies) and put them into practice were roundly more interesting than the more generic technology conversations, giving very specific attention to detail on how they accomplished the tasks at hand, what they learned, and what they would do differently. Most of these "favorites" were not in the "bleeding edge" thought-leadership topics but specifically in the implementation of technologies and approaches we have talked about at the event for many years.

If I am honest, one of the sessions that surprised me the most was our own. AOL had the honor of winning an IT Innovation Award from Uptime, and as a result the teams responsible for driving our cloud and virtualization platforms were able to give a talk about what we did, what the impact was, and how it all worked out. I was surprised because I was not sure how many people would come to this side session or find the presentation relevant. Of course I thought it was relevant (we were, after all, going to get a nifty plaque for the achievement), but to my surprise the room was packed full, ran out of chairs, and had numerous people standing for the presentation. During the talk we had a good interaction of questions from the audience, and afterwards we were inundated with people coming up to dig into more details. We had many comments about the usefulness of the talk because we were giving real-life experiences of making the kinds of changes that we as an industry have been talking about for years. Our talk and adoption of technology even got a little conversation in some of the industry press, such as Data Center Dynamics.

Another session that got incredible reviews was the presentation by Andrew Stokes of Deutsche Bank, who guided the audience through their adoption of a 100% free-air-cooled data center in the middle of New York City. Again, the technology here was not new (I had built large-scale facilities using it in 2007), but it was the fact that Andrew and the folks at Deutsche Bank actually went out and did something. Not someone building large-scale cloud facilities, not some new experimental type of server infrastructure: someone who used this technology to service the IT equipment that everyone uses, in a fairly standard facility, and who actually went ahead and did something innovative. They put into practice something that others have not. Backed by facts, data, and real-life experience, the presentation went over incredibly well and was roundly applauded by those I spoke with as one of the most eye-opening presentations of the event.

By listening to the audiences, the hallway conversations, and the multitude of networking opportunities throughout the event, a pattern started to emerge; a pattern that reinforced the belief I was already forming in my mind. Despite a myriad of talks on very cool technology, applications, and evolving thought-leadership innovations, the most popular and most impactful sessions seemed to center on those folks who actually did something, not with the new bleeding-edge technologies, but with those recurring themes that have carried from Symposium to Symposium over the years. Air-side economization? Not new. Someone (outside Google, Microsoft, Yahoo, etc.) doing it? Very new; very exciting. It was what I am calling the Innovation of ACTION: actually doing those things we have talked about for so long.

While this Innovation of Action had really gotten many people buzzing at the conference, there was still a healthy population downplaying those technologies. Downplaying their own ability to do those things. Re-stating the perennial dogmatic chant that these types of things (essentially any new idea post-2001, in my mind) would never work for their companies.

This got me thinking (and a little upset) about our industry. If you listen to those general complaints and combine them with the data showing that we have been mostly stagnant in adopting these new technologies, we really only have ourselves to blame. There is a pervasive defeatist attitude amongst a large population of our industry who view anything new with suspicion, or surround it with the fear that it will ultimately take their jobs away. Even when the technologies or "new things" aren't even very new any more. This phenomenon is clearly visible in any conversation around 'The Cloud' and its impact on our industry. The data center professional should be front and center in any conversation on this topic, but more often than not self-selects out of the conversation, viewing it as more of an application thing, or more IT than data center. Which is of course complete bunk. Listening to those in attendance complain that the 'Cloud' is going to take their jobs away, or that only big companies like Google, Amazon, Rackspace, or Microsoft would ever need them in the future, was driving me mad. As my keynote at Uptime was to be centered around a Cloud survival guide, I had to change my presentation to account for what I was hearing at the conference.

In my talk I tried to focus on what I felt were the emerging camps at the conference. For the first, I put up a slide prominently featuring Eeyore (of Winnie the Pooh fame) and captured many of the quotes I had heard at the conference about how the Cloud and new technologies were something to be mistrusted rather than an opportunity to help drive the conversation. I then stated that we as an industry were an industry of donkeys, a fact that seems to be backed up by the data. I have to admit, I was a bit nervous calling a room full of perhaps the most dedicated professionals in our industry a bunch of donkeys, but I always call it like I see it.

I contrasted this with those willing to evolve their thinking forward and embrace that Innovation of Action, highlighting the Cloud example of Netflix. When Netflix moved heavily into the cloud, they clearly wanted to evolve past the normal IT environment and build real resiliency into their product. They did so by creating a rogue process (on purpose) called the Chaos Monkey, which randomly shut down processes and wreaked havoc in their environment. At first the Chaos Monkey was painful, but as they architected around those impacts, their environments got stronger. This was no ordinary IT environment. This was something similar, but new. The Chaos Monkey creates action, results in action, and on the whole moves the ball forward.
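For those who have never seen the mechanics, here is a minimal toy sketch of the idea in Python. This is my own illustration, not Netflix's actual tool; the service names and counts are invented. A "monkey" kills one instance each cycle, and a resilient architecture automatically replaces what was lost:

import random

# A toy fleet: service name -> number of healthy instances
fleet = {"web": 4, "api": 4, "queue": 2}
desired = dict(fleet)  # the capacity each service should maintain

def chaos_step(fleet, rng):
    """Terminate one randomly chosen instance, as the monkey would."""
    victim = rng.choice([s for s, n in fleet.items() if n > 0])
    fleet[victim] -= 1
    return victim

def recover(fleet, desired):
    """A resilient architecture replaces lost instances automatically."""
    for service, target in desired.items():
        if fleet[service] < target:
            fleet[service] += 1  # auto-replacement, one instance per cycle

rng = random.Random(42)
for cycle in range(5):
    victim = chaos_step(fleet, rng)
    recover(fleet, desired)
    print(f"cycle {cycle}: killed one '{victim}' instance -> {fleet}")

The point of the exercise is in the recover step: if nothing replaces the lost instances, the fleet withers, and that weakness is exposed on the first cycle rather than during a real outage.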

Interestingly, after my talk I literally had dozens of people come up and admit they had been donkeys, and offer to reconnect next year to demonstrate what they had done to evolve their operations.

My challenge to the audience at Uptime, and ultimately my challenge to you, the industry, is to stop being donkeys. Let's embrace the Innovation of Action and evolve into our own versions of Chaos Monkeys. Let's do more to put the technologies and approaches we have talked about for so long into action. Next year at Uptime (and across a host of other conferences), let's highlight the things we are doing. Let's put our Chaos Monkeys on display.

As you contemplate your own job, whether you are an IT or Data Center professional: are you a Donkey or a Chaos Monkey?

\Mm

I’ve Got Mail….A new Aol.

You may have seen the announcement today about my recent decision to join the new leadership team at Aol. To some of my friends in the Technorati, and most specifically the Valley, this move probably seems very contrarian. For someone who has built some of the largest cloud infrastructures in the world and re-aligned operational processes at massive scale, Aol may at first stroke seem an odd choice. I have worked in some of the largest multi-national companies in the world, I have successfully (and unsuccessfully) launched start-ups, and I have been a cost center and carried a P&L. I think I have a pretty good understanding of the range and complexity of challenges (especially from a technology perspective) from small business to large. Across the spectrum of these types and sizes of companies you get a different feel. Different cultures. Different attitudes. Different vibes.

Aol is aggressively moving to redefine itself in the industry, to significantly transform and morph itself within a world that Aol itself helped create and define over 25 years ago. There is no arguing that the first true scale challenges in dealing with the Internet at large were experienced by those first AOL'ers, as they had to deal with numbers of users never before seen in our industry. They pushed the boundaries of technology, they pushed the boundaries of operations, they created whole new paradigms. To reinvent itself in a market with such competition, such diversity, is a huge challenge.

One of the most surprising things to me is that vibe-thing I talked about a few moments ago. When walking around the company, you cannot help but notice that it definitely has more of a technology start-up feel to it. It's palpable. One of the folks I ran into called it a "start-around": a combination of a start-up and a turn-around. Perhaps that's the best description I have for that vibe. Sure, things have been tough; sure, there is a lot of legacy to work through; but the level of commitment of the folks that are here is incredible. More so than that, it's a culture of believers. It's all the self-sacrifice and personal investment you find in a startup, but with a team of seasoned veterans. It's quite unique in my experience.

As I mentioned, Aol has long held a place of respect in terms of operational best practices at scale, and a culture that recognized the importance of technology in the delivery of its mission. Tim Armstrong, the CEO and a Google veteran, has built an incredible team of passionate technology veterans from places like Google, Microsoft, and others. The mission is focused. The mission is deliberate. The mission is clear. The mission is hard. It's a huge challenge. It's the kind of challenge I love. If you think it's impossible, you are only encouraging my energy more. I could have taken a safe bet. But where is the excitement? Where is the challenge? As the saying goes, "A ship is safe in the harbour, but that's not what ships are for!" This ship is setting sail, and my commitment is that not only will we find a new world, we will define it!

In the coming days/weeks/months, I hope to share many of the exciting things we will be endeavoring to accomplish and give you a real taste of some of the big changes I will be attempting. As always, technology and operational processes will be key to the success of the mission the company is on, and I have some very definite ideas on how we can leapfrog current thinking in this space and ensure that our technology and operational approach is not only a strategic value to the business, but also industry-leading in execution.

\Mm


Open Source Data Center Initiative

There are many in the data center industry who have repeatedly called for change in this community of ours. Change in technology, change in priorities, change for the future. Over the years we have seen those changes come very slowly, and while they are starting to move a little faster now (primarily due to economic conditions and scrutiny over budgets more so than a desire to evolve our space), our industry still faces challenges and resistance to forward progress. There are lots of great ideas and lots of forward thinking, but moving this work to execution, and educating business leaders as well as data center professionals to break away from the old stand-by accepted norms, has not gone well.

That is why I am extremely happy to announce my involvement with the University of Missouri in the launch of a not-for-profit, data-center-specific organization. You might have read the formal announcement by Dave Ohara, who launched the news via his industry website, GreenM3. Dave is another of those industry insiders who has long been perplexed by the lack of movement and initiative we have had on some great ideas and stand-outs doing great work. More importantly, it doesn't stop there. We have been able to put together quite a team of industry heavyweights to get involved in this effort. Those announcements are forthcoming, and when they arrive, I think you will get a sense of the type of sea change this effort could potentially bring.

One of the largest challenges we have with regard to data centers is education. Those of you who follow my blog know that I believe some engineering and construction firms are incented not to change or implement new approaches. The cover of complexity allows customers to remain in the dark while innovation is stifled. The forces who desire to maintain an aura of black-box complexity around this space, and who repeatedly speak of the arcane arts of building out data center facilities, have been at this a long time. To them, the interplay of systems requiring one-off monumental temples to technology on every single build is the norm. It's how you maximize profit and keep yourself in a profitable position.

When I discussed this idea briefly with a close industry friend, his first question naturally revolved around how this work would compete with that of the Green Grid, the Uptime Institute, Data Center Pulse, or the other competing industry groups. Essentially, was this going to be yet another competing thought-leadership organization? The very specific answer to this is no, absolutely not.

These groups have been out espousing best practices for years. They have embraced different technologies, they have tried to educate the industry. They have been pushing for change (for the most part). They do a great job of highlighting the challenges we face, but for the most part they have waited around for universal goodwill and monetary pressures to make change happen. It dawned on us that there was another way. You need to build something that gains mindshare, gets the attention of business leadership, and causes a paradigm shift. As we put the pieces together, we realized that the solution had to be credible, technical, and above all have a business case around it. It seemed to us that the parallels to the Open Source movement and the applicability of that approach were a perfect match.

To be clear, this Open Source Data Center Initiative is focused on execution. It's focused on putting together an open and free engineering framework upon which data center designs, technologies, and the like can be quickly assembled, and moreover on standardizing the way both end-users and engineering firms approach the data center industry.

Imagine, if you will, a base framework upon which engineering firms, or even individual engineers, can propose technologies and designs, and specific solution vendors can pitch technologies for inclusion and highlight their effectiveness. More than all of that, it will remove much of the mystery behind the work that happens in designing facilities and normalize those conversations.

If you think of the Linux movement, and all of those who actively participate in submitting enhancements and features, or even pulling together specific build packages for distribution, one can see such things emerging in the data center engineering realm. In fact, with the myriad of emerging technologies assisting in greater energy efficiency, greater densities, different approaches to economization (air or water), and the use or non-use of containers, it's easy to see the potential for this component-based design.

One might think that we are effectively trying to put formal engineering firms out of business with this kind of work. I would argue that this is definitely not the case. While it may have the effect of removing some of the extra profit that results from the current 'complexity' factor, this initiative should specifically drive common requirements, lead to better-educated customers, drive specific standards, and result in real-world testing and data from the manufacturing community. Besides, as anyone knows who has ever actually built a data center, the devil is in the localization and details. And as this is an open-source initiative, we will not be formally signing the drawings from a professional engineering perspective.

Manufacturers could submit their technologies and sample applications of their solutions, and have those designs plugged into a 'package', or 'RPM' if I may steal a term from the Red Hat Linux nomenclature. Moreover, we will be able to start driving true visibility of costs, both upfront and operating, and associate those costs with the set designs, with differences and trending from regions around the world. If it's successful, it could be a very good thing.

We are not naive about this, however. We certainly expect some resistance to this approach, and in fact some outright negativity from those firms that make the most from the black-box complexity components.

We will have more information on the approach and what it is we are trying to accomplish very soon.  

 

\Mm

Kickin’ Dirt


I recently got an interesting note from Joel Stone, the Global Operations Chief at Global Switch. As some of you might know, Joel used to run North American operations for me at Microsoft. I guess he was digging through some old pictures and found this old photo of our initial site selection trip to Quincy, Washington.

As you can see, the open expanse of farmland behind me ultimately became Microsoft's showcase facilities in the Northwest. In fact, you can even see some farm equipment just behind me. It got me reminiscing about that time and how exciting and horrifying that experience can be.

At the time, Quincy, Washington was not much more than a small agricultural town, whose leaders did some very good things (infrastructurally speaking) and which benefitted from the presence of large amounts of hydro-power. When we went there, there were no other active data centers for hundreds of miles and no other technology firms present, and discussions around locating a giant industrial-technology complex there seemed as foreign as landing on the moon might have sounded during World War Two.

Yet if you fast forward to today, companies like Microsoft, Yahoo, Sabey, Intuit, and others have all located technology parks in this one-time agricultural hub. Data Center Knowledge recently did an article on the impacts to Quincy.

Many people I speak to at conferences generally think that the site selection process is largely academic: find the right intersection of a few key criteria and locate areas on a map that seem to fit those requirements. In fact, the site selection strategy we employed took many different factors into consideration, each with its own weight, leading ultimately to a 'heat map' of locations to investigate.
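As a toy illustration of that weighted-criteria approach, here is a sketch in Python. The criteria, weights, site names, and scores below are all invented for the example; the real model took many more factors into account:

# Hypothetical criteria and weights (must sum to 1.0)
criteria_weights = {"power_cost": 0.35, "fiber": 0.25, "climate": 0.20,
                    "tax_incentives": 0.10, "hazard_risk": 0.10}

# Candidate sites scored 0-10 per criterion (invented data)
sites = {
    "Rural Site A": {"power_cost": 9, "fiber": 7, "climate": 8,
                     "tax_incentives": 7, "hazard_risk": 8},
    "Metro Site B": {"power_cost": 4, "fiber": 9, "climate": 5,
                     "tax_incentives": 5, "hazard_risk": 6},
}

# The weighted sums produce the ranking that feeds the 'heat map'.
for name, scores in sites.items():
    total = sum(w * scores[c] for c, w in criteria_weights.items())
    print(f"{name}: {total:.2f}")

Cheap power and low risk pull the rural site ahead on paper, which is exactly the kind of result the model can show you. What it cannot show you is what follows below.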

Even with some of the brightest minds and substantial research applied, it's interesting to me that ultimately the process breaks down into something I call 'Kickin' Dirt'. Those ivory tower exercises help you narrow your decision to a few locations, but the true value of the process comes when you get out to the location itself and kick the dirt around. You get a feel for the infrastructure, the local culture, and those hard-to-quantify factors that no modeling software can tell you about.

Once you have gone out and kicked the dirt, it's decision time. The decision you make, backed by all the data and process in the world and by personal experience of the locations in question, ultimately nets out to someone making a call. My experience is that this rarely works well if left up to a committee. At some point someone needs the courage and conviction, and in some cases the outright insanity, to make the call.

If you are someone with this responsibility in your job today: do your homework, kick the dirt, and make the best call you can.

To my friends in Quincy: you have come a long way, baby! Merry Christmas!

 

\Mm

A look back and a look forward…

For those of you who are not on the Digital Realty Trust email distribution for such things, I recently did a video for them reflecting on the past and looking ahead with regard to the data center industry, technologies, and such. You can find the video link here if you're interested.

I for one would never trust some data center dork in a video.

\Mm

A Practical Guide to the Early Days of Data Center Containers

In my current role (and given my past) I often get asked about the concept of data center containers by people evaluating whether this unique technology application is right for them. In many respects we are still in the early days of this technology approach, and any answers one gives definitely have a variable shelf life given the amount of attention manufacturers and the industry are giving this technology set. Still, I thought it might be useful to jot down a few key things to think about when looking at data center containers and the modularized solutions out there today.

I will do my best to balance this view across four different axes: the technology, real estate, financial, and operational considerations. A sort of 'Executive's View' of this technology. I do this because containers as a technology cannot and should not be looked at from a technology perspective alone. To do so is complete folly, and you are asking for some very costly problems down the road if you ignore the other factors. Many love to focus on the interesting technology characteristics or the efficiency benefits this technology can bring to bear for an organization, but to implement this technology (like any technology, really) you need a holistic view of the problem you are actually trying to solve.

So before we get into containers specifically, let's take a quick look at why containers have come about.

The Sad Story of Moore’s Orphan

In technology circles, Moore's law has come to be applied to a number of different technology advancement and growth trends and has come to represent exponential growth curves. The original Moore's law was actually a forward-looking extrapolation based on the observation that 'the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented.' As my good friend and long-time Intel Technical Fellow, now with Microsoft, Dileep Bhandarkar routinely states: Moore has now been credited with inventing the exponential. It's a fruitless battle, so we may as well succumb to the tide.

If we look at the technology trends across all areas of Information Technology, whether processors, storage, memory, or networking, the trend has clearly followed this exponential pattern. In numbers of instructions, amounts of storage or memory, network bandwidth, or even tape technology, the march of technology has proceeded at a staggering pace over the last 20 years. Isn't it interesting, then, that the places where all of this wondrous growth and technological wizardry manifests itself, the data center, the computer room, the data hall, have been moving along at a near pseudo-evolutionary standstill? In fact, if one truly looks at the technologies present in most modern data center designs, one would ultimately find small differences from the very first special-purpose data room built by IBM over 40 years ago.
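To put a rough number on the gap between those two curves, here is a quick back-of-the-envelope calculation; the doubling periods are the commonly cited ones (yearly in Moore's original observation, roughly every two years as later revised) and are used purely for illustration:

# What sustained doubling compounds to over 20 years.
def growth_factor(years, doubling_period_years):
    return 2 ** (years / doubling_period_years)

print(growth_factor(20, 2))  # ~1,024x at a two-year doubling period
print(growth_factor(20, 1))  # ~1,048,576x at the original yearly doubling

Three to six orders of magnitude of change on the IT side, against a room design that is recognizably the same after four decades.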

Data centers themselves have a corollary in the beginning of the industrial revolution. In fact, I am positive that Moore's observations would hold true for the transition of civilization from an agriculture-based economy to an industrialized one. One might say that the current modularization approach to data centers is really just the industrialization of the data center itself.

In the past, each and every data center was built lovingly by hand by a team of master craftsmen and data center artisans. Each was a one-of-a-kind tool built to solve a set of problems. Think of the eco-system that has developed around building these modern-day castles: architects, engineering firms, construction firms, specialized mechanical industries, and a host of others all come together to create each and every masterpiece. So too did those who built plows and hammers, clocks and sextants, and the other tools of the previous era specialize in making each item, one by one. That is, of course, until the industrial revolution.

The data center modularization movement is not limited to containers, and there is some incredibly ingenious work happening in this space outside of containers, but one can easily see the industrial benefits of mass producing such technology. This approach simply creates more value, reduces cost and complexity, makes technology cheaper, and simplifies the whole. No longer are companies limited to working with the arcane forces of data center design and construction; many of these components are being pre-packaged, pre-manufactured, and aggregated, reducing the complexity of the past.

And why shouldn't it? Data centers live at the intersection of information and real estate. They are more like machines than buildings, but share common elements of both. All one has to do is look at it from a financial perspective to see how true this is. In terms of construction, the costs of a data center break down into a simple format: roughly 85% of the total cost to build the facility is made up of the components, labor, and technology to deal with the distribution and cooling of the electrical consumption.


This, of course, leaves roughly 15% of the costs relegated to land, steel, concrete, bushes, and the more traditional real estate components of the build. Obviously these percentages differ market to market, but on the whole they are close enough to give the general idea. It also raises an interesting question as to what the big driver for higher density in data centers really is, but that is a post for another day.
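To make the split concrete, a trivial worked example; only the rough 85/15 ratio comes from above, and the dollar figure is invented:

# Applying the rough cost split to a hypothetical $100M build.
total_build_cost = 100_000_000  # USD, invented for illustration

power_and_cooling = 0.85 * total_build_cost  # electrical/mechanical plant
core_and_shell = 0.15 * total_build_cost     # land, steel, concrete, bushes

print(f"power & cooling: ${power_and_cooling:,.0f}")  # $85,000,000
print(f"core & shell:    ${core_and_shell:,.0f}")     # $15,000,000

Seen this way, a data center really is mostly machine: the "building" is a minority line item.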

As a result of this incredible growth there has been an explosion, a renaissance if you will, in data center design and approach, and the modularization effort is leading the way in causing people to think differently about the data centers themselves. It's a wonderful time to be part of this industry. Some claim the drivers of this change are technological. Others claim they are financial, driven by the tough economic times. The true answer (as in all things) is a bit of both, plus some additional factors.

Driving at the intersection of IT Lane and Building Boulevard

From a technology perspective, the driver behind this change is the fact that most existing data centers are not designed or instrumented to handle the demands of the changing technology requirements occurring within them today.

Data center managers are being faced with increasingly varied redundancy and resiliency requirements within the footprints they manage. They continue to support environments that rely heavily upon the infrastructure to provide robust reliability and ensure that key applications do not fail. But applications are changing. Increasingly, there are applications that do not require the same level of infrastructure, either because the application is built to be geo-diverse or server-diverse, or because internal business units have deployed test servers or lab/R&D environments that do not need that level of infrastructure. With the number of RFPs out there demanding that software and application developers solve the redundancy issue in software rather than through large capital spend by the enterprise, this is a trend likely to continue for some time. Whatever the reason for the variability challenge data center managers face, the truth is that it is greater than ever before.

Traditional data center design cannot meet these needs without additional waste or significant additional expenditure. Compounding this are the ever-increasing requirements for higher power density and the resulting cooling requirements. This is complicated by the fact that there is no uniformity of load across most data centers. You have certain racks or areas driving incredible power consumption, requiring significant density, and other environments, perhaps legacy, perhaps under-utilized, which run considerably less dense. In a single room, rack power densities can vary by as much as 8kW per rack: you might have a bunch of racks drawing 4kW each and an area drawing 12kW per rack or even denser. This can strand valuable data center resources and make data center planning very difficult.
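A small sketch shows why that variability hurts planning. With a fixed power budget and a fixed number of floor positions (both figures hypothetical), either power or floor space ends up stranded depending on rack density:

# Fixed room: 1,200 kW of usable power, 200 rack positions (hypothetical).
room_power_kw = 1200
rack_positions = 200

for kw_per_rack in (4, 8, 12):
    powered_racks = min(room_power_kw // kw_per_rack, rack_positions)
    stranded_kw = room_power_kw - powered_racks * kw_per_rack
    empty_positions = rack_positions - powered_racks
    print(f"{kw_per_rack:>2} kW/rack: {powered_racks} racks powered, "
          f"{stranded_kw} kW stranded, {empty_positions} positions empty")

At 4kW per rack the floor fills up with 400kW of paid-for power going unused; at 12kW per rack, half the floor sits empty. A real room mixes all of these at once, which is exactly the planning headache.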

Additionally, looming on the horizon is the spectre, or opportunity, of commodity cloud services, which might offer additional resources that significantly change the requirements of your data center design. This is generally an unknown at this point, but my money is on the cloud significantly impacting not only what you build, but how you build it. That ultimately drives a modularized approach to the fore.

From a business and finance perspective, companies are faced with some interesting challenges as well. The first is that the global inventory of data center space (from a leasing or purchase perspective) is sparse at best. This results from the glut of capacity after the dotcom era and the land grab that occurred after 9/11, with the finance industry chewing up much of the good inventory. Added to this is a real reluctance to build these costly facilities speculatively, a combination of how the market was burned in the dotcom days and the general lack of availability of, and access to, large sums of capital. Both of these factors are making data center space a tight resource.

In my opinion, the biggest problem across every company I have encountered is capacity planning. Most organizations cannot accurately project how much data center capacity they will need next year, let alone 3 or 5 years from now. It's a challenge I have invested a lot of time trying to solve, and it's just not that easy. This lack of predictability exacerbates the problem for most companies: by the time they realize they are running out of capacity, or need additional capacity, it becomes a time-to-market problem. Given the inventory challenge I mentioned above, this can put a company in a very uncomfortable place, especially when the all-in industry average for building a traditional data center yourself runs somewhere between 106 and 152 weeks.

The high upfront capital cost of a traditional data center build can also be a significant endeavor and business-impacting event for many companies. The spending associated with the traditional method of construction can cripple a company's resources or force it to focus those resources on something non-core to the business. Data centers can and do impact the balance sheet, a fact that is not lost on the finance professionals in the organization looking at this type of investment.

With the need to remain agile and move quickly, companies are looking for the same flexibility from their infrastructure. An asset like a large data center built to requirements that no longer fit can create a drag on a company's ability to stay responsive as well.

None of this even acknowledges some basic cost factors beginning to come into play around the construction itself. The construction industry is already forecasting that for every 8 people retiring in the key trades associated with data centers (mechanical, electrical, pipe-fitting, etc.), only one person is replacing them. This will eventually mean higher construction costs and increased scarcity of construction resources.

Modularized approaches help with all of these issues and challenges, and give the modern data center manager a way to solve both the technology and the business-level problems. They allow you to move to site integration versus site construction. Let me quickly point out that this is not some new whiz-bang technology approach; it has been around in other industries for a long, long time.

Enter the Container Data Center

While it is not the only modularized approach, this is the landscape into which the data center container has made its entry.

First and foremost, let me say that while I am a strong proponent of containment in every aspect, containers can add great value or simply not be a fit at all. They can drive significant cost benefits or end up costing significantly more than traditional space. The key is to understand what problem you are trying to solve and to have a couple of key questions answered first.

So let's explore some of these things to think about in the current state of data center containers today.

What problem are you trying to solve?

The first question to ask yourself when evaluating whether containerized data center space is a fit is which problem you are trying to solve. In the past, the driver for me had more to do with solving deployment-related issues. We had moved the base unit of measure from servers, to racks of servers, ultimately to containers. To put it in more general IT terms, it was a move from deploying tens to hundreds of servers per month, to hundreds and thousands of servers per month, to tens of thousands of servers per month. Some people look at containers as disaster recovery or business continuity solutions. Others look at them from the perspective of HPC clusters or large, uniform batch-processing and modeling requirements. You must remember that most vendor container solutions out there today are modeled on hundreds to thousands of servers per "box." Is this a scale that is even applicable to your environment? (The quick arithmetic sketch below makes the point.) If you think it's as simple as dropping a container in place and then deploying servers into it as you will, you will have a hard learning curve in the current state of 'container-world'. It just does not work that way today.
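As a quick sanity check of that scale question, with made-up numbers:

# Does a container-sized unit of capacity match your deployment rate?
servers_per_container = 1500   # vendor configs run hundreds to thousands
monthly_server_deploys = 200   # a hypothetical organization's run rate

months_to_fill = servers_per_container / monthly_server_deploys
print(f"~{months_to_fill:.1f} months of deployments to fill one container")
# A large number here suggests a container is the wrong unit of measure.

If it would take you most of a year to fill a single box, the container is almost certainly not your natural unit of deployment.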

Additionally, one has to think about the type of 'IT load' to place inside a container. Most containers espouse similar or like machines in bulk. Rare to non-existent is the container that can take a multitude of different SKUs in different configurations. Does your use drive uniformity of load or consistent use across a large number of machines? If so, containers might be a good fit; if not, I would argue you are better off in traditional data center space (whether traditionally built or modularly built).

I will assume for purposes of this document that you feel you have a good reason to use this technology application.

Technical things to think about . . .

For the purposes of this document, I am going to refrain from any discussion or comparison of particular vendors (except in aggregate), as I will not endorse any vendor over another in this space. Nor will I get into an in-depth discussion of server densities, compute power, storage, or other IT-specific comparisons for the containers. I will trust that your organizations have experts, or at least people knowledgeable, in which servers, network gear, operating systems, and the like you need for your application. There is quite a bit of variety out there to choose from, and you are a much better judge of such things for your environment than I. What I will talk about from a technical perspective are the things you might not be thinking of when it comes to the use of containers.

Standards – What’s In? What’s Out?

One of the first considerations when looking at containers is to make sure that your facilities experts take a comprehensive look at the vendors you are evaluating in terms of the data center aspects of the container. Why? The answer is simple: there are no set industry standards when it comes to data center containers. Each vendor may have its own approach to what goes in and what stays out of the container, and this has some pretty big implications for you as the user. For example, take batteries and UPS solutions. Some vendors provide this function in the container itself (for ride-through or other purposes), while others assume it is part of the facility the container connects into. How are the UPS and batteries configured in your container? Some configurations might have interesting harmonics issues that will not work for your specific building configuration. It's best to have your IT and facilities people jointly review the solutions you are choosing, and make sure you know what base services the building will need to provide to the containers, what the containers themselves will provide, and the like.

This brings up another interesting point you should probably consider.  Given the variety of container configurations and the lack of an overall industry standard, you might find yourself locked into a specific container manufacturer for the long haul.  If having multiple vendors is important to you, you will need to find vendors that can build to a standard you define, or wait until there is an industry standard.    Some look to the widely publicized Microsoft C-Blox specification as a potential basis for a standard.  This is Microsoft’s internal container specification that many vendors have configurations for, but keep in mind it is based on Microsoft’s requirements and might not meet yours.  Until the Green Grid, ASHRAE, or another such standards body starts driving standards in this space, it’s probably something to be concerned about.   This What’s in/What’s out conversation becomes important in other areas as well.   In the sections below on finance asset classes and operational items, understanding what is inside has some large implications.

Great Server manufacturers are not necessarily great Data Center Engineers

Related to the previous topic, I would recommend that your facilities people take a hard look at the mechanical and electrical distribution configurations of the container manufacturers you are evaluating.  The lack of standards leaves a lot of room for interpretation, and you may find that the one-line diagrams or the configuration of the container itself will not meet your specifications.   Just because a firm builds great servers does not mean it builds great containers.  Keep in mind that a data center container is a blending of IT gear and infrastructure that would normally be housed in a traditional data center.  In many cases the actual data center componentry and design might be new to the manufacturer. Some vendors are quite good, some are not.  It’s worth doing your homework here.

Certification – Yes, it’s different from Standards

Another thing to look for is whether or not your provider is UL and/or CE certified.  It’s not enough that the servers and internal hardware are UL or CE listed; I would strongly recommend that the container itself carry this certification.  This is very important, as you are essentially talking about a giant metal box connected to somewhere between 100kW and 500kW of power.   Believe me, it is in your best interest to ensure that your solution has been tested and certified.  Why? Well, a big reason can be found down the yellow brick road.

The Wizard of AHJ or Pay attention to the man behind the curtain…

For those of you who do not know who or what an AHJ is, let me explain.  It stands for Authority Having Jurisdiction.  It may sound very technical, but it really breaks down to the local code inspector for the place where you wish to deploy your containers.   This could be one of the biggest things to pay attention to, as your local code inspector could quickly sink your efforts or considerably increase the cost of deploying your container solution, from both an operational and a capital perspective.  

Containers are a relatively new technology, and more than likely your AHJ will not have any familiarity with how to interpret this technology in the local market.  Given that there is not a large sample set for them to reference, their interpretation will be very, very important.   It’s important to engage your AHJ early on.   This is where the UL or CE listing can become important.  An AHJ could potentially interpret your container in one of two ways.  The first is as a big giant refrigerator.  It’s a crude example, but what I mean is a piece of equipment.    UL and CE listing on the container itself will help with that interpretation, which should ultimately be the correct one, but the AHJ can do as they wish.   Alternatively, they might look at the container as a confined work space.    They might ask you all sorts of interesting questions, like how often people will be going inside to service the equipment, or (if there is no UL/CE listing) they might examine the electrical and mechanical installations and distribution and rule that they do not meet local electrical codes for distances between devices, and so on.   Essentially, the AHJ is an all-powerful force who could really screw things up for a successful container deployment.  It’s important to note that while UL/CE listing gives you a great edge, your AHJ could still rule against you. If the AHJ rules the container a confined work space, for example, you might be required to suit your IT workers up in hazmat/thermal suits, in two-man teams, to change out servers or drives.  Funny?  That’s a real example and interpretation from an AHJ.    Which brings us to how important the IT configuration, and its interpretation, is to your use of containers.

Is IT really ready for this?

As you read this section, please keep our Wizard of AHJ in the back of your mind. His influence will still be felt in your IT world, whether your IT folks realize it or not.  Containers are best suited to environments with a high degree of automation in the IT function for the services and applications that will run inside them.   If you have an extremely ‘high touch’ environment, where you lack the ability to remotely access servers and need physical human beings to do a lot of care and feeding of your server environment, containers are not for you.  Just picture IT folks dressed up like spacemen.    Containers definitely require a great deal of automation, and they require you to think through some key items.

Let’s first look at your ability to remotely image brand new machines within the container.   Perhaps you have this capability through virtualization, or perhaps through software provided by your server manufacturer.   One thing is a fact: this is an almost must-have technology with containers.   Given that a container can come with hundreds to thousands of servers, you really don’t want Edna from IT inside a container with DVDs, manually loading software images.   Or worse, the AHJ might rule unfavorably and you might have to send in two people in suits with the DVDs for safety purposes.  

So definitely keep in mind that you need a way to deploy your images from a central image repository.   That in turn leads to integration with your configuration management systems (and asset management systems) and your network environments.   

Configuration management and asset management systems are also essential to a successful deployment, so that the right images get to the right boxes.  Unless you have a monolithic application, this is going to be a key problem to solve.    Many solutions in the market today are based upon the server or device ‘ARP’ing out its MAC address, with some software layer intercepting that ARP request and correlating the MAC address, via a database, to your image repository or configuration management system.   Otherwise you may be back to Edna and her DVDs and her AHJ-mandated buddy. 
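To make that correlation step concrete, here is a minimal sketch in Python of the lookup such a software layer performs.  Everything in it (the IMAGE_DB table, the assign_image function, the MAC addresses) is a hypothetical illustration, not any particular vendor’s provisioning product; real systems wrap this in PXE/DHCP machinery, but the core idea is a MAC-to-image mapping:

```python
# Minimal sketch of MAC-to-image correlation. All names and addresses
# here are hypothetical, for illustration only.

# Hypothetical inventory: maps a server's MAC address to the image it
# should receive, as recorded in your asset management system.
IMAGE_DB = {
    "00:1a:4b:aa:00:01": "web-frontend-v42",
    "00:1a:4b:aa:00:02": "cache-node-v17",
}

# Assumed fallback for machines not yet entered in the asset system.
DEFAULT_IMAGE = "baseline-diagnostics"

def assign_image(mac_address: str) -> str:
    """Return the OS image a newly seen server should boot.

    In a real deployment this lookup would be triggered by the
    provisioning layer intercepting the server's first ARP/DHCP
    traffic; here we model only the correlation step itself.
    """
    return IMAGE_DB.get(mac_address.lower(), DEFAULT_IMAGE)

if __name__ == "__main__":
    for mac in ("00:1A:4B:AA:00:01", "00:1a:4b:ff:ff:ff"):
        print(mac, "->", assign_image(mac))
```

The point of automating even this tiny step is scale: at thousands of servers per container, anything that requires a human to match a box to an image by hand simply does not work.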

Of course, the concept of ARPing brings up your network configuration.   Make sure you put plenty of thought into network connectivity for your container.   Will you have one VLAN or multiple VLANs across all your servers?   Can the network equipment you have selected handle the number of machines inside the container? How your container is configured from a network perspective, and your ability to segment out the servers within it, could be crucial to your success.   Everyone always blames the network guys for issues in IT, so it’s worth having the conversation up front with the network teams on how they are going to address connectivity A) to the container and B) inside the container from a distribution perspective. 
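Even a rough back-of-the-envelope calculation is worth bringing to that conversation.  The figures below are purely illustrative assumptions (container population, switch port count, segmentation policy), not sizing guidance for any particular container or switch vendor:

```python
# Rough capacity sketch for the network conversation above.
# All inputs are assumptions for illustration only.
import math

servers_in_container = 2000   # assumed container population
ports_per_switch = 48         # assumed access-switch port count
hosts_per_vlan = 250          # assumed segmentation policy

switches_needed = math.ceil(servers_in_container / ports_per_switch)
vlans_needed = math.ceil(servers_in_container / hosts_per_vlan)

print(f"Access switches needed: {switches_needed}")  # 42
print(f"VLANs needed: {vlans_needed}")               # 8
```

Forty-plus switches and their uplinks have to live, be powered, and be serviced somewhere; whether that is inside or outside the container is yet another What’s in/What’s out question.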

As long as I have all this IT stuff, Containers are cheaper than traditional DCs, right?

Maybe.  This blends a little with the next section on financial things to think about, but it is really sourced from a technical perspective.   Today you purchase containers in terms of the total power draw of the container itself: 150kW, 300kW, 500kW, and like denominations.   This ultimately means you want to optimize your server environments for the load you are buying.  Failing to utilize the entire power allocation could quickly flip the economic benefits of going to containers.    I know what you’re thinking: Mike, this is the same problem you have in a traditional data center, so this should really be a push and a non-issue.

The difference here is that you have a higher upfront cost with containers.  Let’s say you are deploying 300kW containers as a standard.    If you never really drive those containers to 300kW, and let’s say your average is 100kW, you are only getting 33% of the cost benefit.   If you then add a second container and drive it to like capacity, you may find yourself paying a significant premium for that capacity, at a much higher price point than deploying those servers into traditional raised-floor space, for example.    Since we are brushing up on economic and financial aspects, let’s take a quick look at things to keep an eye on in that space.
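To put rough numbers on that 33% figure, here is a small worked example.  The dollar amount is entirely made up for illustration; substitute your own vendor quotes:

```python
# Worked version of the utilization math above. The price is an
# assumption for arithmetic's sake, not a real container quote.

container_capacity_kw = 300.0
container_cost = 3_000_000.0   # assumed all-in price per container
average_draw_kw = 100.0        # what you actually run it at

# Effective cost per kW actually used, vs. cost per kW if fully driven.
cost_per_kw_at_capacity = container_cost / container_capacity_kw
cost_per_kw_actual = container_cost / average_draw_kw

utilization = average_draw_kw / container_capacity_kw
print(f"Utilization: {utilization:.0%}")                        # 33%
print(f"$/kW if fully loaded: {cost_per_kw_at_capacity:,.0f}")  # 10,000
print(f"$/kW at actual draw:  {cost_per_kw_actual:,.0f}")       # 30,000
```

Run at a third of capacity, every kW you actually use costs you three times what you planned for, which is exactly how a container deployment ends up more expensive than the raised floor it was meant to beat.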

Finance Friendly?

Most people have the idea that containers are ultimately cheaper, and that the Finance guys are therefore going to love them.   They may actually be cheaper, or they may not; regardless, there are other things your Finance teams will definitely want to take a look at.


The first challenge for your finance teams is to figure out how to classify this new asset called a container.   Traditional asset classification for IT and data center investments typically falls into three categories, from which the rules for depreciation are set.  The first is software; the second is server-related infrastructure such as servers, hardware, racks, and the like; the last is the data center components themselves.    Software investments might be capitalized over anywhere from 1 to 10 years.   Servers and the like typically range from 3 to 5 years, and data center components (UPS systems, etc.) are depreciated closer to 15 to 30 years.   Containers represent an asset that is really a mixed asset class.  The container obviously houses servers that have a useful life (presumably shorter than that of the container housing itself), and the container also contains components normally found in the data center, which traditionally carry a longer depreciation cycle.   Remember our What’s in? What’s out? conversation? Your finance teams are going to have to figure out how they deal with a mixed-asset-class technology.   There is no easy answer to this.  Some finance systems are set up for this; others are not.  An organization could move to treat it in an all-or-nothing fashion.  For example, if the entire container is depreciated over a server life cycle, it will dramatically increase the depreciation hit to the business.  If you opt to depreciate it over the longer-lead-time items, then you will need to figure out how to account for the fact that the servers within will be rotated much more frequently.    I don’t have an easy answer to this, but I can tell you one thing: if your Finance folks are not looking at containers along with your facilities and IT folks, they should be.  They might have some work to do to accommodate this technology.
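A simplified illustration of why the choice matters follows.  The price, the server/facility split, and the lifetimes are all assumptions invented for the arithmetic, not guidance on what your schedules should be:

```python
# Simplified mixed-asset-class illustration (straight-line depreciation).
# All values are assumptions for the example only.

container_price = 3_000_000.0   # hypothetical total container cost
server_portion = 2_000_000.0    # portion attributable to IT gear
facility_portion = container_price - server_portion

server_life_years = 4           # assumed server depreciation window
facility_life_years = 15        # assumed facility-component window

# Option 1: depreciate the whole container like a server.
all_server = container_price / server_life_years

# Option 2: depreciate the whole container like facility plant.
all_facility = container_price / facility_life_years

# Option 3: split by asset class.
split = (server_portion / server_life_years
         + facility_portion / facility_life_years)

print(f"Annual hit, all-server:   ${all_server:,.0f}")    # $750,000
print(f"Annual hit, all-facility: ${all_facility:,.0f}")  # $200,000
print(f"Annual hit, split class:  ${split:,.0f}")         # $566,667
```

Nearly a four-fold swing in the annual P&L hit, from classification alone, is why Finance needs a seat at the table before the purchase order goes out.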

Related to this, you might also want to think about containers from an insurance perspective.   How is your insurer looking at containers, and how do they allocate cost versus risk for this technology set?  You are likely going to have some detailed conversations to bring them up to speed on the technology by and large.  You might find they require you to put in additional fire suppression (it’s a metal box; if something catches fire inside, it should naturally be contained, right?  What about the burning plastics?).  How is water delivered to the container for cooling?  Where and how does electrical distribution take place?   These are all questions that could adversely affect the cost or operation of your container deployment, so make sure you loop your insurer in as well.

Operations and Containers

Another key area to keep in mind is how your operational environment is going to change as a result of the introduction of containers.   Let’s jump back for a second to our insurance examples.   A container could weigh as much as 60,000 pounds (US).  That is pretty heavy.  Now imagine you accidentally smack into a load-bearing wall or column as you try to push it into place.  That is one area where Operations and Insurance are going to have to work together.   Is your company licensed and bonded for moving containers around?  Does your area have union regulations requiring that only union personnel, certified and bonded, do that kind of work?   These are important questions and things you will need to figure out from an Operations perspective.   

Going back to our What’s in and What’s out conversation: you will need to ensure that you have the proper maintenance regimen in place to facilitate the success of this technology.    Perhaps the stuff inside is covered by the contract you have with your container manufacturer.  Perhaps it’s not.   What work will need to take place to properly support that environment?   If you have batteries in your container, how do you service them?  What’s the Wizard of AHJ’s ruling on that? 

The point here is that an evaluation of containers must be multi-faceted.  If you only look at this solution from a technology perspective, you are creating a very large blind spot for yourself that will likely have a significant impact on the success of containers in your environment.

This document is really meant to be the first installment in an evolving watch of the industry as it stands today. I will add observations as I think of them and repost accordingly over time. Likely (and hopefully) many of the challenges and things to think about will get solved over time, and I remain a strong proponent of this technology application.   The key is that you cannot look at containers purely from a technology perspective.  There are a multitude of other factors that will make or break the use of this technology.  I hope this post helped answer some questions, or at least forced you to think a bit more holistically about the use of this interesting and exciting technology. 

\Mm