Industry Impact: Brothers from Different Mothers and Beyond…


My reading material and video watching habits these past two weeks have brought me some incredible joy and happiness. Why? Because Najam Ahmad of Facebook is finally getting some credit for the amazing work that he has done and has been doing in the world of Software Defined Networking. In my opinion Najam is a Force Majeure in the networking world. He is passionate. He is focused. He just gets things done. Najam and I worked very closely at Microsoft as we built out and managed the company’s global infrastructure. So closely, in fact, that we were frequently referred to as brothers from different mothers. Wherever Najam was, I was not far behind, and vice versa. We laughed. We cried. We fought. We had a lot of fun while delivering some pretty serious stuff. To find out that he is behind the incredible Open Compute Project advances in networking is not surprising at all. Always a forward-thinking guy, he has never been satisfied with the status quo.
If you have missed any of that coverage, I strongly encourage you to have a read at the links below.


This got me to thinking about the legacy of the Microsoft program on the Cloud and Infrastructure industry at large. Data Center Knowledge had an article covering the impact of some of the Yahoo alumni a few years ago. Many of those folks are friends of mine and deserve great credit. In fact, Tom Furlong now works side by side with Najam at Facebook. The purpose of my thoughts is not to take away from their achievements and impacts on the industry but rather to highlight the impact of some of the amazing people and alumni from the Microsoft program. It’s a long overdue acknowledgement of the legacy of that program and how it has been a real driving force in large scale infrastructure. The list of folks below is by no means comprehensive and doesn’t touch on the talented people Microsoft maintains in their deep stable who continue to push the innovative boundaries of our industry.

Christian Belady of Microsoft – Here we go, first person mentioned and I already blow my own rule. I know Christian is still there at Microsoft, but it’s hard not to mention him as he is the public face of the program today. He was an innovative thinker before he joined the program at Microsoft and was a driving thought leader and thought provoker while I was there. While his industry-level engagements have been greatly sidelined as he steers the program into the future, he continues to be someone willing to throw everything we know and accept today to the wind to explore new directions.
Najam Ahmad of Facebook – You thought I was done talking about this incredible guy? Not in the least; few people have solved network infrastructure problems at scale like Najam has. With his recent work on the OCP front finally coming to the fore, he continues to push the boundaries of what is possible. I remember long meetings with network vendors where Najam tried to influence capabilities and features with the box manufacturers within the paradigm of the time, and his work at Facebook is likely to land him in a position where he is both loved and reviled by the industry at large. If that doesn’t say you’re an industry heavyweight…nothing does.
James Hamilton of Amazon – There is no question that James continues to drive deep thinking in our industry. I remain an avid reader of his blog and follower of his talks. Back in my Microsoft days we would sit and argue philosophical issues around the approach to our growth, towards compute, towards just about everything. Those conversations either changed or strengthened my positions as the program evolved. His work in the industry while at Microsoft and beyond has continued to shape thinking around data centers, power, compute, networking and more.
Dan Costello of Google – Dan Costello now works at Google, but his impacts on the Generation 3 and Generation 4 data center approaches and the modular DC industry direction overall will be felt for a very long time to come, whether Google goes that route or not. Incredibly well balanced in his approach between technology and business, his ideas and talks continue to shape infrastructure at scale. I will spare people the story of how I hired him away from his previous employer, but if you ever catch me at a conference, it’s a pretty funny story. Not to mention the fact that he is the second best break dancer in the Data Center industry.
Nic Bustamonte of Google – Nic is another guy who has had some serious impact on the industry as it relates to innovating the running and operating of large scale facilities. His focus on the various aspects of the operating environments of large scale data centers, monitoring, and internal technology has shifted the industry and really set the nascent DCIM space in motion. Yes, BMS systems have been around forever, and DCIM is the next iteration and blending of that data, but his early work here has continued to influence thinking around the industry.
Arne Josefsberg of ServiceNow – Today Arne is the CTO of ServiceNow, focusing on infrastructure and management for enterprises and the big players alike, and if their overall success is any measure, he continues to impact the industry through results. He is *THE* guy who had the foresight to build an organization adapted to this growing change of building and operating at scale. He is the architect of an amazing team that would eventually change the industry.
Joel Stone of Savvis/CenturyLink – Previously the guy who ran global operations for Microsoft, he has continued to drive excellence in operations at Global Switch and now at Savvis. An early adopter and implementer of blending facilities and IT organizations, he mastered issues a decade ago that most companies are still struggling with today.
Sean Farney of Ubiquity – Truly the first data center professional who ever had to productize and operationalize data center containers at scale. Sean has recently taken on the challenge of diversifying data center site selection and placement at Ubiquity, repurposing old neighborhood retail spaces (Sears, etc.) in the industry. Given the general challenges of finding places with a confluence of large scale power and network, this approach may prove to be quite interesting as markets continue to drive demand.
Chris Brown of Opscode – One of the chief automation architects during my time at Microsoft, he has moved on to become the CTO of Opscode. Everyone on the planet who is adopting and embracing DevOps has heard of, and is probably using, Chef. In fact, if you are doing any kind of automation at large scale you are likely using his code.
None of these people would be comfortable with the attention but I do feel credit should be given to these amazing individuals who are changing our industry every day.    I am so very proud to have worked the trenches with these people. Life is always better when you are surrounded by those who challenge and support you and in my opinion these folks have taken it to the next level.
\Mm

Google Purchase of Deep Earth Mining Equipment in Support of ‘Project Rabbit Ears’ and Worldwide WIFI availability…

(10/31/2013 – Mountain View, California) – Close examination of Google’s data center construction related purchases has revealed the procurement of large scale deep earth mining equipment.   While the actual need for the deep mining gear is unclear, many speculate that it has to do with a secretive internal project that has come to light known only as Project: Rabbit Ears. 

According to sources not at all familiar with Google technology infrastructure strategy, Project Rabbit Ears is the natural outgrowth of Google’s desire to provide ubiquitous infrastructure worldwide. On the surface, these efforts seem consistent with other incorrectly speculated projects such as Project Loon, Google’s attempt to provide Internet services to residents in the upper atmosphere through the use of high altitude balloons, and a project that has only recently become visible and the source of much public debate – known as ‘Project Floating Herring’ – in which a significantly sized floating barge with modular container-based data centers has been spied sitting in the San Francisco Bay.

“You will notice there is no power or network infrastructure going to any of those data center shipping containers,” said John Knownothing, chief Engineer at Dubious Lee Technical Engineering Credibility Corp.  “That’s because they have mastered wireless electrical transfer at the large multi-megawatt scale.” 

Real estate rates in the Bay Area have increased almost exponentially over the last ten years, making the construction of large scale data center facilities an expensive endeavor. During the same period, the Port of San Francisco has unfortunately seen a steady decline in its import/export trade. After a deep analysis it was discovered that docking fees in the Port of San Francisco are considerably undervalued and will provide Google with an incredibly cheap real estate option in one of the most expensive markets in the world.

It will also allow them to expand their use of renewable energy through the use of tidal power generation built directly into the barge’s hull. “They may be able to collect as much as 30 kilowatts of power sitting on the top of the water like that,” continues Knownothing, “and while none of that technology is actually visible, possible, or exists, we are certain that Google has it.”

While the technical intricacies of the project fascinate many, the initiative does have its critics, like Compass Data Center CEO Chris Crosby, who laments the potential social aspects of this approach: “Life at sea can be lonely, and no one wants to think about what might happen when a bunch of drunken data center engineers hit port.” Additionally, Crosby mentions the potential for a backslide into human rights violations: “I think we can all agree that the prospect of being flogged or keelhauled really narrows down the possibility for those outage-causing human errors. Of course, this sterner level of discipline does open up the possibility of mutiny.”

However, the public launch of Project Floating Herring will certainly need to await the delivery of the more shrouded Project Rabbit Ears for various reasons. Most specifically, the primary reason for the development of this technology is so that Google can ultimately drive the floating facility out past twelve miles into international waters, where it can then dodge all national, regional, and local taxation, as well as the safe harbor and privacy legislation of any country or national entity on the planet that would use its services. In order to realize that vision, in the current network paradigm, Google would need exceedingly long network cables to attach to Network Access Points and Carrier Connection points as the facilities drive through international waters.

This is where Project Rabbit Ears becomes critical to the Google strategy. Making use of the deep earth mining equipment, Google will be able to drill deep into the Earth’s crust, into the mantle, and ultimately build a large Network Access Point near the Earth’s core. This Planetary WIFI solution will be centrally located to cover the entire earth without the use of regional WIFI repeaters. Google’s floating facilities could then gain access to unlimited bandwidth and provide yet another consumer-based monetization strategy for the company.

Knownothing also speculates that such a move would allow Google to make use of enormous amounts of free geothermal power and almost singlehandedly become the greenest power user on the planet. Speculation also abounds that Google could then sell that power through its as-yet-uninvented large scale multi-megawatt wireless power transfer technology, as unseen on its floating data centers.

Much of the discussion around this kind of technology innovation driven by Google has been lent a surprising amount of credence and discussed by many seemingly intelligent technology news outlets and industry organizations who should intellectually know better, but prefer not to acknowledge the inconvenient lack of evidence.

 

\Mm

Editor’s Note: I have many close friends in the Google Infrastructure organization and firmly believe that they are doing some amazing, incredible work in moving the industry along, especially solving problems at scale. What I find simply amazing is how often, in the search for innovation, our industry creates things that may or may not be there and convinces itself so firmly that they exist.

Through an idea and force of will, he created an industry…

This week the Data Center industry got the terrible news it knew might be coming for some time: that Ken Brill, founder of the Uptime Institute, had passed away. Many of us knew that Ken had been ill for some time and, although it may sound silly, were hoping he could somehow pull through it. Even as ill as he was, Ken was still sending and receiving emails and staying in touch with this industry that quite frankly he helped give birth to.

I was recently asked about Ken and his legacy for a Computerworld article, and it really caused me to stop and re-think his overall legacy and gift to the rest of us in the industry. Ken Brill was a pioneering, courageous, tenacious visionary who, through his own force of will, saw the inefficiencies in a nascent industry and helped craft it into what it is today.

Throughout his early career Ken saw the absolute siloing of information, best practices, and approaches that different enterprises were developing around managing their mission critical IT spaces. While certainly not alone in the effort, he became the strongest voice and champion to break down those walls, help others through the process, and build a network of people who would share these ideas amongst each other. Before long an industry was born, sewn together through his sometimes delicate, sometimes not-so-delicate cajoling and, through it all, his absolute passion for the Data Center industry at large.

One of the last times Ken and I got to speak in person.

In that effort he also created and permeated the language that the industry uses as commonplace. Seeing a huge gap in terms of how people communicated and compared mission critical capabilities, he became the klaxon of the Tiering system, which essentially normalized those conversations across the Data Center industry. While some (including myself) have come to think it’s time to re-define how we classify our mission critical spaces, we all have to pay homage to the fact that Ken’s insistence and drive for the Tiering system created a place and a platform to even have such conversations.

One of Ken’s greatest strengths was his adaptability. For example, Ken and I did not always agree. I remember an Uptime Fellows meeting back in 2005 or 2006 or so in Arizona. In this meeting I started talking about the benefits of modularization and reduced infrastructure requirements augmented by better software. Ken was incredulous and we had significant conversations around the feasibility of such an approach. At another meeting we discussed the relative importance or non-importance of a new organization called ‘The Green Grid’ (Smile) and whether Uptime should closely align itself with those efforts. Through it all Ken was ultimately adaptable. Whether it was giving those ideas light for conversation amongst the rest of the Uptime community via audio blogs or other means, Ken was there to have the conversation.

In an industry where complacency has become commonplace, where people rarely question established norms, it was always comforting to know that Ken was there acting the firebrand, causing the conversation to happen. This week we lost one of the ‘Great Ones’ and I for one will truly miss him. To his family, my deepest sympathies; to our industry I ask, “Who will take his place?”

 

\Mm

The AOL Micro-DC adds new capability

Back in July, I announced AOL’s Data Center Independence Day with the release of our new ‘Micro Data Center’ approach.   In that post we highlighted the terrific work that the teams put in to revolutionize our data center approach and align it completely to not only technology goals but business goals as well.   It was an incredible amount of engineering and work to get to that point and it would be foolish to think that the work represented a ‘One and Done’ type of effort.  

So today I am happy to announce the rollout of a new capability for our Micro-DC – an indoor version of the Micro-DC.


While the first instantiations of our new capability were focused on outdoor environments, we were also hard at work at an indoor version with the same set of goals.   Why work on an indoor version as well?   Well you might recall in the original post I stated:

We are no longer tied to traditional data center facilities or colocation markets. That doesn’t mean we won’t use them; it means we now have a choice. Of course this is only possible because of the internally developed cloud infrastructure, but we have freed ourselves from having to be bolted onto or into existing big infrastructure. It allows us to have an incredible amount of geo-distributed capacity at a very low cost point in terms of upfront capital and ongoing operational expense.

We need to maintain a portfolio of options for our products and services. In this case that means having an indoor version of our capabilities to ensure that our solution can live absolutely anywhere. This will allow our footprint, automation and all, to live inside any data center colocation environment or the interior of any office building anywhere around the planet, and retain the extremely low maintenance profile that we were targeting from an operational cost perspective. In a sense you can think of it as “productizing” our infrastructure. Could we have just deployed racks of servers, network kit, etc. like we have always done? Sure. But by continuing to productize our infrastructure we continue to drive down our short term and long term infrastructure costs. In my mind, productizing your infrastructure is actually the next evolution in standardization of your infrastructure. You can have infrastructure standards in place – server model, RAM, HD space, access switches, core switches, and the like. But until you get to that next phase of standardizing, automating, and ‘productizing’ it into a discrete set of capabilities, you only get a partial win.
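To make the “productizing” idea a little more concrete, here is a minimal sketch in Python. Everything in it is hypothetical and purely illustrative (the names, the part choices, the validation rule); it is not AOL’s actual tooling. The point is simply the difference between a pile of piecemeal infrastructure standards and a single named, versioned “product” definition that any site, indoor or outdoor, can be stamped out from and checked against.

```python
# Illustrative sketch only: hypothetical names and values, not AOL's actual tooling.
from dataclasses import dataclass


@dataclass(frozen=True)
class MicroDCProduct:
    """A discrete, versioned 'product' definition for one Micro-DC unit."""
    version: str
    server_model: str
    ram_gb: int
    disk_tb: int
    access_switch: str
    core_switch: str
    racks: int


# One versioned product covers every deployment; a site picks a product
# version instead of assembling its own list of part numbers.
MICRO_DC_V2 = MicroDCProduct(
    version="2.0-indoor",
    server_model="generic-1U",
    ram_gb=64,
    disk_tb=4,
    access_switch="48p-10GbE",
    core_switch="32p-40GbE",
    racks=4,
)


def validate_build(product: MicroDCProduct, inventory: dict) -> list[str]:
    """Compare a proposed site build against the product spec.

    Returns a list of human-readable deviations (empty list means compliant).
    """
    issues = []
    for name in ("server_model", "ram_gb", "disk_tb",
                 "access_switch", "core_switch", "racks"):
        expected = getattr(product, name)
        actual = inventory.get(name)
        if actual != expected:
            issues.append(f"{name}: expected {expected!r}, got {actual!r}")
    return issues


if __name__ == "__main__":
    proposed = {"server_model": "generic-1U", "ram_gb": 32, "disk_tb": 4,
                "access_switch": "48p-10GbE", "core_switch": "32p-40GbE",
                "racks": 4}
    for issue in validate_build(MICRO_DC_V2, proposed):
        print("deviation:", issue)  # e.g. "ram_gb: expected 64, got 32"
```

Once the unit of deployment is a named, versioned product rather than a list of part numbers, the automation and validation fall out naturally, which is what I mean by standardizing into a discrete set of capabilities.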

Some people have asked me, “Why didn’t you begin with the interior version to start with? It seems like it would be the easier one to accomplish.” Indeed, I cannot argue with them; it would probably have been easier, as there were far fewer challenges to solve. You can make basic assumptions about where this kind of indoor solution would live, and reduce much of the complexity. I guess it all nets out to a philosophy of solving the harder problems first. Once you prove the more complicated use case, the easier ones come much faster. That is definitely the situation here.

While this new capability continues the success we are seeing in re-defining the cost and operations of our particular engineering environments, the real challenge here (as with all sorts of infrastructure and cloud automation) is whether we can get our applications and services to work correctly in that space with similar success. On that note, I should have more to post soon. Stay tuned! Smile

 

\Mm

DataCentres 2012 – Nice, France


This week I am headed off to France to be a keynote speaker at DataCentres2012. In my opinion this event is the pre-eminent infrastructure and operations conference across the whole of Europe, if not the world. It regularly attracts the best speakers and infrastructure professionals in the industry (myself excluded of course – perhaps they are looking for some comic relief?) along with an incredible list of attendees that is a veritable who’s who of our industry worldwide.

By the looks of it, cloud and energy concerns will be the topic of many of the conversations. No doubt many will focus on the impact of technology, its usefulness, features, and capabilities. But those of you who have heard me speak before know that I believe there is a larger, more personal story – for both the professional and the companies they represent. The problems we are facing in the industry today are complicated, multi-faceted, and deep-rooted in the past of our own decisions or the decisions of our predecessors. We sometimes think technology alone is the salve for all ills. This is no more true than the purchase of a pen being the solution to writer’s block.

My talk will center on this multi-faceted problem space.   I will use real world examples of how I have, and am tackling those issues.  Who knows?  I might even let loose some of the top secret work we have been doing internally to position us for the future.   

As always – if you happen to be at the conference or in Nice, don’t be a stranger!

\Mm

Uptime, Cowgirls, and Success in California

This week my teams have descended upon the Uptime Institute Symposium in Santa Clara. The moment is extremely bittersweet for me, as this is the first Symposium in quite some time that I have been unable to attend. With my responsibilities expanding at AOL beginning this week, there was simply too much going on for me to make the trip out. It’s a downright shame too. Why?

We (AOL) will be featured in two key parts at Symposium this time around for some incredibly ground breaking work happening at the company. The first is a recap of the incredible work going on in the development of our own cloud platforms. Last year you may recall that we were asked to talk about some of the wins and achievements we were able to accomplish with the development of our cloud platform. The session was extremely well received. We were asked to come back, one year on, to discuss how that work has progressed even further. Aaron Lake, the primary developer of our cloud platforms, and my Infrastructure Development Team will be talking on the continued success, features, and functionality, and the launch of our ATC Cloud-Only Data Center. It’s been an incredible breakneck pace for Aaron and his team, and they have delivered world-class capabilities for us internally.

Much of Aaron’s work has also enabled us to win the Uptime Institute’s First Annual Server Roundup Award. I am especially proud of this particular honor as it is the result of an amazing amount of hard work within the organization on a problem faced by companies all over the planet. Essentially this is operations hygiene at a huge scale: getting rid of old servers, driving consolidation, moving platforms to our cloud environments, and more. This talk will be led by Julie Edwards, our Director of Business Operations, and Christy Abramson, our Director of Service Management. Together these two teams led the effort to drive out “Operational Absurdities” and our “Power Hog” programs. We have sent along Lee Ann Macerelli and Rachel Paiva, the primary project managers instrumental in making this initiative such a huge success. These “Cowgirls” drove an insane amount of work across the company, resulting in over 5 million dollars of un-forecasted operational savings, proving that there is always room for good operational practices. They even starred in a funny internal video to celebrate their win, which can be found here using the AOL Studio Now service.

If you happen to be attending Symposium this year feel free to stop by and say hello to these amazing individuals.   I am incredibly proud of the work that they have driven within the company.

 

\Mm

Patent Wars may Chill Data Center Innovation

Yahoo may have just sent a cold chill across the data center industry at large and begun a stifling of data center innovation. In a May 3, 2012 article, Forbes did a quick and dirty analysis of the patent wars between Facebook and Yahoo. It’s a quick read but shines an interesting light on the potential impact something like this can have across the industry. The article, found here, highlights that:

In a new disclosure, Facebook added in the latest version of the filing that on April 23 Yahoo sent a letter to Facebook indicating that Yahoo believes it holds 16 patents that “may be relevant” to open source technology Yahoo asserts is being used in Facebook’s data centers and servers.

While these types of patent infringement cases happen all the time in the corporate world, this one could have far greater ramifications for an industry that has only recently emerged into the light of sharing ideas. While details remain sketchy at the time of this writing, it’s clear that the specific call-out of data centers and servers is an allusion to more than just server technology or applications running in their facilities. In fact, there is a specific call-out of data centers and infrastructure.

With this revelation one has to wonder about its impact on the Open Compute Project, which is being led by Facebook. It leads to some interesting questions. Has their effort to be more open in their designs and approaches to data center operations and design led them to a position of legal risk and exposure? Will this open the flood gates for design firms to become more aggressive around functionality designed into their buildings? Could companies use their patents to freeze competitors out of colocation facilities in certain markets by threatening colo providers with these types of lawsuits? Perhaps I am reaching a bit, but I never underestimate litigious fervor once the proverbial blood gets in the water.

In my own estimation, there is a ton of “prior art”, to use an intellectual property term, out there to settle this down long term, but the question remains – will firms go through that lengthy process to prove it out or opt to re-enter their shells of secrecy?  

After almost a decade of fighting to open up the collective industry to share technologies, designs, and techniques, this is a very disheartening move. The general glasnost that has descended over the industry has led to real and material change.

We have seen companies shift from measuring facilities purely around “uptime” to a mindset that is focused on efficiency as well. We have seen more willingness to share best practices and find like-minded firms to share in innovation. One has to wonder: will this impact the larger “greening” of data centers in general? Without that kind of pressure, will people move back to what is comfortable?

Time will certainly tell. I was going to make a joke about the fact that until time proves things out I may have to “lawyer up” just to be safe. It’s not really a joke, however, because I’m going to bet other firms do something similar, and that, my dear friends, is how the innovation will start to freeze.

 

\Mm