Industry Impact: Brothers from Different Mothers and Beyond…


My reading material and video watching habits these past two weeks have brought me some incredible joy and happiness. Why? Because Najam Ahmad of Facebook is finally getting some credit for the amazing work that he has done and continues to do in the world of Software Defined Networking. In my opinion Najam is a force majeure in the networking world. He is passionate. He is focused. He just gets things done. Najam and I worked very closely at Microsoft as we built out and managed the company’s global infrastructure. So closely, in fact, that we were frequently referred to as brothers from different mothers. Wherever Najam was, I was not far behind, and vice versa. We laughed. We cried. We fought. We had a lot of fun while delivering some pretty serious stuff. To find out that he is behind the incredible Open Compute Project advances in networking is not surprising at all. Always a forward-thinking guy, he has never been satisfied with the status quo.
If you have missed any of that coverage, I strongly encourage you to have a read at the links below.


This got me to thinking about the legacy of the Microsoft program on the cloud and infrastructure industry at large. Data Center Knowledge ran an article covering the impact of some of the Yahoo alumni a few years ago. Many of those folks are friends of mine and deserve great credit. In fact, Tom Furlong now works side by side with Najam at Facebook. The purpose of my thoughts is not to take away from their achievements and impacts on the industry, but rather to highlight the impact of some of the amazing people and alumni from the Microsoft program. It’s a long overdue acknowledgement of the legacy of that program and how it has been a real driving force in large scale infrastructure. The list of folks below is by no means comprehensive, and it doesn’t touch on the talented people Microsoft maintains in their deep stable who continue to push the innovative boundaries of our industry.

Christian Belady of Microsoft – Here we go: first person mentioned and I already blow my own rule. I know Christian is still there at Microsoft, but it’s hard not to mention him as he is the public face of the program today. He was an innovative thinker before he joined the program at Microsoft and was a driving thought leader and thought provoker while I was there. While his industry-level engagements have been greatly sidelined as he steers the program into the future, he continues to be someone willing to throw everything we know and accept today into the wind to explore new directions.
Najam Ahmad of Facebook – You thought I was done talking about this incredible guy? Not in the least; few people have solved network infrastructure problems at scale like Najam has. With his recent work on the OCP front finally coming to the fore, he continues to push the boundaries of what is possible. I remember long meetings with network vendors where Najam tried to influence capabilities and features with the box manufacturers within the paradigm of the time, and his work at Facebook is likely to land him in a position where he is both loved and reviled by the industry at large. If that doesn’t say you’re an industry heavyweight…nothing does.
James Hamilton of Amazon – There is no question that James continues to drive deep thinking in our industry. I remain an avid reader of his blog and a follower of his talks. Back in my Microsoft days we would sit and argue philosophical issues around the approach to our growth, towards compute, towards just about everything. Those conversations either changed or strengthened my positions as the program evolved. His work in the industry while at Microsoft and beyond has continued to shape thinking around data centers, power, compute, networking and more.
Dan Costello of Google – Dan Costello now works at Google, but his impacts on the Generation 3 and Generation 4 data center approaches and the modular DC industry direction overall will be felt for a very long time to come, whether Google goes that route or not. Incredibly well balanced in his approach between technology and business, his ideas and talks continue to shape infrastructure at scale. I will spare people the story of how I hired him away from his previous employer, but if you ever catch me at a conference, it’s a pretty funny story. Not to mention the fact that he is the second best break dancer in the Data Center Industry.
Nic Bustamonte of Google – Nic is another guy who has had some serious impact on the industry as it relates to innovating the running and operating of large scale facilities. His focus on the various aspects of the operating environments of large scale data centers, monitoring, and internal technology has shifted the industry and really set the infancy of DCIM in motion. Yes, BMS systems have been around forever, and DCIM is the next iteration and blending of that data, but his early work here has continued to influence thinking around the industry.
Arne Josefsberg of ServiceNow – Today Arne is the CTO of ServiceNow, focusing on infrastructure and management for enterprises and the big players alike, and if their overall success is any measure, he continues to impact the industry through results. He is *THE* guy who had the foresight to build an organization to adapt to this growing change of building and operating at scale. He is the architect of an amazing team that would eventually change the industry.
Joel Stone of Savvis/CenturyLink – Previously the guy who ran global operations for Microsoft, he has continued to drive excellence in operations at Global Switch and now at Savvis. An early adopter and implementer of blending facilities and IT organizations, he mastered issues a decade ago that most companies are still struggling with today.
Sean Farney of Ubiquity – Truly the first data center professional who ever had to productize and operationalize data center containers at scale. Sean has recently taken on the challenge of diversifying data center site selection and placement at Ubiquity, repurposing old neighborhood retail spaces (Sears, etc.) in the industry. Given the general challenges of finding places with a confluence of large scale power and network, this approach may prove to be quite interesting as markets continue to drive demand.
Chris Brown of Opscode – One of the chief automation architects from my time at Microsoft, he has moved on to become the CTO of Opscode. Everyone on the planet who is adopting and embracing DevOps has heard of, and is probably using, Chef. In fact, if you are doing any kind of automation at large scale you are likely using his code.
None of these people would be comfortable with the attention, but I do feel credit should be given to these amazing individuals who are changing our industry every day. I am so very proud to have worked in the trenches with these people. Life is always better when you are surrounded by those who challenge and support you, and in my opinion these folks have taken it to the next level.
\Mm

Bippity Boppity Boom! The Impact of Enchanted Objects on Development, Infrastructure and the Cloud

I have been spending a bunch of my time recently thinking through the impact of what David Rose of Ditto Labs and the MIT Media Lab romantically calls ‘Enchanted Objects’. What are enchanted objects? Enchanted Objects are devices, appliances, tools, dishware, anything that is ultimately connected to the Internet (or any connected network) and becomes, to some degree, aware of the world around it. Imagine an umbrella with a light on its hilt that lights up if it may rain today, reminding you that you might want to bring it along on your travels. Imagine your pantry and refrigerator communicating with your grocery cart at the store while you shop, letting you know the things you are running low on, or even bypassing the part where you have to shop and automatically ordering them to your home. This approach is going to fundamentally change everything you know in life, from credit cards to having a barbecue with friends. These things and their capabilities are going to change our world in ways that we cannot even fathom today. Our technology industry calls this emerging field the Internet of Things. Ugh! How absolutely boring. Our industry has this way of sucking all the fun out of things, doesn’t it? I personally feel that ‘Enchanted Objects’ is a far more compelling classification, as it speaks to the possibilities, wonderment, and possibly terror that lie in store for us. If we must make it sound ‘technical’, maybe we can call it the Enchantosphere.

While I may someday do a post about all of the interesting things I have found out there already, or the ideas that I have come up with for this new enchanted world, I wanted to reflect a bit on what it means for the things that I normally write about. You know, things like the cloud, big infrastructure, and scaled software development. So grab your walking staff of traffic conditions and come on an interesting journey into the not-so-distant world of cloud-powered magic…

The first thing you need to understand is that, if you work in this industry, you are not an idle player in this magical realm. You are, for lack of a better term, a wizard or an enchanter. Your role will be pivotal in creating magic items, maintaining the magic around us, or ensuring that the magic used by everyone stays strong. While the Dungeons and Dragons and fantasy book references are almost limitless for this conversation, I am going to try and bring it back to the world we know today. I promise. I am really just trying to tease out a glimpse of the world to come and the importance of the cloud, data center infrastructure, and the significant impacts on software development and how software-based services may have to evolve.

The Magical Weaves Surround Us

Every device and enchanted item will be connected. Whether through WiFi in your work and home, over mobile networks, or all of the above and more, these Enchanted Objects will be connected to the magical weaves all around us. If you happen to be a network engineer, you know that I am talking to you. All of these objects are going to have to connect to something. If you are one of those folks who are stuck in IPv4, you had better upgrade yourself. There just isn’t enough address space there to connect everything in our magical world of the future. IPv6 will be a must. In fact, these devices could just be the ‘killer app’ that drives global adoption of the standard even faster. But it’s not just about address space; these kinds of connected objects are going to open up and challenge whole new areas in security, spectrum management, routing, and a host of other areas. I am personally thinking through some very interesting source-based routing applications in the Enchantosphere as well. The short of it is, this new magical world is going to stress the limits of how things are connected today, and network engineers will be charged with keeping our magical weaves flowing to allow our charmed existences to continue. You are the Keepers of the Magical Weave, and I am not talking about a tricked-out hairpiece either.
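
To put rough numbers behind that address-space claim, here is a quick back-of-the-envelope sketch in Python (the device count is purely an illustrative assumption):

```python
import ipaddress

# IPv4 offers 2**32 addresses; IPv6 offers 2**128.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

# Loose illustrative assumption: 50 connected "enchanted objects"
# for each of roughly 10 billion people.
devices_needed = 50 * 10_000_000_000

print(f"IPv4 space:     {ipv4_total:.2e}")      # ~4.29e9, already exhausted
print(f"devices needed: {devices_needed:.2e}")  # ~5.00e11, 100x past IPv4
print(f"IPv6 space:     {ipv6_total:.2e}")      # ~3.40e38, effectively limitless

# Even one standard IPv6 /64 subnet dwarfs the entire IPv4 Internet:
one_subnet = ipaddress.ip_network("2001:db8::/64")
print(one_subnet.num_addresses > ipv4_total)    # True: 2**64 >> 2**32
```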

While only briefly mentioned above, security engineers are going to have to evolve significantly as well. This will lead into whole new areas and fields of privacy protection that are hard to even conceive of at this point. Even things like health and safety will need to be considered. Imagine a stove that starts pre-heating itself based on where you are on your commute home and the dinner menu you have planned. While some of those controls will need to be programmed into the software itself, there is no doubt that those capabilities will need to be well guarded. Why, I can almost see the Wards and Glyphs of Protection you will have to create.

The Wizard’s Tower

As cool as all these enchanted objects could be, they would all be worthless IP-enabled husks without the advent of the construct that we now call The Cloud. When I talk about ‘The Cloud’ I am talking about more than just virtualized server instances and marketing-laden terminology. I am talking about data centers. I am talking about automation. I am talking about ubiquitous compute capabilities all around the world. The actual physical places where the magical services live! The data centers, which include the technologies of both IT and facilities infrastructure and automation: the proverbial Wizard’s Tower! This is where our enchanted objects will come to discover who they are, how they work, and what they should do, and to retrieve any new capabilities they may yet magically receive. This new world is going to drive the need for more compute centers across the globe. This growth will not just be driven by demand, although the demand will admittedly be huge, but by other more mundane ‘muggle’ matters such as regulatory requirements, privacy enforcement, taxation and revenue. I bet you were figuring that with all this newfound magical power flying around we would be able to finally rid ourselves of lawyers, legislators, government hacks, and the like. Alas, it is after all still the real world. Cloud computing capacity will continue to grow, the demand for services will increase, and an entire ecosystem of software and services that sits atop the various cloud providers will be birthed.

I don’t know if many of you have read Robert Jordan’s fantasy series ‘The Wheel of Time’, but in that series he has a classification of enchanted objects called ter’angreal. These are single purpose or limited power artifacts that anyone can use. Like my example of the umbrella that lights up if it’s going to rain after it checks with WeatherBug for conditions in your area, or a ring that lights up to let you know that there is a new Loosebolts post available to read, or a garden gnome whose hat lights up when it detects evidence of plant-eating bugs in your garden. These are devices that require no technical knowledge to use or configure, but give some value to their owner. They do their function and that is it. By the way, I am an engineer, not a marketing guy; if you don’t like my examples of special purpose enchanted objects you can tweet me better ones at @mjmanos.
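
For the software-minded: the entire ‘enchantment’ of such a single purpose object is conceptually tiny. Here is a minimal sketch in Python, with a hypothetical weather endpoint and a print statement standing in for the real service and LED hardware:

```python
import json
import time
import urllib.request

WEATHER_URL = "https://api.example.com/forecast?zip=98052"  # hypothetical endpoint

def rain_likely() -> bool:
    """Ask a weather service whether rain is expected today."""
    with urllib.request.urlopen(WEATHER_URL) as resp:
        forecast = json.load(resp)
    return forecast.get("precip_chance", 0) >= 50  # assumed response field

def set_hilt_led(on: bool) -> None:
    """Stand-in for driving the umbrella's actual LED hardware."""
    print("hilt LED:", "ON" if on else "off")

# The whole "enchantment": poll, decide, glow. Nothing more.
while True:
    set_hilt_led(rain_likely())
    time.sleep(30 * 60)  # re-check every 30 minutes
```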

These devices will reach out, download their software, learn their capabilities, and just work as advertised. Software in this model may seem very similar to today’s software development techniques and environments, but I believe we will begin to see fundamental changes in how software works and is distributed. Software will be portable. Services will be portable. This allows for truly amazing “multi-purpose” enchanted objects. The ability to download “apps” to these objects could become commonplace. Even something as commonplace as a credit card could evolve into a piece of software or code that could be transported around in various devices. Simply wave the RFID-enabled stick (ok, wand) that contains your credit card app at the register, and as long as you are wearing the necklace which stores your digital ID, the transaction goes through. Two-factor authentication in the real world. Or instead of a wand, maybe it’s just your wallet. When thinking about this app-enabled platform, it gives a whole new meaning to the Capital One catchphrase, “What’s in your wallet?” The bottom line here is that a whole host of software, services, and other capabilities will become incredibly portable, and allow for some very interesting enchanted objects indeed.
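
That wand-plus-necklace checkout is really just two-factor authentication in physical form. A rough sketch of the idea follows; the names and the HMAC-based scheme are my own illustrative assumptions, not any real payment protocol:

```python
import hashlib
import hmac

def sign(secret: bytes, message: bytes) -> str:
    """HMAC signature standing in for whatever the real scheme would use."""
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

# Factor 1: the wand carries the portable credit card "app" (its secret).
wand_secret = b"card-app-secret"
# Factor 2: the necklace carries your digital identity (its own secret).
necklace_secret = b"digital-id-secret"

def checkout(amount_cents: int, wand_key: bytes, necklace_key: bytes) -> bool:
    challenge = b"register-nonce-1234"  # the register issues a fresh nonce
    card_proof = sign(wand_key, challenge)
    id_proof = sign(necklace_key, challenge)
    # The register approves only if BOTH proofs verify: something you
    # carry (the wand) plus something that identifies you (the necklace).
    ok = (card_proof == sign(wand_secret, challenge) and
          id_proof == sign(necklace_secret, challenge))
    print(f"charge {amount_cents} cents:", "approved" if ok else "declined")
    return ok

checkout(1250, wand_secret, necklace_secret)   # approved
checkout(1250, wand_secret, b"stolen-guess")   # declined: one factor fails
```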

The bottom line here is that we are just beginning to see into a new world of the Internet of Things…of Enchanted Objects. The simpler things become, the more complex they truly are. Those of us who deal with large scale infrastructure, software and service development, and cloud-based technologies have a heck of a ride ahead of us. We are the keepers of the complex, Masters of the Arcane, and needers of a good bath.

\Mm

Through an idea and force of will, he created an industry…

This week the Data Center Industry got the terrible news it knew might be coming for some time: Ken Brill, founder of the Uptime Institute, had passed away. Many of us knew that Ken had been ill for some time and, although it may sound silly, were hoping he could somehow pull through it. Even as ill as he was, Ken was still sending and receiving emails and staying in touch with this industry that, quite frankly, he helped give birth to.

I was recently asked about Ken and his legacy for a Computerworld article, and it really caused me to stop and re-think his overall legacy and gift to the rest of us in the industry. Ken Brill was a pioneering, courageous, tenacious visionary who, through his own force of will, saw the inefficiencies in a nascent industry and helped craft it into what it is today.

Throughout his early career experience, Ken was able to see the absolute silo’ing of information, best practices, and approaches that different enterprises were developing around managing their mission critical IT spaces. While certainly not alone in the effort, he became the strongest voice and champion to break down those walls, help others through the process, and build a network of people who would share these ideas amongst each other. Before long an industry was born, sewn together through his sometimes delicate, sometimes not so delicate cajoling, and through it all his absolute passion for the data center industry at large.

[Photo: one of the last times Ken and I got to speak in person.]

In that effort he also created and permeated the language that the industry uses as commonplace. Seeing a huge gap in terms of how people communicated and compared mission critical capabilities, he became the klaxon of the Tiering system, which essentially normalized those conversations across the data center industry. While some (including myself) have come to think it’s time to re-define how we classify our mission critical spaces, we all have to pay homage to the fact that Ken’s insistence and drive for the Tiering system created a place and a platform to even have such conversations.

One of Ken’s greatest strengths was his adaptability. For example, Ken and I did not always agree. I remember an Uptime Fellows meeting back in 2005 or 2006 in Arizona. In this meeting I started talking about the benefits of modularization and reduced infrastructure requirements augmented by better software. Ken was incredulous, and we had significant conversations around the feasibility of such an approach. At another meeting we discussed the relative importance or non-importance of a new organization called ‘The Green Grid’ and whether Uptime should closely align itself with those efforts. Through it all Ken was ultimately adaptable. Whether it was giving those ideas light for conversation amongst the rest of the Uptime community via audio blogs or other means, Ken was there to have a conversation.

In an industry where complacency has become commonplace, where people rarely question established norms, it was always comforting to know that Ken was there acting the firebrand, causing the conversation to happen. This week we lost one of the ‘Great Ones’, and I for one will truly miss him. To his family, my deepest sympathies; to our industry I ask, “Who will take his place?”

 

\Mm

Private Clouds – Not Just a Cost and Technology Issue: It’s All About Trust, the Family Jewels, Corporate Value, and Identity

I recently read a post by my good friend James Hamilton at Amazon regarding private clouds. James and I worked closely together at Microsoft, and he was always a good source of out-of-the-box thinking and challenging the status quo. While James’ post, found here, argues that the private cloud initiative amounts to an evolutionary dead end, I would have to respectfully disagree.

James’ post starts out by correctly pointing out that, at scale, the large cloud players have the resources and incentive to achieve some pretty incredible cost savings. From an infrastructure perspective he is dead on. But I don’t necessarily agree that this innovation will never reach the little guy. In my role at Digital Realty Trust I think I have a pretty unique perspective on the infrastructure developments at the “big” guys along with what most corporate enterprises have available to them from a leasing or commercial perspective.

Companies like Digital Realty Trust, Equinix, Terremark, Dupont Fabros, and a host of others in the commercial data center space are making huge advancements here as well. The free market economy has now placed an importance on low-PUE, highly efficient buildings. You are starting to see these firms commission buildings with PUEs below 1.4. Compared to most existing data center facilities this is a huge improvement. Likewise, these firms are incented to hire mechanical and electrical experts. This means that this same expertise is available to the enterprise through leasing arrangements. Where James is potentially correct is at that next layer of IT-specific equipment.
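
For readers newer to the metric: PUE is simply total facility power divided by IT equipment power, so a quick illustrative calculation (the numbers are invented for the example) shows why a sub-1.4 commissioning figure matters:

```python
# PUE = total facility power / IT equipment power.
# Illustrative numbers only: a 1 MW IT load under two facility designs.
it_load_kw = 1000

legacy_pue = 2.0   # common for older enterprise facilities
modern_pue = 1.4   # the commissioned figure cited above

legacy_total = it_load_kw * legacy_pue   # 2000 kW drawn from the grid
modern_total = it_load_kw * modern_pue   # 1400 kW for the same IT work

overhead_saved_kw = legacy_total - modern_total
print(f"Facility overhead saved: {overhead_saved_kw:.0f} kW "
      f"({overhead_saved_kw / legacy_total:.0%} of the legacy draw)")
# -> 600 kW saved, 30% of the legacy facility's total power
```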

This is an area where an amazing amount of innovation is happening at Amazon, Google, and Microsoft. But even here there are firms stepping up to provide solutions that bring extensive virtualization and cloud-like capabilities to bear. Companies like Hexagrid have software offerings that are being marketed to typical co-location and hosting firms to do the same thing. Hexagrid and others are focusing on the software and hardware combinations to deliver full service solutions for companies in this space. In fact (as some comments on James’ blog mention), there is a lack of standards and a fear of vendor lock-in in choosing one of the big firms. It’s an interesting thought to ponder whether a software-plus-hardware solution offered to the hundreds of co-location players and hosting firms might be more of a universal solution without that fear of lock-in. Time will tell.

But this brings up one of the key criticisms: this is not just about cost and technology. I believe what is really at stake here is much more than that. James makes great points on the greater resource utilization of the big cloud players and how much more efficient they are at utilizing their infrastructure. To which I will snarkily (and somewhat tongue-in-cheek) say, “SO WHAT!” :) Do enterprises really care about this? Do they really optimize for this? I mean, if you pull back that fine veneer of politically correct answers and “green-suitable” responses, is that what their behavior in REAL LIFE is indicative of? NO.

This was a huge revelation for me when I moved into my role at Digital. When I was at Microsoft, I optimized for all of the things that James mentions because it made sense to do when you owned the whole pie. In my role at Digital I have visibility into tens of data centers, across hundreds of customers, that span just about every industry. There is not, nor has there been, a massive move (or any move, for that matter) to become more efficient in the utilization of their resources. We have had years of people bantering about how wonderful, cool, and revolutionary a lot of this stuff is, but worldwide data center utilization levels have remained abysmally low. Some providers bank on this. Oversubscription of their facilities is part of their business plan. They know companies will lease and take down what they think they need, and never actually use it in REALITY.

So if this technology issue is not a motivating factor, what is? Well, cost is always part of the equation. The big cloud providers will definitely deliver cost savings, but private clouds could deliver cost savings as well. More importantly, however, private clouds will allow companies to retain their identity and uniqueness, and keep what makes them competitively them: Them.

I don’t so much see it as a private cloud or public cloud kind of thing, but more of a private cloud AND public cloud kind of thing. To me it looks more like an exercise in data abstraction. The public offerings will clearly offer infrastructure benefits in terms of cost, but will undoubtedly lock a company into that single solution. The IT world has been bitten before by putting all its eggs in a single basket, and the need for flexibility will remain key. Therefore you might begin to see systems integrators, co-location and hosting firms, and others build their own platforms or, much more likely, build platforms that umbrella over the big cloud players to give enterprises the best of both worlds.

Additionally, we must keep in mind that the biggest resistance to the adoption of the cloud is not technology or cost but RISK and TRUST. Do you, Mr. CIO, trust Google to run all of your infrastructure? Your applications? Do you, Mrs. CIO, trust Microsoft or Amazon to do the same for you? The answer is not a blind yes or no. It’s a complicated set of minor yes responses and no responses. They might feel comfortable outsourcing mail operations, but not the data warehouse manifesting decades of customer information. The private cloud approach will allow you to spread your risk. It will allow you to maintain those aspects of the business that are core to the company.

The cloud is an interesting place today. It is dominated by technologists: extremely smart engineering people who like to optimize and solve for technological challenges. The actual business adoption of this technology set has yet to be fully explored. Just wait until the “business” side of companies gets its hooks into this technology set and starts placing other artificial constraints, or optimizations, around other factors. There are thousands of different motivators out in the world. Once that starts to happen in earnest, I think what you will find is a solution that looks more like a hybrid than the pure plays we dream about today.

Even if you think my ideas and thoughts on this topic are complete BS, I would remind you of something that I have told my teams for a very long time: “There is no such thing as a temporary data center.” This same mantra will hold true for the cloud. If you believe that the private cloud will be a passing and temporary thing, just keep in mind that there will be systems and solutions built to this technology approach, thus imbuing it with a very, very long life.

\Mm

Modular Evolution, Uptime Resolution, and Software Revolution

It’s a very little known fact, but software developers are costing enterprises millions of dollars, and I don’t think in many cases either party realizes it. I am not referring to the actual purchase cost of the programs and applications, or even the resulting support costs. Those are easily calculated and can be hard-bounded by budgets. But what of the resulting costs of the facility in which they reside?

The Tier System introduced by the Uptime Institute was an important step for our industry in that it gave us a common language, or nomenclature, with which to actually begin having a dialog about the characteristics of the facilities being built. It created formal definitions and classifications from a technical perspective that grouped redundancy and resiliency targets, and ultimately defined a hierarchy in which to talk about the facilities designed to those targets. For its time it was revolutionary, and to a large degree the body of work is still relevant even today.

There is a lot of criticism that its relevancy is fading fast due to the model’s greatest weakness, which resides in its lack of significant treatment of the application. The basic premise of the Tier System is essentially to take your most restrictive and constrained application requirements (i.e., the one that’s least robust) and augment that resiliency with infrastructure and what I call big iron. If only 5% of your applications are this restrictive, the other 95% of your applications, which might be able to live with less resiliency, will still reside in the castle built for the minority of needs. But before you call out an indictment of the Uptime Institute or this “most restrictive” design approach, you must first look at your own organization. The Uptime Institute was coming at this from a purely facilities perspective. The mysterious workload and wizardry of the application is a world mostly foreign to them. Ask yourself this question: “In my organization, how often do IT and facilities talk to one another about end to end requirements?” My guess, based on asking this question hundreds of times of customers and colleagues, ranges between not often and not at all. But the winds of change are starting to blow.

In fact, I think the general assault on the Tier System really represents a maturing of the industry to look at our problem space with more combined wisdom. I often laughed at the fact that human nature (or at least management human nature) used to hold a belief that a Tier 4 data center was better than a Tier 2 data center, effectively because the number was higher and it was built with more redundancy. More redundancy essentially equaled a better facility. A company might not have had the need for that level of physical systems redundancy (if one were to look at it from an application perspective), but Tier 4 was better than Tier 3, therefore we should build the best. It’s not better, just different.

By the way, that’s not a myth that the design firms and construction firms were all that interested in dispelling either. Besides Tier 4 having the higher number and more redundancy, it also cost more to build, required significantly more engineering, and took longer to work out the kinks. So the myth of Tier 4 being the best has propagated for quite a long time. I’ll say it again: it’s not better, it’s just different.

One of the benefits of the recent economic downturn (there are not many, I know) is that the definition of ‘better’ is starting to change. With capital budgets frozen or shrinking, the willingness of enterprises to re-define ‘better’ is also changing significantly. Better today means a smarter, more economical approach. This has given rise to the boom in the modular data center approach, and it’s not surprising that this approach begins with what I call an application level inventory.

This application level inventory looks specifically at the makeup and resiliency of the software and applications within the data center environments. Does this application need the level of physical fault tolerance that my enterprise CRM needs? Do servers that support testing or internal labs need the same level of redundancy? This is the right behavior, and the one that I would argue should have been used since the beginning. The data center doesn’t drive the software; it’s the software that drives the data center.
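
In practice, such an inventory can start very simply. Here is a minimal sketch of the idea; the application names, tier assignments, and server counts are invented for illustration, not a formal methodology:

```python
# Map each application to the resiliency it actually needs, rather than
# defaulting the whole portfolio to the most restrictive requirement.
inventory = {
    "enterprise-crm":  {"tier": 4, "servers": 40},   # business critical
    "financials":      {"tier": 4, "servers": 30},
    "internal-wiki":   {"tier": 2, "servers": 10},   # tolerable downtime
    "test-lab":        {"tier": 1, "servers": 120},  # rebuildable at will
    "batch-reporting": {"tier": 2, "servers": 60},
}

by_tier = {}
for app in inventory.values():
    by_tier[app["tier"]] = by_tier.get(app["tier"], 0) + app["servers"]

total = sum(app["servers"] for app in inventory.values())
for tier in sorted(by_tier, reverse=True):
    share = by_tier[tier] / total
    print(f"Tier {tier}: {by_tier[tier]:4d} servers ({share:.0%})")
# Only the Tier 4 slice (here ~27%) needs the "castle"; the rest can
# live in cheaper, less redundant capacity.
```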

One interesting and good side effect of this is that enterprise firms are now pushing harder on the software development firms. They are beginning to ask some very interesting questions that the software providers have never been asked before. For example, I sat in one meeting where an end customer asked their financial systems application provider a series of questions on the inter-server latency requirements and transaction timeout lengths for database access in their solution suite. The reason behind this line of questioning was a setup for the next series of questions. Once the numbers were provided, it became abundantly clear that this application would only truly work from one location, from one data center, and could not be redundant across multiple facilities. This led to questions around the provider’s intentions to build more geo-diverse and extra-facility capabilities into their product. I am now even seeing these questions in official Requests for Information (RFIs) and Requests for Proposal (RFPs). The market is maturing and is starting to ask an important question: why should your sub-million dollar (or euro) software application drive tens of millions in capital investment by me? Why aren’t you architecting your software to solve this issue? The power of software can be brought to bear to easily solve this issue, and my money is on the fact that this will be a real battlefield in software development in the coming years.
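
To see why those latency and timeout questions are so decisive, consider a rough back-of-the-envelope check (all figures invented for illustration):

```python
# Can a chatty database transaction survive being split across two sites?
# Illustrative numbers only.
intra_dc_rtt_ms = 0.5      # round trip within one data center
inter_dc_rtt_ms = 40.0     # round trip between two distant facilities
round_trips_per_txn = 200  # a chatty app making many sequential DB calls
txn_timeout_ms = 5000      # the vendor's stated transaction timeout

for label, rtt in [("one DC", intra_dc_rtt_ms), ("two DCs", inter_dc_rtt_ms)]:
    network_time = round_trips_per_txn * rtt
    verdict = "OK" if network_time < txn_timeout_ms else "TIMES OUT"
    print(f"{label}: {network_time:.0f} ms of network wait -> {verdict}")
# one DC:  100 ms  -> OK
# two DCs: 8000 ms -> TIMES OUT: the app is pinned to a single facility.
```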

Blending software expertise with operational and facility knowledge will be at the center of a whole new train of software development, in my opinion. One that really doesn’t exist today and, given the dollar amounts involved, I believe will be a very impactful and fruitful line of development as well. But it has a long way to go. Most programmers coming out of universities today rarely question the impact of their code outside of the functions they are providing, and the number of colleges and universities that teach a holistic approach can be counted on less than one hand’s worth of fingers worldwide. But that’s up a finger or two from last year, so I am hopeful.

Regardless, while there will continue to be work on data center technologies at the physical layer, there is a looming body of work facing the development community that has yet to be tackled. Companies like Oracle, Microsoft, SAP, and hosts of others will be thrust into the fray to solve these issues as well. If they fail to adapt to the changing face and economics of the data center, they may just find themselves as an interesting footnote in the data center texts of the future.

 

\Mm