AOL’s Data Center Independence Day

Yesterday we celebrated Independence Day here in the United States.  It's a day where we embrace the freedoms we enjoy as a country, look back on how far we have come, and celebrate the promise of the future.  Yesterday was also a different kind of Independence Day for my teams at AOL.  A Data Center Independence Day, if you will.

You may or may not have been following the progress of the work we have been doing here at AOL over the last 14 or so months, but the pace of change has been simply breathtaking.  One of the first things I did when I joined the company was deeply review all aspects of Operations, from Data Centers to Network Engineering, to the engineering teams supporting the products and services, and everything in between.  The net of the exercise was that AOL was probably similar to most companies out there in terms of technology mix, from the CRUFT that I mentioned in a previous post to the latest technologies.  There were some incredible technologies built over the last three decades, some outdated processes and procedures, and, if I am honest, traces of a culture where the past had more meaning than the present or the future.

In a very short period of time all of that changed.  We aggressively made changes to the organization, re-aligned priorities, and perhaps most of all we created and defined a powerful collection of changes and evolutions we would need to bring about on very aggressive timelines.  These changes were part of a defined Technology Roadmap that broke the work we needed to accomplish into three categories.  The first category focused on the internal technical challenges and tools we needed to build to enhance our own internal efficiencies.  The second focused on the technical challenges and aggressive things we could do to enhance and bring greater scalability to our products and services.  This would include things like adding services and technology suites to our internally developed cloud infrastructure, and other items that would allow for more rapid delivery of our products and services.  The last category was for the incredibly aggressive “wish list” types of changes: items that could be so disruptive, so incredibly game-changing for us, that they could redefine our work on the whole.  In fact we named this group of work “Nibiru” after a mythical planet that is said to cross into our solar system, wreak havoc, and bring about great change.

On July 4, 2012, one of our Nibiru items arrived, and I am ecstatic to state that we achieved our “Data Center Independence Day”.  Our primary “Nibiru” goal was to develop and deliver a data center environment without the need of a physical building.  The environment needed to require as little physical “touch” as possible and allow us the ultimate flexibility in how we delivered capacity for our products and services.  We called this effort the Micro Data Center.  If you think about the number of things that need to change to evolve to this type of strategy, it's a bit mind-boggling.


Here are just a few of the things we had to look at, change, and automate to make this kind of achievement possible:

  • Developing an entirely new Technology Suite and the ability to deliver that capacity anywhere in the world with minimal to no staffing.
  • Delivering extremely dense compute capacity (think the latest technology) to give us the longest possible use of these assets once deployed into the field.
  • The ability to deliver a “Micro Data Center” anywhere on the planet regardless of temperature and humidity conditions
  • The ability to support, maintain, and administer it remotely.
  • The ability to fit into the power envelope of a normal office building
  • Participation in our cloud environment and capabilities
  • The processes by which these facilities are maintained and serviced
  • and much much more…

In my mind, it's one thing to claim a technical achievement; it's quite another to operationalize that achievement and make the process of supporting it repeatable.  That's my measure as to when you can REALLY declare victory.  Science experiments don't count.  It has to just plain work.  To that end, our first “beta” site for the technology was the AOL campus in Dulles, Virginia.  Out on a lonely slab of concrete in the back of one of the buildings, our future has taken shape.

Thanks in part to a lot of the work going on in the data center containerization space, we were able to jump-start much of the work in relatively quick fashion.  In fact, the pace set by the Data Center and Technology Operations teams to deliver this achievement is more than a bit astounding.  Most, if not all, of the existing AOL Data Centers would fall somewhere around a traditional Tier III / Tier II Uptime Institute definition.  The teams pushed way outside their comfort zones to deliver some incredible evolutions in a very short period of time.  Of course there were steps along the way to get here, but those steps now seem to be in double time.  A few months back we announced the launch of ATC, our first completely automated facility.  The work that went into ATC was foundational to our achievement yesterday.  It allowed us to start working on the hard stuff first, that is to say the ‘operationalization’ of these kinds of environments, and it set the stage for how we could evolve to this next tier of evolution.  Below is a summary of some of the achievements of our ATC launch, but if you are curious about the specifics of our work there, feel free to click through to the ‘Breaking the Chrysalis’ post I did at that time.  You can see how the work we have been driving in our own internal cloud environments, the changes in operational procedure, and the change in thought are additive and fundamental to our latest achievement.  It's especially interesting to note that with all of the blips and hiccups occurring in the ‘cloud industry’, like the leap second and the terrible storms on the East Coast this week which affected many data centers, ATC, our completely unmanned facility, just kept humming along with no issues (to be fair, neither did our traditional facilities), despite the fact that much of the initial negative feedback we had received was based squarely on the reliability of such moves.  It goes to show how important engineering FOR Operation is.  For AOL we have built this in from the start.

What does this actually buy AOL?

Ok, we stuck some computers in a box and we made sure it requires very little care and feeding – what does this buy us?  Quite a bit, actually.  Jay Moran, the Distinguished Engineer who was in charge of driving this effort, is always quick to point out that the problem space here is not just about the technology; it has to be a marriage with the business side as well.  Obviously the inherent flexibility of the design allows us a greater number of places around the planet where we can deploy capacity, and that in and of itself is pretty revolutionary.  We are no longer tied to traditional data center facilities or colocation markets.  That doesn't mean we won't use them; it means we now have a choice.  Of course this is only possible because of the internally developed cloud infrastructure, but we have freed ourselves from having to be bolted onto or into existing big infrastructure.  It allows us to have an incredible amount of geo-distributed capacity at a very low cost point in terms of upfront capital and ongoing operational expense.  This is a huge game changer.  So much so, allow me to do a bit of ‘back of the napkin’ math with you.  Let's call the global capacity in terms of compute, storage, etc. that we have today in our traditional environments the Total Compute Capability, or TCC.  It's essentially the bandwidth for the work that we can get done.  Inside the cost for TCC you have operating costs such as power, lease costs, Data Center facility maintenance costs, support staff, and so on.  You additionally have the depreciation for the facilities themselves (or the specific buildouts, if colocating), the server and other equipment depreciation, and the rest.  Let's call that baseline X.  The Micro Data Center strategy, built out with our latest, most dense server standards and infrastructure, would allow us 5X the total TCC in less than 10% of the cost and physical footprint (a rough sketch of this math follows the list below).  If you think about how this will allow us to aggregate and grow over time, it ultimately drives us to a VERY LOW operational cost structure for delivering our products and services.  Additionally it positions us for the future in very significant ways:

  • It redefines software architecture for greater resiliency
  • It gives us an incredibly flexible platform for addressing privacy laws, regulatory oversight, and other such concerns, allowing us to respond rapidly.
  • It further reduces energy consumption and carbon footprint emissions (important as taxation evolves around the world, as well as ongoing operational costs)
  • Gives us the ability to drive Edge Computing delivery to potentially bypass CDNs for certain content.
  • Gives us the capability to drive ‘Community-in-a-box’, whereby we can quickly launch new products in new markets, expand existing footprints like Patch on a low-cost but still hyper-local platform, and give the Huffington Post a platform to rapidly partner and enter new markets with minimal-cost turn-ups.
  • The fact that the technology mix in our SKUs comprises compute, storage, and network capacity maximizes the number of products and services we can deploy to it.
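
Before moving on, let me make the back-of-the-napkin math above a bit more concrete.  This is a minimal sketch with purely hypothetical, normalized placeholder numbers (our actual figures are not something I am publishing here); the point is the relationship between Total Compute Capability (TCC) and fully loaded cost.

```python
# Hypothetical back-of-the-napkin comparison of traditional data center
# capacity versus the Micro Data Center approach. The numbers below are
# normalized placeholders, not AOL's actual figures.

baseline_tcc = 1.0          # normalized Total Compute Capability (TCC) today
baseline_cost = 1.0         # normalized fully loaded cost (power, leases,
                            # maintenance, staff, depreciation) -- the "X"

micro_dc_tcc = 5.0 * baseline_tcc      # ~5X the TCC with the densest SKUs
micro_dc_cost = 0.10 * baseline_cost   # in less than 10% of the cost/footprint

baseline_efficiency = baseline_tcc / baseline_cost
micro_dc_efficiency = micro_dc_tcc / micro_dc_cost

print(f"Baseline TCC per unit cost:  {baseline_efficiency:.0f}")
print(f"Micro DC TCC per unit cost:  {micro_dc_efficiency:.0f}")
print(f"Improvement factor:          {micro_dc_efficiency / baseline_efficiency:.0f}x")
# => roughly a 50x improvement in compute delivered per dollar, which is
#    why this drives such a low operational cost structure over time.
```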

As Always, It's Really About the People

I cannot let a post about this huge win for us go by without mentioning the teams involved in delivering this capability.  This is not just a win for AOL, or, to a lesser degree, for the industry at large as another proof point that it can evolve if it puts its mind to changing, but above all for the Technology Teams at AOL.  When I was first approached about joining AOL, my slightly sarcastic and comedic response was probably much like yours – ‘Are they still around?’  But the fact of the matter is that AOL has a vision of where it wants to go and what it wants to be.  That was compelling for me personally, compelling enough for me to make the move.  What has truly amazed me, however, is the dedication and tenacity of its employees.  These achievements would not be possible without the outright aggressiveness the organization has taken to moving the company forward.  It's always hard to assess from the outside just how hard an effort is internally to achieve.  In the case of our Micro Data Center strategy, the teams had just about every kind of barrier to delivering this capacity, every kind of excuse to not make it, or even not to try.  They put all of those things aside and just plain executed.  If you allow me a small moment of bravado – not only did my teams simply kick ass, they did it in a way that moved the needle for the company and, in my mind, once again catapulted themselves into the forefront of operations and technology at scale.  We still have a bunch of Nibiru projects to deliver, so my guess is we haven't heard the last of these big wins.

\Mm


Budget Challenged States, Data Center Site Selection, and the Dangers of Pay to Play

Site Selection can be a tricky thing.  You spend a ton of time upfront looking for that perfect location: the confluence of dozens of criteria, digging through fiber maps, looking at real estate, income and other state taxes.  Even the best-laid plans and most thoughtful approaches can be waylaid by changes in government, the emergence of new laws, and other regulatory changes which can put your selection at risk.  I was recently made aware of yet another cautionary artifact you might want to pay attention to: Pay to Play laws and budget-challenged States.

As many of my frequent readers know, I am from Chicago.  In Chicago, and in Illinois at large, “Pay to Play” has much different connotations than the topic I am about to bring up right now.  In fact the Chicago version broke out into an all-out national and international scandal.  There is a great book about it if you are interested, aptly entitled Pay to Play.

The Pay to Play that I am referring to is an emerging set of regulations and litigation techniques that require companies to pay tax bills upfront (without any kind of recourse or mediation), which then forces companies to litigate to try to recover those taxes if the assessment is unfair.  Increasingly I am seeing this in states where budgets are challenged and governments, looking for additional funds, are targeting Internet-based products and services.  In fact, I was surprised to learn that AOL has been going through this very challenge.  While I will not comment on the specifics of our case (it's not specifically related to Data Centers anyway), it may highlight potential pitfalls and longer-term items to take into account when performing Data Center Site Selection.  You can learn more about the AOL case here, if you are interested.

For me it highlights that a lack of understanding of Internet services by federal and local governments, combined with a lack of inhibition in aggressively pursuing revenue despite that lack of understanding, can be dangerous and impactful to companies in this space.  These can pose real dangers, especially when it comes to where one selects a site for a facility.  These types of challenges can come into play whether you are building your own facility, selecting a colocation facility and hosting partner, or, if stretched, eventually where your cloud provider may have located their facility.

It does beg the question as to whether or not you have checked into the financial health of the States you may be hosting your data and services in.   Have you looked at the risk that this may pose to your business?  It may be something to take a look at!

 

\Mm

Sites and Sounds of DataCentre2012: Thoughts and My Personal Favorite Presentations, Day 1

We wrapped our first full day of talks here at DataCentre2012 and I have to say the content was incredibly good.  A couple of key highlights really stuck out in my mind.  The first was the talk given by Christian Belady, who covered some interesting bits of the Microsoft Data Center strategy moving forward.  Of course I have a personal interest in that program, having been there for Generation 1 through Generation 4 of its evolution.  Christian covered some of the technology trends that they are incorporating into their Generation 5 facilities.  It was some very interesting stuff, and he went into deeper detail than I have heard so far around the concept of co-generation of power at data center locations.  While I personally have some doubts about the all-in costs and immediacy of its applicability, it was great to see some deep, meaningful thought and differentiation out of the Microsoft program.  He also went into some interesting “future” visions which talked about data being the next energy source.  While he took this concept to an entirely new level, I do feel he is directionally correct.  His correlations between the delivery of “data” in a utility model rang very true to me, as I have been preaching for over 5 years that we are at the dawning of the Information Utility.

Another fascinating talk came from Oliver J Jones of a company called Chayora.  Few people and companies really understand the complexities and idiosyncrasies of doing business in China, let alone dealing with the development and deployment of large-scale infrastructure there.  The presentation done by Mr. Jones was incredibly well done.  Articulating the size, opportunity, and challenges of working in China through the lens of the data center market, he nimbly worked in the benefits of working with a company with this kind of expertise.  It was a great way to quietly sell Chayora's value proposition, and looking around I could tell the room was enthralled.  His thoughts and data points had me thinking and running through scenarios all day long.  Having been to many infrastructure conferences and seen hundreds if not thousands of presentations, anyone who can capture that much of my mindshare for the day is a clear winner.

Tom Furlong and Jay Park of Facebook gave a great talk on OCP with a strong focus on their new facility in Sweden.  They also talked a bit about their other facilities in Prineville and North Carolina.  With Furlong taking the mechanical innovations and Park going through the electrical, it was a great talk that generated lots of interesting questions.  An incredibly captivating portion of the talk was around calculating data center availability.  In all honesty it was the first time I had ever seen this topic taken head on at a data center conference.  In my experience, like PUE, availability calculations can fall under the spell of marketing departments who truly don't understand that there SHOULD be real math behind the calculation.  There were two interesting takeaways for me.  The first was just how impactful this portion of the talk was on the room in general.  There was an incredible number of people taking notes as Jay Park went through the equation and the way to think about it.  It led me to my second revelation – there are large parts of our industry that don't know how to do this.  In private conversations after the talk, some people confided that they had never truly understood how to calculate this.  It was an interesting wake-up call for me to ensure I cover the basics even in my own talks.
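
For anyone in the same boat as the folks in those private conversations, here is a minimal sketch of the textbook availability math – the A = MTBF / (MTBF + MTTR) formulation composed in series and parallel.  To be clear, this is not a reproduction of the equation Jay Park presented; it is simply the standard math such a calculation builds on, with illustrative placeholder hours.

```python
# Textbook availability math: A = MTBF / (MTBF + MTTR), composed in
# series (all components must be up) or in parallel (any one suffices).
# The hours below are illustrative placeholders, not real measurements.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of a single component."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series(*components: float) -> float:
    """All components must be up: multiply availabilities."""
    result = 1.0
    for a in components:
        result *= a
    return result

def parallel(*components: float) -> float:
    """System is up if at least one redundant component is up."""
    result = 1.0
    for a in components:
        result *= (1.0 - a)
    return 1.0 - result

utility = availability(mtbf_hours=2000, mttr_hours=4)
generator = availability(mtbf_hours=1000, mttr_hours=10)
ups = availability(mtbf_hours=5000, mttr_hours=8)

# Utility power backed up by a generator, feeding a single UPS string:
power_path = series(parallel(utility, generator), ups)
print(f"Single power path availability: {power_path:.6f}")

# Two independent (2N) power paths serving dual-corded IT gear:
print(f"2N power path availability:     {parallel(power_path, power_path):.6f}")
```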

After the Facebook talk it was time for me to mount the stage for the Global Thought Leadership Panel.  I was joined on stage by some great industry thinkers including Christian Belady of Microsoft; Len Bosack (founder of Cisco Systems), now CEO of XKL Systems; Jack Tison, CTO of Panduit; Kfir Godrich, VP and Chief Technologist at HP; John Corcoran, Executive Chairman of Global Switch; and Paul-Francois Cattier, Global VP of Data Centers at Schneider Electric.  That's a lot of people and brainpower to fit on a single stage.  We really needed three times the amount of time allotted for this panel, but that is the way these things go.  Perhaps the most interesting recurring theme from question to question was the general agreement that at the end of the day, great technology means nothing without the will to do something different.  There was an interesting debate on the differences between enterprise users and large-scale users like Microsoft, Google, Facebook, Amazon, and AOL.  I was quite chagrined and a little proud to hear AOL named in that list of luminaries (it wasn't me who brought it up).  But I was quick to point out that AOL is a bit different in that it has been around for 30 years and our challenges are EXACTLY like enterprise data center environments.  More on that tomorrow in my keynote, I guess.

All in all, it was a good day – there were lots of moments of brilliance in the panel discussions throughout the day.  One regret I have was with the panel regarding DCIM.  They ran out of time for questions from the audience, which was unfortunate.  People continue to confuse DCIM with BMS version 2.0 and really miss capturing the work and soft costs, let alone the ongoing commitment to the effort once started.  Additionally, there is the question of what you do with the mountains of data once you have collected it.  I had a bunch of questions on this topic for the panel, including whether any of the major manufacturers were thinking about building a decision engine over the data collection.  To me it's a natural outgrowth and next phase of DCIM.  The one case study they discussed was InterXion.  It was a great effort, but I think in the end it maintained the confusion around a BMS with a web interface versus true Facilities and IT integration.  Another panel, on Modularization, got some really lively discussion on feature/functionality, differentiation, and lack of adoption.  To a real degree it highlighted an interesting gulf between manufacturers (mostly represented by the panel), who need to differentiate their products, and the users, who require vendor interoperability in the solution space.  It probably doesn't help to have Microsoft or myself in the audience when it comes to discussions around modular capacity.  On to tomorrow!

\Mm

Uptime, Cowgirls, and Success in California

This week my teams have descended upon the Uptime Institute Symposium in Santa Clara.  The moment is extremely bittersweet for me, as this is the first Symposium in quite some time I have been unable to attend.  With my responsibilities expanding at AOL beginning this week, there was simply too much going on for me to make the trip out.  It's a downright shame too.  Why?

We (AOL) will be featured in two key parts of Symposium this time around for some incredibly groundbreaking work happening at the company.  The first is a recap of the incredible work going on in the development of our own cloud platforms.  Last year you may recall that we were asked to talk about some of the wins and achievements we were able to accomplish with the development of our cloud platform.  The session was extremely well received, and we were asked to come back, one year on, to discuss how that work has progressed even more.  Aaron Lake, the primary developer of our cloud platforms on my Infrastructure Development Team, will be talking about the continued success, features, and functionality, and the launch of our ATC Cloud-Only Data Center.  It's been an incredible breakneck pace for Aaron and his team, and they have delivered world-class capabilities for us internally.

Much of Aaron's work has also enabled us to win the Uptime Institute's first annual Server Roundup Award.  I am especially proud of this particular honor as it is the result of an amazing amount of hard work within the organization on a problem faced by companies all over the planet.  Essentially this is Operations Hygiene at a huge scale: getting rid of old servers, driving consolidation, moving platforms to our cloud environments, and more.  This talk will be led by Julie Edwards, our Director of Business Operations, and Christy Abramson, our Director of Service Management.  Together these two teams led our “Operational Absurdities” and “Power Hog” programs.  We have sent along Lee Ann Macerelli and Rachel Paiva, the primary project managers instrumental in making this initiative such a huge success.  These “Cowgirls” drove an insane amount of work across the company, resulting in over 5 million dollars of un-forecasted operational savings, proving that there is always room for good operational practices.  They even starred in a funny internal video to celebrate their win, which can be found here using the AOL Studio Now service.

If you happen to be attending Symposium this year feel free to stop by and say hello to these amazing individuals.   I am incredibly proud of the work that they have driven within the company.

 

\Mm

IPvSexy, Yes You Can Get There Too

Today we celebrated going live with IPv6 versions of many of our top-rated sites.  The work was done in advance of our participation in IPv6 Launch Day.  For the uninitiated, IPv6 Launch Day is the date when major sites will begin to have their websites publicly available over the new Internet numbering scheme; it is currently set for June 6, 2012.  As many of you likely know, the IPv4 space, which has served the Internet so well since its inception, is running out of unique addresses.  I am especially proud of the fact that we are the first of the largest Internet players to achieve this feat.  In fact three of our sites occupy slots in the Top 25 sites in the ranking, including www.aol.com, www.engadget.com, and www.mapquest.com.  As with all things there are some interesting caveats.  For example, Google is IPv6-enabled for some ISPs, but not all; I am specifically highlighting global availability.
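
If you want to check for yourself whether a site is reachable over IPv6 from where you sit, a quick spot check is to ask DNS for AAAA records.  Here is a minimal sketch in Python; it is just an illustration of that kind of check, not a piece of our production tooling, and keep in mind that a AAAA record alone does not prove your ISP can actually route you there.

```python
# Minimal spot check: does a hostname publish IPv6 (AAAA) addresses?
# This only verifies DNS; actually fetching content over IPv6 also
# requires IPv6 connectivity from your ISP.
import socket

def ipv6_addresses(hostname: str) -> list[str]:
    try:
        results = socket.getaddrinfo(hostname, 80, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return []
    # Each result is (family, type, proto, canonname, sockaddr); the
    # address string is the first element of sockaddr.
    return sorted({sockaddr[0] for *_, sockaddr in results})

for site in ("www.aol.com", "www.engadget.com", "www.mapquest.com"):
    addrs = ipv6_addresses(site)
    status = ", ".join(addrs) if addrs else "no AAAA records found"
    print(f"{site}: {status}")
```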

 


 

The journey has been an interesting one and there are many lessons learned going through these exercises.  In many conversations on this topic with other large firms exploring making this move, I often hear how difficult the process appears to be, and a general reluctance to even begin.  As the old saying goes, even the longest journey begins with a single step.

This work was far from easy, and our internal team learned a great deal in our efforts to take our sites live, beginning with World IPv6 Day in 2011.  Although our overall IPv6 traffic levels remain pretty tiny (sustained at about 4-5 Mb/s), they are likely to grow as more ISPs convert their infrastructure to IPv6.

Perhaps the most significant thing I would like to share is that, while migrating to the new numbering system, I was very pleased to find the number of options available to companies in staging their moves to IPv6.  Companies have a host of options available to them outside of an outright full-scale renumbering of their networks.  There are IPv4-to-IPv6 gateways, capabilities already built into your existing routing and switching equipment that could ease the way, and even some capabilities in external service providers like Akamai that could help ease your adoption of the new space.  Eventually everything will need to be migrated and you will need a comprehensive plan to get you there, but it's nice to know that firms have a bunch of options available to assist in this technical journey.

 

\Mm

Open Sourcing our Operational Scale Tools–Meet Trigger

Not that long ago we made a decision to begin open sourcing some of our internal products and tools to the community at large.  There are some really interesting benefits to open sourcing some of these internally developed tools, and as a company we have begun to do some very interesting work in this space.  Some of our work has gotten quite a bit of attention, such as SocketStream, which is a very fast, real-time web framework.

Today I am very pleased to announce that we are open-sourcing one of the tools that we use to manage and maintain our network infrastructure.   We call it Trigger.  

Trigger is a Python framework and suite of tools for interfacing with network devices and managing network configuration and security policy.  Trigger was designed internally specifically to increase the speed and efficiency of network configuration management.

Trigger’s core device interaction utilizes the freely available Twisted event-driven networking engine. The libraries can connect to network devices by any available method (e.g. telnet, SSH), communicate with them in their native interface (e.g. Juniper JunoScript, Cisco IOS), and return output. Trigger is able to manage any number of jobs in parallel and handle output or errors as they return.
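
To give you a taste of what that looks like in practice, here is a short sketch modeled on the Commando pattern from the Trigger documentation: subclass it with the commands you want, point it at a list of devices, and let it fan the work out in parallel.  Treat the class and attribute names here as my paraphrase rather than gospel – the repository and documentation linked below are the authoritative reference, and the device names are obviously made up.

```python
# Sketch of Trigger's Commando pattern (see the project docs for the
# authoritative API): run a command across many devices in parallel
# and collect the output per device.
from trigger.cmds import Commando

class ShowClock(Commando):
    """Run 'show clock' on every device handed to us."""
    commands = ['show clock']

if __name__ == '__main__':
    # Hypothetical device names -- substitute hosts from your own
    # NetDevices metadata.
    devices = ['edge1-abc.example.net', 'edge2-xyz.example.net']
    job = ShowClock(devices=devices)
    job.run()  # connects to all devices in parallel via Twisted
    for device, output in job.results.items():
        print(device, output)
```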

If you think a tool like this would be interesting for you or your company, feel free to give it a try.  The Open Source repository for it can be found here: https://github.com/aol/trigger

The complete set of documentation is hosted at Read the Docs:

http://readthedocs.org/docs/trigger

This is just the first of many internal efforts and tools that we plan to Open Source.  I will announce more tools in the coming months that have helped AOL to scale over the years.  Stay tuned!

\Mm  

Attacking the Cruft

Today the Uptime Institute announced that AOL won the Server Roundup Award.  The achievement has gotten some press already (at Computerworld, PCWorld, and related sites) and I cannot begin to tell you how proud I am of my teams.  One of the more personal transitions and journeys I have made since my experience scaling the Microsoft environments from tens of thousands of servers to hundreds of thousands of servers has been truly understanding the complexity of a problem most larger, established IT departments have been dealing with for years.  In some respects, scaling infrastructure, while incredibly challenging and hard, is in large part a uni-directional problem space.  You are faced with growth and more growth followed by even more growth.  All sorts of interesting things break when you get to big scale.  Processes, methodologies, and technologies all quickly fall to the wayside as you climb ever up the ladder of scale.

At AOL I faced a multi-directional problem space in that, as a company and as a technology platform, we were still growing.  Added to that were 27 years of what I call “Cruft”.  I define “Cruft” as years of build-up of technology, processes, politics, fiscal orphaning, and poor operational hygiene.  This cruft can act as a huge boat anchor and a barrier to an organization driving agility in its online and IT operations.  On top of this cruft, a layer of what can best be described as lethargy, or perhaps apathy, can sometimes develop and add even more difficulty to the problem space.

One of the first things I encountered at AOL was the cruft.  In any organization, everyone always wants to work on the new, cool, interesting things, mainly because they are new and interesting…out of the norm.  Essentially the fun stuff!  But the ability of the organization to really drive the adoption of new technologies and methods was always slowed, gated, or in some cases altogether prevented by years of interconnected systems, lost owners, servers of unknown purpose lost in distant historical memory, and the like.  This I found in healthy populations at AOL.

We initially set about building a plan to attack this cruft: to earnestly remove as much of the cruft as possible and drive the organization towards agility.  Initially we called this list of properties, servers, equipment and the like the Operations $/-\!+ list.  As this name was not very user-friendly, it migrated into a series of initiatives grouped under the name Ops-Surdities.  These programs attacked different types of cruft and at a high level were grouped into three main categories:

The Absurdity List – A list of projects/properties/applications that had questionable value, lacked an owner or direction, or the like, but were still drawing load and resources from our data centers.  The plan here was to develop action plans for each of the items that appeared on this list.

Power Hog – An effort to audit our data center facilities, equipment, and the like, looking for inefficient servers, installations, and/or technology and migrating them to newer, more efficient platforms or our AOL Cloud infrastructure.  You knew you were in trouble when a bronze pig trophy appeared on your desk or in your office: you were marked.

Ops Hygiene – The sometimes tedious task of tracking down older machines and systems that may have been decommissioned in the past, marked for removal, or fully depreciated but never truly removed.  Pure vampiric load.  You may or may not be surprised how much of this exists in modern data centers; it's a common issue I have discussed with most data center management professionals in the industry.
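
To give a flavor of what an Ops Hygiene pass looks like (and not how we actually implemented it against our CMDB), here is a purely hypothetical sketch: given an asset export with a depreciation date and a recent traffic measurement, flag the candidates for vampiric load.  The file name, column names, and thresholds are all invented for the illustration.

```python
# Hypothetical illustration of an Ops Hygiene pass: flag servers that are
# fully depreciated AND showing essentially no network traffic. The CSV
# columns and thresholds here are invented for the example.
import csv
from datetime import date

TRAFFIC_FLOOR_KBPS = 5.0   # "essentially idle" threshold (made up)

def vampire_candidates(asset_export_path: str) -> list[dict]:
    candidates = []
    with open(asset_export_path, newline='') as f:
        for row in csv.DictReader(f):
            depreciated = date.fromisoformat(row['depreciation_end']) < date.today()
            idle = float(row['avg_traffic_kbps']) < TRAFFIC_FLOOR_KBPS
            if depreciated and idle:
                candidates.append(row)
    return candidates

if __name__ == '__main__':
    for server in vampire_candidates('asset_export.csv'):
        print(server['hostname'], server['owner'] or 'NO OWNER ON RECORD')
```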

So here we are, on a timeline measured in under a year, having been told all along the way by “crufty old-timers” that we would never make any progress, and my teams have decommissioned almost 10,000 servers from our environments.  (The number is actually greater now, but the submission deadline for the award was earlier in the year.)  What an amazing accomplishment.  What an amazing team!

So how did we do it?

As we will be presenting this in a lot more detail at the Uptime Symposium, I am not going to give away all of our secrets in a blog post; instead I will give you a good reason to head to the Uptime event and listen to, and ask, the primary leaders of this effort how they did it in person.  It may be a good use of that travel budget your company has been sitting on this year.

What I will share is some guidelines on approach and some things to be wary of if you are facing similar challenges in your organization.

FOCUS AND ATTENTION

I cannot tell you how many people I have spoken with who have tried to go after ‘cruft’ like this time and time again and failed.  One of the key drivers for success in my mind is ensuring that there is focus and attention on this kind of project at all levels, across all organizations, and most importantly from the TOP.  Too often executives give out blind directives with little to no follow-through and assume this kind of thing gets done.  They are generally unaware of the natural resistance to this kind of work in most IT organizations.  Having motivated, engaged, and focused leadership on these types of efforts goes an extraordinarily long way to making headway here.

BEWARE OF ORGANIZATIONAL APATHY

The human factors that stack up against a project like this are impressive.  While people may not be openly in revolt over such projects, there is a natural resistance to getting this work done.  This work is not sexy.  This work is hard.  This work is tedious.  It likely means going back and touching equipment and kit that has not been messed with for a long time.  You may have competing organizational priorities which place this kind of work at the bottom of the workload priority list.  In addition to having executive buy-in and focus, make sure you have some really driven people running these programs.  You are looking for CAN DO people, not MAKE DO people.

TECHNOLOGY CAN HELP, BUT IT'S NOT YOUR HEAVY LIFTER

Probably a bit strange for a technology blog to say, but it's true.  We have an incredible CMDB and Asset System at AOL.  This was hugely helpful to the effort in really getting to the bottom of the list.  However, no amount of technology will be able to perform the myriad of tasks required to actually make material movement on this kind of work.  Some of it requires negotiation, some of it requires strength of will, and some of it takes pure persistence in running these issues down…working with the people.  Understanding what is still required and what can be moved requires people.  We had great technologies in place from the perspective of knowing where our stuff was, what it did, and what it was connected to.  We had great technologies, like our Cloud, to ultimately move some of these platforms to.  However, you need to make sure you don't fall too far into the people trap.  I have a saying in my organization: there is a perfect number of project managers and security people in any organization, where the work output and value delivered is highest.  What is that number?  It depends – but you definitely know when you have one too many of each.

MAKE IT FUN IF YOU CAN

From the bronze pigs to minor celebrations each month as we worked through the process, we ensured that the attention given to the effort was not negative.  Sure, it can be tough work, but at the end of the day you are substantially investing in the overall agility of your organization.  It's something to be celebrated.  In fact, at the completion of our aggressive goals, the primary project leads involved made a great video (which you can see here) to highlight and celebrate the win.  Everyone had a great laugh and a ton of fun doing what was ultimately a tough grind of work.  If you are headed to Symposium I strongly encourage you to reach out to my incredible project leads.  You will be able to recognize them from the video…without the mustaches, of course!

\Mm