On the Move: First Data

Well, after a bit of time in stealth, I am finally able to announce that I have taken the position of Chief Technology Officer at First Data.

After being asked to join the Turn-Around team at AOL and driving some amazing results over the past four years, it was time for a change.  I absolutely loved my time there and the people were amazing.  Success has a quality all its own, and being a part of something that unique was an incredibly rewarding experience for me personally.

The move to First Data is incredibly exciting for many different reasons, but one of the key drivers for me is that I feel this industry is ripe for change.  It’s a move from running and building large-scale Internet products and infrastructure to the Financial Services Industry.

For those of you who may be unaware of who First Data is, or what they do, it’s probably easiest to think of it this way: one out of every two credit card or debit card transactions across the world touches our infrastructure at some point in the transaction process. From a transactional-scale perspective it’s very similar to what I have been used to at companies like AOL, Microsoft, and Disney.  The difference, of course, being that these transactions are a little more important than checking your favorite sports scores or getting your e-mail.

My challenge, of course, is to drive automation. To build a platform that makes infrastructure a decisive and differentiating base from which to launch products and services.  To create a learning infrastructure and software ecosystem that gets smarter over time.   In large part, the question is how you blend the agility of the Internet with the maturity and complexity of the Financial Services Industry.   Sure, it’s a complex technical problem space, but it has some very interesting business and regulatory challenges as well.   In many respects, dealing with Safe Harbor, regulatory, and tax issues has been part of my job for many years.  The challenge now is to take that automation to the next level.

To that end, I have to say that First Data is assembling an amazingly formidable team to drive this change.  I will be reporting to the President of First Data, Guy Chiarello.  Guy is a universally respected technology leader in the Financial Services industry, with top posts at Morgan Stanley and JPMorgan Chase behind him.  Technology will be key to the success of the company, and the leadership team is a unique blend of technology-savvy leaders from across the world.

The new adventure begins!

You can follow the link to the official press announcement, along with the initial pickup from the Wall Street Journal.

\Mm

Whiffs of Wisdom #6 – Managing People Managing Tough Decisions

I am getting a lot of encouragement to share more of my “Whiffs of Wisdom”.  Most are related to Managing Technical People, Technology Situations, and Managing Managers.  All of them have tongue firmly placed in cheek.  🙂    Here’s another one that came up recently:

Whiffs of Wisdom #6 

On Managing People who Have to Manage Tough Decisions

Never underestimate one’s ability to avoid making a tough decision or having a tough conversation.  There is a nigh-limitless fountain of creativity in the work created, conference calls scheduled, or meetings attended to avoid having a tough conversation.  If only you could harness that creativity for good.  Vigilance, personal support, and every once in a while a swift kick in the rear are the only known remedies.

\Mm

 

 

Whiffs of Wisdom #18 – Project Managers and Security People

I am not sure why, but for some reason this topic has come up for me like 8 times this week.  Rather than continue to talk about it, I figured I would just post one of my “Whiffs of Wisdom” (some people call them “Manosisms”).  Apparently I am the worst person in the world at coming up with anecdotes, but people get my drift, so in my book that means success.

Whiffs of Wisdom #18

On Project Managers and Security People

Every Technology organization needs Project Managers and Security-focused Engineers.  There ACTUALLY IS a magic number of these individuals to have in your organization.  I don’t know what that number is, but I know when I have one too many of either.   These folks bring order to chaos (Engineers are notoriously terrible at project management), but the moment it starts becoming more about the process than the END RESULTS, I know we have gotten off track.  There is nothing more effective than a great project manager, and nothing more destructive than an overbearing rule-nazi project manager.    You need to watch it closely, because left to their own well-meaning devices these groups tend to create Bureaus of Business Prevention.

\Mm

 

 

Industry Impact: Brothers from Different Mothers and Beyond…


My reading material and video-watching habits these past two weeks have brought me some incredible joy and happiness. Why?  Because Najam Ahmad of Facebook is finally getting some credit for the amazing work he has done, and continues to do, in the world of Software Defined Networking.  In my opinion Najam is a force majeure in the networking world.   He is passionate.  He is focused. He just gets things done.  Najam and I worked very closely at Microsoft as we built out and managed the company’s global infrastructure, so closely in fact that we were frequently referred to as brothers from different mothers.   Wherever Najam was, I was not far behind, and vice versa. We laughed. We cried.  We fought.  We had a lot of fun while delivering some pretty serious stuff.  To find out that he is behind the incredible Open Compute Project advances in networking is not surprising at all.   Always a forward-thinking guy, he has never been satisfied with the status quo.
If you have missed any of that coverage, I strongly encourage you to have a read at the links below.



This got me thinking about the legacy of the Microsoft program on the Cloud and Infrastructure industry at large.   Data Center Knowledge had an article covering the impact of some of the Yahoo alumni a few years ago. Many of those folks are friends of mine and deserve great credit.  In fact, Tom Furlong now works side by side with Najam at Facebook.    The purpose of my thoughts is not to take away from their achievements and impacts on the industry, but rather to highlight the impact of some of the amazing people and alumni from the Microsoft program.  It’s a long-overdue acknowledgement of the legacy of that program and how it has been a real driving force in large-scale infrastructure.   The list of folks below is by no means comprehensive, and it doesn’t cover the talented people Microsoft maintains in their deep stable who continue to push the innovative boundaries of our industry.

Christian Belady of Microsoft – Here we go: first person mentioned and I already blow my own rule.   I know Christian is still there at Microsoft, but it’s hard not to mention him as he is the public face of the program today.  He was an innovative thinker before he joined the program at Microsoft and was a driving thought leader and thought provoker while I was there.  While his industry-level engagements have been greatly sidelined as he steers the program into the future, he continues to be someone willing to throw everything we know and accept today into the wind to explore new directions.
Najam Ahmad of Facebook – You thought I was done talking about this incredible guy?  Not in the least; few people have solved network infrastructure problems at scale like Najam has.   With his recent work on the OCP front finally coming to the fore, he continues to push the boundaries of what is possible.  I remember long meetings with network vendors where Najam tried to influence capabilities and features with the box manufacturers within the paradigm of the time, and his work at Facebook is likely to land him in a position where he is both loved and reviled by the industry at large.  If that doesn’t say you’re an industry heavyweight…nothing does.
James Hamilton of Amazon – There is no question that James continues to drive deep thinking in our industry. I remain an avid reader of his blog and follower of his talks.    Back in my Microsoft days we would sit and argue philosophical issues around the approach to our growth, towards compute, towards just about everything.   Those conversations either changed or strengthened my positions as the program evolved.   His work in the industry, while at Microsoft and beyond, has continued to shape thinking around data centers, power, compute, networking, and more.
Dan Costello of Google – Dan Costello now works at Google, but his impacts on the Generation 3 and Generation 4 data center approaches, and on the modular DC industry direction overall, will be felt for a very long time to come, whether Google goes that route or not.   Incredibly well balanced in his approach between technology and business, his ideas and talks continue to shape infrastructure at scale.  I will spare people the story of how I hired him away from his previous employer, but if you ever catch me at a conference, it’s a pretty funny story. Not to mention the fact that he is the second-best break dancer in the Data Center industry.
Nic Bustamonte of Google – Nic is another guy who has had some serious impact on the industry as it relates to innovating the running and operating of large-scale facilities.   His focus on the various aspects of the operating environments of large-scale data centers, monitoring, and internal technology has shifted the industry and really set the infancy of DCIM in motion.   Yes, BMS systems have been around forever, and DCIM is the next iteration and blending of that data, but his early work here has continued to influence thinking around the industry.
Arne Josefsberg of ServiceNow – Today Arne is the CTO of ServiceNow, focusing on infrastructure and management for enterprises and the big players alike, and if their overall success is any measure, he continues to impact the industry through results.  He is *THE* guy who had the foresight to build an organization able to adapt to this growing change of building and operating at scale.   He is the architect of building an amazing team that would eventually change the industry.
Joel Stone of Savvis/CenturyLink – Previously the guy who ran global operations for Microsoft, he has continued to drive excellence in operations at Global Switch and now at Savvis.   An early adopter and implementer of blending facilities and IT organizations, he mastered issues a decade ago that most companies are still struggling with today.
Sean Farney of Ubiquity – Truly the first data center professional who ever had to productize and operationalize data center containers at scale.   Sean has recently taken on the challenge of diversifying data center site selection and placement at Ubiquity, repurposing old neighborhood retail spaces (Sears, etc.).   Given the general challenges of finding places with a confluence of large-scale power and network, this approach may prove to be quite interesting as markets continue to drive demand.
Chris Brown of Opscode – One of the chief automation architects during my time at Microsoft, he has moved on to become the CTO of Opscode.  Everyone on the planet who is adopting and embracing DevOps has heard of, and is probably using, Chef.  In fact, if you are doing any kind of automation at large scale, you are likely using his code.
None of these people would be comfortable with the attention, but I do feel credit should be given to these amazing individuals who are changing our industry every day.    I am so very proud to have worked the trenches with these people. Life is always better when you are surrounded by those who challenge and support you, and in my opinion these folks have taken it to the next level.
\Mm

And the winner is… The Results of my Linux Laptop Search


After I posted about my personal exploration into purchasing a pre-built, no-assembly-required Linux laptop, I have to admit that I was a bit overwhelmed at the response to the post.  In fact, I have been inundated with emails, private messages, direct messages, and just about every other communication method you could think of, all asking me to post the results of my search and reveal which laptop I ended up going with.

While I purchased the machine weeks ago, I did not want to simply answer the question with a slight note or addendum saying I ended up buying Brand X. I think everyone would agree that the experience does not really end at the purchase.  I wanted to make sure I had some quality time on the machine and could give some perspective after some real-world, heavy-duty use.

So without further ado: after some considerable analysis, I ended up purchasing the System 76 Gazelle Professional.   The specs on my specific purchase are listed below:

System 76 Gazelle

  • Base System
  • Ubuntu 13.04
  • 15.6” 1080p Full High Definition ColorPro IPS Display with a Matte Surface
  • Intel High Definition Graphics 4600
  • 4th Generation Intel® Core™ i7-4900MQ Processor ( 2.8 GHz 8MB L3 Cache – 4 Cores plus Hyperthreading )
  • 16 GB Dual Channel DDR3 SDRAM at 1600MHz – 2 X 8 GB
  • 960 GB Crucial M500 Series Solid State Drive
  • 8X DVD±R/RW/4X +DL Super-Multi Drive
  • Intel Centrino Advanced-N 6235 – 802.11A/B/G/N Wireless LAN + Bluetooth
  • US Keyboard

Please keep in mind that the reason for the purchase was to have an everyday Linux machine that I could use for a mix of work, pleasure, and hobby.   I was not aiming to build the greatest development machine, or a gaming machine, or anything of the like.   Although I would argue, after considerable use, that this machine could be used in any of those configurations and perform exceptionally. But I am getting ahead of myself.

 

The Ordering Process

I ordered my machine through the website, which offers a pretty standard click-through configurator.  At the end of the process, after submitting all my payment information, I got the confirmation from System 76 pretty quickly.   I was also told that due to the unavailability of some of the parts it would not ship for at least two weeks.   The instant-satisfaction guy in me was disappointed, but having been around this industry long enough, I know this happens a lot.   To my surprise, I did not have to wait very long.  I got a note from them letting me know that my machine actually shipped out sooner than expected.   THAT is something that does not usually happen.

I should probably break away to let you know that the customer service experience during the order process was exceptional (in my opinion).  Upon ordering the machine, you get a link to your order that constantly updates your status.  It tracks when you created your configuration, when you ordered it, and when the teams at System76 start building it, and it even features a ‘chat’ mechanism in case you have any questions.  It’s not really live chat, but you can send questions, comments, or converse with them as part of the order process, and they actually respond back to you fairly quickly.  All communication is tracked, whether System 76 “standard messages” (like ‘your system has started being assembled’ or ‘your order has shipped’) or any interactions you may have had with a customer service person.   The order interface also keeps track of all serial numbers associated with your machine, for ease of use later.

 


The Arrival

My machine arrived at my home in quick order, and I was very happy to see the quality of the shipping and related protection.  To be honest, this was a bit of a concern, as I had never ordered from these folks before, and I have had issues with shipping quality when ordering “clones” or “non-name brands” in the past.   But System76 did an outstanding job, rivaling if not hands-down beating the “bigger guys” in this department.  It may seem like a weird thing to comment on, but when you spend that kind of money on a machine, it’s a terrible feeling if you cannot play with it immediately.

Regardless of how incredible the shipping material was, it was no match for my fingers as I tore into the box and removed all of it to get at the goodies inside. Please keep in mind that what you don’t see in the pictures below is the outer shipping box that the laptop’s own box arrived in!

[Unboxing photos]

 

Once free from its plastic and cardboard prison, the machine booted right up, quickly and quietly (I love the SSD!).  Now that it was operational, I got to work ensuring that the machine was well acquainted with its new home.   That basically involved getting connected to my home networks, creating SSH keys, getting access to the servers and services in my home (I have my own mini data center in the house), mounting all of my cloud-based storage locations, downloading and installing the software that I use most often, and getting connectivity to the variety of household peripherals scattered about the place.
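For the curious, the SSH piece boils down to something like this. A quick sketch of what I did by hand at the terminal (the hostnames are made up, and your list of home servers will obviously differ):

```python
import subprocess
from pathlib import Path

# Hypothetical hosts in my home mini data center; the names are invented.
HOME_SERVERS = ["mediaserver.local", "nas.local", "devbox.local"]

key = Path.home() / ".ssh" / "id_rsa"

# Generate a keypair once (passphrase omitted here purely for brevity).
if not key.exists():
    subprocess.run(
        ["ssh-keygen", "-t", "rsa", "-b", "4096", "-N", "", "-f", str(key)],
        check=True,
    )

# Push the public key to each server so future logins skip the password.
for host in HOME_SERVERS:
    subprocess.run(["ssh-copy-id", "-i", f"{key}.pub", host], check=True)
```

Nothing fancy, but once the keys are on every box, mounting shares and scripting against the household fleet gets a lot less painful.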

The configurations were pretty straightforward, and everything configured with ease. Everything, of course, except my scanner.   I am not sure why scanners have always been trouble for me, but even in the Windows world they are a pain in the….  I could probably do another whole post on getting my darn scanner to work.

Anyway, it all ended up well, and it has been my main machine ever since.

Some of the more astute readers may have noticed that I bought this machine when the standard operating system was Ubuntu 13.04.  I had completed all of the above configurations and had been using the machine heavily for some time, so I had a slight panic moment when Saucy Salamander (Ubuntu 13.10) went to general availability. I set aside another weekend, thinking that all of my hard work would have to be redone when I did the upgrade.   Then, in a very “Windows-like” notification, I was given the option to automatically upgrade.  Intrigued, I clicked through to proceed and was pleasantly surprised when my machine upgraded with NO ADDITIONAL configuration from me.  It just worked.   It saved my weekend (or at least my Saturday).

I have been banging away on the machine solidly for over a month, and I have to say that I am extremely satisfied.  My only real complaint is that I wish it had a better keyboard.  Not that the keyboard itself is bad; it’s pretty standard, actually.   I just think the industry as a whole could learn a few things from Lenovo about building a really great laptop keyboard.

It’s definitely a powerful little workhorse of a machine!

\Mm

[This post is a follow up to my initial post looking for a pre-built Linux Laptop]

Google Purchase of Deep Earth Mining Equipment in Support of ‘Project Rabbit Ears’ and Worldwide WIFI availability…

(10/31/2013 – Mountain View, California) – Close examination of Google’s data center construction-related purchases has revealed the procurement of large-scale deep-earth mining equipment.   While the actual need for the deep-mining gear is unclear, many speculate that it has to do with a secretive internal project that has come to light, known only as Project: Rabbit Ears.

According to sources not at all familiar with Google technology infrastructure strategy, Project Rabbit Ears is the natural outgrowth of Google’s desire to provide ubiquitous infrastructure worldwide.   On the surface, these efforts seem consistent with other incorrectly speculated projects, such as Project Loon, Google’s attempt to provide Internet services to residents in the upper atmosphere through the use of high-altitude balloons, and a project that has only recently become visible and the source of much public debate, known as ‘Project Floating Herring’: a significantly sized floating barge with modular container-based data centers that has been spied sitting in the San Francisco Bay.

“You will notice there is no power or network infrastructure going to any of those data center shipping containers,” said John Knownothing, chief Engineer at Dubious Lee Technical Engineering Credibility Corp.  “That’s because they have mastered wireless electrical transfer at the large multi-megawatt scale.” 

Real estate rates in the Bay Area have increased almost exponentially over the last ten years, making the construction of large-scale data center facilities an expensive endeavor.  During the same period, the Port of San Francisco has unfortunately seen a steady decline in its import/export trade.  After a deep analysis, it was discovered that docking fees in the Port of San Francisco are considerably undervalued and will provide Google with an incredibly cheap real estate option in one of the most expensive markets in the world.

It will also allow them to expand their use of renewable energy through the use of tidal power generation built directly into the barge’s hull.   “They may be able to collect as much as 30 kilowatts of power sitting on the top of the water like that,” continues Knownothing, “and while none of that technology is actually visible, possible, or exists, we are certain that Google has it.”

While the technical intricacies of the project fascinate many, the initiative does have its critics, like Compass Data Center CEO Chris Crosby, who laments the potential social aspects of this approach: “Life at sea can be lonely, and no one wants to think about what might happen when a bunch of drunken data center engineers hit port.”  Additionally, Crosby mentions the potential for a backslide on human rights: “I think we can all agree that the prospect of being flogged or keel-hauled really narrows down the possibility for those outage-causing human errors. Of course, this sterner level of discipline does open up the possibility of mutiny.”

However, the public launch of Project Floating Herring will certainly need to await the delivery of the more shrouded Project Rabbit Ears, for various reasons.  Most specifically, the primary reason for the development of this technology is so that Google can ultimately drive the floating facility out past twelve miles into international waters, where it can dodge all national, regional, and local taxation, as well as the safe harbor and privacy legislation of any country or national entity on the planet that would use its services.   To realize that vision in the current network paradigm, Google would need exceedingly long network cables to attach to Network Access Points and Carrier Connection points as the facility drives through international waters.

This is where Project Rabbit Ears becomes critical to the Google strategy.   Making use of the deep-earth mining equipment, Google will be able to drill deep into the Earth’s crust, into the mantle, and ultimately build a large Network Access Point near the Earth’s core.  This Planetary WIFI solution will be centrally located to cover the entire earth without the use of regional WIFI repeaters.  Google’s floating facilities could then gain access to unlimited bandwidth and provide yet another consumer-based monetization strategy for the company.

Knownothing also speculates that such a move would allow Google to make use of enormous amounts of free geothermal power and almost single-handedly become the greenest power user on the planet.   Speculation also abounds that Google could then sell that power through its as-yet-uninvented large-scale multi-megawatt wireless power transfer technology, as unseen on its floating data centers.

Much of the discussion around this kind of technology innovation driven by Google has been given credible amounts of veracity and discussed by many seemingly intelligent technology news outlets and industry organizations who should intellectually know better, but prefer not to acknowledge the inconvenient lack of evidence.

 

\Mm

Editor’s Note: I have many close friends in the Google infrastructure organization and firmly believe that they are doing some amazing, incredible work in moving the industry along, especially solving problems at scale.   What I find simply amazing is how often, in the search for innovation, our industry creates things that may or may not be there and convinces itself so firmly that they exist.

2014: The Year Cloud Computing and Internet Services Will Be Taxed. A.K.A. Je déteste dire ça. Je vous l’avais dit. (“I hate to say it. I told you so.”)

 


It’s one of those times I really hate to be right.  As many of you know, I have been talking about the various grass-roots efforts afoot across many of the EU member countries to start driving a more significant tax regime on Internet-based companies.  My predictions for the last few years have been more cautionary tales, based on what I saw happening from a regulatory perspective on a much smaller scale, country to country.

Today’s Wall Street Journal has an article discussing France’s move to begin taxing Internet-related companies who derive revenue from users and companies across the entirety of the EU, holding those companies responsible to the tax base in each country.   This means such legislation is likely to become quite fractured and tough for Internet companies to navigate.  The French proposition asks the European Commission to draw up proposals by the spring of 2014.

This is likely to have a very interesting impact (read: cost increases) across just about every aspect of Internet and Cloud Computing resources.  From a business perspective, this is going to increase costs, which will likely be passed on to consumers in small but interesting ways.  Internet advertising will need to be differentiated on a country-by-country basis, and advertisers will end up having different cost structures. Cloud Computing companies will DEFINITELY need to understand where customer instances were running, and whether or not they were making money.  Potentially more impactful, customers of Cloud Computing may be held to account for tax liabilities they did not know they had!  Things like data center site selection are likely to become even more complicated from a tax-analysis perspective, as countries with higher populations may become no-go zones (perhaps) or require the passage of even more restrictive laws.

It’s not like the seeds of this haven’t been around since 2005; I think most people just preferred to turn a blind eye as the seed sprouted into a full-fledged tree.   Going back to my Cat and Mouse papers from a few years ago: the Cat has caught the mouse, and it’s now the mouse’s move.

\Mm

 

Author’s Note: If you don’t have a subscription to the WSJ, All Things Digital did a quick synopsis of the article here.

Looking to buy a Linux laptop are you?

I recently underwent an interesting adventure trying to purchase a Linux-based laptop, which I thought I would share, as it led me to some interesting revelations about the laptop industry in general.  First, let me admit that my home is kind of like the United Nations for operating systems and tech platforms.   I have Windows-based machines, Macbooks, Ubuntu and Mint flavors of Linux, and my home servers are a mix of Microsoft, CentOS, and Fedora.

I recently decided to go out and look for a high-performance Linux laptop to use for home purposes.  Up until this decision, I did what everyone probably does when they want to use Linux: go out and download the latest distribution (depending upon whether they prefer .DEB or RPM variants) and install it on an old or existing machine in their home.    I have done this over and over again.  This time, however, I was determined to go out and purchase a ready-made Linux laptop.  My love affair with Unix- and Linux-based laptops began earlier in my career, when I ran into an engineer from Cisco who was sporting a clunky (but at the time amazing) HPUX-based laptop, an RDI Tadpole. It was a geek thing of beauty and I was hooked.

If there is a name brand in Linux laptops, it’s System 76.  These guys have been building special-purpose systems since 2005.  They have three models to choose from, and all of them are rock solid.  Now, to say they are a ‘name brand’ is a little bit misleading.  The hardware is generally sourced from firms like Clevo or MSI.  But what makes System 76 so special is that they really do try to replicate the normal laptop (or desktop) purchasing and product experience that you are likely to find with traditional Windows-based machines.  They ensure optimal driver support for the hardware and generally deliver a very good customer experience.  I have always been jealous of people with System 76 gear when I have seen them at the odd trade show. It’s generally a rare sighting, because let’s face it…with the proliferation of Windows-based machines and Macbooks, seeing a Linux-based laptop environment brings out my inner geek.

Another brand that I occasionally see around is ZaReason.   Like System 76, they custom-build their laptops and pre-load Linux as well.  Where System 76 is limited to, and specialized in, Ubuntu loads, ZaReason gives you many more options.

Other firms, like Linux Certified, take a best-of-breed approach, trying to find the balance between purchasing their own hardware and mixing it in with other manufacturers’ platforms, like a Linux-optimized Lenovo ThinkPad. They also give you a choice of which flavor of Linux you would prefer.

Now, you could also go out and purchase all of the components yourself from Clevo, MSI, and others to build your own model, but as I was expressly going out to buy one, and not ‘build’ one, I opted out of that effort.

But the Linux Certified approach got me thinking about what the ‘big guy’ manufacturers offer in terms of Linux-based laptops.   The answer, in short, was nada.  Not off the shelf, at least.   I had remembered that Dell was making a purpose-built Linux machine and started digging in.  I found all kinds of great references to it: the XPS 13 Developer Edition.   However, when I went online to the Dell website to dig in a bit more, I found that the XPS 13 only had Microsoft-based options for the operating system.  I searched high and low, and somehow managed to get linked to www.Dell.Co.Uk, where, lo and behold, I found the XPS 13 Developer Edition.  Apparently they are only selling it outside of the United States.   Huh?  This piqued my interest, so I started up a chat on the Dell web page, which confirmed my suspicion.  I had secretly hoped that there was some super-secret way to get one here in the United States. But apparently not.

[Screenshot of my chat with Dell]

 

To be honest, this kind of made me mad.  Why can people in Europe get it and we can’t?  It probably sounds a lot like whining, as I am sure there are tons of things Americans can get that Europeans can’t, but for now I am atop my high horse and I get to complain.   Essentially I could get it; however, like the old adage, there would be Some Assembly Required: purchase the hardware, blow away all of the preinstalled Microsoft software, and install over it.   Surely HP must have a version with Linux. Nope.  What about Lenovo?  Nope, checked that too.  As I gazed at the ThinkPad website, I resigned myself to the position that I would also need to think about the ‘some assembly required’ category.  Truth be told, having owned them in the past, I absolutely love the Lenovo keyboard and solid case construction.  I do not think there is a better one anywhere on the planet.

So I created my Some Assembly Required list as well, and it was then that I realized two things.  First, if I wanted anything in this category, it was really no different than what I had been doing for the previous ten years.  It really highlights the need for better partnerships between the Linux community at large and the hardware manufacturers if they ever want to break into the consumer markets.   Most non-professionals I know would never go through that kind of trouble, nor do they have as much affinity for the operating system as I have.  The second thing I realized was that in going this route, you essentially have to pay a ‘TAX’ to Microsoft.  Every laptop you buy like this comes with Windows 7 or Windows 8, which means that you are paying for it somewhere in the price of the equipment.   At first I was irked, but what’s more interesting is that, generally speaking (with the exception of the Lenovo configuration), the price points for the ‘assembly’ options were lower by a significant margin than their ‘optimized Linux’ counterparts.  Some of this was reflected in slight configuration differences, which led me to believe that the Microsoft ‘tax’ added little value to the machines overall.  Here’s an example of my process:

[Price comparison: ‘some assembly required’ Windows machines vs. pre-built Linux laptops]

Not all of the configurations above are like-for-like, but it gives you a flavor.

My revelations?  I would have thought that by now the OEM-to-Microsoft connection would have seriously weakened.  At least weakened to the point of offering a little more variety in operating systems, or at least the ability to purchase equipment without an operating system.  It could also be a factor of the people I hang around with and what we think is cool.  I guess once a hobbyist, always a hobbyist.

\Mm

[You can find out which machine I ended up getting in my follow up post here]

Lots of interest in the MicroDC, but do you know what I am getting the most questions about?

 Scott Killian of AOL talks about the MicroDC

Last week I put up a post about how AOL.com has 25% of all traffic now running through our MicroDC infrastructure.   There was a great follow-up post by James LaPlaine, our VP of Operations, on his blog, Mental Effort, which goes into even greater detail.   While many of the email inquiries I get have been based around the technology itself, surprisingly a large majority of the notes have been questions around how to make your software, applications, and development efforts ready for such an infrastructure, and what the timelines for realistically doing so would be.

The general response, of course, is that it depends.  If you are a web-based platform or property focused solely on Internet-based consumers, or a firm that needs a diversified presence in different regions without the hefty price tag of renting and taking down additional space, this may be an option.  However, many enterprise applications have been written in a way that is highly dependent upon localized infrastructure and short application-level latencies, and they lack adequate scaling.  So for more corporate data center applications this may not be a great fit.  It will take some time for those big traditional application firms to be able to truly build out their infrastructure to work in an environment like this (they may never do so).   I suspect most will take an easier approach and try to ‘cloudify’ their own applications and run them within their own infrastructure or data centers under their control.   This essentially allows them to control the access portion of users’ needs, while continuing to rely on the same kinds of infrastructure you might have in your own data center to support it.   It’s much easier to build a web-based application which then connects to a traditional IT-based environment, than to truly build out infrastructure capable of accommodating scale, as the sketch below illustrates.   I am happy to continue answering questions as they come up, but as I had an overwhelming response of questions about this, I thought I would throw something quick up here that will hopefully help.
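To make that last point a bit more concrete, here is a rough sketch of the ‘web front end over traditional IT’ pattern (the names, URL, and numbers are all illustrative, not our actual stack): the front end keeps no local state and treats its backend as remote and failure-prone, which is about the minimum bar for running in a scattered MicroDC footprint.

```python
import time
import urllib.request

# Illustrative only: a central "traditional IT" backend that
# geographically scattered front ends call home to.
BACKEND = "http://backend.corp.example.com/api/page"

_cache = {"body": b"", "fetched_at": 0.0}

def render_page(timeout=0.5, max_staleness=300):
    """Stateless front-end handler: prefer fresh data, but serve a cached
    copy rather than erroring out when the backend is slow or unreachable."""
    try:
        with urllib.request.urlopen(BACKEND, timeout=timeout) as resp:
            _cache["body"] = resp.read()
            _cache["fetched_at"] = time.time()
    except OSError:
        # Backend unreachable: fall back to stale content if recent enough.
        if time.time() - _cache["fetched_at"] > max_staleness:
            raise
    return _cache["body"]
```

The hard part, of course, is everything this sketch hand-waves away: session state, databases, advertising pipelines, and all the other dependencies that assume the infrastructure is local and always up.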

 

\Mm

On Micro Datacenters, Sandy, Supercomputing 2012, and Coding for Containerized Data Centers….


As everyone is painfully aware, last week the United States saw the devastation caused by Superstorm Sandy.   My original intention was to talk about yet another milestone in our Micro Data Center approach, but as the storm slammed into the East Coast, I felt it was probably a bad time to talk about achieving something significant, especially as people were suffering through the storm’s aftermath.  In fact, after the storm, AOL kicked off an incredible supplies drive and sent truckloads of goods up to the worst of the affected areas.

So here we are a week after the storm, and while people are still in need and suffering, it is clear that the worst is over and the cleanup and healing have begun.   It turns out that Superstorm Sandy also allowed us to test another interesting case in the journey of the Micro Data Center, which I will touch on below.

25% of ALL AOL.COM Traffic runs through Micro Data Centers

I have talked about the potential value of our use of Micro Data Centers and the pure agility and economics the platform will provide for us.   Up until this point, we had used this technology in pockets; think of our explorations as focusing on beta and demo environments.  But that all changed in October, when we officially flipped the switch and began taking production traffic for AOL.com with the Micro Data Center.  We are currently (and have been since flipping the switch) running about 25% of all traffic coming to our main web site.   This is an interesting achievement in many ways.  First, from a performance perspective, we are manually limiting the platform (it could do more!) to ~65,000 requests per minute and a traffic volume of about 280 Mbits per second.   To date I haven’t seen many people post performance statistics about applications running in modular data centers, so hopefully this is relevant and interesting to folks in terms of the volume of load an approach such as this can take.   We recently celebrated this at an All-Hands, with an internal version of our MDC plugged in at the conference facility.  To prove our point, we added it to the global pool of capacity for AOL.com and started taking production traffic right there in the conference room.   This demonstrates in large part the value, agility, and mobility a platform like this can bring to bear.
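To put those numbers in perspective, a little back-of-envelope math (my own rounding against the figures quoted above, not an official measurement):

```python
# Back-of-envelope math from the figures quoted above.
reqs_per_min = 65_000      # the manual cap on the platform
mbits_per_sec = 280        # observed traffic volume

reqs_per_sec = reqs_per_min / 60                           # ~1,083 req/s
bytes_per_req = mbits_per_sec * 1_000_000 / 8 / reqs_per_sec

print(f"~{reqs_per_sec:,.0f} requests/second")
print(f"~{bytes_per_req / 1024:.0f} KiB average per response")   # ~32 KiB
```

In other words, even capped, the MDC is absorbing a steady ~1,100 requests per second at an average of roughly 32 KiB per response.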

Scott Killian, AOL’s data center guru, talks about the deployment of AOL’s Micro Data Center. An internal version went ‘live’ during the talk.

 

As I mentioned before, Superstorm Sandy threw us another curveball as the hurricane crashed into the Mid-Atlantic.   While Virginia was not hit anywhere near as hard as New York and New Jersey, there were incredible sustained winds, tumultuous rains, and storm-related damage everywhere.  Through it all, our outdoor version of the MDC weathered the storm just fine and continued serving traffic for AOL.com without fail.

 

This kind of Capability is not EASY or Turn-Key

That’s not to say there isn’t a ton of work to do to get an application to work in an environment like this.   If you take the problem space at its different levels, whether it be DNS, load balancing, network redundancy, configuration management, underlying application-level timeouts, or system dependencies like databases and other information stores, the non-infrastructure work and coding is not insignificant.   There is a huge amount of complexity in running a site like AOL.com: lots of interdependencies, sophistication, advertising-related collection and distribution, and the like.   It’s safe to say that this is not as simple as throwing an Apache/Tomcat instance into a VM.

I have talked for quite a while about what Netflix engineers originally coined as Chaos Monkeys: the ability, development paradigm, or even rogue processes that let your applications survive significant infrastructure and application-level outages.  It’s essentially taking the redundancy out of the infrastructure and putting it into the code. While extremely painful at the start, the long-term savings are proving hugely beneficial.    For most companies, this is still something futuristic, very far out there.  They may be beholden to software manufacturers and developers to start thinking this way, which may take a very, very long time.  Infrastructure is the easy way to solve it.   It may be easy, but it’s not cheap.  Nor, if you care about the environmental angle, is it very ‘sustainable’ or green.   Limit the infrastructure; limit the waste.   While we haven’t really thought about rolling this up into our environmental positions, perhaps we should.
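As a trivial illustration of what ‘redundancy in the code’ means (a sketch with made-up endpoints, not AOL’s actual client code): instead of assuming one highly available address, the caller assumes any individual site, MDC or otherwise, can vanish at any moment, and simply routes around it.

```python
import random
import urllib.request

# Made-up endpoints: a couple of Micro Data Centers plus a traditional site.
REPLICAS = [
    "http://mdc-east.example.com",
    "http://mdc-west.example.com",
    "http://colo-main.example.com",
]

def fetch_with_failover(path, timeout=2.0):
    """Try replicas in random order; a single site failing is routine,
    and only the loss of every replica is a real outage."""
    last_err = None
    for base in random.sample(REPLICAS, len(REPLICAS)):
        try:
            with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                return resp.read()
        except OSError as err:    # timeout, refused connection, DNS failure...
            last_err = err        # note it and move on to the next site
    raise RuntimeError(f"all replicas failed: {last_err}")
```

Multiply that mindset across DNS, load balancing, data stores, and deployment, and you get a feel for why this capability is neither easy nor turn-key.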

The point is that getting to this level of redundancy is going to take work, and to that end it will continue to be a regulator or anchor slowing broader adoption of more modular approaches.  But at least in my mind the future is set; directionally, it will be hard to ignore the economics of this type of approach for long.   Of course, as an industry we need to start training or re-training developers to think in this kind of model: to build code in such a way that it takes into account the Chaos Monkey potential out there.

 

Want to see One Live?


We have been asked to provide an AOL Micro Data Center for the Supercomputing 2012 conference next week in Salt Lake City, Utah, with our partner Penguin Computing.  If you want to see one of our internal versions live and up close, feel free to stop by and take a look.  Jay Moran (my Distinguished Engineer here at AOL) and Scott Killian (the leader of our data center operations teams) will be onsite to discuss the technologies and our use cases.

 

\Mm