Looking to buy a Linux laptop, are you?

I recently went on an interesting adventure trying to purchase a Linux-based laptop, and I thought I would share it, as it led me to some interesting revelations about the laptop industry in general.  First, let me admit that my home is kind of like the United Nations of operating systems and tech platforms.  I have Windows-based machines, MacBooks, Ubuntu and Mint flavors of Linux, and my home servers are a mix of Microsoft, CentOS, and Fedora.

I recently decided to go out and look for a high-performance Linux laptop for home use.  Up until this decision I did what everyone probably does when they want to use Linux: download the latest distribution, depending upon whether you prefer the .deb or .rpm variants, and install it on an old or existing machine at home.  I have done this over and over again.  This time, however, I was determined to go out and purchase a ready-made Linux laptop.  My love affair with Unix- and Linux-based laptops began earlier in my career, when I ran into an engineer from Cisco who was sporting a clunky (but at the time amazing) HP-UX based laptop. It was a geek thing of beauty and I was hooked.

[Image: RDI Tadpole]

If there is a name brand in Linux laptops, it's System76.  These guys have been building special-purpose systems since 2005.  They have three models to choose from, and all of them are rock solid.  Now, to say they are a 'name brand' is a little bit misleading.  The hardware is generally sourced from firms like Clevo or MSI.  But what makes System76 so special is that they really do try to replicate the normal laptop (or desktop) purchasing and product experience that you are likely to find with traditional Windows-based machines.  They ensure optimal driver support for the hardware and generally deliver a very good customer experience.  I have always been jealous of people with System76 gear when I have seen them at the odd trade show. It's generally a rare sighting because, let's face it, with the proliferation of Windows-based machines and MacBooks, seeing a Linux-based laptop environment brings out my inner geek.

Another brand that I occasionally see around is ZaReason.  Like System76, they custom-build their laptops and pre-load Linux as well.  Where System76 is limited to and specializes in Ubuntu loads, ZaReason gives you many more options.

Other firms, like Linux Certified, take a best-of-breed approach, trying to strike a balance between building on their own hardware and mixing in platforms created by other manufacturers, such as a Linux-optimized Lenovo ThinkPad. They also give you a choice of which flavor of Linux you would prefer.

Now, you could also go out and purchase all of the components yourself from Clevo, MSI, and others to build your own model, but as I was expressly going out to buy one, not 'build' one, I opted out of that effort.

But the Linux Certified approach got me thinking about what the 'Big Guy' manufacturers offer in terms of Linux-based laptops.  The answer, in short, was nada.  Not off the shelf, at least.  I remembered that Dell was making a purpose-built Linux machine and started digging in.  I found all kinds of great references to it - the XPS 13 Developer Edition.  However, when I went to the Dell website to dig in a bit more, I found that the XPS 13 only had Microsoft-based options for the operating system.  I searched high and low, and somehow managed to get linked to www.Dell.Co.Uk, where, lo and behold, I found the XPS 13 Developer Edition.  Apparently they are only selling it outside of the United States.  Huh?  This piqued my interest, so I started up a chat on the Dell web page, which confirmed my suspicion.  I had secretly hoped that there was some super-secret way to get one here in the United States. But apparently not.

[Image: dell_xps_chat – transcript of the Dell web chat]

 

To be honest, this kind of made me mad.  Why can people in Europe get it when we can't?  It probably sounds a lot like whining, as I am sure there are tons of things Americans can get that Europeans can't, but for now I am atop my high horse and I get to complain.  Essentially, I could get it.  However, like the old adage, there would be Some Assembly Required: purchase the hardware, blow away all of the preinstalled Microsoft software, and install over it.  Surely HP must have a version with Linux.  Nope.  What about Lenovo?  Nope, checked that too.  As I gazed at the ThinkPad website, I resigned myself to the fact that I would also need to think about the 'some assembly required' category.  Truth be told, having owned them in the past, I absolutely love the Lenovo keyboard and solid case construction.  I do not think there is a better one anywhere on the planet.

So I created my Some Assembly Required list as well, and it was then that I realized two things.  First, if I wanted anything in this category, it was really no different from what I had been doing for the previous ten years.  It really highlights the need for better partnerships between the Linux community at large and the hardware manufacturers if they ever want to break into the consumer markets.  Most non-professionals I know would never go through that kind of trouble, nor do they have as much affinity for the operating system as I have.  The second thing I realized was that in going this route, you essentially have to pay a 'tax' to Microsoft.  Every laptop you buy like this comes with Windows 7 or Windows 8, which means that you are paying for it somewhere in the price of the equipment.  At first I was irked, but what's more interesting is that, generally speaking (with the exception of the Lenovo configuration), the price points for the 'assembly' options were lower by a significant margin than their 'optimized Linux' counterparts.  Some of this was reflected in slight configuration differences.  This led me to believe that the Microsoft 'tax' added little value to the machines overall.  Here's an example of my process:

[Image: comparison_linux – price comparison of the configurations]

Not all of the configurations above are like-for-like, but it gives you a flavor.

My revelations?  I would have thought that by now the OEM-to-Microsoft connection would have seriously weakened, at least to the point of offering a little more variety in operating systems, or the ability to purchase equipment without an operating system at all.  It could also be a factor of the people I hang around with and what we think is cool.  I guess once a hobbyist, always a hobbyist.

\Mm

[You can find out which machine I ended up getting in my follow up post here]

Lots of interest in the MicroDC, but do you know what I am getting the most questions about?

 Scott Killian of AOL talks about the MicroDC

Last week I put up a post about how AOL.com now has 25% of all traffic running through our MicroDC infrastructure.  There was a great follow-up post by James LaPlaine, our VP of Operations, on his blog Mental Effort, which goes into even greater detail.  While many of the email inquiries I get have been based around the technology itself, surprisingly, a large majority of the notes have been questions around how to make your software, applications, and development efforts ready for such an infrastructure, and what the timelines for realistically doing so would be.

The general response, of course, is that it depends.  If you are a web-based platform or property focused solely on Internet-based consumers, or a firm that needs a diversified presence in different regions without the hefty price tag of renting and taking down additional space, this may be an option.  However, many enterprise applications have been written in a way that is highly dependent upon localized infrastructure, assumes short application-level latency, and lacks adequate scaling.  So for more corporate data center applications this may not be a great fit.  It will take some time for the big traditional application firms to truly build out their software to work in an environment like this (they may never do so).  I suspect most will take an easier approach and try to 'cloudify' their own applications and run them within their own infrastructure or data centers under their control.  This essentially allows them to control the access portion of users' needs, but continue to rely on the same kinds of infrastructure you might have in your own data center to support it.  It's much easier to build a web-based application which then connects to a traditional IT environment than to truly build out infrastructure capable of accommodating scale.  I am happy to continue to answer questions as they come up, but as I had an overwhelming response of questions about this, I thought I would throw something quick up here that will hopefully help.

 

\Mm

On Micro Datacenters, Sandy, Supercomputing 2012, and Coding for Containerized Data Centers….


As everyone is painfully aware, last week the United States saw the devastation caused by Superstorm Sandy.  My original intention was to talk about yet another milestone with our Micro Data Center approach, but as the storm slammed into the East Coast I felt it was probably a bad time to talk about achieving something significant, especially as people were suffering through the storm's aftermath.  In fact, after the storm, AOL kicked off an incredible supplies drive and sent truckloads of goods up to the worst of the affected areas.

So, here we are a week after the storm, and while people are still in need and suffering, it is clear that the worst is over and the cleanup and healing have begun.  It turns out that Superstorm Sandy also allowed us to test another interesting case in the journey of the Micro Data Center, which I will touch on below.

25% of ALL AOL.COM Traffic runs through Micro Data Centers

I have talked about the potential value of our use of Micro Data Centers and the pure agility and economics the platform will provide for us.  Up until this point we had used this technology in pockets; think of our explorations as focusing on beta and demo environments.  But that all changed in October, when we officially flipped the switch and began taking production traffic for AOL.com with the Micro Data Center.  We are currently (and have been since flipping the switch) running about 25% of all traffic coming to our main web site.  This is an interesting achievement in many ways.  First, from a performance perspective, we are manually limiting the platform (it could do more!) to roughly 65,000 requests per minute and a traffic volume of about 280 Mbit/s.  To date I haven't seen many people post performance statistics about applications in modular use, so hopefully this is relevant and interesting to folks in terms of the volume of load an approach such as this can take.  We recently celebrated this at an All-Hands meeting with an internal version of our MDC plugged into the conference room.  To prove our point, we added it to the global pool of capacity for AOL.com and started taking production traffic right there at the conference facility.  This proves in large part the value, agility, and mobility a platform like this can bring to bear.
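For context, it may help to translate those throttle figures into per-request terms. The arithmetic below is back-of-envelope only, derived purely from the two numbers quoted above (65,000 requests per minute and ~280 Mbit/s), not from any internal measurements:

```python
# Back-of-envelope translation of the quoted MDC throttle figures.
requests_per_minute = 65_000
traffic_mbit_per_sec = 280

requests_per_sec = requests_per_minute / 60              # ~1,083 req/s
bits_per_request = traffic_mbit_per_sec * 1_000_000 / requests_per_sec
kib_per_request = bits_per_request / 8 / 1024            # ~32 KiB per request

print(f"{requests_per_sec:.0f} req/s, ~{kib_per_request:.0f} KiB per request")
```

In other words, the cap works out to roughly a thousand requests per second, with an average payload on the order of a few tens of kilobytes per request.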

Scott Killian, AOL's data center guru, talks about the deployment of AOL's Micro Data Center. An internal version went 'live' during the talk.

 

As I mentioned before, Superstorm Sandy threw us another curveball as the hurricane crashed into the Mid-Atlantic.  While Virginia was not hit anywhere near as hard as New York and New Jersey, there were incredible sustained winds, tumultuous rains, and storm-related damage everywhere.  Through it all, our outdoor version of the MDC weathered the storm just fine and continued serving traffic for AOL.com without fail.

 

This kind of Capability is not EASY or Turn-Key

That's not to say there isn't a ton of work to do to get an application to work in an environment like this.  If you take the problem space at its different levels, whether it be DNS, load balancing, network redundancy, configuration management, underlying application-level timeouts, or system dependencies like databases and other information stores, the non-infrastructure work and coding is not insignificant.  There is a huge amount of complexity in running a site like AOL.com: lots of interdependencies, sophistication, advertising-related collection and distribution, and the like.  It's safe to say that this is not as simple as throwing an Apache/Tomcat instance into a VM.

I have talked for quite a while about what Netflix engineers originally coined as Chaos Monkeys: the ability, development paradigm, or even rogue processes that let your applications survive significant infrastructure and application-level outages.  It's essentially taking the redundancy out of the infrastructure and putting it into the code. While extremely painful at the start, the long-term savings are proving hugely beneficial.  For most companies, this is still something futuristic, very far out there.  They may be beholden to software manufacturers and developers to start thinking this way, which may take a very, very long time.  Infrastructure is the easy way to solve it.  It may be easy, but it's not cheap.  Nor, if you care about the environmental angle, is it very 'sustainable' or green.  Limit the infrastructure; limit the waste.  While we haven't really thought about it in terms of rolling it up into our environmental positions, perhaps we should.
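As a concrete illustration of "redundancy in the code" rather than in the infrastructure, here is a minimal sketch. The endpoint names and the simulated outage are hypothetical, and a real implementation would layer in health checks, backoff, and logging, but the shape is the same: the application itself routes around a dead replica instead of depending on redundant hardware to hide the failure.

```python
import random

def fetch_with_failover(endpoints, fetch):
    """Try endpoints in random order; one dead replica is invisible to the caller."""
    last_error = None
    for endpoint in random.sample(endpoints, k=len(endpoints)):
        try:
            return fetch(endpoint)
        except ConnectionError as err:
            last_error = err  # note the failure and move on to the next replica
    raise RuntimeError("all endpoints failed") from last_error

# Simulate a chaos-monkey-style outage of one of three hypothetical replicas.
dead = {"dc-b"}

def fetch(endpoint):
    if endpoint in dead:
        raise ConnectionError(endpoint)
    return f"response from {endpoint}"

print(fetch_with_failover(["dc-a", "dc-b", "dc-c"], fetch))
```

The caller always gets an answer from a surviving replica; the loss of "dc-b" never surfaces as an error.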

The point is that getting to this level of redundancy is going to take work, and to that end it will continue to be a regulator or anchor slowing down greater adoption of more modular approaches.  But at least in my mind the future is set; directionally it will be hard to ignore the economics of this type of approach for long.  Of course, as an industry we need to start training or re-training developers to think in this kind of model: to build code in such a way that it takes into account the Chaos Monkey potential out there.

 

Want to see One Live?


We have been asked to provide an AOL Micro Data Center for the Supercomputing 2012 conference next week in Salt Lake City, Utah, with our partner Penguin Computing.  If you want to see one of our internal versions live and up close, feel free to stop by and take a look.  Jay Moran (my Distinguished Engineer here at AOL) and Scott Killian (the leader of our data center operations teams) will be on site to discuss the technologies and our use cases.

 

\Mm

Pointy Elbows, Bags of Beans, and a little anthill excavation…A response to the New York Times Data Center Articles

I have been following with some interest the series of articles in the New York Times by Jim Glanz.  The series premiered on Sunday with an article entitled Power, Pollution and the Internet, which was followed up today with a deeper dive into some specific examples.  Today's examples (Data Barns in a Farm Town, Gobbling Power and Flexing Muscle) focused on the Microsoft program, a program with which I have more than some familiarity, since I ran it for many years.  After just two articles, reading the feedback in comments, and seeing some of the reaction in the blogosphere, it is very clear that there is more than a significant amount of misunderstanding, over-simplification, and a lack of detail that I think is probably important.  Before diving in, I want to be very clear that I am not representing AOL, Microsoft, or any other organization; these are my own personal observations and opinions.

As mentioned in both of the articles, I was one of hundreds of people interviewed by the New York Times for this series.  In those conversations with Jim Glanz a few things became very apparent.  First, he has been on this story for a very long time, at least a year.  As far as journalists go, he was incredibly deeply engaged and armed with tons of facts.  In fact, he had a trove of internal emails, meeting minutes, and a mountain of data from government filings that must have taken him months to collect.  Secondly, he had the very hard job of turning this very complex space into a format the uneducated masses can begin to understand.  Therein lies much of the problem: this is an incredibly complex space to communicate to those not tackling it day to day, or who don't understand the technological and regulatory forces involved.  This is not an area or topic that can be distilled down to a sound bite.  If this were easy, there really wouldn't be a story, would there?

At issue for me is that the complexity of the forces involved gets scant attention, with the articles aiming instead for the larger "data centers are big bad energy vampires hurting the environment" story.  That is clearly evident reading through the comments on both of the articles so far, which claim the sources and causes involve everything from poor web page design to government or multi-national conspiracies to corner the market on energy.

So I thought I would take a crack at it, article by article, to shed some light (the kind that doesn't burn energy) on some of the topics and call out where I disagree completely.  In full transparency, the "Data Barns" article doesn't necessarily paint me as a "nice guy."  Sometimes I am.  Sometimes I am not.  I am not an apologist, nor do I intend to be one in this post.  I am paid to get stuff done.  To execute. To deliver.  Quite frankly, the PUD missed deadlines (the progenitor event to my email quoted in the piece), and sometimes people (even utility companies) have to live in the real world of consequences.  I think my industry reputation, my work, and my fundamental stances around driving energy efficiency and environmental conservancy in this industry can stand on their own, both publicly and for those that have worked for me.

There is an inherent irony here: these articles were published both in print and electronically to maximize audience and readership.  To do that, they made "multiple trips" through a data center, and they ultimately reside in one (or more).  They seem to suggest that keeping things online is bad, which cuts against the availability and reach the articles themselves depend on.  Doesn't the New York Times expect to make these articles available online for people to read in the future?  They are posted online already.  Perhaps they expect their microfiche experts to serve the future demand for these articles instead?  I do not think so.

This is a complex ecosystem of users, suppliers, technology, software, platforms, content creators, data (both BIG and small), regulatory forces, utilities, governments, financials, energy consumption, people, personalities, politics, company operating tenets, and community outreach, to name just a few.  On top of managing all these variables, they also have to keep things running with no downtime.

\Mm

The AOL Micro-DC adds new capability

Back in July, I announced AOL's Data Center Independence Day with the release of our new 'Micro Data Center' approach.  In that post I highlighted the terrific work the teams put in to revolutionize our data center approach and align it completely not only with technology goals but with business goals as well.  It was an incredible amount of engineering and work to get to that point, and it would be foolish to think that the work represented a 'one and done' type of effort.

So today I am happy to announce the roll-out of a new capability for our Micro-DC: an indoor version.

[Image: AOL MDC-Indoor2]

While the first instantiations of our new capability were focused on outdoor environments, we were also hard at work on an indoor version with the same set of goals.  Why work on an indoor version as well?  You might recall that in the original post I stated:

We are no longer tied to traditional data center facilities or colocation markets.  That doesn't mean we won't use them; it means we now have a choice.  Of course, this is only possible because of the internally developed cloud infrastructure, but we have freed ourselves from having to be bolted onto or into existing big infrastructure.  It allows us to have an incredible amount of geo-distributed capacity at a very low cost point in terms of upfront capital and ongoing operational expense.

We need to maintain a portfolio of options for our products and services.  In this case, that means having an indoor version of our capabilities to ensure that our solution can live absolutely anywhere.  This will allow our footprint, automation and all, to live inside any data center colocation environment or the interior of any office building anywhere around the planet, and retain the extremely low maintenance profile we were targeting from an operational cost perspective.  In a sense, you can think of it as "productizing" our infrastructure.  Could we have just deployed racks of servers, network kit, and so on, like we have always done?  Sure.  But by continuing to productize our infrastructure, we continue to drive down our short-term and long-term infrastructure costs.  In my mind, productizing your infrastructure is actually the next evolution in standardizing it.  You can have infrastructure standards in place: server model, RAM, HD space, access switches, core switches, and the like.  But until you get to that next phase of standardizing, automating, and "productizing" it into a discrete set of capabilities, you only get a partial win.
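A sketch of what "productizing" can mean in practice: describing a discrete, versioned unit of capacity as data, rather than as a pile of individually specified parts. The field names and numbers below are purely illustrative assumptions, not AOL's actual specification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InfraProduct:
    """A discrete, versioned unit of capacity, treated like any other product SKU."""
    name: str
    servers: int
    ram_gb_per_server: int
    access_switches: int
    power_kw: float

    def total_ram_gb(self) -> int:
        return self.servers * self.ram_gb_per_server

# Hypothetical catalog entry for an indoor unit -- illustrative numbers only.
MDC_INDOOR_V1 = InfraProduct(
    name="mdc-indoor-v1",
    servers=96,
    ram_gb_per_server=64,
    access_switches=4,
    power_kw=40.0,
)

print(MDC_INDOOR_V1.total_ram_gb())  # 6144
```

Deploying capacity then becomes ordering N units of a known product rather than re-engineering each rack, which is where the partial win becomes a full one.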

Some people have asked me, "Why didn't you begin with the interior version? It seems like it would be the easier one to accomplish."  Indeed, I cannot argue with them; it probably would have been easier, as there were far fewer challenges to solve.  You can make basic assumptions about the kind of space an indoor solution would live in, which removes much of the complexity.  I guess it all nets out to a philosophy of solving the harder problems first.  Once you prove the more complicated use case, the easier ones come much faster.  That is definitely the situation here.

While this new capability continues the success we are seeing in redefining the cost and operations of our particular engineering environments, the real challenge here (as with all sorts of infrastructure and cloud automation) is whether or not we can achieve similar success in making our applications and services work correctly in that space.  On that note, I should have more to post soon. Stay tuned!

 

\Mm

Sights and Sounds of DataCentre2012: Thoughts and my Personal Favorite Presentations, Day 1

We wrapped our first full day of talks here at DataCentre2012, and I have to say the content was incredibly good.  One of the key highlights that really stuck out in my mind was the talk given by Christian Belady, who covered some interesting bits of the Microsoft data center strategy moving forward.  Of course, I have a personal interest in that program, having been there for Generation 1 through Generation 4 of its evolution.  Christian covered some of the technology trends they are incorporating into their Generation 5 facilities.  It was some very interesting stuff, and he went into deeper detail than I have heard so far around the concept of co-generation of power at data center locations.  While I personally have some doubts about the all-in costs and the immediacy of its applicability, it was great to see some deep, meaningful thought and differentiation out of the Microsoft program.  He also went into some interesting "future" visions in which he talked about data being the next energy source.  While he took this concept to an entirely new level, I do feel he is directionally correct.  His correlations between the delivery of "data" in a utility model rang very true to me, as I have preached for over five years that we are at the dawning of the Information Utility.

Another fascinating talk came from Oliver J Jones of a company called Chayora.  Few people and companies really understand the complexities and idiosyncrasies of doing business in China, let alone the development and deployment of large-scale infrastructure there.  The presentation by Mr. Jones was incredibly well done.  Articulating the size, opportunity, and challenges of working in China through the lens of the data center market, he nimbly worked in the benefits of working with a company with this kind of expertise.  It was a great way to quietly sell Chayora's value proposition, and looking around the room I could tell the audience was enthralled.  His thoughts and data points had me thinking and running through scenarios all day long.  Having been to many infrastructure conferences and seen hundreds if not thousands of presentations, anyone who can capture that much of my mindshare for the day is a clear winner.

Tom Furlong and Jay Park of Facebook gave a great talk on OCP, with a strong focus on their new facility in Sweden.  They also talked a bit about their other facilities in Prineville and North Carolina.  With Furlong taking the mechanical innovations and Park going through the electrical, it was a great talk that generated lots of interesting questions.  An incredibly captivating portion of the talk was around calculating data center availability.  In all honesty, it was the first time I had ever seen this topic taken head-on at a data center conference. In my experience, like PUE, availability calculations can fall under the spell of marketing departments who truly don't understand that there SHOULD be real math behind the calculation.  There were two interesting takeaways for me.  The first was just how impactful this portion of the talk was on the room in general.  An incredible number of people were taking notes as Jay Park went through the equation and the way to think about it.  That led me to my second revelation: there are large parts of our industry that don't know how to do this.  In private conversations after the talk, some people confided that they had never truly understood how to calculate this.  It was an interesting wake-up call for me to ensure I cover the basics even in my own talks.
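I won't reproduce Jay Park's actual equation here, but the textbook building blocks behind any such calculation are simple enough to sketch. The MTBF/MTTR numbers below are illustrative assumptions, not Facebook's figures:

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: fraction of time up = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series(*parts):
    """Components that ALL must work (e.g. utility -> UPS -> PDU): multiply availabilities."""
    result = 1.0
    for a in parts:
        result *= a
    return result

def parallel(*parts):
    """Redundant paths where ANY one suffices (e.g. 2N power): 1 - product of unavailabilities."""
    unavail = 1.0
    for a in parts:
        unavail *= (1.0 - a)
    return 1.0 - unavail

# Illustrative numbers: a single UPS vs. a redundant 2N pair.
ups = availability(mtbf_hours=250_000, mttr_hours=8)
print(f"single UPS: {ups:.6f}  2N pair: {parallel(ups, ups):.9f}")
```

The point of the real math is exactly this composition: chains of dependent components multiply your downtime exposure, while genuine redundancy multiplies the (small) failure probabilities instead.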

After the Facebook talk it was time for me to mount the stage for the Global Thought Leadership Panel.  I was joined on stage by some great industry thinkers, including Christian Belady of Microsoft; Len Bosack (founder of Cisco Systems), now CEO of XKL Systems; Jack Tison, CTO of Panduit; Kfir Godrich, VP and Chief Technologist at HP; John Corcoran, Executive Chairman of Global Switch; and Paul-Francois Cattier, Global VP of Data Centers at Schneider Electric.  That's a lot of people and brainpower to fit on a single stage.  We really needed three times the time allotted for this panel, but that is the way these things go.  Perhaps one of the most interesting recurring themes from question to question was the general agreement that, at the end of the day, great technology means nothing without the will to do something different.  There was an interesting debate on the differences between enterprise users and large-scale users like Microsoft, Google, Facebook, Amazon, and AOL.  I was quite chagrined and a little proud to hear AOL named in that list of luminaries (it wasn't me who brought it up).  But I was quick to point out that AOL is a bit different, in that it has been around for almost 30 years and our challenges are EXACTLY like enterprise data center environments.  More on that tomorrow in my keynote, I guess.

All in all, it was a good day; there were lots of moments of brilliance in the panel discussions throughout.  One regret I have is about the panel on DCIM: they ran out of time for questions from the audience, which was unfortunate.  People continue to confuse DCIM with BMS version 2.0 and really miss capturing the work and soft costs, let alone the ongoing commitment to the effort once started.  Additionally, there is the question of what you do with the mountains of data once you have collected them.  I had a bunch of questions on this topic for the panel, including whether any of the major manufacturers were thinking about building a decision engine on top of the data collection.  To me it's a natural outgrowth and the next phase of DCIM.  The one case study they discussed was InterXion.  It was a great effort, but I think in the end it maintained the confusion between a BMS with a web interface and true facilities-and-IT integration.  Another panel, on modularization, sparked some really lively discussion on feature/functionality, differentiation, and lack of adoption.  To a real degree it highlighted an interesting gulf between the manufacturers (mostly represented on the panel), who need to differentiate their products, and the users, who require vendor interoperability in the solution space.  It probably doesn't help to have Microsoft or myself in the audience when it comes to discussions around modular capacity.  On to tomorrow!

\Mm

Patent Wars may Chill Data Center Innovation

Yahoo may have just sent a cold chill across the data center industry at large and begun a stifling of data center innovation.  In a May 3, 2012 article, Forbes did a quick-and-dirty analysis of the patent wars between Facebook and Yahoo. It's a quick read, but it shines an interesting light on the potential impact something like this can have across the industry.  The article, found here, highlights that:

In a new disclosure, Facebook added in the latest version of the filing that on April 23 Yahoo sent a letter to Facebook indicating that Yahoo believes it holds 16 patents that “may be relevant” to open source technology Yahoo asserts is being used in Facebook’s data centers and servers.

While these types of patent infringement cases happen all the time in the corporate world, this one could have far greater ramifications for an industry that has only recently emerged into the light of sharing ideas.  While details remain sketchy at the time of this writing, it's clear that the specific call-out of data centers and servers is an allusion to more than just server technology or applications running in their facilities.  In fact, there is a specific call-out of data centers and infrastructure.

With this revelation, one has to wonder about its impact on the Open Compute Project, which is being led by Facebook.  It leads to some interesting questions. Has their effort to be more open in their designs and approaches to data center operations and design led them to a position of legal risk and exposure?  Will this open the floodgates for design firms to become more aggressive around functionality designed into their buildings?  Could companies use their patents to freeze competitors out of colocation facilities in certain markets by threatening colo providers with these types of lawsuits?  Perhaps I am reaching a bit, but I never underestimate litigious fervor once the proverbial blood gets in the water.

In my own estimation, there is a ton of "prior art," to use an intellectual property term, out there to settle this down long term, but the question remains: will firms go through that lengthy process to prove it out, or opt to re-enter their shells of secrecy?

After almost a decade of fighting to open up the collective industry to share technologies, designs, and techniques, this is a very disheartening move.  The general glasnost that has descended over the industry has led to real and material change.

We have seen companies' mental shift from measuring facilities purely around "uptime" to one that is primarily focused on efficiency as well.  We have seen more willingness to share best practices and to find like-minded firms to share in innovation.  One has to wonder: will this impact the larger "greening" of data centers in general?  Without that kind of pressure, will people move back to what is comfortable?

Time will certainly tell.  I was going to make a joke about the fact that, until time proves it out, I may have to "lawyer up" just to be safe.  It's not really a joke, however, because I'm going to bet other firms do something similar, and that, my dear friends, is how the innovation will start to freeze.

 

\Mm