And the winner is… The Results of My Linux Laptop Search


After I posted about my personal exploration into purchasing a pre-built, no-assembly-required Linux laptop, I have to admit I was a bit overwhelmed by the response. In fact, I have been inundated with emails, private messages, direct messages, and just about every other communication method you can think of, all asking me to post the results of my search and reveal which laptop I ended up going with.

While I purchased the machine weeks ago, I did not want to simply answer the question with a short note or addendum saying I ended up buying Brand X. I think everyone would agree that the experience does not really end at the purchase. I wanted to make sure I had some quality time on the machine so I could give some perspective after real-world, heavy-duty use.

So without further ado: after considerable analysis, I ended up purchasing the System76 Gazelle Professional. The specs of my specific purchase are listed below:

System76 Gazelle Professional

  • Base System
  • Ubuntu 13.04
  • 15.6” 1080p Full High Definition ColorPro IPS Display with a Matte Surface
  • Intel High Definition Graphics 4600
  • 4th Generation Intel® Core™ i7-4900MQ Processor (2.8 GHz, 8 MB L3 cache, 4 cores plus Hyper-Threading)
  • 16 GB Dual Channel DDR3 SDRAM at 1600 MHz – 2 x 8 GB
  • 960 GB Crucial M500 Series Solid State Drive
  • 8X DVD±R/RW/4X +DL Super-Multi Drive
  • Intel Centrino Advanced-N 6235 – 802.11a/b/g/n Wireless LAN + Bluetooth
  • US Keyboard

Please keep in mind that the reason for the purchase was to have an everyday Linux machine that I could use for a mix of work, pleasure, and hobby. I was not aiming to build the greatest development machine, or a gaming machine, or anything of the sort. Although I would argue, after considerable use, that this machine could fill any of those roles and perform exceptionally. But I am getting ahead of myself.

 

The Ordering Process

I ordered my machine through the website, which is a pretty standard click-through configurator. At the end of the process, after I submitted all my payment information, I got a confirmation from System76 pretty quickly. I was also told that, due to the unavailability of some parts, it would not ship for at least two weeks. The instant-gratification guy in me was disappointed, but having been around this industry long enough, I know this happens a lot. To my surprise, I did not have to wait very long: I got a note letting me know that my machine had actually shipped sooner than expected. THAT is something that does not usually happen.

I should probably break away to mention that the customer service experience during the order process was exceptional (in my opinion). Upon ordering the machine, you get a link to your order that constantly updates its status. It tracks when you created your configuration, when you ordered it, and when the teams at System76 start building it, and it even features a ‘chat’ mechanism in case you have any questions. It’s not really live chat, but you can send questions and comments as part of the order process, and they actually respond fairly quickly. All communication is tracked, from System76’s “standard messages” (your system has started assembly, your order has shipped) to any interactions you may have had with a customer service person. The order interface also keeps track of all serial numbers associated with your machine for ease of use later.

 


The Arrival

My machine arrived at home in quick order, and I was very happy to see the quality of the shipping and protective packaging. To be honest, this was a bit of a concern, as I had never ordered from these folks before, and I have had issues with shipping quality when ordering “clones” or “non-name brands” in the past. But System76 did an outstanding job, rivaling if not hands-down beating the “bigger guys” in this department. It may seem like a weird thing to comment on, but when you spend that kind of money on a machine, it’s a terrible feeling if you cannot play with it immediately.

Regardless of how incredible the shipping material was, it was no match for my fingers as I tore into the box and removed all of it to get at the goodies inside. Please keep in mind that what you don’t see in the pictures below is the shipping box that the laptop’s own box came in!

[Photos: unboxing the System76 Gazelle]

 

Once free from its plastic and cardboard prison, the machine booted right up, quickly and quietly (I love the SSD!). Now that it was operational, I got to work making sure the machine was well acquainted with its new home. That basically involved connecting to my home networks, creating SSH keys, getting access to the servers and services in my house (I have my own mini data center at home), mounting all of my cloud-based storage locations, downloading and installing the software I use most often, and getting connectivity to the variety of household peripherals scattered about the place.

The configurations were pretty straightforward, and everything set up with ease. Everything, of course, except my scanner. I am not sure why scanners have always been trouble for me, but even in the Windows world they are a pain in the…. I could probably do a whole other post on getting my darn scanner to work.

Anyway, it all ended well, and the Gazelle has been my main machine ever since.

The more astute readers may have noticed that I bought this machine when the standard operating system was Ubuntu 13.04. I had completed all of the above configuration and had been using the machine heavily for some time when I had a slight panic after Saucy Salamander (Ubuntu 13.10) reached general availability. I set aside another weekend, thinking all of my hard work would have to be redone when I did the upgrade. Instead, in a very “Windows-like” notification, I was given the option to upgrade automatically. Intrigued, I clicked through to proceed and was pleasantly surprised that my machine upgraded with NO additional configuration from me. It just worked. It saved my weekend (or at least my Saturday).

I have been banging away on the machine solidly for over a month, and I have to say that I am extremely satisfied. My only real complaint is that I wish it had a better keyboard. Not that the keyboard itself is bad; it’s pretty standard, actually. I just think the industry as a whole could learn a few things from Lenovo about building a really great laptop keyboard.

It’s definitely a powerful little workhorse of a machine!

\Mm

[This post is a follow up to my initial post looking for a pre-built Linux Laptop]

Looking to buy a Linux laptop, are you?

I recently went on an interesting adventure trying to purchase a Linux-based laptop, and I thought I would share it, as it led me to some interesting revelations about the laptop industry in general. First, let me admit that my home is kind of like the United Nations of operating systems and tech platforms. I have Windows-based machines, MacBooks, Ubuntu and Mint flavors of Linux, and my home servers are a mix of Microsoft, CentOS, and Fedora.

I recently decided to go out and look for a high-performance Linux laptop to use at home. Up until this decision, I did what everyone probably does when they want to use Linux: go out and download the latest distribution, depending on whether they prefer .DEB or RPM variants, and install it on an old or existing machine in their home. I have done this over and over again. This time, however, I was determined to go out and purchase a ready-made Linux laptop. My love affair with Unix- and Linux-based laptops began earlier in my career, when I ran into an engineer from Cisco who was sporting a clunky (but at the time amazing) HP-UX based laptop. It was a geek thing of beauty, and I was hooked.

[Image: RDI Tadpole laptop]

If there is a name brand in Linux laptops, it’s System76. These guys have been building special-purpose systems since 2005. They have three laptop models to choose from, and all of them are rock solid. Now, to say they are a ‘name brand’ is a little bit misleading: the hardware is generally sourced from firms like Clevo or MSI. But what makes System76 so special is that they really do try to replicate the normal laptop (or desktop) purchasing and product experience that you are likely to find with traditional Windows-based machines. They ensure optimal driver support for the hardware and generally deliver a very good customer experience. I have always been jealous of people with System76 gear when I have seen them at the odd trade show. It’s generally a rare sighting, because let’s face it: with the proliferation of Windows-based machines and MacBooks, seeing a Linux-based laptop in the wild brings out my inner geek.

Another brand that I occasionally see around is ZaReason. Like System76, they custom-build their laptops and pre-load Linux as well. Where System76 is limited to, and specialized in, Ubuntu loads, ZaReason gives you many more options.

Other firms, like LinuxCertified, take a best-of-breed approach, trying to strike a balance between sourcing their own hardware and mixing in platforms from other manufacturers, like a Linux-optimized Lenovo ThinkPad. They also give you a choice of which flavor of Linux you would prefer.

Now, you could also go out and purchase all of the components yourself from Clevo, MSI, and others and build your own, but as I was expressly going out to buy one, not ‘build’ one, I opted out of that effort.

But the LinuxCertified approach got me thinking about what the ‘big guy’ manufacturers offer in terms of Linux-based laptops. The answer, in short, was nada. Not off the shelf, at least. I remembered that Dell was making a purpose-built Linux machine and started digging in. I found all kinds of great references to it – the XPS 13 Developer Edition. However, when I went to the Dell website to dig a bit more, I found that the XPS 13 only had Microsoft-based operating system options. I searched high and low and somehow managed to get linked to www.Dell.Co.Uk, where, lo and behold, I found the XPS 13 Developer Edition. Apparently they were only selling it outside of the United States. Huh? This piqued my interest, so I started a chat on the Dell web page, which confirmed my suspicion. I had secretly hoped there was some super-secret way to get one here in the United States. But apparently not.

[Screenshot: Dell online chat confirming the XPS 13 Developer Edition is not sold in the US]

 

To be honest, this kind of made me mad. Why can people in Europe get it when we can’t? It probably sounds a lot like whining, and I am sure there are tons of things Americans can get that Europeans can’t, but for now I am atop my high horse and I get to complain. Essentially, I could get it; however, like the old adage, there would be Some Assembly Required: purchase the hardware, blow away all of the preinstalled Microsoft software, and install over it. Surely HP must have a version with Linux. Nope. What about Lenovo? Nope, checked that too. As I gazed at the ThinkPad website, I resigned myself to the position that I would also need to consider the ‘some assembly required’ category. Truth be told, having owned them in the past, I absolutely love Lenovo’s keyboard and solid case construction. I do not think there is a better one anywhere on the planet.

So I created my Some Assembly Required list as well, and it was then that I realized two things. First, if I wanted anything in this category, it was really no different from what I had been doing for the previous ten years. It really highlights the need for better partnerships between the Linux community at large and the hardware manufacturers if they ever want to break into the consumer market. Most non-professionals I know would never go through that kind of trouble, nor do they have as much affinity for the operating system as I do. Second, in going this route you essentially have to pay a ‘tax’ to Microsoft: every laptop you buy like this comes with Windows 7 or Windows 8, which means you are paying for it somewhere in the price of the equipment. At first I was irked, but what’s more interesting is that, generally speaking (with the exception of the Lenovo configuration), the price points for the ‘assembly’ options were lower by a significant margin than their ‘optimized Linux’ counterparts. Some of this was reflected in slight configuration differences, which led me to believe that the Microsoft ‘tax’ added little value to the machines overall. Here’s an example of my process:

[Image: price comparison of Linux-optimized and ‘some assembly required’ configurations]

Not all of the configurations above are like-for-like, but it gives you a flavor.

My revelations? I would have thought that by now the OEM-to-Microsoft connection would have seriously weakened, at least to the point of offering a little more variety in operating systems, or at least the ability to purchase equipment without an operating system. It could also be a factor of the people I hang around with and what we think is cool. I guess once a hobbyist, always a hobbyist.

\Mm

[You can find out which machine I ended up getting in my follow up post here]

Sites and Sounds of DataCentre2012: Thoughts and My Personal Favorite Presentations, Day 1

We wrapped up our first full day of talks here at DataCentre2012, and I have to say the content was incredibly good. One of the key highlights that really stuck out in my mind was the talk given by Christian Belady, who covered some interesting bits of the Microsoft data center strategy moving forward. Of course, I have a personal interest in that program, having been there for Generation 1 through Generation 4 of its evolution. Christian covered some of the technology trends they are incorporating into their Generation 5 facilities. It was very interesting stuff, and he went into deeper detail than I have heard so far on the concept of co-generation of power at data center locations. While I personally have some doubts about the all-in costs and the immediacy of its applicability, it was great to see some deep, meaningful thought and differentiation out of the Microsoft program. He also went into some interesting “future” visions, which talked about data being the next energy source. While he took this concept to an entirely new level, I do feel he is directionally correct. His correlation of the delivery of “data” with a utility model rang very true to me, as I have been preaching for over five years that we are at the dawning of the Information Utility.

Another fascinating talk came from Oliver J. Jones of a company called Chayora. Few people and companies really understand the complexities and idiosyncrasies of doing business in China, let alone the development and deployment of large-scale infrastructure there. The presentation by Mr. Jones was incredibly well done. Articulating the size, opportunity, and challenges of working in China through the lens of the data center market, he nimbly worked in the benefits of working with a company with this kind of expertise. It was a great way to quietly sell Chayora’s value proposition, and looking around I could tell the room was enthralled. His thoughts and data points had me thinking and running through scenarios all day long. Having been to many infrastructure conferences and seen hundreds if not thousands of presentations, anyone who can capture that much of my mindshare for the day is a clear winner.

Tom Furlong and Jay Park of Facebook gave a great talk on OCP, with a strong focus on their new facility in Sweden. They also talked a bit about their other facilities in Prineville and North Carolina. With Furlong taking the mechanical innovations and Park going through the electrical, it was a great talk that generated lots of interesting questions. An incredibly captivating portion of the talk was around calculating data center availability. In all honesty, it was the first time I had ever seen this topic taken head-on at a data center conference. In my experience, availability calculations, like PUE, can fall under the spell of marketing departments who truly don’t understand that there SHOULD be real math behind the calculation. There were two interesting takeaways for me. The first was just how much impact this portion of the talk had on the room in general; there was an incredible number of people taking notes as Jay Park went through the equation and the way to think about it. That led me to my second revelation: there are large parts of our industry that don’t know how to do this. In private conversations after the talk, some people confided that they had never truly understood how to calculate availability. It was an interesting wake-up call for me to make sure I cover the basics even in my own talks.
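For anyone who has never seen this math taken head-on, here is a minimal sketch of the standard MTBF/MTTR availability arithmetic in Python. To be clear, this is my own textbook gloss, not the exact equation Jay Park presented, and every number in it is made up purely for illustration.

```python
# Steady-state availability from MTBF/MTTR, composed in series
# (everything must be up) or in parallel (redundant: the system is
# down only if all redundant parts are down).

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of a single component."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def series(*parts: float) -> float:
    """System is up only if every part is up."""
    result = 1.0
    for a in parts:
        result *= a
    return result

def parallel(*parts: float) -> float:
    """Redundant parts: system is down only if every part is down."""
    downtime = 1.0
    for a in parts:
        downtime *= (1.0 - a)
    return 1.0 - downtime

# Illustrative (made-up) numbers: a utility feed backed by a generator,
# feeding a single UPS string.
utility = availability(mtbf_hours=8760, mttr_hours=4)
generator = availability(mtbf_hours=2000, mttr_hours=24)
ups = availability(mtbf_hours=25000, mttr_hours=8)

power_path = series(parallel(utility, generator), ups)
print(f"Power path availability: {power_path:.5f}")  # about 0.99967 here
```

The point the room was scribbling down is visible right in the composition: redundancy only helps the element it wraps, and the weakest link in the series chain dominates the whole path.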

After the Facebook talk, it was time for me to take the stage for the Global Thought Leadership Panel. I was joined by some great industry thinkers, including Christian Belady of Microsoft; Len Bosack, founder of Cisco Systems and now CEO of XKL Systems; Jack Tison, CTO of Panduit; Kfir Godrich, VP and Chief Technologist at HP; John Corcoran, Executive Chairman of Global Switch; and Paul-Francois Cattier, Global VP of Data Centers at Schneider Electric. That’s a lot of people and brainpower to fit on a single stage. We really needed three times the time allotted for this panel, but that is the way these things go. Perhaps the most interesting recurring theme from question to question was the general agreement that, at the end of the day, great technology means nothing without the will to do something different. There was an interesting debate on the differences between enterprise users and large-scale users like Microsoft, Google, Facebook, Amazon, and AOL. I was quite chagrined, and a little proud, to hear AOL named in that list of luminaries (it wasn’t me who brought it up). But I was quick to point out that AOL is a bit different in that it has been around for 30 years, and our challenges are EXACTLY like enterprise data center environments. More on that tomorrow in my keynote, I guess.

All in all, it was a good day; there were lots of moments of brilliance in the panel discussions throughout. One regret I have concerns the panel on DCIM: they ran out of time for questions from the audience, which was unfortunate. People continue to mistake DCIM for BMS version 2.0 and miss capturing the work and soft costs, let alone the ongoing commitment required once the effort is started. Additionally, there is the question of what you do with the mountains of data once you have collected them. I had a bunch of questions on this topic for the panel, including whether any of the major manufacturers were thinking about building a decision engine on top of the data collection. To me, it’s a natural outgrowth and the next phase of DCIM. The one case study they discussed was Interxion. It was a great effort, but I think in the end it maintained the confusion between a BMS with a web interface and true facilities-and-IT integration. Another panel, on modularization, got some really lively discussion on features, functionality, differentiation, and lack of adoption. To a real degree, it highlighted an interesting gulf between manufacturers (mostly represented on the panel), who need to differentiate their products, and users, who require vendor interoperability across the solution space. It probably doesn’t help to have Microsoft or me in the audience when it comes to discussions of modular capacity. On to tomorrow!

\Mm

Chiller-Side Chats: Is Power Capping Ready for Prime Time?

I was very pleased by the great many responses to my data center capacity planning chat. They came in both public and private notes, with more than a healthy share centered on my comments on power capping and disagreement with my position that the technology/applications/functionality is not 100% there yet. So I decided to throw up an impromptu, ad-hoc follow-on chat on power capping. How’s that for service?

What’s your perspective?

In a nutshell, my resistance can be summed up in the exploration of two phrases. The first is ‘prime time’ and how I define it, given where I come at the problem from. The second is the term ‘data center’ and the context in which I am using it as it relates to power capping.

To adequately explain my position, I will answer from the perspectives of the three groups these Chiller-Side Chats are aimed at: the facility side, the IT side, and ultimately the business side of the problem.

Let’s start with the latter phrase, ‘data center’. To the facility manager, this term refers to the actual building, rooms, and infrastructure that the IT gear sits in. His definition of data center includes things like remote power panels, power whips, power distribution units, computer room air handlers (CRAHs), generators, and cooling towers. It all revolves around the distribution and management of power.

From an IT perspective, the term is usually thought of in terms of servers, applications, or network capabilities. It sometimes blends in some aspects of the facility definition, but only as they relate to servers and equipment. I have even heard it applied to “information”, which is even more ethereal. Its base units could be servers, storage capacity, network capacity, and the like.

From a business perspective, the term ‘data center’ usually lumps together both IT and facilities, but at a very high level. Where the currency for our previous two groups is technical in nature (power, servers, storage, etc.), the currency for the business side is cold hard cash. It involves things like OPEX, CAPEX, and return on investment.

So from the very start, one has to ask: which data center are you referring to? Power capping is a technical issue and can be implemented from either of the two technical perspectives. It will also have an impact on the business aspect, which can itself be a barrier to adoption.

We believe these truths to be self-evident

Here are some things I believe to be inalienable truths about data centers today, and, if history is any indication, some of them will probably hold forever.

  1. Data Centers are heterogeneous in the makeup of their facilities equipment, with different brands of equipment across the functions.
  2. Data Centers are largely heterogeneous in the makeup of their server population, network population, and so on.
  3. Data Centers house non-server equipment like routers, switches, tape storage devices, and the like.
  4. Data Centers generally have differing designs, redundancy levels, floor layouts, and PDU distribution configurations.
  5. Today most racks are unintelligent; those that are not are vendor-specific and/or proprietary, and also expensive compared to bent steel.
  6. Except in a very few cases, there is NO integration between the asset management, change management, incident management, and problem management systems across IT *AND* facilities.

These will be important in a second, so mark this spot on the page; they tie into my thoughts on the definition of prime time. You see, to me, in this context, prime time means that when a solution is deployed, it actually solves problems and reduces the number of things a data center manager has to do or worry about. This is important; notice I did not say anything about making something easier. Sometimes easier doesn’t solve the problem.

There is some really incredible work going on at some of the server manufacturers in the area of power capping. After all, they know their products better than anyone. Gratuitously, because he posts and comments here, I refer you to the Eye on Blades blog at HP by Tony Harvey. In his post responding to the previous Chiller-Side Chat, he talked up the amazing work HP is doing, already available on some G5 boxes and all G6 boxes, along with additional functionality available in the blade enclosures.

Most of the manufacturers are doing a great job here, and the dynamic load stuff is incredibly cool as well. However, the business side of my brain requires me to state that this level of super-cool wizardry usually comes at additional cost. Compare that with Howard, the everyday data center manager who does this work today and who, from a business perspective, is a sunk cost. He is essentially free. Additionally, simple things like performing an SNMP poll for the power draw on a box (which used to be available in some server products for free) have been removed or can only be accessed through additional operating licenses. Read: more cost. So the average business is faced with buying this capability for its servers at additional cost, or having Howard the data center manager do it for free, knowing that his general fear of losing his job if things blow up is a good incentive for doing it right.
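To make the “simple things” concrete, this is roughly what that kind of SNMP power poll looks like in Python with the pysnmp library. A sketch only: the host and community string are placeholders, and the OID especially is a made-up stand-in, since the actual power-draw OID (where one is exposed at all) is vendor-specific, which is exactly the complaint.

```python
# A hedged sketch of polling a device's power draw over SNMP with pysnmp.
# The OID below is hypothetical; real power-draw OIDs vary by vendor.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

POWER_OID = "1.3.6.1.4.1.99999.1.1"  # hypothetical vendor power-draw OID

def poll_power_watts(host: str, community: str = "public") -> int:
    """Fetch the current power draw (in watts) from one device."""
    error_indication, error_status, _, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData(community),
            UdpTransportTarget((host, 161)),
            ContextData(),
            ObjectType(ObjectIdentity(POWER_OID)),
        )
    )
    if error_indication or error_status:
        raise RuntimeError(
            f"SNMP poll of {host} failed: {error_indication or error_status}"
        )
    return int(var_binds[0][1])

if __name__ == "__main__":
    print(poll_power_watts("server-01.example.com"))  # hypothetical host
```

A few lines of script like this, run across the floor, is the capability that used to be free and now increasingly hides behind a license.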

Aside from that, the approach still has challenges with Truth #2. Extremely rare is the data center that uses only one server manufacturer. While single-vendor shops are the dream of most server manufacturers, it’s more common to find Dell servers alongside HP servers alongside Rackable. Add the fact that even within the same family you are likely to see multiple generations of gear. Does the business have to buy into each vendor’s proprietary solution to get the power capping functionality it needs? Is there an industry standard for power capping that ensures we can all live in peace and harmony? No. Again, that pesky business part of my mind says cost, cost, cost. Hey Howard, go do your normal manual thing.

Now let’s tackle Truth #3 from a power capping perspective. Solving the problem from the server side solves only part of the problem. How many network gear manufacturers have power capping features? You could count them on one hand and have fingers to spare. In a related thought, one of the standard connectivity trends in the industry is top-of-rack switching: essentially, for purposes of distribution, a network switch is placed at the top of the rack to handle server connectivity to the network. Does our proprietary power capping software catch the power draw of that switch? Of any network gear, for that matter? Doubtful. So while I may have super-cool power capping on my servers, I am still stuck at the rack layer, which is one of the base units data center managers manage from. Howard may have some level of surety that his proprietary server power capping is humming along swimmingly, but he still has to do the work manually. It’s definitely simpler for Howard, and the task potentially gets done quicker, but we have not actually reduced the steps in the process. Howard is still manually walking the floor.

Which brings up a good point: Howard the data center manager manages by his base unit, the rack. In most data centers, a rack can hold different server manufacturers and different equipment types (servers, routers, switches, etc.), and racks can even be of different sizes. While some manufacturers have built state-of-the-art racks specific to their equipment, that doesn’t solve the problem. We have now stumbled upon Truth #5.
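To put Truths #2, #3, and #5 in one picture, here is a tiny sketch of Howard’s rack-level bookkeeping problem. Everything in it is invented for illustration: a rack mixes vendors and device types, only some devices report power programmatically, and the rest fall back to nameplate numbers in a spreadsheet.

```python
# Illustrative sketch: rack-level power accounting in a heterogeneous rack.
# Device names, vendors, and wattages are all made up.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Device:
    name: str
    vendor: str
    reported_watts: Optional[int]  # None if the device cannot be polled
    nameplate_watts: int           # fallback: the manual/faceplate number

def rack_power(devices: list[Device]) -> tuple[int, int]:
    """Return (total watts, number of devices still estimated by hand)."""
    total, manual = 0, 0
    for d in devices:
        if d.reported_watts is not None:
            total += d.reported_watts
        else:
            total += d.nameplate_watts  # Howard's spreadsheet number
            manual += 1
    return total, manual

rack_42 = [
    Device("web-01", "HP", 310, 450),
    Device("db-01", "Dell", 480, 650),
    Device("tor-switch", "Cisco", None, 250),  # no capping, no power poll
]
watts, manual = rack_power(rack_42)
print(f"Rack 42: {watts} W, {manual} device(s) still counted by hand")
```

As long as even one device in the rack has a `None` in that column, Howard is still walking the floor, which is the whole argument in miniature.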

Since we have been exploring how current power capping technologies meet at the intersection of IT and facilities, this brings up the last point I will touch on: tools. I will get there by asking some basic questions about the operations of a typical data center. Does your IT asset management system provide for racks as a configuration item? Does your data center manager use the same system? Does your system provide for multiple power variables? Does it track power at all? Does the rack have a power configuration associated with it? Or does your version of Howard use spreadsheets? I know where my bet is on your answers. Tooling has a long way to go in this space. Facilities vendors are trying to approach it from their perspective, and IT tools providers are doing the same, along with tools and mechanisms from the equipment manufacturers. A few tools have been custom-developed to do this kind of thing, but they were built for very specific environments. We have finally arrived at power capping and Truth #6.

Please don’t get me wrong: I think power capping will ultimately fulfill its great promise and do tremendous wonders. It’s one of those rare areas that will have a very big impact on this industry. If you have the ability to deploy the vendor-specific solutions (which are indeed very good), you should; they will make things a bit easier, even if they don’t remove steps. However, I think that to have real effect, power capping is ultimately going to have to compete with the cost of free: today this work is done by data center managers at no apparent additional cost from a business perspective. If I had some kind of authority, I would call for a standard to be put in place around power capping. Even a quite minimal one would have a huge impact. It could be as simple as providing three things. First, free and unfiltered access to an SNMP MIB that exposes the current power usage of any IT-related device. Second, a MIB which, through the use of a SET command, could place a hard upper limit on power usage; this setting could be read by the box and/or the operating system, which would start to slow things down or starve resources on the box for a time. Lastly, the ability to read that same MIB back. This would allow the poor, cheap Howards of the world to simplify their environments tremendously, while still leaving software and hardware manufacturers room to build and charge for the additional, dynamic features they would require.
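To show just how minimal that standard could be, here is a sketch of the client side it would enable, again in Python with pysnmp. The OIDs are pure placeholders; no such standard MIB exists today, which is exactly the point. The policy shown (splitting a rack budget evenly) is deliberately dumb; even that is impossible unless the read/set/read-back trio is universal across vendors.

```python
# Sketch of the proposed three-part standard: (1) an open read of current
# draw, (2) a settable hard cap, (3) a read-back of that cap. The OIDs are
# placeholders for a standard MIB that does not exist today.
from pysnmp.hlapi import (
    getCmd, setCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, Integer32,
)

POWER_DRAW_OID = "1.3.6.1.4.1.0.1"  # hypothetical: current draw, watts
POWER_CAP_OID = "1.3.6.1.4.1.0.2"   # hypothetical: hard cap, watts

def _run(cmd_iter):
    err_ind, err_stat, _, var_binds = next(cmd_iter)
    if err_ind or err_stat:
        raise RuntimeError(str(err_ind or err_stat))
    return var_binds

def get_int(host: str, oid: str, community: str = "public") -> int:
    binds = _run(getCmd(SnmpEngine(), CommunityData(community),
                        UdpTransportTarget((host, 161)), ContextData(),
                        ObjectType(ObjectIdentity(oid))))
    return int(binds[0][1])

def set_cap(host: str, watts: int, community: str = "private") -> None:
    _run(setCmd(SnmpEngine(), CommunityData(community),
                UdpTransportTarget((host, 161)), ContextData(),
                ObjectType(ObjectIdentity(POWER_CAP_OID), Integer32(watts))))

def enforce_rack_budget(hosts: list[str], rack_budget_watts: int) -> None:
    """Split a rack budget evenly, cap each box, then verify the cap took."""
    per_host = rack_budget_watts // len(hosts)
    for host in hosts:
        set_cap(host, per_host)
        assert get_int(host, POWER_CAP_OID) == per_host  # (3) read-back
```

Twenty-odd lines of vendor-neutral glue: that is what Howard gets for free the day the three MIB objects become standard, while the vendors keep the dynamic, value-added wizardry on top.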

\Mm