Bippity Boppity Boom! The Impact of Enchanted Objects on Development, Infrastructure and the Cloud

I have been spending a bunch of my time recently thinking through the impact of what David Rose of Ditto Labs and the MIT Media Lab romantically calls 'Enchanted Objects'. What are enchanted objects? They are devices, appliances, tools, dishware, anything that is ultimately connected to the Internet (or any connected network) and becomes, to some degree, aware of the world around it. Imagine an umbrella with a light on its hilt that glows if it may rain today, reminding you that you might want to bring it along on your travels. Imagine your pantry and refrigerator communicating with your grocery cart at the store while you shop, letting you know what you are running low on, or even bypassing the shopping trip altogether and automatically ordering those items to your home. This approach is going to fundamentally change everything you know in life, from credit cards to having a barbeque with friends. These things and their capabilities are going to change our world in ways that we cannot even fathom today. Our technology industry calls this emerging field the Internet of Things. Ugh! How absolutely boring. Our industry has a way of sucking all the fun out of things, doesn't it? I personally feel that 'Enchanted Objects' is a far more compelling classification, as it speaks to the possibilities, wonderment, and possibly terror that lie in store for us. If we must make it sound 'technical', maybe we can call it the Enchantosphere.

While I may someday do a post about all of the interesting things I have found out there already, or the ideas that I have come up with for this new enchanted world, I wanted to reflect a bit on what it means for the things that I normally write about. You know, things like the cloud, big infrastructure, and scaled software development. So go grab your walking staff of traffic conditions and come on an interesting journey into the not-so-distant world of cloud-powered magic…

The first thing you need to understand is that if you work in this industry, you are not an idle player in this magical realm. You are, for lack of a better term, a wizard or an enchanter. Your role will be pivotal in creating magic items, maintaining the magic around us, or ensuring that the magic used by everyone stays strong. While the Dungeons and Dragons and fantasy book references are almost limitless for this conversation, I am going to try and bring it back to the world we know today. I promise. I am really just trying to tease out a glimpse of the world to come and the importance of the cloud, data center infrastructure, and the significant impacts on software development and how software-based services may have to evolve.

The Magical Weaves Surround Us

Every device and enchanted item will be connected. Whether through WiFi in your work and home, over mobile networks, or all of the above and more, these Enchanted Objects will be connected to the magical weaves all around us. If you happen to be a network engineer, you know that I am talking to you. All of these objects are going to have to connect to something. If you are one of those folks still stuck in IPv4, you had better upgrade yourself; there just isn't enough address space there to connect everything in our magical world of the future. IPv6 will be a must. In fact, these devices could be just the 'killer app' that drives global adoption of the standard even faster. But it's not just about address space. These kinds of connected objects are going to open up and challenge whole new areas in security, spectrum management, routing, and a host of other fields. I am personally thinking through some very interesting source-based routing applications in the Enchantosphere as well. The short of it is that this new magical world is going to stress the limits of how things are connected today, and network engineers will be charged with keeping our magical weaves flowing so our charmed existences can continue. You are the Keepers of the Magical Weave, and I am not talking about a tricked-out hairpiece either.
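To put the address-space point in rough numbers, here is a minimal back-of-the-envelope sketch in Python. The world population figure and per-person device count are made-up assumptions purely for illustration.

```python
# Back-of-the-envelope comparison of IPv4 and IPv6 address space, to show
# why a world full of connected "enchanted objects" needs IPv6.
# The device-count estimate below is purely hypothetical.

ipv4_addresses = 2 ** 32          # ~4.3 billion addresses
ipv6_addresses = 2 ** 128         # ~3.4 x 10^38 addresses

world_population = 8_000_000_000  # rough figure
devices_per_person = 100          # hypothetical "enchanted object" count

needed = world_population * devices_per_person

print(f"IPv4 space:          {ipv4_addresses:.3e}")
print(f"IPv6 space:          {ipv6_addresses:.3e}")
print(f"Hypothetical demand: {needed:.3e}")
print(f"IPv4 can cover demand: {ipv4_addresses >= needed}")  # False
print(f"IPv6 can cover demand: {ipv6_addresses >= needed}")  # True
```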

As briefly mentioned above, security engineers are going to have to evolve significantly as well. This will lead into whole new areas and fields of privacy protection that are hard to even conceive of at this point. Even things like health and safety will need to be considered. Imagine a stove that starts pre-heating itself based on where you are on your commute home and the dinner menu you have planned. While some of those controls will need to be programmed into the software itself, there is no doubt that those capabilities will need to be well guarded. Why, I can almost see the Wards and Glyphs of Protection you will have to create.

The Wizard’s Tower

As cool as all these enchanted objects could be, they would all be worthless IP-enabled husks without the advent of the construct that we now call The Cloud. When I talk about 'The Cloud' I am talking about more than just virtualized server instances and marketing-laden terminology. I am talking about data centers. I am talking about automation. I am talking about ubiquitous compute capability all around the world. The actual physical places where the magical services live! The data centers, encompassing the technologies of both IT and facilities infrastructure and automation, are the proverbial Wizard's Tower! This is where our enchanted objects will come to discover who they are, how they work, what they should do, and to retrieve any new capabilities they may yet magically receive. This new world is going to drive the need for more compute centers across the globe. This growth will not just be driven by demand, although the demand will admittedly be huge, but by other more mundane 'muggle' matters such as regulatory requirements, privacy enforcement, taxation, and revenue. I bet you were figuring that with all this newfound magical power flying around we would finally be able to rid ourselves of lawyers, legislators, government hacks, and the like. Alas, it is after all still the real world. Cloud computing capacity will continue to grow, the demand for services will increase, and an entire ecosystem of software and services sitting atop the various cloud providers will be birthed.

I don't know if many of you have read Robert Jordan's fantasy series 'The Wheel of Time', but in that series he has a classification of enchanted objects called ter'angreal. These are single-purpose or limited-power artifacts that anyone can use. Like my example of the umbrella that lights up if it's going to rain after it checks with Weatherbug for weather conditions in your area, or a ring that lights up to let you know that there is a new Loosebolts post available to read, or a garden gnome whose hat lights up when it detects evidence of plant-eating bugs in your garden. These are devices that require no technical knowledge to use or configure, but give some value to their owner. They do their function and that is it. By the way, I am an engineer, not a marketing guy; if you don't like my examples of special-purpose enchanted objects you can tweet me better ones at @mjmanos.
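To make the idea concrete, here is a minimal sketch of such a single-purpose object. The weather endpoint, the response fields, and the set_led() call are hypothetical placeholders standing in for a real weather API and the device's hardware; none of them are actual services.

```python
import json
import urllib.request

# A sketch of a "ter'angreal"-style single-purpose object: an umbrella whose
# handle lights up when rain is likely. The endpoint and set_led() are
# hypothetical placeholders, not a real API or driver.

WEATHER_URL = "https://example.com/forecast?zip=20166"  # hypothetical endpoint
RAIN_THRESHOLD = 0.5  # light up if chance of rain exceeds 50%

def set_led(on: bool) -> None:
    # Stand-in for whatever GPIO call the real device would make.
    print("LED on" if on else "LED off")

def chance_of_rain() -> float:
    # Fetch the forecast and pull out a single field the device cares about.
    with urllib.request.urlopen(WEATHER_URL) as resp:
        forecast = json.load(resp)
    return forecast.get("chance_of_rain", 0.0)

if __name__ == "__main__":
    set_led(chance_of_rain() > RAIN_THRESHOLD)
```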

These devices will reach out, download their software, learn their capabilities, and just work as advertised. Software in this model may seem very similar to today's software development techniques and environments, but I believe we will begin to see fundamental changes in how software works and is distributed. Software will be portable. Services will be portable. This allows for truly amazing 'multi-purpose' enchanted objects. The ability to download 'apps' to these objects can become commonplace. Even something as commonplace as a credit card could evolve into a piece of software or code that can be transported around in various devices. Simply wave the RFID-enabled stick (ok, wand) that contains your credit card app at the register, and as long as you are wearing the necklace that stores your digital ID, the transaction goes through. Two-factor authentication in the real world. Or instead of a wand, maybe it's just your wallet. When thinking about this app-enabled platform, it gives a whole new meaning to the Capital One catchphrase, 'What's in your wallet?' The bottom line here is that a whole host of software, services, and other capabilities will become incredibly portable, and allow for some very interesting enchanted objects indeed.
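A toy sketch of that wand-plus-necklace check might look like the following. Everything here, including the token names, is illustrative; a real system would cryptographically verify both factors against an issuer rather than simply checking that they are present.

```python
from typing import Optional

# A toy sketch of the "wand plus necklace" two-factor idea: a payment token
# carried on one object and a digital identity carried on another, both of
# which must be present before the register approves the charge.

def authorize_payment(payment_token: Optional[str],
                      identity_token: Optional[str],
                      amount: float) -> bool:
    # Factor one: the RFID "wand" presenting the credit card app's token.
    if not payment_token:
        return False
    # Factor two: the "necklace" presenting the wearer's digital ID.
    if not identity_token:
        return False
    # A real system would verify both tokens cryptographically with the
    # issuer; here we only require that both objects are present.
    print(f"Charging {amount:.2f} against {payment_token} for {identity_token}")
    return True

# Wave only the wand: declined. Wand plus necklace: approved.
print(authorize_payment("card-app-token", None, 42.00))           # False
print(authorize_payment("card-app-token", "digital-id", 42.00))   # True
```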

The bottom line here is that we are just beginning to see into a new world of the Internet of Things… of Enchanted Objects.   The simpler things become the more complex they truly are.   Those of us who deal with large scale infrastructure, software and service development, and cloud based technologies have a heck of a ride ahead of us.  We are the keepers of the complex, Masters of the Arcane, and needers of a good bath.

\Mm

Google's Purchase of Deep Earth Mining Equipment in Support of 'Project Rabbit Ears' and Worldwide WiFi Availability…

(10/31/2013 – Mountain View, California) – Close examination of Google's data center construction related purchases has revealed the procurement of large-scale deep earth mining equipment. While the actual need for the deep mining gear is unclear, many speculate that it has to do with a secretive internal project that has come to light, known only as Project: Rabbit Ears.

According to sources not at all familiar with Google's technology infrastructure strategy, Project Rabbit Ears is the natural outgrowth of Google's desire to provide ubiquitous infrastructure worldwide. On the surface, these efforts seem consistent with other incorrectly speculated projects such as Project Loon, Google's attempt to provide Internet services to residents in the upper atmosphere through the use of high-altitude balloons, and a project that has only recently become visible and the source of much public debate, known as 'Project Floating Herring', in which a significantly sized floating barge with modular container-based data centers has been spied sitting in the San Francisco Bay.

"You will notice there is no power or network infrastructure going to any of those data center shipping containers," said John Knownothing, Chief Engineer at Dubious Lee Technical Engineering Credibility Corp. "That's because they have mastered wireless electrical transfer at the large multi-megawatt scale."

Real estate rates in the Bay Area have increased almost exponentially over the last ten years, making the construction of large-scale data center facilities an expensive endeavor. During the same period, the Port of San Francisco has unfortunately seen a steady decline in its import/export trade. After a deep analysis, it was discovered that docking fees in the Port of San Francisco are considerably undervalued and will provide Google with an incredibly cheap real estate option in one of the most expensive markets in the world.

It will also allow them to expand their use of renewable energy through tidal power generation built directly into the barge's hull. "They may be able to collect as much as 30 kilowatts of power sitting on top of the water like that," continues Knownothing, "and while none of that technology is actually visible, possible, or exists, we are certain that Google has it."

While the technical intricacies of the project fascinate many, the initiative does have its critics, like Compass Data Centers CEO Chris Crosby, who laments the potential social aspects of this approach: "Life at sea can be lonely, and no one wants to think about what might happen when a bunch of drunken data center engineers hit port." Additionally, Crosby mentions the potential for a backslide of human rights violations: "I think we can all agree that the prospect of being flogged or keelhauled really narrows down the possibility for those outage-causing human errors. Of course, this sterner level of discipline does open up the possibility of mutiny."

However, the public launch of Project Floating Herring will certainly need to await the delivery of the more shrouded Project Rabbit Ears for various reasons. Most specifically, the primary reason for the development of this technology is so that Google can ultimately drive the floating facility out past twelve miles into international waters, where it can then dodge all national, regional, and local taxation, as well as the safe harbor and privacy legislation of any country or national entity on the planet that would use its services. In order to realize that vision, in the current network paradigm, Google would need exceedingly long network cables to attach to Network Access Points and Carrier Connection points as the facilities drive through international waters.

This is where Project Rabbit Ears becomes critical to the Google strategy. Making use of the deep earth mining equipment, Google will be able to drill deep into the Earth's crust, into the mantle, and ultimately build a large Network Access Point near the Earth's core. This Planetary WiFi solution will be centrally located to cover the entire earth without the use of regional WiFi repeaters. Google's floating facilities could then gain access to unlimited bandwidth and provide yet another consumer-based monetization strategy for the company.

Knownothing also speculates that such a move would allow Google to make use of enormous amounts of free geothermal power and almost single-handedly become the greenest power user on the planet. Speculation also abounds that Google could then sell that power through its as-yet-un-invented large-scale multi-megawatt wireless power transfer technology, as unseen on its floating data centers.

Much of the discussion around this kind of technology innovation driven by Google has been given credible amounts of veracity and discussed by many seemingly intelligent technology news outlets and industry organizations that should intellectually know better, but prefer not to acknowledge the inconvenient lack of evidence.

 

\Mm

Editor's Note: I have many close friends in the Google infrastructure organization and firmly believe that they are doing some amazing, incredible work in moving the industry along, especially solving problems at scale. What I find simply amazing is how often, in the search for innovation, our industry creates things that may or may not be there and convinces itself so firmly that they exist.

2014: The Year Cloud Computing and Internet Services Will Be Taxed. A.K.A. Je déteste dire ça. Je vous l'avais dit. (I hate to say it. I told you so.)

 


It's one of those times I really hate to be right. As many of you know, I have been talking about the various grassroots efforts afoot across many of the EU member countries to start driving a more significant tax regime on Internet-based companies. My predictions for the last few years have been more cautionary tales based on what I saw happening from a regulatory perspective on a much smaller scale, country to country.

Today's Wall Street Journal has an article discussing France's move to begin taxation of Internet-related companies that derive revenue from users and companies across the entirety of the EU, while holding those companies responsible to the tax base in each country. This means such legislation is likely to become quite fractured and tough for Internet companies to navigate. The French proposition asks the European Commission to draw up proposals by the spring of 2014.

This is likely to have a very interesting impact (read: cost increases) across just about every aspect of Internet and cloud computing resources. From a business perspective this is going to increase costs, which will likely be passed on to consumers in small but interesting ways. Internet advertising will need to be differentiated on a country-by-country basis, and advertisers will end up having different cost structures. Cloud computing companies will DEFINITELY need to understand where customer instances were running and whether or not they were making money. Potentially more impactful, customers of cloud computing may be held to tax obligations that they did not know they had! Things like data center site selection are likely going to become even more complicated from a tax analysis perspective, as countries with higher populations may become no-go zones or require the passage of even more restrictive laws.

It's not as if the seeds of this haven't been around since 2005; I think most people just preferred to turn a blind eye to the fact that the seed was sprouting into a full-fledged tree. Going back to my Cat and Mouse Papers from a few years ago… The Cat has caught the mouse; it's now the mouse's move.

\Mm

 

Author's Note: If you don't have a subscription to the WSJ, All Things Digital did a quick synopsis of the article here.

The Cloud Cat and Mouse Papers – Site Selection Roulette and the Insurance Policies of Mobile Infrastructure


It's always hard to pick exactly where to start in a conversation like this, especially since this entire process really represents a changing life-cycle. It's more of a circular spiral that moves out (or evolves) as new data is introduced than a traditional life-cycle, because new data can fundamentally shift the technology or approach. That being said, I thought I would start our conversations at a logical starting point. Where does one place one's infrastructure? Even in its embryonic 'idea phase', the intersection of government and technology begins its delicate dance to a significant degree. These decisions will ultimately have an impact on more than just where a company's capital investments are located. They affect the products and services it offers and, as I propose, ultimately the customers that use the services at those locations.

As I think back to the early days of building out a global infrastructure, the site selection phase started at a very interesting place. In some ways we approached it with a level of sophistication that has yet to be matched today, and in other ways we were children playing a game whose rules had not yet been defined.

I remember sitting across numerous tables from government officials talking about making an investment (largely just land purchase decisions) in their local community. Our site selection methodology had brought us to these areas. It was a methodology that continued to evolve as we got smarter and as we started to truly understand the dynamics of the system we were being introduced to. In these meetings we always sat stealthily behind a third-party real estate partner. We never divulged who we were, nor were they allowed to ask us that directly. We would pepper them with questions, and they in turn would return the favor. It was all cloak and dagger, with the real estate entity taking all action items to follow up with both parties.

Invariably during these early days, these locales would walk away with the firm belief that we were a bank or financial institution. When they delved into our financial viability (for things like power loads, commitment to capital build-out, etc.) we always stated that any capital commitments and longer-term operational cost commitments were not a problem. In large part the cloak and dagger aspect was to keep land costs down (as we matured, we discovered this was quite literally the last thing we needed to worry about), as we feared that once our name became attached to a deal our costs would go up. These were the early days of seeding global infrastructure, and it was not just us. I still laugh at the fact that one of our competitors bound a locality up so much in secrecy that the community referred to the data center as Voldemort, He Who Must Not Be Named, in reference to the Harry Potter book series.

This of course was not the only criterion that we used. We had over 56 criteria, with various levels of importance and weighting, by the time I left that particular effort. Some Internet companies today use fewer, some about the same, and some don't use any at all; they ride on the backs of others who have trail-blazed a certain market or locale. I have long called this effect Data Center Clustering. The rewards for being the first mover are big; they are smaller if you follow, but ultimately still positive.

If you think about most of the criteria used to find a location, they almost always focus on current conditions, with only some acknowledgement of the look forward built into a few of the criteria. This is true, for example, when looking at power costs. Power costs today are important to siting a data center, but so is understanding the generation mix of that power and its corresponding price volatility, and modeling that ahead to predict (as best as possible) longer-term power costs.
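As a simple illustration of that forward look, here is a minimal sketch that projects a blended power rate from an assumed generation mix. Every share, price, and escalation rate in it is a made-up assumption, not data from any real utility or market.

```python
# Project a blended power cost forward from a generation mix and assumed
# per-source escalation rates. All figures below are illustrative only.

generation_mix = {
    # source: (share of supply, cost today in $/MWh, assumed annual escalation)
    "hydro": (0.50, 35.0, 0.01),
    "gas":   (0.30, 55.0, 0.04),
    "coal":  (0.20, 45.0, 0.06),
}

def blended_cost(year: int) -> float:
    """Blended $/MWh in a given future year under the assumed escalations."""
    return sum(share * cost * (1 + esc) ** year
               for share, cost, esc in generation_mix.values())

# Data center lifetimes run 20 to 40 years, so look well past year zero.
for year in (0, 10, 20, 30):
    print(f"Year {year:2d}: ${blended_cost(year):6.2f}/MWh")
```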

What many miss is the more subtle political layer that emerges once a data center has been placed or a cluster has developed. Specifically, the political and regulatory landscape can change very quickly in relation to the life of a data center facility, which is typically measured in 20, 30, or 40 year lifetimes. It's a risk that places a large amount of capital assets potentially in play and vulnerable to these kinds of changes. It's something that is very hard to plan or model against. That being said, there are indicators and clues that one can use to at least weigh the risk factors, or, as some are doing, to ensure that the technology they deploy limits their exposure. In cloud environments the question remains open: how much are companies using cloud infrastructure in these facilities at risk? We will explore this a little later.

That's not to say that this process is all downside either. As we matured in our approach, we came to realize that governments (local or otherwise) were strongly incented to work with us on getting a great deal, and in fact competed over this kind of business. Soon you started to see the offers changing materially. It became less about the land or location and quickly evolved into what types of tax incentives, power deals, and other mechanisms could be put in play. You saw (and continue to see) deals structured around sales tax breaks, real estate and real estate tax deals, economic incentives around breaks in power rates, specialized rate structures for Internet and cloud companies, and the like. The goal here of course was to create the public equivalent of "golden handcuffs" for the tech companies and try to marry them to a particular region, state, or country. In many cases, all three. The benefits here are self-evident. But can they (or more specifically, will they) be passed on in some way to the small companies that make use of cloud infrastructure in these facilities? While definitely not part of the package deals done today, I could easily see site selection negotiations evolving to incent local adoption of cloud technology in these facilities, or provisions being put in place tying adoption and hosting to tax breaks and other deal structures in the mid to longer term for hosting and cloud companies.

There is still a learning curve out there, as most governments mistakenly try to tie these investments to job creation. Data centers, operations, and the like represent the cost of goods sold (COGS) to the cloud business. Therefore there is a constant drive towards efficiency and reduction of the highest cost components to deliver those products and services. Generally speaking, people are the primary targets in these environments. Driving automation in these environments is job one for any global infrastructure player. One of the big drivers for us investing in and developing a 100% lights-out data center at AOL was eliminating those kinds of costs. Those governments that highlight job creation targets over other types typically don't win the site selection. Having commissioned an economic study after a few of my previous big data center builds, I can tell you that the value to a region or a state does not come from the up-front jobs the data center employs. After a local radio station called into question the value of having such a facility in their backyard, we used an internationally recognized university to perform a third-party "neutral" assessment of the economic benefits (sans direct people), and the numbers were telling. We surrendered all construction costs and other related material to them, and over the course of a year, through regional interviews and the like, they investigated the direct impacts of a data center on the local community and the overall impacts of the addition. The results of that study are owned by a previous employer, but I can tell you with certainty that these facilities can be beneficial to local regions.

No one likes constraints, and as such you are beginning to see technology companies use their primary weapon, technology, to mitigate their risks even in these scenarios. One cannot deny, for example, that while container-based data centers offer some interesting benefits in terms of energy and cost efficiencies, there is also a certain mobility to that kind of infrastructure that has never been available before. Historically, data centers are viewed as large capital anchors to a location. Once in place, hundreds of millions to billions (depending on the size of the company) of dollars of capital investment are tied to that region for its lifespan. It's as close to permanent in the Tech Industry as building a factory was during the industrial revolution.

In some ways modularization of the data center industry is/can/will have the same effect as the shipping container did in manufacturing. All puns intended. If you are unaware of how the shipping container revolutionized the world, I would highly recommend the book "The Box" by Marc Levinson; it's a quick read and very interesting if you read it through the lens of IT infrastructure and the parallels of modularization in the data center industry at large.

It gives the infrastructure companies more exit options and mobility in the future than they would have had in the past under large capital build-outs. It's an insurance policy, if you will, against potential changes in legislation or regulation that might negatively impact the technology companies over time. Just another move in the cat and mouse games that we will see evolving here over the next decade or so in terms of the interactions between governments and global infrastructure.

So what about the consumers of cloud services? How much of a concern should this represent for them? You don't have to be a big infrastructure player to understand that there are potential risks in where your products and services live. Whether you are building a data center or hosting inside a real estate or co-location provider, these are issues that will affect you. Even in cases where you only use the cloud provisioning capabilities of your chosen provider, you will typically be given options for which region or area you would like your gear hosted in. Typically this is done for performance reasons (reaching your customers), but perhaps this information might cause you to think of the larger ramifications to your business. It might even drive requirements into the infrastructure providers to make this more transparent in the future.

These evolutions in the relationship between governments and technology, and the technology options available to companies, will continue to shape site selection policy for years to come. How they will ultimately affect those that use this infrastructure, whether directly or indirectly, remains to be seen. In the next paper we will explore this interaction more deeply as it relates to the customers of cloud services and the risks and challenges specifically for them in this environment.

\Mm

The Cloud Cat and Mouse Papers – The Primer


Cat and Mouse with Multi-national Infrastructure – The Participants to Date

There is an ever-changing game of cat and mouse developing in the world of cloud computing. It's not a game that you as a consumer might see, but it is there, an undercurrent that has been there from the beginning. It pits technology companies and multi-national infrastructure against local and national governments. For some years this game of cat and mouse has been quietly played out in backrooms, in development and technology roadmap re-works, and across negotiation tables, and only in the rarest of cases has it come out for industry scrutiny or visibility. To date the players have been limited to the likes of Google, Microsoft, Amazon, and others who have scaled their technology infrastructure across the globe, and in large measure those are the players that governments have moved against in an ever more intricate chess game. I myself have played a part in the measure/counter-measure give and take of this delicate dance.

The primary issues in this game have to do with realization of revenue for taxation purposes, Safe Harbor and issues pertaining to personally identifiable information, ownership of Big Data, the nature of what is stored, how it is stored, and where it is stored: the intersection where politics and technology meet. A place where social issues and technology collide. You might call them storm clouds, just out of sight, but there is thunder on the horizon that you, the consumer and/or potential user of global cloud infrastructure, will need to be aware of, because eventually the users of cloud infrastructure will become players in the game as well.

That is not to say that the issues I tease out here are all gloom and doom. In fact, they are great opportunities for potential business models, additional product features, and even cloud ecosystem companies or niche solutions unto themselves; a way to drive significant value for all sides. I have been toying with more than a few of these ideas myself over the last few months.

To date these issues have mostly manifested in the global build-up of infrastructure for the big Internet platforms: the products and services the big guys in the space use as their core money-making platforms or primary service delivery platforms. Rarely if ever do these companies use this same infrastructure for their Infrastructure as a Service (IaaS) or Platform as a Service (PaaS) offerings. However, as you will see, the same challenges will and do apply to these offerings as well. In some cases they are even more acute and problematic in multi-tenant situations, with the potential to put even more burden on future cloud users.

If I may be blunt about this, there is an interesting lifecycle to this food chain whereby the big technology companies consistently have the upper hand and governmental forces, through the use of their primary tools of regulation and legislation, are constantly playing catch-up. This lifecycle is unlikely to change, for at least five reasons.

  • The Technology Companies will always have the lens of the big picture of multi-national infrastructure. Individual countries, states, and locales generally only have jurisdiction or governance over the territory or population base that is germane to their authority.
  • Technology Companies can focus with near singular purpose on a very deep technical capability, and bring to bear much more concentrated "brain power", to continually evolve measures and counter-measures that address changes in the socio-political landscape.
  • By and large, Governments rely upon technologies and approaches becoming mainstream, so that there is enough of a base understanding of the developments and impacts, before they can act. This generally places them in a reactionary position.
  • Governmental forces generally rely upon "consultants" or "industry experts" to assist in understanding these technologies, but very few of these industry experts have ever really dealt with multi-national infrastructure, and fewer still have had to strategize and evolve plans around these types of changes. The expertise at that level is rare and almost exclusively retained by the big infrastructure providers.
  • Technology Companies have the ability to force a complete change to the rules and reality by swapping out the technology used to deliver their products and services, or by changing development and delivery logic and/or methodology, to effect a near-complete negation of the previous method of governance, making it obsolete.

That is not to say that governments are unwilling participants in this process, forced into a subservient role in the lifecycle. In fact they are active participants in attracting, cultivating, and even subsidizing these infrastructural investments in areas under their authority and jurisdiction. Tools like tax breaks, real estate and investment incentives, and private-public partnerships do have both initial and ongoing benefits for the governments as well. In many ways these are "golden handcuffs" for technology companies who enter into this cycle, but like any kind of constraint, positive or negative, the planning and strategy to unfetter themselves begins almost immediately.

Watson, The Game is Afoot

Governments, Social Justice, Privacy, and Environmental forces have already begun to force changes in the Technology landscape for those engaged in multi-national infrastructure.  There are tons of articles freely available on the web which articulate the kinds of impacts these forces have had and will continue to have on the Technology Companies.  The one refrain through all of the stories is the resiliency of those same Technology Companies to persevere and thrive despite what might be crucial setbacks in other industries.

In some cases the technology changes and adapts to meet the new requirements; in some cases, changing approaches or even vacating "un-friendly" environs across any of these spectrums becomes an option; and in some cases, there is a not insignificant bet that any regulatory or compulsory requirements will be virtually impossible or too technically complex to enforce or even audit.

Let's take a look at a couple of the examples that have been made public that highlight this kind of thing. Back in 2009, Microsoft migrated substantial portions of its Azure cloud services out of Washington State to its facilities located in San Antonio, Texas. While the article specifically talks about certain aspects of tax incentives being held back, there were of course other factors involved. One doesn't have to look far to understand that Washington State also has a B&O Tax (Business and Occupation Tax), which is defined as a gross receipts tax: it is measured on the value of products, gross proceeds of sale, or gross income of the business. As you can imagine, interpreting this kind of tax as it relates to online and cloud income could be very tricky, and regardless would be a complex and technical problem to solve. It could have the undesired impact of placing any kind of online business at an interesting disadvantage, or at another level place an unknown tax burden on its users. I am not saying this was a motivating factor in Microsoft's decision, but you can begin to see the potential exposure developing. In this case, the technology could rapidly change and move the locale of the hosted environments to minimize the exposure, thus thwarting any governmental action. At least for the provider; but what of the implications if you were a user of the Microsoft cloud platform and found yourself with an additional or unknown tax burden? I can almost guarantee that back in 2009 this level of end user impact (or revenue potential from a state tax perspective) had not even been thought about. But as with all things, time changes, and we are already seeing examples of exposure occurring across the game board that is our planet.

We are already seeing interpretations or laws getting passed in countries around the globe where, for example, a server is a taxable entity. If revenue for a business is derived from a computer or server located in that country, it falls under the jurisdiction of that country's tax authority. Imagine yourself as a company using this wonderful global cloud infrastructure to sell your widgets, products, or services, and finding yourself with an unknown tax burden and liability in some "far flung" corner of the earth. The cloud providers today mostly provide infrastructure services. They do not go up the stack far enough to effectively manage your entire system, let alone determine your tax liability. The burden of proof today would largely reside with the individual business running inside that infrastructure.

In many ways, those adopting these technologies are the least capable of dealing with these kinds of challenges. They are small to mid-sized companies who admittedly don't have the capital or operational sophistication to build out the kind of infrastructure needed to scale that quickly. They are unlikely to have technologies such as robust configuration management databases to track virtual instances of their products and services, to tell what application ran, where it ran, how long it ran, and how much revenue was derived during the length of its life. And this is just one example (server as a taxable entity) of a law or legislative effort that could impact global users. There are literally dozens of these kinds of bills/legislative initiatives/efforts (some well thought out, most not) winding their way through legislative bodies around the world.
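To make the record-keeping problem concrete, here is a minimal sketch of the kind of per-instance record a small cloud customer would need just to estimate that exposure. The field names and figures are illustrative, not drawn from any real configuration management product.

```python
from collections import defaultdict
from dataclasses import dataclass

# The sort of record needed if "a server is a taxable entity": which app ran,
# where it ran, for how long, and what revenue it produced. Illustrative only.

@dataclass
class InstanceRecord:
    app: str
    country: str      # where the instance physically ran
    hours_run: float
    revenue: float    # revenue attributed to this instance

records = [
    InstanceRecord("widget-store", "DE", 720.0, 12_000.00),
    InstanceRecord("widget-store", "FR", 300.0,  4_500.00),
    InstanceRecord("widget-store", "US", 720.0, 30_000.00),
]

# Roll revenue up by country to see where a tax authority might claim exposure.
exposure = defaultdict(float)
for r in records:
    exposure[r.country] += r.revenue

for country, revenue in sorted(exposure.items()):
    print(f"{country}: revenue derived locally = {revenue:,.2f}")
```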

You might think that you could circumvent some of this by limiting your product or service deployment to the country closest to home, wherever home is for you. However, there are other efforts winding their way through, or to a large degree already passed, that impact the data you store, what you store, whose data you are storing, and the like. In most cases these initiatives are unrelated to the developing revenue legislation, but taken together they can deliver an interesting one-two punch. For example, many countries are requiring that, for Safe Harbor purposes, all information for any nationals of 'Country X' must be stored in 'Country X' to ensure that its citizenry is properly protected and under the jurisdiction of the law for those users. In a cloud environment, with customers potentially from almost anywhere, how do you ensure that this is the case? How do you ensure you are compliant? If you balance this requirement with the 'server as a taxable entity' example I just gave above, there is an interesting exposure and liability for companies to prove where and when revenue is derived. Similarly, there are some laws that are enacted as reactions against legislation in other countries.
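Here is a minimal sketch of what a 'Country X data stays in Country X' placement check might look like. The rules table and region codes are hypothetical, and a real policy engine would be far more nuanced than this.

```python
from typing import Optional

# A toy data residency check: given a user's nationality and a hypothetical
# rules table, pick a hosting region that satisfies the residency rule.

RESIDENCY_RULES = {
    "CA": {"CA"},        # e.g. Canadian nationals' data must stay in Canada
    "DE": {"DE", "FR"},  # hypothetical: may stay within the listed regions
}

AVAILABLE_REGIONS = ["US", "CA", "DE", "FR", "SG"]

def pick_region(nationality: str, preferred: str) -> Optional[str]:
    """Return a compliant region, or None if no region satisfies the rule."""
    allowed = RESIDENCY_RULES.get(nationality)
    if allowed is None:
        return preferred                         # no rule on file: honor preference
    candidates = [r for r in AVAILABLE_REGIONS if r in allowed]
    if preferred in candidates:
        return preferred
    return candidates[0] if candidates else None  # fall back to a compliant region

print(pick_region("CA", preferred="US"))  # -> CA, not the cheaper US region
print(pick_region("US", preferred="US"))  # -> US, no rule applies
```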

In the post-9/11 era within the United States, the US Congress enacted the Patriot Act. Due to some of the information search and seizure aspects of the law, Canada responded by forbidding Canadian citizens' data from being stored in the United States. To the best of my knowledge only a small number of companies actually even acknowledge this requirement and have architected solutions to address it, but the fact remains that the rest are not in compliance with Canadian law. Imagine you are a small business owner, using a cloud environment to grow your business, and suddenly you begin to grow your business significantly in Canada. Does your lack of knowledge of Canadian law excuse you from your responsibilities there? No. Is this something that your infrastructure provider is offering to you? Today, no.

I am only highlighting certain cases here to make the point that there is a world of complexity coming to the cloud space. Thankfully these impacts have not been completely explored or investigated by most countries of the world, but it's not hard to see a day when this becomes a very real thing that companies and the cloud ecosystem in general will have to address. At its most basic level these are areas of potential revenue streams for governments, and that alone increases the likelihood of their eventual day in the sun. I am currently personally tracking over 30 different legislative initiatives around the world (read as pre-de-facto laws) that will likely shape this technology landscape for the big providers and potential cloud adopters some time in the future.

What is to come?

This first article was really just to bring out the basic premise of the conversation and topics I will be discussing, and to lay the groundwork to a very real degree. I have not even begun to touch on the extra-governmental social and environmental impacts that will likely change the shape even further. This interaction of technology, "The Cloud", and political and social issues exists today and, although largely masked by the fact that the ecosystem of the cloud is not fully developed or matured, is no less a reality. Any predictions that are made are extensions of existing patterns I see in the market already and do not necessarily represent a foregone conclusion, but rather the most likely developments based upon my interactions in this space. As this technology space continues to mature, the only certainty is uncertainty, modulated against the backdrop of a world where geo-political forces will increasingly shape the technology of tomorrow.

\Mm

Cloud Détente – The Cloud Cat and Mouse Papers


Over the last decade or so I have been lucky enough to be placed into a fairly unique position to work internationally deploying global infrastructure for cloud environments.  This work has spanned across some very large companies with a very dedicated focus on building out global infrastructure and managing through those unique challenges.   Strategies may have varied but the challenges faced by them all had some very common themes.   One of the more complex interactions when going through this process is what I call the rolling Cat and Mouse interactions between governments at all levels and these global companies.  

Having been a primary player in these negotiations and in the development of measures and counter-measures as a result of these interactions, I have come to believe there are some interesting potential outcomes that cloud adopters should think about and understand. The coming struggle and complexity of managing, regulating, and policing multi-national infrastructure will not solely impact the large global players, but will in a very real way begin to shape how their users will need to think through these socio-political and geo-political realities: the potential impacts on their business, their adoption of cloud technologies, their resulting responsibilities, and just how aggressively they look to the cloud for the growth of their businesses.

These observations and predictions are based upon my personal experiences. So, for whatever it's worth (good or bad), this is not the perspective of an academic writing from some ivory tower; rather, these are the observations of someone who has been there and done it. I probably have enough material to write an entire book on my personal experiences and observations, but I have committed myself to writing a series of articles highlighting what I consider the big things that are being missed in the modern conversation of cloud adoption.

The articles will highlight (with some personal experiences mixed in) the ongoing battle between technocrats and bureaucrats. I will try to cover a different angle on many of the big topics out there today, such as:

  • Big Data versus Big Government
  • Rise of Nationalism as a factor in Technology and infrastructure distribution
  • The long struggle ahead for managing, regulating, and policing clouds
  • The Business, end-users, regulation and the cloud
  • Where does the data live? How long does it live? Why Does it Matter?
  • Logic versus Reality – The real difference between Governments and Technology companies.
  • The Responsibilities of data ownership
    • … regarding taxation exposure
    • … regarding PII impacts
    • … Safe Harbor

My hope is that this series and the topics I raise, while maybe a bit raw and direct, will cause you to think a bit more about the coming impacts on the technology industry at large, the potential impacts to small and medium-sized businesses looking to adopt these technologies, and the developing friction and complexity at the intersection of technology and government.

\Mm

Breaking the Chrysalis

What has come before

When I first took my position at AOL, I knew I was going to be in for some very significant challenges. This position, perhaps more so than any other in my career, was going to push the bounds of my abilities: as a technologist, as an operations professional, as a leader, and as someone who would hold measurable accountability for the operational success of an expansive suite of products and services. As many of you may know, AOL has been engaged in what used to be called internally a "Start-Around", essentially an effort to fundamentally change the company from its historic roots into the premium content provider for the Internet.

We no longer refer to this term internally, as it is no longer about forming or defining that vision. It has shifted to something more visceral. More tangible. It's a challenge that most companies should be familiar with: it's called execution. Execution is a very simple word, but as any good operations professional knows, the devil is in the details, and those details have layers and layers of nuances. It's where the proverbial rubber meets the road. For my responsibilities within the company, execution revolves 100% around delivering the technologies and services to ensure our products and content remain available to the world. It is also about fundamentally transforming the infrastructural technologies and platform systems our products and content are based upon, and providing the most agility and mobility we can to our business lines.

One fact that is often forgotten in the fast-paced world of Internet Darlings, is that AOL had achieved a huge scale of infrastructure and technology investment long before many of these companies were gleams in the eyes of their founders.   While it may be fun and “new” to look at the tens of thousands of machines at Facebook, Google, or Microsoft – it is often overlooked that AOL had tens of thousands of machines (and still does!) and solved many of the same problems years ago.  To be honest it was a personal revelation for me when I joined.  There are few companies who have had to grow and operate at this kind of scale and every approach is a bit unique and different.  It was an interesting lesson, even for one who had a ton of experience doing something similar in “Internet Darling” infrastructures.

AOL has been around for over 27 years. In technology circles, that's like going back almost ten generations. Almost three decades of "stuff". The stuff was not only gear and equipment from the natural growth of the business, but included the expansion of features and functionality of long-standing services, increased systems interdependencies, and operational, technological, and programmatic "cruft" as new systems/processes/technologies were built upon or bolted onto older systems.

This "cruft" adds significant complexity to your operating environment and can truly limit your organization's agility. As someone tasked with making all this better, it struck me that we actually had at least two problems to solve: the platform and foundation for the future, and a method and/or strategy for addressing the older products, systems, and environments while increasing our overall agility as a company.

These are hard problems. People have asked why I haven't blogged externally in a while. This is the kind of challenge, with multiple layers of challenges underneath, that can keep one up at night. From a strategy perspective, do you target the new first? Do you target the legacy environments to reduce the operational drag? Or do you try to define a unified strategy to address both? That last option is a lot harder and generally more complex, but the potential payoff is huge. Luckily I have a world-class team at AOL, and together we built and entered our own cocoon and busily went to work. We have gone down the path of changing out technology platforms, operational processes, and outdated ways of thinking about data centers, infrastructure, and our overall approach, fighting forward every inch on this idea of unified infrastructure.

It was during this process that I came to realize that our particular legacy challenge, while at “Internet” scale, was more closely related to the challenges of most corporate or government environments than the biggest Internet players.  Sure we had big scale, we had hundreds of products and services, but the underlying “how to get there from here” problems were more universally like IT challenges than scaling out similar applications across commoditized infrastructure.   It ties into all the marketing promises, technological snake oil, and other baloney about the “cloud”.  The difference being that we had to quickly deliver something that worked and would not impact the business.  Whether we wanted to or not, we would be walking down some similar roads facing most IT organizations today.

As I look at the challenges facing modern IT departments across the world, their ability to "go to the cloud" or make use of new approaches is also securely anchored by the "cruft" of their past. Sometimes that cruft is so thick that the organization cannot move forward. We were there; we were in the same boat. We aren't out of it yet, but we have made some developments that I think are pretty significant, and I intend to share those learnings where appropriate.

 


ATC IS BORN

Last week we launched a brand new data center facility we call ATC. This facility is fundamentally built upon the work that we have been doing around our own internal cloud technologies, shifts in operational process and methodology, and targeting our ability to be extremely agile in our new business model. It represents a model for how to migrate the old, prepare for the new, and provide a platform upon which to build our future.

Most people ignore the soft costs when looking at adoption of different cloud offerings; operational impacts are typically considered as afterthoughts. What if you built those requirements in from day one? How would that change your design? Your implementation? Your overall strategy? I believe that ATC represents that kind of shift in thinking, at least for us internally.

One of the key foundations for our ATC facility is our cloud platform and automation layer. I like to think about this layer as a little bit country and a little bit rock and roll. There is tremendous value in the learnings that have come before, and nowhere is this more self-evident than at AOL. As I mentioned, the great minds of the past (as well as those in the present) invested in many great systems that made this company a giant in the industry. There are many such systems here, but one of the key ones in my mind is the Configuration Management System. All organizations invest significantly in this type of platform. If done correctly, its uses can span from a rudimentary asset management system to cost allocation, dependency mapping, detailed configuration and environmental data, and, in some cases like ours, the base foundation leading us into the cloud.

Many companies I speak with abandon this work altogether or live in a strange split/hybrid model where they treat "cloud" as different. In our space, new government regulations, new safe harbor laws, and the like continue to drive the relevance of a universal system acting as a central authority. The fact that this technology actually sped our development efforts in this automation cannot be ignored.
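For illustration, a single entry in such a universal system might carry fields like the ones below. This is a minimal sketch with field names of my own invention; it is not AOL's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# One record shape serving as the central authority for both physical and
# virtual assets: asset identity, location, cost allocation, dependencies,
# and detailed configuration. Field names are illustrative only.

@dataclass
class CMDBEntry:
    asset_id: str
    kind: str                   # "physical" or "virtual" - same record shape
    location: str               # data center / region, for regulatory questions
    cost_center: str            # cost allocation
    service: str                # product or service the asset supports
    depends_on: List[str] = field(default_factory=list)   # dependency mapping
    config: Dict[str, str] = field(default_factory=dict)  # detailed configuration

entry = CMDBEntry(
    asset_id="vm-000123",
    kind="virtual",
    location="ATC",
    cost_center="homepage",
    service="news-frontend",
    depends_on=["db-cluster-07", "cache-tier-02"],
    config={"os": "linux", "image": "frontend-v42"},
)
print(entry.asset_id, entry.location, entry.depends_on)
```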

We went from provisioning servers in days to getting base virtual machines up and running in under 8 seconds. Want service and application images (for established products)? Add another 8 seconds or so. Want to roll it into production globally (changing global DNS, load balancing, and security)? Let's call that another minute to roll out. We used Open Source products and added our own development glue into our own systems to make all this happen. I am incredibly proud of my cloud teams here at AOL, because what they have been able to do in such a relatively short period of time is roll out a world-class cloud and service provisioning system that can be applied to new efforts and platforms or to our older products. Better yet, the provisioning systems were built to be universal, so that if required we can do the same thing with stand-alone physical boxes or virtual machines. No difference. Same system. This technology platform was recently recognized by the Uptime Institute at its last Symposium in California.
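The flow roughly breaks down into three steps: base VM, service image, global rollout. The sketch below simulates that sequence; the function bodies are placeholders, and the timings in the comments simply echo the rough figures above rather than measuring any real system.

```python
import time

# A rough simulation of the provisioning sequence described above. The
# bodies are placeholders; nothing here talks to a real provisioning system.

def create_base_vm(name: str) -> str:
    time.sleep(0.1)              # ~8 seconds in the real system
    return f"{name}.cloud.internal"

def apply_service_image(host: str, image: str) -> None:
    time.sleep(0.1)              # another ~8 seconds for the application image

def roll_into_production(host: str) -> None:
    time.sleep(0.1)              # ~1 minute: global DNS, load balancing, security

def provision(name: str, image: str) -> str:
    host = create_base_vm(name)
    apply_service_image(host, image)
    roll_into_production(host)
    return host

if __name__ == "__main__":
    start = time.time()
    host = provision("web-042", "homepage-frontend")
    print(f"{host} live in {time.time() - start:.1f}s (simulated)")
```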


This technology was put to the test recently with the earthquake that hit the East Coast of the United States. While thankfully the damage was minimal, the tremor of Internet traffic was incredible. The AOL homepage, along with our news sites, started to get hammered with traffic and requests. In the past this would have required a massive people effort to provision more capacity for our users. With the new technology in place we were able to start adding additional machines to take the load extremely quickly, with very minimal impact to our users. In this particular case these machines were provisioned from our systems in existing data centers (not ATC), but the technology is the same.
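The sizing decision itself can be as simple as the toy calculation below. The per-host threshold and traffic figures are arbitrary assumptions; the additional hosts would then be brought up through the same kind of provisioning automation sketched earlier.

```python
# A toy scale-out calculation: when traffic jumps, how many hosts to add to
# stay under a per-host request threshold. Figures are arbitrary assumptions.

MAX_REQS_PER_HOST = 5_000

def hosts_to_add(current_hosts: int, requests_per_sec: int) -> int:
    """Additional hosts needed to keep load per host under the threshold."""
    required = -(-requests_per_sec // MAX_REQS_PER_HOST)   # ceiling division
    return max(0, required - current_hosts)

# Normal day: traffic fits on the existing pool.
print(hosts_to_add(current_hosts=20, requests_per_sec=80_000))    # 0
# Traffic triples after the earthquake: add capacity immediately.
print(hosts_to_add(current_hosts=20, requests_per_sec=240_000))   # 28
```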

This kind of technology and agility has some interesting side effects too. It allows your organization to move much more quickly and aggressively than ever before. I have seen Jevons paradox manifest itself over and over again in the technology world. For those of you who need a refresher, Jevons paradox is the proposition that technological progress that increases the efficiency with which a resource is used tends to increase (rather than decrease) the rate of consumption of that resource.

It's like when car manufacturers started putting miles per gallon (MPG) efficiency ratings on autos: the direct result was not a reduction in driving, but rather an overall increase in travel.

ATC officially launched on October 1, 2011, and it took all of an hour for almost 100 virtual machines to be deployed to it as soon as it was "turned on". It has since long passed that mark, and in fact usage of this technology is growing faster than we can coordinate executive schedules for our ribbon-cutting ceremony this week.

While the cloud development and technology efforts are cornerstones of the facility, it is not this work alone that makes it unique. After all, however slick our virtualization and provisioning systems are, and however deeply integrated they are into our internal tools and configuration management systems, those characteristics in and of themselves do not reflect the true evolution that ATC represents.

ATC is a 100% lights-out facility. There are absolutely no employees stationed at the facility full time, contract or otherwise. The entire premise is that we have moved from a reactive support model to a proactive or planned-work support model. If you compare this with other facilities (including some I built myself in the past), there are always personnel on site, even if contractors. This has fundamentally changed how we operate our data centers and how, what, and when we do our work, and has driven down the overall costs to operate our environments. Many of these are efficiencies and approaches I have used before (100% pre-racked, vendor-integrated gear and systems integration), while others are fundamentally brand new. These changes have not been easy, and a ton of credit goes to our operations and engineering staff in the data centers and across the Technology Operations world here at AOL. It is always culturally tough to be open to fundamentally changing business as usual. Another key aspect of this facility and infrastructure is that from a network perspective it is nearly 100% non-blocking. My network engineers, being network engineers, pointed out that it is not completely non-blocking for a few reasons, but I can honestly say that the network topology is the closest I have seen to "completely" non-blocking deployed in a real network environment, especially compared to the industry standard of 2:1.
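If the 2:1 figure is unfamiliar, oversubscription is just the ratio of server-facing bandwidth to uplink bandwidth at a switch. The sketch below shows that arithmetic with illustrative port counts and speeds, not our actual topology.

```python
# Oversubscription at a leaf switch: server-facing bandwidth divided by
# uplink bandwidth. 1:1 is non-blocking. Port counts here are illustrative.

def oversubscription(server_ports: int, server_gbps: int,
                     uplink_ports: int, uplink_gbps: int) -> float:
    downlink = server_ports * server_gbps
    uplink = uplink_ports * uplink_gbps
    return downlink / uplink

# A 2:1 design, the "industry standard" mentioned above.
print(oversubscription(48, 10, 6, 40))   # -> 2.0
# A 1:1 (non-blocking) design: uplink capacity matches server capacity.
print(oversubscription(48, 10, 12, 40))  # -> 1.0
```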

Another incredible aspect of this new data center facility and the technology deployed is our ability to Quick Launch Compute Capacity.  The total time it took to go from idea inception (no data center) to delivering active capacity to our internal users was 90 days.  In my mind this is made even more incredible by the fact that this was the first time all of these work-streams came together, including the unified operations deployment model and all of the physical aspects of just getting iron to the floor.    This time frame was made possible by a standardized, modular way of building out our compute capacity in logical segments based upon the infrastructure cloud type being deployed (low-tier, mid-tier, etc.).   This approach has given us a predictability in deployment speed and cost which, in my opinion, is unparalleled.
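To give a feel for what "standardized and modular" means in practice, here is a minimal sketch of the idea; the tier names, rack counts, and timelines are hypothetical placeholders, not our actual bill of materials:

```python
# Each module is a repeatable unit of compute; build-out becomes a matter of
# multiplying modules rather than designing each deployment from scratch.
MODULES = {
    "low-tier":  {"racks": 4,  "servers_per_rack": 40, "days_to_deploy": 30},
    "mid-tier":  {"racks": 8,  "servers_per_rack": 32, "days_to_deploy": 45},
    "high-tier": {"racks": 12, "servers_per_rack": 24, "days_to_deploy": 60},
}

def plan(tier, module_count):
    """Predictable capacity and schedule from a standardized module definition."""
    m = MODULES[tier]
    return {
        "servers": module_count * m["racks"] * m["servers_per_rack"],
        "days": m["days_to_deploy"],   # repeatable modules keep the schedule flat
    }

print(plan("mid-tier", 2))
```

The value is not in the numbers themselves but in the predictability: once a module is defined, cost and time to capacity become a multiplication problem instead of a bespoke engineering project.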

The culmination of all of this work is the result of some incredible teams devoted to the desire to effect change, a little dash of renegade engineering, a heaping helping of new perspective, blood, sweat, tears and vision.   I am extremely proud of the teams here at AOL for delivering this ground-breaking achievement.   But then again, I am more than a bit biased.   I have seen the passion of these teams manifested in some incredible technology.

As with all things like this, it’s been a journey and there is still a bunch of work to do.  Still more to optimize.  Deeper analysis and easier aggregation for stubborn legacy environments.   We have already set our sights on the next generation of cloud development.  But for today, we have successfully built a new foundation upon which even more will be built.  For those of you who were not able to attend the Uptime Symposium this year, I will be putting up some videos that give you some flavor of our work driving a low-cost cloud compute and provisioning system built from Open Source components.

 

\Mm

Chaos Monkeys, Donkeys and the Innovation of Action

Last week I once again had the pleasure of speaking at the Uptime Institute’s Symposium.  As one of the premier events in the Data Center industry, it is definitely one of those conferences that is a must-attend to get a view into what’s new, what’s changing, and where we are going as an industry.  Having attended the event numerous times in the past, this year I set out on my adventure with a slightly different agenda.

Oh sure, I would definitely attend the various sessions on technology, process, and approach.  But this time I was also going with the intent to listen equally to the presenters and to the scuttlebutt, side conversations, and hushed whispers of the attendees.   Think of it as a cultural experiment in being a professional busybody.  As I wove my way from session to session I grew increasingly anxious that while the topics were of great quality and discussed much-needed areas of improvement in our technology sector, most of them were issues we have covered, talked about, and have been dealing with as an industry for many years.   In fact I was hard pressed to find anything of real significance in the “new” category.   These thoughts were mirrored in the side conversations and hushed whispers I heard around the various rooms as well.

One of the new features of Symposium is that the 451 Group has opted to expand the scope of the event to be more far-reaching, covering all aspects of the issues facing our industry.   It has brought in speakers from Tier 1 Research and other groups that have added an incredible depth to the conference.    With that depth came some really good data.   In many respects the data reflected (in my interpretation) that while technology and processes are improving in small pockets, our industry ranges from stagnant to largely slow to act.  Despite mountains of data showing energy efficiency benefits, resulting cost benefits, and the like, we just are not moving the proverbial ball down the field.

In a purely unscientific poll I was astounded to find that some of the most popular sessions were directly related to those folks who have actually done something.  Those who took new technologies (or old technologies) and put them into practice were roundly more interesting than the more generic technology conversations, giving very specific attention to detail on how they accomplished the tasks at hand, what they learned, and what they would do differently.   Most of these “favorites” were not necessarily in those topics of “bleeding edge” thought leadership but specifically the implementation of technologies and approaches we have talked about at the event for many years.

If I am honest, one of the sessions that surprised me the most was our own.   AOL had the honor of winning an IT Innovation Award from Uptime, and as a result the teams responsible for driving our cloud and virtualization platforms were invited to give a talk about what we did, what the impact was, and how it all worked out.   I was surprised because I was not sure how many people would come to this side session, listen to the presentation, or find it relevant.   Of course I thought it was relevant (we were, after all, going to get a nifty plaque for the achievement), but to my surprise the room was packed full, ran out of chairs, and had numerous people standing for the presentation.   During the talk we had a good interaction of questions from the audience, and after the talk we were inundated with people coming up to dig into more details.  We had many comments on the usefulness of the talk because we were giving real-life experiences in making the kinds of changes that we as an industry have been talking about for years.  Our talk and adoption of technology even got a little coverage in some of the industry press such as Data Center Dynamics.

Another session that got incredible reviews was the presentation by Andrew Stokes of Deutsche Bank, who guided the audience through their adoption of a 100% free-air-cooled data center in the middle of New York City.  Again, the technology here was not new (I had built large-scale facilities using this approach in 2007), but the point was that Andrew and the folks at Deutsche Bank actually went out and did something.   Not someone building large-scale cloud facilities, not some new experimental type of server infrastructure.  Someone who used this technology to service the IT equipment that everyone uses, in a fairly standard facility, and who actually went ahead and did something innovative.  They put into practice something that others have not. Backed by facts, data, and real-life experiences, the presentation went off incredibly well and was roundly applauded by those I spoke with as one of the most eye-opening presentations of the event.

By listening to the audiences, the hallway conversations, and the multitude of networking opportunities throughout the event, a pattern started to emerge, a pattern that reinforced the belief I was already coming to in my mind.   Despite a myriad of talks on very cool technology, applications, and evolving thought-leadership innovations, the most popular and most impactful sessions seemed to center on those folks who actually did something, not with the new bleeding-edge technologies, but utilizing those recurring themes that have carried from Symposium to Symposium over the years.   Air-side economization?  Not new.   Someone (outside Google, Microsoft, Yahoo, etc.) doing it?  Very new, very exciting.  It was what I am calling the Innovation of ACTION: actually doing those things we have talked about for so long.

While this Innovation of Action had gotten many people buzzing at the conference, there was still a healthy population of people downplaying those technologies.  Downplaying their own ability to do those things.    Re-stating the perennial dogmatic chant that these types of things (essentially any new idea post-2001, in my mind) would never work for their companies.

This got me thinking (and a little upset) about our industry.  If you listen to those general complaints, and combine them with the data showing that we have been mostly stagnant in adopting these new technologies, we really only have ourselves to blame.   There is a pervasive defeatist attitude amongst a large population of our industry who view anything new with suspicion, or surround it with the fear that it will ultimately take their jobs away, even when the technologies or “new things” aren’t very new any more.  This phenomenon is clearly visible in any conversation around ‘The Cloud’ and its impact on our industry.    The data center professional should be front and center in any conversation on this topic, but more often than not self-selects out of the conversation because they view it as more of an application thing, or more IT than data center.   Which is of course complete bunk.   Listening to those in attendance complain that the ‘Cloud’ is going to take their jobs away, or that only big companies like Google, Amazon, Rackspace, or Microsoft would ever need them in the future, was driving me mad.   As my keynote at Uptime was to be centered around a Cloud survival guide, I had to change my presentation to account for what I was hearing at the conference.

In my talk I tried to focus on what I felt to be the emerging camps at the conference.    For the first, I placed a slide prominently featuring Eeyore (of Winnie the Pooh fame) and captured many of the quotes I had heard at the conference referring to how the Cloud and new technologies were something to be mistrusted rather than an opportunity to help drive the conversation.     I then stated that we as an industry were an industry of donkeys, an assessment that seems to be backed up by the data.   I have to admit, I was a bit nervous calling a room full of perhaps the most dedicated professionals in our industry a bunch of donkeys, but I always call it like I see it.

I contrasted this with those willing to evolve their thinking forward and embrace that Innovation of Action by highlighting the Cloud example of Netflix.   When Netflix moved heavily into the cloud they clearly wanted to evolve past the normal IT environment and build real resiliency into their product.   They did so by creating a rogue process (on purpose) called the Chaos Monkey, which randomly shut down processes and wreaked havoc in their environment.   At first the Chaos Monkey was painful, but as they architected around those impacts their environments got stronger.   This was no ordinary IT environment.  This was something similar, but new.  The Chaos Monkey creates action, results in action, and on the whole moves the ball forward.
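To make the concept concrete, here is a minimal, hypothetical sketch of a Chaos-Monkey-style tool (not Netflix's actual implementation): it randomly selects members of a fleet to terminate so the surrounding architecture has to prove it can survive the loss.

```python
import random

def chaos_monkey(instances, kill_probability=0.05, dry_run=True):
    """Randomly pick instances to terminate, forcing the architecture around
    them to demonstrate it can tolerate failure."""
    for instance in instances:
        if random.random() < kill_probability:
            if dry_run:
                print(f"[dry run] would terminate {instance}")
            else:
                terminate(instance)   # hypothetical hook into your own provisioning API

def terminate(instance):
    raise NotImplementedError("wire this to your own environment")

if __name__ == "__main__":
    fleet = [f"web-{i:03d}" for i in range(20)]
    chaos_monkey(fleet)   # run in dry_run mode until the team is ready for real failures
```

The mechanism is trivial; the hard (and valuable) part is the organizational decision to let it run and to fix everything it exposes.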

Interestingly, after my talk I literally had dozens of people come up and admit they had been donkeys, and offer to reconnect next year to demonstrate what they had done to evolve their operations.

My challenge to the audience at Uptime, and ultimately my challenge to you, the industry, is to stop being donkeys.   Let’s embrace the Innovation of Action and evolve into our own versions of Chaos Monkeys.    Let’s do more to put the technologies and approaches we have talked about for so long into action.    Next year at Uptime (and across a host of other conferences) let’s highlight those things that we are doing.  Let’s put our Chaos Monkeys on display.

As you contemplate your own job, whether IT or Data Center professional: are you a Donkey or a Chaos Monkey?

\Mm

Preparing for the Cloud: A Data Center and Operations Survival Guide


This May, I once again have the distinct honor of presenting at the Uptime Institute’s Symposium, which will be held in Santa Clara, CA from May 9 through the 12th.  My primary topic is entitled ‘Preparing for the Cloud: A Data Center Survival Guide.’   I am really looking forward to this presentation on two fronts.

First, it will allow me to share some of the challenges, observations, and opportunities I have seen over the last few years and package them up for Data Center Operators and IT professionals in a way that is truly relevant to how they can start preparing for the impact on their production environments. The whole ‘cloud’ industry is now rife with competing definitions, confusing marketing, and a broad spectrum of products and services meant to cure all ills. To your organization’s business leaders the cloud means lower costs, quicker time to market, and an opportunity to streamline IT Operations and reduce or eliminate the need for home-run data center environments. But what is the true impact on the operational environments? What plans do you need to have in place to ensure this kind of move can be successful? Is your organization even ready to make this kind of move? Is the nature of your applications and environments ‘Cloud-Ready’? There are some very significant things to keep in mind when looking into this approach, and many companies have not thought them all through.  My hope is that this talk will help prepare professionals with the necessary background and questions to ensure they are armed with the correct information to be an asset to the conversation within their organizations.

The second front is to really dig into the types of services available in the market and how to build an internal scorecard to ensure that your organization is approaching the analysis as a true apples-to-apples comparison.   So often I have heard horror stories of companies caught up in the buzz of the Cloud and pursuing devastating cloud strategies that end up far more expensive than what they had to begin with.  The cloud can be a powerful tool and approach to serve the business, but you definitely need to go in with both eyes wide open.
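To illustrate the scorecard idea, here is a minimal sketch with hypothetical criteria and weights; the point is simply to force every option, including staying in-house, through the same weighted comparison:

```python
# Weight each factor by what matters to your business; the criteria, weights,
# and 1-5 ratings below are placeholders, not a recommendation.
CRITERIA = {
    "monthly_cost": 0.35,
    "operational_fit": 0.25,
    "security_compliance": 0.25,
    "migration_effort": 0.15,
}

def score(option, ratings):
    """ratings: criterion -> 1..5 rating for this option."""
    total = sum(CRITERIA[c] * ratings[c] for c in CRITERIA)
    return option, round(total, 2)

print(score("in-house",    {"monthly_cost": 2, "operational_fit": 5,
                            "security_compliance": 5, "migration_effort": 5}))
print(score("public_iaas", {"monthly_cost": 4, "operational_fit": 3,
                            "security_compliance": 3, "migration_effort": 2}))
```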

I will try to post some material in the weeks ahead of the event to set the stage for the talk.  As always, if you are planning on attending Symposium this year, feel free to reach out to me if you see me walking the halls.

\Mm

I’ve Got Mail….A new Aol.

You may have seen the announcement today about my recent decision to join the new leadership team at Aol.  To some of my friends in the Technorati, and most specifically the Valley, this move probably seems very contrarian.  Having built some of the largest cloud infrastructures in the world and re-aligned operational processes at massive scale, Aol at first stroke may seem an odd choice.  I have worked in some of the largest multi-national companies in the world, I have successfully (and unsuccessfully) launched start-ups, and I have been a cost center and carried a P&L.  I think I have a pretty good understanding of the range and complexity of challenges (especially from a technology perspective) from small business to large.   Across the spectrum of these types and sizes of companies you get a different feel.   Different cultures.  Different attitudes.    Different vibes.

Aol is aggressively moving to redefine itself in the industry, to significantly transform and morph itself into a world that Aol itself helped create and define over 25 years ago.   There is no arguing that the first true scale challenges in dealing with the Internet at large were experienced by those first AOL’ers as they had to deal with numbers of users never before seen in our industry.  They pushed the boundaries of technology, they pushed the boundaries of operations, they created whole new paradigms.  To reinvent itself in a market with such competition and such diversity is a huge challenge.

One of the most surprising things to me is that vibe-thing I talked about a few moments ago. When walking around the company you cannot help but notice that it definitely has more of a technology start-up feel to it.   It’s palpable.  One of the folks I ran into called it a “start-around”: a combination of a start-up and a turn-around.  Perhaps that’s the best description I have for that vibe.   Sure, things have been tough, and sure, there is a lot of legacy to work through, but the level of commitment among the folks who are here is incredible.  More so than that, it’s a culture of believers.  It’s all the self-sacrifice and personal investment you find in a startup, but with a team of seasoned veterans.  It’s quite unique in my experience.

As I mentioned, Aol has long held a place of respect in terms of operational best practices at scale, and a culture that recognized the importance of technology in the delivery of its mission.  Tim Armstrong, the CEO and Google veteran, has built an incredible team of passionate technology veterans from places like Google, Microsoft, and others.  The mission is focused.  The mission is deliberate.  The mission is clear.   The mission is hard.   It’s a huge challenge.   It’s the kind of challenge I love.   If you think it’s impossible you are only encouraging my energy more.  I could have taken a safe bet.  But where is the excitement?  Where is the challenge?  As the saying goes, “A ship is safe in the harbour, but that’s not what ships are for!”   This ship is setting sail, and my commitment is that not only will we find a new world, we will define it!

In the coming days/weeks/months, I hope to share many of the exciting things we will be endeavoring to accomplish and give you a real taste of some of the big changes I will be attempting.   As always, technology and operational processes will be key to the success of the mission the company is on, and I have some very definite ideas on how we can leapfrog current thinking in this space and ensure that our technology and operational approach is not only a strategic asset to the business, but also industry-leading in execution.

\Mm
