On the Move: First Data

Well, after a bit of time in stealth, I am finally able to announce that I have taken the position of Chief Technology Officer at First Data.

After being asked to join the turn-around team at AOL and driving some amazing results over the past four years, it was time for a change.  I absolutely loved my time there and the people were amazing.  Success has a quality all its own, and being part of something that unique was an incredibly rewarding personal experience for me.

The move to First Data is incredibly exciting for many different reasons, but one of the key drivers is that I feel this industry is ripe for change.  It's a move from running and building large-scale Internet products and infrastructure to the Financial Services industry.

For those of you who may be unaware of who First Data is, or what they do, it's probably easiest to think of it this way: one out of every two credit or debit card transactions across the world touches our infrastructure at some point in the transaction process. From a transactional-scale perspective it's very similar to what I have been used to at companies like AOL, Microsoft, and Disney.  The difference, of course, is that these transactions are a little more important than checking your favorite sports scores or getting your e-mail.

My challenge, of course, is to drive automation. To build a platform that makes infrastructure a decisive and differentiating base from which to launch products and services.  To create a learning infrastructure and software ecosystem that gets smarter over time.   In large part, the question is how you blend the agility of the Internet with the maturity and complexity of the Financial Services industry.   Sure, it's a complex technical problem space, but it has some very interesting business and regulatory challenges as well.   In many respects, dealing with Safe Harbor, regulatory, and tax issues has been part of my job for many years.  The challenge now is to take that automation to the next level.

To that end, I have to say that First Data is assembling an amazingly formidable team to drive this change.  I will be reporting to the President of First Data, Guy Chiarello.  Guy is a universally respected technology leader in the Financial Services industry, with top posts at Morgan Stanley and JPMorgan Chase.  Technology will be key to the success of the company, and the leadership team is a unique blend of technology-savvy leaders from across the world.

The new adventure begins!

You can follow the link to the official press announcement.

Along with the initial pickup from the Wall Street Journal.

\Mm

Google Purchase of Deep Earth Mining Equipment in Support of ‘Project Rabbit Ears’ and Worldwide WiFi Availability…

(10/31/2013 – Mountain View, California) – Close examination of Google's data center construction-related purchases has revealed the procurement of large-scale deep earth mining equipment.   While the actual need for the deep mining gear is unclear, many speculate that it has to do with a secretive internal project that has come to light, known only as Project: Rabbit Ears.

According to sources not at all familiar with Google's technology infrastructure strategy, Project Rabbit Ears is the natural outgrowth of Google's desire to provide ubiquitous infrastructure worldwide.   On the surface, these efforts seem consistent with other incorrectly speculated projects, such as Project Loon, Google's attempt to provide Internet services to residents in the upper atmosphere through the use of high-altitude balloons, and a project that has only recently become visible and the source of much public debate, known as ‘Project Floating Herring’, in which a significantly sized floating barge with modular container-based data centers has apparently been spied sitting in the San Francisco Bay.

“You will notice there is no power or network infrastructure going to any of those data center shipping containers,” said John Knownothing, Chief Engineer at Dubious Lee Technical Engineering Credibility Corp.  “That’s because they have mastered wireless electrical transfer at the large multi-megawatt scale.”

Real estate rates in the Bay Area have increased almost exponentially over the last ten years, making the construction of large-scale data center facilities an expensive endeavor.  During the same period, the Port of San Francisco has unfortunately seen a steady decline in its import/export trade.  After a deep analysis, it was discovered that docking fees in the Port of San Francisco are considerably undervalued and will provide Google with an incredibly cheap real estate option in one of the most expensive markets in the world.

It will also allow them to expand their use of renewable energy through tidal power generation built directly into the barge's hull.   “They may be able to collect as much as 30 kilowatts of power sitting on top of the water like that,” continues Knownothing, “and while none of that technology is actually visible, possible, or exists, we are certain that Google has it.”

While the technical intricacies of the project fascinate many, the initiative does have its critics, like Compass Data Center CEO Chris Crosby, who laments the potential social aspects of this approach: “Life at sea can be lonely, and no one wants to think about what might happen when a bunch of drunken data center engineers hit port.”  Additionally, Crosby mentions the potential for a backslide in human rights: “I think we can all agree that the prospect of being flogged or keelhauled really narrows down the possibility for those outage-causing human errors. Of course, this sterner level of discipline does open up the possibility of mutiny.”

However, the public launch of Project Floating Herring will certainly need to await the delivery of the more shrouded Project Rabbit Ears, for various reasons.  Most specifically, the primary reason for the development of this technology is so that Google can ultimately drive the floating facility out past twelve miles into international waters, where it can then dodge all national, regional, and local taxation, as well as the safe harbor and privacy legislation of any country or national entity on the planet that would use its services.   In order to realize that vision, in the current network paradigm, Google would need exceedingly long network cables to attach to Network Access Points and carrier connection points as the facilities drive through international waters.

This is where Project Rabbit Ears becomes critical to the Google strategy.   Making use of the deep earth mining equipment, Google will be able to drill deep into the Earth's crust, into the mantle, and ultimately build a large Network Access Point near the Earth's core.  This planetary WiFi solution will be centrally located to cover the entire Earth without the use of regional WiFi repeaters.  Google's floating facilities could then gain access to unlimited bandwidth and provide yet another consumer-based monetization strategy for the company.

Knownothing also speculates that such a move would allow Google to make use of enormous amounts of free geothermal power and almost single-handedly become the greenest power user on the planet.   Speculation also abounds that Google could then sell that power through its as-yet-uninvented large-scale multi-megawatt wireless power transfer technology, as unseen on its floating data centers.

Much of the discussion around this kind of technology innovation driven by Google has been given credible amounts of veracity and discussed by many seemingly intelligent technology news outlets and industry organizations who should intellectually know better, but prefer not to acknowledge the inconvenient lack of evidence.

 

\Mm

Editor's Note: I have many close friends in the Google infrastructure organization and firmly believe that they are doing some amazing, incredible work in moving the industry along, especially in solving problems at scale.   What I find simply amazing is how often, in the search for innovation, our industry creates things that may or may not be there and convinces itself so firmly that they exist.

2014: The Year Cloud Computing and Internet Services Will Be Taxed. A.K.A. Je déteste dire ça. Je vous l’avais dit. (“I hate to say it. I told you so.”)

 


It's one of those times I really hate to be right.  As many of you know, I have been talking about the various grassroots efforts afoot across many of the EU member countries to start driving a more significant tax regime on Internet-based companies.  My predictions over the last few years have been more cautionary tales, based on what I saw happening from a regulatory perspective on a much smaller scale, country to country.

Today's Wall Street Journal has an article discussing France's move to begin taxation of Internet-related companies that derive revenue from users and companies across the entirety of the EU, while holding those companies responsible to the tax base in each country.   This means such legislation is likely to become quite fractured and tough for Internet companies to navigate.  The French proposition asks the European Commission to draw up proposals by the spring of 2014.

This is likely to have a very interesting effect (read: cost increases) across just about every aspect of Internet and Cloud Computing resources.  From a business perspective this is going to increase costs, which will likely be passed on to consumers in small but interesting ways.  Internet advertising will need to be differentiated on a country-by-country basis, and advertisers will end up having different cost structures. Cloud Computing companies will DEFINITELY need to understand where customer instances were running, and whether or not they were making money.  Potentially more impactful, customers of Cloud Computing may be held to account for taxation obligations that they did not know they had!  Things like data center site selection are likely going to become even more complicated from a tax-analysis perspective, as countries with higher populations may become no-go zones (perhaps) or require the passage of even more restrictive laws.
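To make the instance-location point concrete, here is a minimal sketch of per-country tax attribution for cloud instances. Everything in it is hypothetical: the instance records, the countries, and especially the tax rates are invented for illustration and are not actual EU tax rules.

```python
from collections import defaultdict

# Hypothetical billing records: (instance_id, country where the
# instance ran, monthly revenue in EUR). Purely illustrative data.
instances = [
    ("i-001", "FR", 1200.0),
    ("i-002", "DE", 800.0),
    ("i-003", "FR", 450.0),
    ("i-004", "IE", 2000.0),
]

# Invented per-country digital-revenue tax rates (not real figures).
tax_rates = {"FR": 0.03, "DE": 0.025, "IE": 0.0}

def tax_exposure(instances, tax_rates):
    """Sum revenue by the country each instance ran in, then apply
    that country's rate to estimate the local tax exposure."""
    revenue = defaultdict(float)
    for _, country, eur in instances:
        revenue[country] += eur
    return {c: round(r * tax_rates.get(c, 0.0), 2)
            for c, r in revenue.items()}

print(tax_exposure(instances, tax_rates))
# {'FR': 49.5, 'DE': 20.0, 'IE': 0.0}
```

The arithmetic is trivial; the hard part is that the books now have to track where every instance actually ran, which is data most billing systems of that era were never built to record.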

It's not like the seeds of this haven't been around since 2005; I think most people just preferred to turn a blind eye as the seed sprouted into a full-fledged tree.   Going back to my Cat and Mouse Papers from a few years ago…  the Cat has caught the mouse; it's now the mouse's move.

\Mm

 

Author's Note: If you don't have a subscription to the WSJ, All Things Digital did a quick synopsis of the article here.

Insider Redux: Data Barn in a Farm Town

I thought I would start my first post by addressing the second New York Times article first. Why? Because it specifically mentions activities and messages sourced from me at the time when I was responsible for running the Microsoft data center program. I will try to track the timeline mentioned in the article against my specific recollections of the events, so that, as Paul Harvey used to say, you can know ‘the REST of the STORY’.

I remember my first visit to Quincy, Washington. It was a bit of a road trip for me and a few other key members of the Microsoft site selection team. We had visited a few of the local communities and power utility districts, doing our due diligence on the area at large. Our ‘heat map’ process had led us to Eastern Washington state, not very far (just a few hours) from the ‘mothership’ of Redmond, Washington. It was a bit of a crow-eating exercise for me, as just a few weeks earlier I had proudly exclaimed that our next facility would not be located on the West Coast of the United States. We were developing an interesting site selection model that would categorize and weight areas around the world. It would take in FEMA disaster data, fault zones, airport and logistics information, location of fiber optic and carrier presence, workforce distributions, regulatory and tax data, water sources, and power. This was going to be the first real construction effort undertaken by Microsoft. The cost of power was definitely a factor, as the article calls out. But just as important was the generation mix of the power in the area: in this case a predominance of hydroelectric, with a low-to-no carbon footprint (rivers, it turns out, actually give off carbon emissions, as I came to find out). Regardless, the generation mix was and would continue to be a hallmark of the program's site selection while I was there. The crow-eating exercise began when we realized that the ‘greenest’ area per our methodology was actually located in Eastern Washington, along the Columbia River.

We had a series of meetings with real estate folks, the local Grant County PUD, and the economic development folks of the area. Back in those days the secrecy around who we were was paramount, so we kept our identities and that of our company secret, like geeky secret agents on an information-gathering mission. We would not answer questions about where we were from, who we were, or even our names. We ‘hid’ behind third-party agents who took everyone’s contact information and acted as brokers of information. Those were the early days; the cloak and dagger would soon come out of the process as it became a more advantageous tool to be known in tax negotiations with local and state governments.

During that trip we found the perfect parcel of land: 75 acres with great proximity to local substations, just down-line from the dams on the nearby Columbia River. It was November 2005. As we left that day and headed back, it was clear that we felt we had found site selection gold. As we started to prepare a purchase offer, we got wind that Yahoo! was planning a trip out to the area as well. As the local folks seemingly thought that we were a bank or large financial institution, they wanted to let us know that someone on the Internet was interested in the area too. This acted like a lightning rod, and we raced back to the area and locked up the land before Yahoo had a chance to leave the Bay Area. In those early days the competition was fierce. I have tons of interesting tales of cloak-and-dagger intrigue between Google, Microsoft, and Yahoo. While it was work, there was definitely an air of something big on the horizon, a sense that we were all at the beginning of something. In many ways, many of the technology professionals involved, regardless of company, forged deep relationships and rivalries with each other.

Manos on the bean field, December 2005

The article talks about how the ‘gee-whiz moment faded pretty fast’. While I am sure that it faded in time (as all things do), I also recall the huge increase in local business as thousands of construction workers descended upon this wonderful little town, the tours we would give local folks and city council dignitaries, and a spirit of truly working together. Then, of course, there was the ultimate reduction in property taxes resulting from even our first building, and an increase in home values to boot. It's an oft-missed benefit that I am sure the town of Quincy and Grant County have continued to enjoy as the data center cluster added Yahoo, Sabey, IAC, and others. I warmly remember the opening day ceremonies and ribbon cutting, and a sense of pride that we did something good. Corny? Probably, but that was the feeling. There was no talk of generators. There were no picket signs; in fact, the Washington State EPA had no idea how to deal with a facility of this size, and I remember openly working in partnership with them. That of course eventually wore off to the realities of life. We had a business to run, the city moved on, and concerns eventually arose.

The article calls out a showdown between Microsoft and the Power Utility District (PUD) over a fine for missing a capacity forecasting target. As this happened well after I left the company, I cannot really comment on that specific matter. But I can see how that forecast could miss. Projecting power usage months ahead is more than a bit of science mixed with art. It gets into the complexity of understanding capacity planning in your data centers. How big will certain projects grow? Will they meet expectations or fall short? New product launches can be duds or massive successes. All of these things go into a model to try to forecast the growth. If you think this is easy, I would submit that NO ONE in the industry has been able to master the crystal ball. I would also submit that most small companies haven't been able to figure it out either. At least at companies like Microsoft, Google, and others you can start using the law of averages and big numbers to get close. But you will always miss, either too high or too low. Guess too low and you impact internal budgeting figures and run rates. Not good. Guess too high and you could fall short of minimum contracts with utility companies and be subject to fines.
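The asymmetry between guessing low and guessing high can be sketched as a toy cost function. The dollar figures per megawatt are invented purely for illustration; real budget impacts and utility fine schedules vary by contract.

```python
def forecast_penalty(forecast_mw: float, actual_mw: float,
                     under_cost_per_mw: float = 50_000,  # invented: budget/run-rate hit
                     over_cost_per_mw: float = 20_000) -> float:  # invented: take-or-pay fine
    """Toy penalty for missing a power-capacity forecast.

    Guess too low and the overage hits internal budgets; guess too
    high and you may miss minimum-consumption contract terms and be
    fined by the utility. Both directions cost money, just differently.
    """
    if forecast_mw < actual_mw:  # under-forecast: demand exceeded the plan
        return (actual_mw - forecast_mw) * under_cost_per_mw
    return (forecast_mw - actual_mw) * over_cost_per_mw  # over-forecast

print(forecast_penalty(40, 48))  # 8 MW under the actual -> 400000
print(forecast_penalty(48, 40))  # 8 MW over the actual  -> 160000
```

With any asymmetric penalty like this, the "optimal" forecast is not the most likely outcome but one deliberately shaded toward the cheaper miss, which is exactly why the crystal ball is so hard to master.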

In the case mentioned in the article, the approach taken, if true, would not be the smartest method, especially given the monthly electric bill for these facilities. It's a cost of doing business and largely inconsequential at the level of consumption these buildings draw. Again, if true, it was a PR nightmare waiting to happen.

At this point the article breaks out and talks about how the Microsoft experience would feel more like dealing with old-school manufacturing rather than ‘modern magic’ and diverts to a situation at a Microsoft facility in Santa Clara, California.

The article references that this situation is still being dealt with inside California, so I will not go into any detailed specifics, but I can tell you something does not smell right in the state of Denmark, and I don't mean the diesel fumes. Microsoft purchased that facility from another company. As the usage of the facility ramped up to the levels it was certified to operate at, operators noticed a pretty serious issue developing. While the building was rated to run at a certain load size, it was clear that the underground feeders were undersized, and the by-product could have polluted the soil and gotten into the water system. This was an inherited problem, and Microsoft did the right thing and took the high road to remedy it. It is my recollection that all sides were clearly in the know about the risks, and agreed to the generator usage whenever needed while the larger issue was fixed. If this has come up as an ‘air quality issue’, I personally would guess that there are politics at play. I'm not trying to be an apologist, but if true, it goes to show that no good deed goes unpunished.

At this point the article cuts back to Quincy. It's a great town, with great people. To some degree it was the winner of the Internet jackpot lottery because of the natural tech resources it sits on. I thought the figures quoted around taxes were an interesting component missed in much of the reporting I read.

“Quincy’s revenue from property taxes, which data centers do pay, has risen from $815,250 in 2005 to a projected $3.6 million this year, paying for a library and repaved streets, among other benefits, according to Tim Snead, the city administrator.”

As I mentioned in yesterday's post, my job is ultimately to get things done and deliver results. When you are in charge of a capital program as large as Microsoft's was at the time, your mission is clear: deliver the capacity and start generating value for the company. As I was presented the last crop of beans harvested from the field at the ceremony, we still had some ways to go before all construction and capacity was ready to go. One of the key missing components was the delivery and installation of a transformer for one of the substations required to bring the facility up to full service. The article notes that I was upset that the PUD was slow to deliver the capacity. Capacity, I would add, that was promised along a certain set of timelines; commitments were made and money was exchanged based upon those commitments. As you can see from the article, the money exchanged was not insignificant. If Mr. Culbertson felt that I was a bit arrogant in demanding follow-through on promises and commitments after monies and investments were made in a spirit of true partnership, my response would be ‘Welcome to the real world’. As far as being cooperative, by April the construction had already progressed 15 months from its start. Hardly a surprise, and if it was, perhaps the 11-acre building and the large construction machinery driving around town could have been a clue to the sincerity of the investment and timelines. Harsh? Maybe. Have you ever built a house? If so, then you know you need to make sure that the process is tightly managed and controlled to ensure you make the delivery date.

The article then goes on to talk about the permitting for the diesel generators. By the Department of Ecology's own admission, “At the time, we were in scramble mode to permit our first one of these data centers.”  Additionally, it also states that:

Although emissions containing diesel particulates are an environmental threat, they were not yet classified as toxic pollutants in Washington. The original permit did not impose stringent limits, allowing Microsoft to operate its generators for a combined total of more than 6,000 hours a year for “emergency backup electrical power” or unspecified “maintenance purposes.”

At the time, all this stuff was so new that everyone was learning together. I simply don't buy that this was some kind of Big Corporation versus Little Farmer thing. I cannot comment on the events of 2010, when Microsoft asked to be disconnected from the grid. Honestly, that makes no sense to me even if the PUD was working on the substation, and I would agree with the article's ‘experts’.

Well, that's my take on my recollection of events during those early days of the Quincy build-out as it relates to the articles. Maybe someday I will write a book, as the process and adventures of those early days of the birth of Big Infrastructure were certainly exciting. The bottom line is that the data center industry is amazingly complex, and the forces in play are as varied as technology, politics, people, and everything in between. There is always a deeper story. More than meets the eye. More variables. Decisions are never black and white and are always weighted against a dizzying array of forces.

\Mm

The AOL Micro-DC adds new capability

Back in July, I announced AOL's Data Center Independence Day with the release of our new ‘Micro Data Center’ approach.   In that post I highlighted the terrific work the teams put in to revolutionize our data center approach and align it completely not only to technology goals but to business goals as well.   It was an incredible amount of engineering and work to get to that point, and it would be foolish to think the work represented a ‘one and done’ type of effort.

So today I am happy to announce the roll out of a new capability for our Micro-DC – An indoor version of the Micro-DC.


While the first instantiations of our new capability were focused on outdoor environments, we were also hard at work on an indoor version with the same set of goals.   Why work on an indoor version as well?   Well, you might recall that in the original post I stated:

We are no longer tied to traditional data center facilities or colocation markets.   That doesn't mean we won't use them; it means we now have a choice.  Of course this is only possible because of the internally developed cloud infrastructure, but we have freed ourselves from having to be bolted onto or into existing big infrastructure.   It allows us to have an incredible amount of geo-distributed capacity at a very low cost point in terms of upfront capital and ongoing operational expense.

We need to maintain a portfolio of options for our products and services.  In this case, that means having an indoor version of our capabilities to ensure that our solution can live absolutely anywhere.   This will allow our footprint, automation and all, to live inside any data center colocation environment or the interior of any office building anywhere around the planet, and retain the extremely low maintenance profile we were targeting from an operational cost perspective.  In a sense you can think of it as “productizing” our infrastructure.  Could we have just deployed racks of servers, network kit, and the like, as we have always done?  Sure.   But by continuing to productize our infrastructure we continue to drive down our short-term and long-term infrastructure costs.  In my mind, productizing your infrastructure is actually the next evolution in standardization.   You can have infrastructure standards in place: server model, RAM, HD space, access switches, core switches, and the like.  But until you get to that next phase of standardizing, automating, and ‘productizing’ it into a discrete set of capabilities, you only get a partial win.
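As a sketch of what "productizing" means in practice, here is a hypothetical catalog where capacity is ordered as a discrete, versioned SKU instead of an ad-hoc parts list. The SKU names, server counts, and power figures are all invented for illustration; AOL never published its actual Micro-DC bill of materials.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InfraProduct:
    """A fixed, versioned bundle of infrastructure: the 'product'."""
    sku: str
    servers: int
    access_switches: int
    usable_kw: float

# Hypothetical catalog entries; all figures are invented.
CATALOG = {
    "MDC-OUTDOOR-V1": InfraProduct("MDC-OUTDOOR-V1", 480, 12, 150.0),
    "MDC-INDOOR-V1": InfraProduct("MDC-INDOOR-V1", 240, 6, 70.0),
}

def order_capacity(sku: str, units: int) -> dict:
    """Ordering capacity becomes a catalog lookup, not a custom build."""
    p = CATALOG[sku]
    return {"sku": sku, "units": units,
            "total_servers": p.servers * units,
            "total_kw": p.usable_kw * units}

print(order_capacity("MDC-INDOOR-V1", 3))
# {'sku': 'MDC-INDOOR-V1', 'units': 3, 'total_servers': 720, 'total_kw': 210.0}
```

The design point is that the product is frozen: planning, automation, and operations can all assume an exact bill of materials per unit, which is what makes "deliver capacity anywhere" a catalog transaction rather than a bespoke project.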

Some people have asked me, “Why didn't you begin with the interior version to start with? It seems like it would be the easier one to accomplish.”  Indeed, I cannot argue with them; it probably would have been easier, as there were far fewer challenges to solve.  You can make basic assumptions about the kind of environment an indoor solution would live in, and reduce much of the complexity.   I guess it all nets out to a philosophy of solving the harder problems first.   Once you prove the more complicated use case, the easier ones come much faster.   That is definitely the situation here.

While this new capability continues the success we are seeing in redefining the cost and operations of our particular engineering environments, the real challenge here (as with all sorts of infrastructure and cloud automation) is whether we can map similar success onto our applications and services working correctly in that space.   On that note, I should have more to post soon. Stay tuned!

 

\Mm

AOL’s Data Center Independence Day

Yesterday we celebrated Independence Day here in the United States.   It’s a day where we embrace the freedoms we enjoy as a country, look back on where we have come, and celebrate the promise of the future.   Yesterday was also a different kind of Independence Day for my teams at AOL.  A Data Center Independence Day, if you will. 

You may or may not have been following the progress of the work we have been doing here at AOL over the last 14 or so months, but the pace of change has been simply breathtaking.  One of the first things I did when I entered the company was deeply review all aspects of Operations, from data centers to network engineering, to the engineering teams supporting the products and services, and everything in between.   The net of the exercise was that AOL was probably similar to most companies out there in terms of technology mix, from the CRUFT I mentioned in a previous post to the latest technologies.  There were some incredible technologies built over the last three decades, some outdated processes and procedures, and, if I am honest, traces of a culture where the past had more meaning than the present or future.

In a very short period of time all of that changed.  We aggressively made changes to the organization, re-aligned priorities, and, perhaps most of all, created and defined a powerful collection of changes and evolutions we would need to bring about on very aggressive timelines.    These changes were part of a defined technology roadmap that broke the work we needed to accomplish into three categories.   The first category focused on the internal technical challenges and the tools we needed to build to enhance our own internal efficiencies.  The second focused on the technical challenges and aggressive things we could do to enhance and bring greater scalability to our products and services.   This would include things like additional services and technology suites for our internally developed cloud infrastructure, and other items that would allow for more rapid delivery of our products and services.   The last category of work was for the incredibly aggressive “wish list” types of changes: items that could be so disruptive, so incredibly game-changing for us, that they could redefine our work on the whole.  In fact we named this group of work “Nibiru”, after a mythical planet that is said to cross into our solar system, wreak havoc, and bring about great change.

On July 4, 2012, one of our Nibiru items arrived, and I am ecstatic to state that we achieved our “Data Center Independence Day”.  Our primary Nibiru goal was to develop and deliver a data center environment without the need for a physical building.  The environment needed to require as little physical “touch” as possible and allow us the ultimate flexibility in how we delivered capacity for our products and services. We called this effort the Micro Data Center.   If you think about the number of things that need to change to evolve to this type of strategy, it's a bit mind-boggling.


Here are just a few of the things we had to look at, change, and automate to even make this kind of achievement possible:

  • Developing an entirely new technology suite and the ability to deliver that capacity anywhere in the world with minimal to no staffing.
  • Delivering extremely dense compute capacity (think the latest technology) to give us the longest possible use of these assets once deployed in the field.
  • The ability to deliver a Micro Data Center anywhere on the planet, regardless of temperature and humidity conditions.
  • The ability to support, maintain, and administer it remotely.
  • The ability to fit into the power envelope of a normal office building.
  • Participation in our cloud environment and capabilities.
  • The processes by which these facilities are maintained and serviced.
  • And much, much more…

In my mind, it's one thing to claim a technical achievement; it's quite another to operationalize that achievement and make the process of supporting it repeatable. That's my measure of when you can REALLY declare victory.  Science experiments don't count.   It has to just plain work.    To that end, our first “beta” site for the technology was the AOL campus in Dulles, Virginia.  Out on a lonely slab of concrete in the back of one of the buildings, our future has taken shape.

Thanks in part to a lot of the work going on in the data center containerization space, we were able to jump-start much of the work relatively quickly.  In fact, the pace set by the Data Center and Technology Operations teams to deliver this achievement is more than a bit astounding.   Most, if not all, of the existing AOL data centers would fall somewhere around a traditional Tier III / Tier II Uptime Institute definition.   The teams pushed way outside their comfort zones to deliver some incredible evolutions in a very short period of time.   Of course there were steps along the way to get here, but those steps now seem to be in double time.  A few months back we announced the launch of ATC, our first completely automated facility.   The work that went into ATC was foundational to our achievement yesterday.   It allowed us to start working on the hard stuff first, namely the ‘operationalization’ of these kinds of environments, and it set the stage for how we could evolve to this next tier.   If you are curious about the specifics of our work there, feel free to click through to the ‘Breaking the Chrysalis’ post I did at the time.  You can see how the work we have been driving in our own internal cloud environments, the changes in operational procedure, and the change in thinking are additive and fundamental to our latest achievement.   It's especially interesting to note that with all of the blips and hiccups occurring in the ‘cloud industry’, like the leap second and the terrible storms on the East Coast this week that affected many data centers, ATC, our completely unmanned facility, just kept humming along with no issues (to be fair, neither did our traditional facilities), despite the fact that much of the initial negative feedback we had received was based solely on the supposed unreliability of such moves.
It goes to show how important engineering FOR operations is.  For AOL we have built this in from the start.

What does this actually buy AOL?

Ok, we stuck some computers in a box and made sure they require very little care and feeding; what does this buy us? Quite a bit, actually. Jay Moran, the Distinguished Engineer in charge of driving this effort, is always quick to point out that the problem space here is not just about the technology; it has to be a marriage with the business side as well. Obviously the inherent flexibility of the design allows us a greater number of places around the planet where we can deploy capacity, and that in and of itself is pretty revolutionary. We are no longer tied to traditional data center facilities or colocation markets. That doesn't mean we won't use them; it means we now have a choice. Of course this is only possible because of the internally developed cloud infrastructure, but we have freed ourselves from having to be bolted onto or into existing big infrastructure. It allows us to have an incredible amount of geo-distributed capacity at a very low cost point in terms of upfront capital and ongoing operational expense. This is a huge game changer. So much so, allow me to do a bit of ‘back of the napkin’ math with you. Let's call the global capacity in terms of compute, storage, etc. that we have today in our traditional environments the Total Compute Capability, or TCC. It's essentially the bandwidth for the work that we can get done. Inside the cost for TCC you have operating costs such as power, lease costs, data center facility maintenance costs, support staff, and so on. You additionally have the depreciation for the facilities themselves (or the specific build-outs, if colocating), the server and other equipment depreciation, and the rest. Let's call that baseline X. The MicroData Center strategy, built out with our latest, most dense server standards and infrastructure, would allow us to have 5X the amount of total TCC in less than 10% of the cost and physical footprint.
If you think about how this will allow us to aggregate and grow over time, it ultimately drives us to a VERY LOW operational cost structure for delivering our products and services. Additionally, it positions us for the future in very significant ways.
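To make the napkin math concrete, here is a minimal sketch of the arithmetic above. The only figures taken from the post are the ratios (5X the TCC at under 10% of the cost); the baseline values of X and TCC are placeholders, since the real numbers are obviously not public.

```python
# Napkin math: 5x the Total Compute Capability (TCC) at <10% of baseline cost.
# baseline_cost (X) and baseline_tcc are illustrative placeholders, not real figures.
baseline_cost = 1.0    # X: cost of the traditional footprint
baseline_tcc = 1.0     # TCC: total compute capability of that footprint

micro_dc_cost = 0.10 * baseline_cost   # "less than 10% of the cost"
micro_dc_tcc = 5.0 * baseline_tcc      # "5X the amount of total TCC"

cost_per_tcc_old = baseline_cost / baseline_tcc
cost_per_tcc_new = micro_dc_cost / micro_dc_tcc

improvement = cost_per_tcc_old / cost_per_tcc_new
print(round(improvement))  # roughly a 50x drop in cost per unit of capacity
```

In other words, even at the conservative edge of those two ratios, the cost per unit of compute falls by around fiftyfold, which is what makes the "VERY LOW operational cost structure" claim more than hyperbole.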

  • It redefines software architecture for greater resiliency.
  • It gives us an incredibly flexible platform for addressing privacy laws, regulatory oversight, and other such concerns, allowing us to respond rapidly.
  • It further reduces energy consumption and carbon emissions (important both as taxation evolves around the world and for ongoing operational costs).
  • It gives us the ability to drive Edge Computing delivery and potentially bypass CDNs for certain content.
  • It gives us the capability to drive ‘Community-in-a-box’ deployments: quickly launching new products in markets, expanding existing footprints like Patch on a low-cost but still hyper-local platform, and giving the Huffington Post a platform to rapidly partner and enter new markets with minimal-cost turn-ups.
  • The fact that the technology mix in our SKUs comprises compute, storage, and network capacity maximizes the number of products and services we can deploy to it.

As Always, It's Really About the People

I cannot let a post about this huge win go by without mentioning the teams involved in delivering this capability. This is not just a win for AOL, or even for the industry at large (another proof point that it can evolve if it puts its mind to changing), but above all for the Technology Teams at AOL. When I was first approached about joining AOL, my slightly sarcastic and comedic response was probably much like yours: ‘Are they still around?’ But the fact of the matter is that AOL has a vision of where it wants to go and what it wants to be. That was compelling for me personally, compelling enough for me to make the move. What has truly amazed me, however, is the dedication and tenacity of its employees. These achievements would not be possible without the outright aggressiveness the organization has taken to moving the company forward. It's always hard to assess from the outside just how hard an effort is to achieve internally. In the case of our MicroData Center strategy, the teams faced just about every kind of barrier to delivering this capacity, every kind of excuse not to make it, or even not to try. They put all of those things aside and just plain executed. If you allow me a small moment of bravado: not only did my teams simply kick ass, they did it in a way that moved the needle for the company and, in my mind, once again catapulted themselves to the forefront of operations and technology at scale. We still have a bunch of Nibiru projects to deliver, so my guess is we haven't heard the last of these big wins.

\Mm

Sites and Sounds of DataCentre2012: My Presentation, Day 2, and Final Observations


Today marked the closing set of sessions for DataCentres2012 and my keynote to the attendees. After sitting through a series of product, technology, and industry trend presentations over the last two days, I felt my talk would at the very least be something different. Before I get to that, I wanted to share some observations from the morning.

It all began with an interesting run-down of the data center and infrastructure industry trends across Europe from Steve Wallage of The BroadGroup. It contained some really compelling information and highlighted some interesting divergences between the European and US markets in terms of infrastructure adoption and trends. It looks like they have a method for those interested to get their hands on the detailed data (for purchase). The part I found particularly interesting was the significant slowdown of the wholesale data center market across Europe while colocation providers continued to do well. Additionally, the percentage changes within those providers' customer bases by category were compelling and demonstrated a fundamental shift and movement of content-related customers across the board.

This presentation was followed by a panel of European thought leaders made up mostly of those same colocation providers. Given the presentation by Wallage, I was expecting some interesting data points to emerge. While there was a range of ideas and perspectives represented on the panel, I have to say it really got me worked up, and not in a good way. In many ways I felt the responses from many (not all) on the panel highlighted a continued resistance to change in thinking around everything from efficiency to technology approach. It represented the things I despise most about our industry at large: the slow adoption of change, the warm embrace of the familiar, the outright resistance to new ideas. At one point, a woman in the front row, whom I believe was from Germany, got up to ask whether the panelists had any plans to move their facilities outside of the major metros. She referenced Christian Belady's presentation around the idea of Data as Energy and remote locations like Quincy, Washington or Lulea, Sweden, and referred to that overall approach and different way of thinking as quite visionary. Now the panel could easily have pointed out that companies like Microsoft, Google, and Facebook have much greater software-level control than a colo provider could have. Or perhaps they could have noted that most of their customers are limited in distance to existing infrastructure deployments by inefficiencies in commercial or custom internally deployed applications, such as databases architected for in-rack or in-facility response times. They did at least mention that most customers tend to be server huggers who want their infrastructure close by.

Instead, the initial response was quite strange to my mind. It was to attack the idea that any of this was “innovative” and to imply that nothing was really innovative about what Microsoft had done; the fact that they built a “mega data center” in Dublin supposedly showed that nothing innovative was really happening. Really? The adoption of 100% air-side economization is something everyone does? The deployment of containerized compute capacity is run of the mill? The concept of the industrialization of compute is old hat? I had to do a mental double take and question whether they had even been listening during ANY of the earlier sessions. Don't get me wrong, I am not trying to be an apologist for the Microsoft program; in fact, there are some tenets of the program I find myself not in agreement with. However, you cannot deny that they are doing VERY different things. It illustrated an interesting undercurrent I felt during the entire event (and maybe even our industry): I definitely got the sensation of a growing gap between users' requirements, forward roadmaps, and desires on one side, and what manufacturers and service providers are providing on the other. This panel, and a previous panel on modularization, highlighted these gulfs pretty demonstrably. At a minimum, I definitely walked away with an interesting new perspective on some of the companies represented.

It was then time for me to give my talk. Every discussion up until this point had focused on technology or industry trends. I was going to talk about something else, something more important, the one thing seemingly missing from the entire event: the people attending. All the technology in the world, all of the understanding of the trends in our industry, are nothing unless the people in the room are willing to act, willing to step up and take active roles in their companies to drive strategy. As I have said before: to get out of the basement and into the penthouse. The pressures on our industry and our job roles have never been more complicated. So I walked through regulations, technologies, and cloud discussions. Using the work that we did at AOL as a backdrop and example, I tried to drive home my main point: that our industry, specifically the people doing all the work, is moving to a role of managing a complex portfolio of technologies, contracts, and a continuum of solutions. Gone are the days when we could hide sheltered in our data center facilities. We need to evolve with this change, or it will evolve around us. I walked through specific examples of how AOL has had to broaden its own perspective and approach to this widening view of our work roles at all levels. I even pre-announced something we are calling Data Center Independence Day: an aggressive adoption of modularized compute capacity that we call MicroData Centers, to help solve many of the issues we are facing as a business, along with the rough business case as to why it makes sense for us to move to this model. I will speak more of that in the weeks to come with a greater degree of specificity, but I stressed again the need for a wider perspective, managing a large portfolio of technologies and approaches, to be successful in the future.

In closing – the event was fantastic.   The ability this event provides to network with leaders and professionals across the industry was first rate.   If I had any real constructive feedback it would be to either lengthen sessions, or reduce panel sizes to encourage more active and lively conversations.  Or both!

Perhaps, at the end of the day, it's truly the best measure of a good conference if you walk away wishing that more time could be spent on the topics. As for me, I am headed back Stateside and digging into the challenges of my day job. To the wonderful host city of Nice, I say Adieu!

 

\Mm

The Cloud Cat and Mouse Papers–Site Selection Roulette and the Insurance Policies of Mobile infrastructure


It's always hard to pick exactly where to start in a conversation like this, especially since this entire process represents a changing life-cycle. It's more of a circular spiral that moves out (or evolves) as new data is introduced than a traditional life-cycle, because new data can fundamentally shift the technology or approach. That being said, I thought I would start our conversations at a logical starting point: where does one place one's infrastructure? Even in its embryonic “idea phase,” the intersection of government and technology begins its delicate dance to a significant degree. These decisions will ultimately have an impact on more than just where a company's capital investments are located. They affect the products and services it offers and, as I propose, ultimately the customers that use the services at those locations.

As I think back to the early days of building out a global infrastructure, the site selection phase started at a very interesting place. In some ways we approached it with a level of sophistication that has yet to be matched; in other ways, we were children playing a game whose rules had not yet been defined.

I remember sitting across numerous tables from government officials, talking about making an investment (largely just land-purchase decisions) in their local community. Our site selection methodology had brought us to these areas, a methodology which continued to evolve as we got smarter and as we started to truly understand the dynamics of the system we were being introduced to. In these meetings we always sat stealthily behind a third-party real estate partner. We never divulged who we were, nor were they allowed to ask us that directly. We would pepper them with questions, and they in turn would return the favor. It was all cloak and dagger, with the real estate entity taking all action items to follow up with both parties.

Invariably during these early days, these locales would walk away with the firm belief that we were a bank or financial institution. When they delved into our financial viability (for things like power loads, commitment to capital build-out, etc.) we always stated that any capital commitments and longer-term operational cost commitments were not a problem. In large part the cloak and dagger was meant to keep land costs down (as we matured, we discovered this was quite literally the last thing we needed to worry about), since we feared that once our name became attached to a deal our costs would go up. These were the early days of seeding global infrastructure, and it was not just us. I still laugh at the fact that one of our competitors bound a locality up so much in secrecy that the community referred to the data center as Voldemort, he who shall not be named, in reference to the Harry Potter book series.

This, of course, was not the only criterion that we used. We had over 56 criteria, with various levels of importance and weighting, by the time I left that particular effort. Some Internet companies today use fewer, some about the same, and some don't use any at all; they ride on the backs of others who have trail-blazed a certain market or locale. I have long called this effect Data Center Clustering. The rewards for being a first mover are big; less so if you follow, but ultimately still positive.

If you think about most of the criteria used to find a location, they almost always focus on current conditions, with some acknowledgement of the look forward in a few of the criteria. This is true, for example, when looking at power costs. Power costs today are important to siting a data center, but so is understanding the generation mix behind that power and its corresponding price volatility, and modeling that ahead to predict (as best as possible) longer-term power costs.
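The power-cost point above can be sketched as a toy model: blend the generation mix into a single rate, then escalate each source independently to see how the mix, not just today's headline rate, drives the long-term number. Every figure here is made up purely for illustration; a real siting model would pull actual tariff and futures data.

```python
# Toy forward power-cost model: today's blended rate plus a per-source
# escalation assumption. All shares, rates, and growth figures are hypothetical.
generation_mix = {          # share of generation by source
    "hydro": 0.60,
    "natural_gas": 0.30,
    "coal": 0.10,
}
rate_today = {              # $/kWh today, per source
    "hydro": 0.03,
    "natural_gas": 0.06,
    "coal": 0.05,
}
annual_escalation = {       # assumed yearly price growth, per source
    "hydro": 0.01,
    "natural_gas": 0.05,
    "coal": 0.04,
}

def blended_rate(year: int) -> float:
    """Blended $/kWh in `year`, escalating each source independently."""
    return sum(
        share * rate_today[src] * (1 + annual_escalation[src]) ** year
        for src, share in generation_mix.items()
    )

# A hydro-heavy mix keeps the 20-year projection far flatter than a
# gas-heavy one would, even if both start at the same headline rate.
print(blended_rate(0), blended_rate(20))
```

The point of the sketch is the look-forward: two sites with identical rates today can diverge sharply over a facility lifetime measured in decades, which is exactly why the generation mix and its volatility belong in the criteria.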

What many miss is the more subtle political layer that emerges once a data center has been placed or a cluster has developed. Specifically, the political and regulatory landscape can change very quickly relative to the life of a data center facility, which is typically measured in 20-, 30-, or 40-year lifetimes. It's a risk that places a large amount of capital assets potentially in play and vulnerable to these kinds of changes, and it's something that is very hard to plan or model against. That being said, there are indicators and clues that one can use to at least weigh the risk factors, or, as some are doing, to ensure that the technology they deploy limits their exposure. In cloud environments the question remains open: how exposed are companies using cloud infrastructure in these facilities? We will explore this a little later.

That's not to say that this process is all downside, either. As we matured in our approach, we came to realize that governments (local or otherwise) were strongly incented to work with us on getting a great deal, and in fact competed over this kind of business. Soon you started to see the offers change materially. It became less about the land or location and quickly evolved into what types of tax incentives, power deals, and other mechanisms could be put in play. You saw (and continue to see) deals structured around sales tax breaks, real estate and real estate tax deals, economic incentives around breaks in power rates, specialized rate structures for Internet and cloud companies, and the like. The goal here, of course, was to create the public equivalent of “golden handcuffs” for the tech companies and try to marry them to a particular region, state, or country; in many cases, all three. The benefits here are self-apparent. But can they (or more specifically, will they) be passed on in some way to small companies who make use of cloud infrastructure in these facilities? While definitely not part of the package deals done today, I could easily see site selection negotiations evolving to incent local adoption of cloud technology in these facilities, or provisions being put in place tying adoption and hosting to tax breaks and other deal structures in the mid to longer timeframe for hosting and cloud companies.

There is still a learning curve out there, as most governments mistakenly try to tie these investments to job creation. Data centers, operations, and the like represent the cost of goods sold (COGS) to the cloud business. Therefore there is a constant drive toward efficiency and the reduction of the highest-cost components of delivering those products and services. Generally speaking, people are the primary targets in these environments; driving automation is job one for any global infrastructure player. One of the big drivers for us in investing in and developing a 100% lights-out data center at AOL was eliminating those kinds of costs. Those governments that highlight job creation targets over other types typically don't win the site selection. Having commissioned an economic study after a few of my previous big data center builds, I can tell you that the value to a region or a state does not come from the up-front jobs the data center employs. After a local radio station called into question the value of having such a facility in their backyard, we used an internationally recognized university to perform a third-party “neutral” assessment of the economic benefits (sans direct employment), and the numbers were telling. We surrendered all construction costs and other related material to them, and over the course of a year, through regional interviews and the like, they investigated the direct impacts of a data center on the local community and the overall impacts of the addition. The results of that study are owned by a previous employer, but I can tell you with certainty: these facilities can be beneficial to local regions.

No one likes constraints, and as such you are beginning to see technology companies use their primary weapon, technology itself, to mitigate their risks even in these scenarios. One cannot deny, for example, that while container-based data centers offer some interesting benefits in terms of energy and cost efficiency, they also bring a kind of mobility to infrastructure that has never been available before. Historically, data centers have been viewed as large capital anchors to a location. Once in place, hundreds of millions to billions (depending on the size of the company) of dollars of capital investment are tied to that region for the facility's lifespan. It's as close to permanent as the tech industry gets, much like building a factory was during the industrial revolution.

In some ways, the modularization of the data center industry can and will have the same effect as the shipping container did in manufacturing. All puns intended. If you are unaware of how the shipping container revolutionized the world, I would highly recommend the book “The Box” by Marc Levinson. It's a quick read and very interesting if you read it through the lens of IT infrastructure and the parallels of modularization in the data center industry at large.

It gives the infrastructure companies more exit options and mobility in the future than they would have had in the past under large capital build-outs. It's an insurance policy, if you will, against potential changes in legislation or regulation that might negatively impact the technology companies over time. Just another move in the cat and mouse games that we will see evolving here over the next decade or so in the interactions between governments and global infrastructure.

So what about the consumers of cloud services? How much of a concern should this represent for them? You don't have to be a big infrastructure player to understand that there are potential risks in where your products and services live. Whether you are building a data center or hosting inside a real estate or colocation provider, these are issues that will affect you. Even in cases where you only use the cloud provisioning capabilities of your chosen provider, you will typically be given options as to what region or area you would like your gear hosted in. Typically this is done for performance reasons, reaching your customers, but perhaps this information might cause you to think of the larger ramifications to your business. It might even drive requirements into the infrastructure providers to make this more transparent in the future.

These evolutions in the relationship between governments and technology, and the technology options available, will continue to shape site selection policy for years to come. How they will ultimately affect those that use this infrastructure, whether directly or indirectly, remains to be seen. In the next paper we will explore this interaction more deeply as it relates to the customers of cloud services and the risks and challenges specifically for them in this environment.

\Mm

The Cloud Cat and Mouse Papers – The Primer


Cat and Mouse with Multi-national infrastructure –The Participants to date

There is an ever-changing game of cat and mouse developing in the world of cloud computing. It's not a game that you as a consumer might see, but it is there, an undercurrent that has been present from the beginning. It pits technology companies and multi-national infrastructure against local and national governments. For some years this game of cat and mouse has been quietly played out in backrooms, in development and technology roadmap re-works, and across negotiation tables, and only in the rarest of cases has it come out for industry scrutiny or visibility. To date the players have been limited to the likes of Google, Microsoft, Amazon, and others who have scaled their technology infrastructure across the globe, and in large measure those are the players that governments have moved against in an ever more intricate chess game. I myself have played a part in the measure/counter-measure give and take of this delicate dance.

The primary issues in this game have to do with the realization of revenue for taxation purposes, Safe Harbor and issues pertaining to personally identifiable information, ownership of Big Data, and the nature of what is stored, how it is stored, and where it is stored: the intersection where politics and technology meet, a place where social issues and technology collide. You might call them storm clouds, just out of sight, but there is thunder on the horizon that you, the consumer and/or potential user of global cloud infrastructure, will need to be aware of, because eventually the users of cloud infrastructure will become players in the game as well.

That is not to say that the issues I tease out here are all gloom and doom. In fact, they are great opportunities for potential business models, additional product features, and even cloud ecosystem companies or niche solutions unto themselves: a way to drive significant value for all sides. I have been toying with more than a few of these ideas myself over the last few months.

To date these issues have mostly manifested in the global build-up of infrastructure for the big Internet platforms, the products and services the big guys in the space use as their core money-making or primary service delivery platforms. Rarely, if ever, do these companies use this same infrastructure for their Infrastructure as a Service (IaaS) or Platform as a Service (PaaS) offerings. However, as you will see, the same challenges will and do apply to these offerings as well. In some cases they are even more acute and problematic, since multi-tenancy has the potential to put even more burden on future cloud users.

If I may be blunt about this, there is an interesting lifecycle to this food chain whereby the big technology companies consistently have the upper hand, and governmental forces, through the use of their primary tools of regulation and legislation, are constantly playing catch-up. This lifecycle is unlikely to change for at least five reasons.

  • The technology companies will always have the lens of the big picture of multi-national infrastructure. Individual countries, states, and locales generally only have jurisdiction or governance over the territory or population base germane to their authority.
  • Technology companies can focus with near-singular purpose on a very deep technical capability, or bring much more concentrated “brain power” to bear on the changing socio-political landscape, continually evolving measures and counter-measures to address these changes.
  • By and large, governments rely upon technologies and approaches becoming mainstream, and upon a base understanding of their developments and impacts forming, before they can act. This generally places them in a reactionary position.
  • Governmental forces generally rely upon “consultants” or “industry experts” to assist in understanding these technologies, but very few of these industry experts have ever really dealt with multi-national infrastructure, and fewer still have had to strategize and evolve plans around these types of changes. Expertise at that level is rare and almost exclusively retained by the big infrastructure providers.
  • Technology companies have the ability to force a complete change to the rules and the reality of the game by swapping out the technology used to deliver their products and services, or by changing development and delivery logic and/or methodology, effecting an almost complete negation of the previous method of governance and making it obsolete.

That is not to say that governments are unwilling participants in this process, forced into a subservient role in the lifecycle. In fact, they are active participants in attracting, cultivating, and even subsidizing these infrastructural investments in areas under their authority and jurisdiction. Tools like tax breaks, real estate and investment incentives, and private-public partnerships have both initial and ongoing benefits for the governments as well. In many ways these are “golden handcuffs” for the technology companies who enter into this cycle, but as with any kind of constraint, positive or negative, the planning and strategy to unfetter themselves begins almost immediately.

Watson, The Game is Afoot

Governments, Social Justice, Privacy, and Environmental forces have already begun to force changes in the Technology landscape for those engaged in multi-national infrastructure.  There are tons of articles freely available on the web which articulate the kinds of impacts these forces have had and will continue to have on the Technology Companies.  The one refrain through all of the stories is the resiliency of those same Technology Companies to persevere and thrive despite what might be crucial setbacks in other industries.

In some cases the technology changes and adapts to meet the new requirements; in some cases, changing approaches or even vacating “un-friendly” environs across any of these spectrums becomes an option; and in some cases there is a not-insignificant bet that any regulatory or compulsory requirements will be virtually impossible, or too technically complex, to enforce or even audit.

Let's take a look at a couple of examples that have been made public which highlight this kind of thing. Back in 2009, Microsoft migrated substantial portions of its Azure cloud services out of Washington State to its facilities in San Antonio, Texas. While the article specifically talks about certain aspects of tax incentives being held back, there were of course other factors involved. One doesn't have to look far to understand that Washington State also has a B&O tax (Business and Occupation tax), which is defined as a gross receipts tax: it is measured on the value of products, gross proceeds of sale, or gross income of the business. As you can imagine, interpreting this kind of tax as it relates to online and cloud income could be very tricky, and regardless would be a complex and technical problem to solve. It could have the undesired effect of placing any kind of online business at an interesting disadvantage, or, at another level, of placing an unknown tax burden on its users. I am not saying this was a motivating factor in Microsoft's decision, but you can begin to see the potential exposure developing. In this case, the technology could rapidly change and move the locale of the hosted environments to minimize the exposure, thus thwarting any governmental action. At least for the provider; but what of the implications if you were a user of the Microsoft cloud platform and found yourself with an additional or unknown tax burden? I can almost guarantee that back in 2009 this level of end-user impact (or revenue potential, from a state tax perspective) had not even been thought about. But time changes all things, and we are already seeing examples of exposure occurring across the game board that is our planet.

We are already seeing interpretations or laws passed in countries around the globe where, for example, a server is a taxable entity: if revenue for a business is derived from a computer or server located in that country, it falls under the jurisdiction of that country's tax authority. Imagine yourself as a company using this wonderful global cloud infrastructure to sell your widgets, products, or services, and finding yourself with an unknown tax burden and liability in some “far flung” corner of the earth. The cloud providers today mostly provide infrastructure services. They do not go up the stack far enough to effectively manage your entire system, let alone determine your tax liability. The burden of proof today would reside, to a large degree, on the individual business running inside that infrastructure.

In many ways, those adopting these technologies are the least capable of dealing with these kinds of challenges. They are small to mid-sized companies who admittedly don't have the capital or operational sophistication to build out the kind of infrastructure needed to scale that quickly. They are unlikely to have technologies such as robust configuration management databases that can track virtual instances of their products and services: what application ran, where it ran, how long it ran, and how much revenue was derived during the length of its life. And this is just one example (the server as a taxable entity) of a law or legislative effort that could impact global users. There are literally dozens of these kinds of bills and legislative initiatives (some well thought out, most not) winding their way through legislative bodies around the world.
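To make the record-keeping burden concrete, here is a minimal sketch of the kind of data a configuration management database would have to capture to answer a "server as a taxable entity" inquiry: what ran, where it ran, for how long, and how much revenue it generated. All names, fields, and figures here are illustrative assumptions, not any provider's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class InstanceRecord:
    """One virtual instance's lifetime -- the minimal audit trail a
    tax authority could ask a cloud tenant to substantiate."""
    app: str            # application or service name
    region: str         # country/jurisdiction where the VM actually ran
    started: datetime
    stopped: datetime
    revenue_usd: float  # revenue attributed to this instance's lifetime


def revenue_by_jurisdiction(records):
    """Aggregate attributed revenue per country of execution."""
    totals = {}
    for r in records:
        totals[r.region] = totals.get(r.region, 0.0) + r.revenue_usd
    return totals


# Hypothetical records for a small business whose instances landed in
# two different jurisdictions without the owner ever choosing them.
records = [
    InstanceRecord("widget-store", "DE",
                   datetime(2013, 1, 1), datetime(2013, 1, 31), 12000.0),
    InstanceRecord("widget-store", "SG",
                   datetime(2013, 1, 5), datetime(2013, 1, 20), 4500.0),
]

print(revenue_by_jurisdiction(records))  # {'DE': 12000.0, 'SG': 4500.0}
```

The point of the sketch is not the code itself but what it implies: without this level of tracking built into the platform, a small tenant simply cannot produce the evidence such a law would demand.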

You might think you could circumvent some of this by limiting your product or service deployment to the country closest to home, wherever home is for you. However, there are other efforts winding their way through legislatures, or to a large degree already passed, that impact the data you store, what you store, whose data you are storing, and the like. In most cases these initiatives are unrelated to the developing revenue legislation, but combined they can deliver an interesting one-two punch. For example, many countries are requiring for Safe Harbor purposes that all information for any nationals of ‘Country X’ must be stored in ‘Country X’, to ensure that its citizenry is properly protected and under the jurisdiction of its own laws. In a cloud environment, with customers potentially from almost anywhere, how do you ensure that this is the case? How do you ensure you are compliant? If you balance this requirement against the ‘server as a taxable entity’ example above, there is an interesting exposure and liability for companies to prove where and when revenue is derived. Similarly, some laws are enacted as reactions against legislation in other countries.
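A data-residency requirement of the kind just described ultimately reduces to a placement decision the platform has to make per user. The sketch below shows the shape of that check, assuming a hypothetical policy table; the country codes, rules, and function are illustrative only and do not reflect any real jurisdiction's law or any provider's API.

```python
# Hypothetical residency policy: which storage regions are permitted
# for a user of a given nationality. A missing entry means no known
# restriction in this sketch.
RESIDENCY_RULES = {
    "DE": {"DE"},  # e.g. nationals' data must stay in the home country
    "CA": {"CA"},  # e.g. data may not leave the country at all
}


def allowed_regions(nationality, available_regions):
    """Return the deployment regions where this user's data may be stored,
    given the provider's currently available regions."""
    permitted = RESIDENCY_RULES.get(nationality)
    if permitted is None:
        return set(available_regions)  # no known rule: no restriction
    return permitted & set(available_regions)


print(allowed_regions("DE", {"US", "DE", "SG"}))  # {'DE'}
print(allowed_regions("CA", {"US", "SG"}))        # set()
```

The second call is the uncomfortable case: the rule is satisfiable only if the provider actually operates a region in the required country. If it doesn't, the tenant is non-compliant by construction, which is exactly the kind of exposure most cloud adopters have not yet thought through.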

In the post-9/11 era, the US Congress enacted a series of laws known as the Patriot Act. In response to some of the information search and seizure aspects of that law, Canada forbade Canadian citizens' data from being stored in the United States. To the best of my knowledge, only a small number of companies even acknowledge this requirement and have architected solutions to address it; the rest remain out of compliance with Canadian law. Imagine you are a small business owner using a cloud environment to grow your business, and suddenly you begin to grow significantly in Canada. Does your lack of knowledge of Canadian law excuse you from your responsibilities there? No. Is this something that your infrastructure provider is offering to you? Today, no.

I am only highlighting certain cases here to make the point that there is a world of complexity coming to the cloud space. Thankfully these impacts have not been completely explored or investigated by most countries of the world, but it's not hard to foresee a day when this becomes a very real issue that companies and the cloud eco-system in general will have to address. At the most basic level these are potential revenue streams for governments, and as such the likelihood of their eventual day in the sun only increases. I am currently personally tracking over 30 different legislative initiatives around the world (read: pre-de-facto laws) that will likely shape this technology landscape for the big providers and potential cloud adopters some time in the future.

What is to come?

This first article was really just to bring out the basic premise of the conversation and the topics I will be discussing, and to lay the groundwork to a very real degree. I have not even begun to touch on the extra-governmental social and environmental impacts that will likely change the shape even further. This interaction of technology, “The Cloud”, and political and social issues exists today, and although it is largely masked by the fact that the cloud eco-system is not fully developed or matured, it is no less a reality. Any predictions I make are extensions of patterns I already see in the market; they do not necessarily represent a foregone conclusion, but rather the most likely developments based upon my interactions in this space. As this technology space continues to mature, the only certainty is uncertainty, modulated against the backdrop of a world where geo-political forces will increasingly shape the technology of tomorrow.

\Mm

Cloud Détente – The Cloud Cat and Mouse Papers

Over the last decade or so I have been lucky enough to be placed in a fairly unique position: working internationally, deploying global infrastructure for cloud environments. This work has spanned some very large companies with a dedicated focus on building out global infrastructure and managing through those unique challenges. Strategies may have varied, but the challenges they all faced had some very common themes. One of the more complex interactions in this process is what I call the rolling Cat and Mouse game between governments at all levels and these global companies.

Having been a primary player in these negotiations, and in the development of the measures and counter-measures that resulted from these interactions, I have come to believe there are some interesting potential outcomes that cloud adopters should think about and understand. The coming struggle and complexity of managing, regulating, and policing multi-national infrastructure will not solely impact the large global players; in a very real way it will begin to shape how their users need to think through these socio-political and geo-political realities: the potential impacts on their business, their adoption of cloud technologies, their resulting responsibilities, and just how aggressively they look to the cloud for the growth of their businesses.

These observations and predictions are based upon my personal experiences. So, for whatever it's worth (good or bad), this is not the perspective of an academic writing from some ivory tower; rather, these are the observations of someone who has been there and done it. I probably have enough material to write an entire book on my personal experiences and observations, but I have committed myself to writing a series of articles highlighting what I consider the big things being missed in the modern conversation about cloud adoption.

The articles will highlight (with some personal experiences mixed in) the ongoing battle between Technocrats and Bureaucrats. I will try to cover a different angle on many of the big topics out there today, such as:

  • Big Data versus Big Government
  • Rise of Nationalism as a factor in Technology and infrastructure distribution
  • The long struggle ahead for managing, regulating, and policing clouds
  • The Business, end-users, regulation and the cloud
  • Where does the data live? How long does it live? Why Does it Matter?
  • Logic versus Reality – The real difference between Governments and Technology companies.
  • The Responsibilities of data ownership
    • … regarding taxation exposure
    • … regarding PII impacts
    • … Safe Harbor

My hope is that this series and the topics I raise, while perhaps a bit raw and direct, will cause you to think a bit more about the coming impacts on the technology industry at large, the potential impacts on small and medium-sized businesses looking to adopt these technologies, and the developing friction and complexity at the intersection of technology and government.

\Mm