2014: The Year Cloud Computing and Internet Services Will Be Taxed. A.K.A. Je déteste dire ça. Je vous l’avais dit. (I hate to say it. I told you so.)

 


It's one of those times I really hate to be right.  As many of you know, I have been talking about the various grassroots efforts afoot across many of the EU member countries to drive a more significant tax regime onto Internet-based companies.  My predictions over the last few years have been more cautionary tales, based on what I saw happening from a regulatory perspective on a much smaller, country-to-country scale.

Today's Wall Street Journal has an article discussing France's move to begin taxing Internet companies that derive revenue from users and companies across the entirety of the EU, while holding those companies responsible to the tax base in each country.   Such legislation is likely to become quite fractured and tough for Internet companies to navigate.  The French proposition asks the European Commission to draw up proposals by the spring of 2014.

This is likely to have a very interesting impact (read: cost increases) across just about every aspect of Internet and Cloud Computing resources.  From a business perspective this is going to increase costs, which will likely be passed on to consumers in small but interesting ways.  Internet advertising will need to be differentiated on a country-by-country basis, and advertisers will end up with different cost structures.  Cloud computing companies will DEFINITELY need to understand where customer instances ran, and whether or not they were making money.  Potentially more impactful, customers of cloud computing may be held to account for tax obligations they did not know they had!  Things like data center site selection are likely to become even more complicated from a tax-analysis perspective, as countries with higher populations may become no-go zones (perhaps) or require the passage of even more restrictive laws.

It's not like the seeds of this haven't been around since 2005; I think most people just preferred to turn a blind eye while the seed sprouted into a full-fledged tax tree.   Going back to my Cat and Mouse Papers from a few years ago…  the Cat has caught the mouse; it's now the mouse's move.

\Mm

 

Author's Note: If you don't have a subscription to the WSJ, All Things Digital did a quick synopsis of the article here.

The Soft Whisper that Big Data, Cloud Computing, and Infrastructure at Scale should treat as a Clarion Call.

The Cloud Jail

On Friday, August 23rd, the Chinese government quietly released Shi Tao from prison.   He was released a full fifteen months before his incarceration was supposed to end.  While certainly a relief to his family and friends, it's likely a bittersweet ending to a sour turn of events.

Just who is Shi Tao, and what the heck does he have to do with Big Data?  Why is he important to Cloud Computing and big infrastructure?   Is he a world-class engineer who understands technology at scale?  Is he a deep thinker of all things cloud?  Did he invent some new technology poised to revolutionize and leapfrog our understanding?

No.  He is none of these things.  

He is a totem of sorts.   A living parable and a reminder of realities that many in the Cloud Computing industry, and those who deal with Big Data, rarely if ever address head on.   He represents the cautionary tale of what can happen if companies don't fully vet the real-world impacts of their technology choices: the site selection of their data centers, the impact of how data is stored, where that data is stored, the methods used to store it.  In short, a responsibility for the full accounting and consideration of their technological and informational artifacts.

To an engineering mind, that responsibility generally means the most efficient storage of data at the least cost, using the most direct method or the highest-performing algorithm.  In short, to continually build a better mousetrap.

In site selection for new data centers, it would likely be limited to just the basic real estate and business drivers.   What is the power cost?  What is the land cost?  What is my access to water? Is there sufficient network nearby?  Can I negotiate tax breaks at the country and/or local levels?

In selecting a cloud provider, it's generally about avoiding large capital costs and paying for what I need, when I need it.

In the business landscape of tomorrow, these thoughts will prove short-sighted and may expose your company to significant cost and business risks it is not contemplating, or worse!

Big Data is becoming a dangerous game.  To be fair, content and information in general have always been a bit of a dangerous game.   In Technology, we just go on pretending we live under a Utopian illusion that fairness ultimately rules the world.  It doesn't.   Businesses take on an inherent risk in collecting, storing, analyzing, and using the data they obtain.  Does that sound alarmist or jaded?  Perhaps, but it's spiced with some cold hard realities that become more present every day, and that you ignore at your own peril.

Shi was arrested in 2004 and sentenced to prison the following year on charges of disclosing state secrets.  His crime? He had sent details of a government memo restricting news coverage to a human rights group in the United States.  The Chinese government demanded that Yahoo! (his mail provider) turn over all of his mail records (Big Data) to the authorities, which the company ultimately did.

Now, before you get your Western Democracy sensibilities all in a bunch and cry foul: that ugly cold hard reality thing I was talking about plays a real part here.  As Yahoo was operating as a business inside China, they were bound to comply with Chinese law, no matter how hard the action was to stomach.   Around that time Yahoo sold most of its stake in the Chinese market to Alibaba, and as of the last month or so Yahoo has left China altogether.

Yahoo's adventure in data information risk and governmental oversight was not over, however.  They were brought before the US Congress over human rights violations, placing them once again into a pot of boiling water, this time stirred by a government closer to home.

These events took place almost seven years ago, and I would argue that the world of information, big data, and scaled infrastructure has only gotten more convoluted and tricky to deal with since.   With the advent of Amazon AWS and other cloud services, and a lack of understanding of regional and local Safe Harbor practices among enterprises and startups alike, concepts like chain of custody and complicated, recursive ownership rights can be obfuscated to the point of insanity if you don't have a program to manage them.    We don't have to use the example of China, either; similar complexities are emerging across, and internal to, Europe.  Is your company really thinking through Big Data?  Do you fully understand ownership in a clouded environment?  Who is responsible for taxation when your local business is hosted internationally?  What if your cloud servers, with your data, hosted on a cloud platform, were confiscated by local or regional governments without your direct involvement?  Are you strategically storing data in a way that protects you? Do you even have someone looking at these risks to your business?

As a recovering network engineer, I am reminded of an old joke about the OSI Model.   The OSI Model categorizes all functions of a communication system into seven logical layers, making internetworking clear, efficient, and easily categorized.  Of course, as every good network engineer knows, it doesn't account for Layers 8 and 9.  But wait!  You said there were only seven!  Well, Layers 8 and 9 are Politics and Religion.    These layers exist in Cloud Computing and Big Data too, and they are potentially more impactful to the business overall.

None of these scenarios necessarily lend themselves to the most direct or efficient path, but it's pretty clear that you can save yourself a whole lot of time and heartache if you think about them strategically.  The infrastructure of tomorrow is powerful, robust, and ubiquitous.   You simply cannot manage this complex eco-system the same way you have in the past, and just like the technology, your thinking needs to evolve.

\Mm

The Cloud Cat and Mouse Papers – Site Selection Roulette and the Insurance Policies of Mobile Infrastructure


It's always hard to pick exactly where to start in a conversation like this, especially since this entire process represents a changing life-cycle.   It's more of a circular spiral that moves out (or evolves) as new data is introduced than a traditional life-cycle, because new data can fundamentally shift the technology or approach.   That being said, I thought I would start our conversations at a logical starting point: where does one place one's infrastructure?  Even in its embryonic "idea phase", the intersection of government and technology begins its delicate dance to a significant degree. These decisions will ultimately have an impact on more than just where a company's capital investments are located.  They affect the products and services the company offers and, as I propose, ultimately impact the customers that use the services at those locations.

As I think back to the early days of building out a global infrastructure, the Site Selection phase started at a very interesting place.   In some ways we approached it with a level of sophistication that has yet to be matched; in other ways, we were children playing a game whose rules had not yet been defined.

I remember sitting across numerous tables from government officials, talking about making an investment (largely just land-purchase decisions) in their local community.  Our Site Selection methodology had brought us to these areas; it was a process that continued to evolve as we got smarter and started to truly understand the dynamics of the system we were being introduced to.   In these meetings we always sat stealthily behind a third-party real estate partner.  We never divulged who we were, nor were they allowed to ask us directly.  We would pepper them with questions, and they in turn would return the favor.  It was all cloak and dagger, with the real estate entity taking all action items to follow up with both parties.

Invariably during these early days, these locales would walk away with the firm belief that we were a bank or financial institution.   When they delved into our financial viability (for things like power loads, commitment to capital build-out, etc.) we always stated that any capital commitments and longer-term operational cost commitments were not a problem.    In large part the cloak and dagger aspect was to keep land costs down (as we matured, we discovered this was quite literally the last thing we needed to worry about), as we feared that once our name became attached to the deal our costs would go up.   These were the early days of seeding global infrastructure, and it was not just us.  I still laugh at the fact that one of our competitors bound a locality up so tightly in secrecy that the community referred to the data center as Voldemort, he who shall not be named, in reference to the Harry Potter book series.

This of course was not the only criterion we used.  We had over 56 criteria, at various levels of importance and weighting, by the time I left that particular effort.   Some Internet companies today use fewer, some about the same, and some don't use any; they ride on the backs of others who have trail-blazed a certain market or locale.   I have long called this effect Data Center Clustering.    The rewards for being the first mover are big; following is less rewarding, but ultimately still positive.

If you think about most of the criteria used to find a location, they almost always focus on current conditions, with some acknowledgment in some of the criteria of the look forward.  This is true, for example, when looking at power costs.   Power cost today is important to siting a data center, but so is understanding the generation mix behind that power and its corresponding price volatility, and modeling that forward to predict (as best as possible) longer-term power costs.
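To make that concrete, here is a purely illustrative sketch of the kind of forward modeling I mean.  Every number, the generation mix, and the model itself are hypothetical assumptions for illustration, not real market data:

```python
# Illustrative only: a toy forward model of blended power cost from a
# generation mix. All figures and the model itself are hypothetical.
import random

# source: (share of generation, $/MWh today, assumed annual volatility)
mix = {
    "hydro": (0.50, 35.0, 0.01),   # cheap and stable
    "gas":   (0.30, 55.0, 0.08),   # pricier and volatile
    "coal":  (0.20, 45.0, 0.05),
}

def projected_blended_cost(years, runs=10_000):
    """Monte Carlo estimate of the blended $/MWh after `years` years."""
    total = 0.0
    for _ in range(runs):
        blended = 0.0
        for share, price, vol in mix.values():
            p = price
            for _ in range(years):
                # assume 2% mean annual escalation; volatility varies by source
                p *= 1 + random.gauss(0.02, vol)
            blended += share * p
        total += blended
    return total / runs

today = sum(share * price for share, price, _ in mix.values())
print(f"Blended cost today: ${today:.2f}/MWh")
print(f"Estimated blended cost in 15 years: ${projected_blended_cost(15):.2f}/MWh")
```

Two sites with identical power prices today can diverge sharply over a facility's life once the mix and its volatility are modeled forward, which is the whole point of looking beyond current conditions.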

What many miss is the more subtle political layer that comes into play once a data center has been placed or a cluster has developed: the political and regulatory landscape can change very quickly relative to the life of a data center facility, which is typically measured in 20-, 30-, or 40-year lifetimes.  It's a risk that places a large amount of capital assets potentially in play and vulnerable to these kinds of changes, and it's something that is very hard to plan or model against.  That being said, there are indicators and clues one can use to at least weigh the risk factors, or, as some are doing, to ensure that the technology they deploy limits their exposure.    In cloud environments the question remains open: how liable are companies using cloud infrastructure in these at-risk facilities?   We will explore this a little later.

That's not to say that this process is all downside, either.  As we matured in our approach, we came to realize that governments (local or otherwise) were strongly incented to work with us on getting a great deal, and in fact competed over this kind of business.   Soon you started to see the offers change materially.  It became less about the land or location and quickly evolved into what types of tax incentives, power deals, and other mechanisms could be put in play.   You saw (and continue to see) deals structured around sales tax breaks, real estate and real estate tax deals, economic incentives around breaks in power rates, specialized rate structures for Internet and cloud companies, and the like.   The goal, of course, was to create the public equivalent of "golden handcuffs" for the tech companies and try to marry them to a particular region, state, or country.  In many cases, all three.  The benefits here are self-apparent.  But can they (or more specifically, will they) be passed on in some way to small companies who make use of cloud infrastructure in these facilities? While definitely not part of the package deals done today, I could easily see site selection negotiations evolving to incent local adoption of cloud technology in these facilities, or provisions being put in place tying adoption and hosting to tax breaks and other deal structures in the mid to longer term for hosting and cloud companies.

There is still a learning curve out there, as most governments mistakenly try to tie these investments to job creation.   Data centers, operations, and the like represent the cost of goods sold (COGS) to the cloud business, so there is a constant drive toward efficiency and reduction of the highest-cost components of delivering those products and services.   Generally speaking, people are the primary targets in these environments, and driving automation is job one for any global infrastructure player.  One of the big drivers for us investing in and developing a 100% lights-out data center at AOL was eliminating those kinds of costs.  Governments that highlight job-creation targets over other types typically don't get the site selection.

Having commissioned an economic study after a few of my previous big data center builds, I can tell you that the value to a region or a state does not come from the up-front jobs the data center employs.  After a local radio station called into question the value of having such a facility in their backyard, we used an internationally recognized university to perform a third-party "neutral" assessment of the economic benefits (sans direct people), and the numbers were telling.  We surrendered all construction costs and other related material to them, and over the course of a year, through regional interviews and the like, they investigated the direct impacts of the data center on the local community and the overall impacts of the addition.  The results of that study are owned by a previous employer, but I can tell you with certainty: these facilities can be beneficial to local regions.

No one likes constraints, and as such you are beginning to see technology companies use their primary weapon, technology, to mitigate their risks even in these scenarios.   One cannot deny, for example, that while container-based data centers offer some interesting benefits in terms of energy and cost efficiencies, there is also a certain mobility to that kind of infrastructure that has never been available before.    Historically, data centers have been viewed as large capital anchors to a location.    Once in place, hundreds of millions to billions (depending on the size of the company) of dollars of capital investment are tied to that region for the facility's lifespan.   It's as close to permanent as the Tech Industry gets, much as building a factory was during the industrial revolution.

In some ways, modularization can and will have the same effect on the data center industry as the shipping container had on manufacturing.   All puns intended.  If you are unaware of how the shipping container revolutionized the world, I would highly recommend the book "The Box" by Marc Levinson; it's a quick read and very interesting if you read it through the lens of IT infrastructure and the parallels of modularization in the data center industry at large.

It gives the infrastructure companies more exit options and mobility than they would have had in the past under large capital build-outs.  It's an insurance policy, if you will, against potential changes in legislation or regulation that might negatively impact the technology companies over time.  Just another move in the cat and mouse games that we will see evolving over the next decade or so in the interactions between governments and global infrastructure.

So what about the consumers of cloud services?  How much of a concern should this represent for them?  You don't have to be a big infrastructure player to understand that there are potential risks in where your products and services live.  Whether you are building a data center or hosting inside a real estate or co-location provider, these issues will affect you.  Even in cases where you only use the cloud provisioning capabilities of your chosen provider, you will typically be given options for which region or area you would like your gear hosted in.  Typically this is done for performance reasons (reaching your customers), but perhaps this information might cause you to think of the larger ramifications to your business.   It might even drive requirements into the infrastructure providers to make this more transparent in the future.

These evolutions in the relationship between governments and technology, and the technology options available, will continue to shape site selection policy for years to come.   How they will ultimately affect those that use this infrastructure, whether directly or indirectly, remains to be seen.  In the next paper we will explore this interaction more deeply as it relates to the customers of cloud services and the risks and challenges specifically for them in this environment.

\Mm

The Cloud Cat and Mouse Papers – The Primer


Cat and Mouse with Multi-national Infrastructure – The Participants to Date

There is an ever-changing game of cat and mouse developing in the world of cloud computing.   It's not a game that you as a consumer might see, but it is there, an undercurrent that has been present from the beginning.   It pits technology companies and multi-national infrastructure against local and national governments.  For some years this game has quietly played out in backrooms, in development and technology roadmap re-works, and across negotiation tables; only in the rarest of cases has it come out for industry scrutiny or visibility.  To date the players have been limited to the likes of Google, Microsoft, Amazon, and others who have scaled their technology infrastructure across the globe, and in large measure those are the players that governments have moved against in an ever more intricate chess game.  I myself have played a part in the measure/counter-measure give and take of this delicate dance.

The primary issues in this game have to do with realization of revenue for taxation purposes, Safe Harbor and issues pertaining to personally identifiable information, ownership of Big Data, and the nature of what is stored, how it is stored, and where it is stored: the intersection where politics and technology meet.   A place where social issues and technology collide.   You might call them storm clouds, just out of sight, but there is thunder on the horizon that you, the consumer and/or potential user of global cloud infrastructure, will need to be aware of, because eventually the users of cloud infrastructure will become players in the game as well.

That is not to say that the issues I tease out here are all gloom and doom.  In fact, they are great opportunities for potential business models, additional product features, and even cloud eco-system companies or niche solutions unto themselves; a way to drive significant value for all sides.   I have been toying with more than a few of these ideas myself over the last few months.

To date these issues have mostly manifested in the global build-up of infrastructure for the big Internet platforms: the products and services the big guys in the space use as their core money-making platforms or primary service-delivery platforms.  Rarely if ever do these companies use this same infrastructure for their Infrastructure as a Service (IaaS) or Platform as a Service (PaaS) offerings.  However, as you will see, the same challenges do and will apply to these offerings as well.  In some cases they are even more acute and problematic in multi-tenant situations, with the potential to put even more burden on future cloud users.

If I may be blunt about this, there is an interesting lifecycle to this food chain whereby the big technology companies consistently have the upper hand, and governmental forces, through the use of their primary tools (regulation and legislation), are constantly playing catch-up.  This lifecycle is unlikely to change, for at least five reasons.

  • The Technology Companies will always have the lens of the big picture of multi-national infrastructure.   Individual countries, states, and locales generally only have jurisdiction or governance over the territory or population base germane to their authority.
  • Technology Companies can focus with near-singular purpose and great technical depth, bringing much more concentrated "brain power" to bear on the changing socio-political landscape, continually evolving measures and counter-measures to address these changes.
  • By and large, governments wait for technologies and approaches to become mainstream, and for a base understanding of the developments and impacts to form, before they can act.  This generally places them in a reactionary position.
  • Governmental forces generally rely upon "consultants" or "industry experts" to assist in understanding these technologies, but very few of these industry experts have ever really dealt with multi-national infrastructure, and fewer still have had to strategize and evolve plans around these types of changes. Expertise at that level is rare and almost exclusively retained by the big infrastructure providers.
  • Technology Companies have the ability to force a complete change to the rules and reality by swapping out the technology used to deliver their products and services, or by changing development and delivery logic and/or methodology, effecting an almost complete negation of the previous method of governance and making it obsolete.

That is not to say that governments are unwilling participants in this process, forced into a subservient role in the lifecycle.  In fact they are active participants in attracting, cultivating, and even subsidizing these infrastructural investments in areas under their authority and jurisdiction.  Tools like tax breaks, real estate and investment incentives, and private-public partnerships have both initial and ongoing benefits for the governments as well.  In many ways these are "golden handcuffs" for technology companies who enter into this cycle, but as with any kind of constraint, positive or negative, the planning and strategy to unfetter themselves begins almost immediately.

Watson, The Game is Afoot

Governments, social justice, privacy, and environmental forces have already begun to force changes in the technology landscape for those engaged in multi-national infrastructure.  There are plenty of articles freely available on the web which articulate the kinds of impacts these forces have had and will continue to have on the technology companies.  The one refrain through all of the stories is the resiliency of those same companies to persevere and thrive despite what would be critical setbacks in other industries.

In some cases the technology changes and adapts to meet the new requirements; in some cases changing approaches, or even vacating "un-friendly" environs across any of these spectrums, becomes an option; and in some cases there is a not-insignificant bet that any regulatory or compulsory requirements will be virtually impossible or too technically complex to enforce or even audit.

Let's take a look at a couple of the examples that have been made public that highlight this kind of thing.   Back in 2009, Microsoft migrated substantial portions of its Azure Cloud Services out of Washington State to its facilities located in San Antonio, Texas.  While the article specifically talks about certain aspects of tax incentives being held back, there were of course other factors involved.   One doesn't have to look far to learn that Washington State also has a B&O Tax (Business and Occupation Tax), which is defined as a gross receipts tax: it is measured on the value of products, gross proceeds of sale, or gross income of the business.  As you can imagine, interpreting this kind of tax as it relates to online and cloud income could be very tricky, and regardless would be a complex and technical problem to solve.  It could have the undesired impact of placing any kind of online business at an interesting disadvantage, or at another level place an unknown tax burden on its users.   I am not saying this was a motivating factor in Microsoft's decision, but you can begin to see the potential exposure developing.   In this case, the technology could rapidly change and move the locale of the hosted environments to minimize the exposure, thus thwarting any governmental action.  At least for the provider; but what of the implications if you were a user of the Microsoft cloud platform and found yourself with an additional or unknown tax burden?  I can almost guarantee that back in 2009 this level of end-user impact (or revenue potential from a state tax perspective) had not even been thought about.   But time changes all things, and we are already seeing examples of exposure occurring across the game board that is our planet.

We are already seeing interpretations or laws getting passed in countries around the globe where, for example, a server is a taxable entity.   If revenue for a business is derived from a computer or server located in that country, it falls under the jurisdiction of that country's tax authority.    Imagine yourself as a company using this wonderful global cloud infrastructure to sell your widgets, products, or services, and finding yourself with an unknown tax burden and liability in some "far flung" corner of the earth.   The cloud providers today mostly provide infrastructure services.  They do not go up the stack far enough to effectively manage your entire system, let alone determine your tax liability.  The burden of proof today would largely reside with the individual business running inside that infrastructure.

In many ways, those adopting these technologies are the least capable of dealing with these kinds of challenges.  They are small to mid-sized companies who admittedly don't have the capital or operational sophistication to build out the kind of infrastructure needed to scale that quickly.   They are unlikely to have technologies such as robust configuration management databases to track virtual instances of their products and services: to tell what application ran, where it ran, how long it ran, and how much revenue was derived during the length of its life.   And this is just one example (the server as a taxable entity) of a law or legislative effort that could impact global users.  There are literally dozens of these kinds of bills/legislative initiatives/efforts (some well thought out, most not) winding their way through legislative bodies around the world.
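To illustrate the gap, here is a minimal sketch of the kind of record-keeping I am describing.  Every name, field, and figure is hypothetical; a real system would need to tie into billing and provisioning data:

```python
# Hypothetical sketch: the minimum record-keeping a cloud customer would need
# to answer "what ran, where, for how long, and what revenue did it earn?"
# Field names, jurisdictions, and figures are illustrative only.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class InstanceRun:
    app: str
    country: str      # where the server physically ran
    hours: float      # how long the instance lived
    revenue: float    # revenue attributed to this run

runs = [
    InstanceRun("storefront", "IE", 720.0, 12_000.00),
    InstanceRun("storefront", "SG", 720.0, 4_500.00),
    InstanceRun("batch-reports", "IE", 48.0, 0.00),
]

# Roll up revenue by the jurisdiction it was derived in -- the figure a
# "server as a taxable entity" law would ask you to substantiate.
by_country = defaultdict(float)
for r in runs:
    by_country[r.country] += r.revenue

for country, revenue in sorted(by_country.items()):
    print(f"{country}: revenue derived in-country = ${revenue:,.2f}")
```

Trivial as it looks, most small cloud adopters keep nothing like this ledger, and reconstructing it after a tax authority comes calling is far harder than recording it as you go.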

You might think you can circumvent some of this by limiting your product or service deployment to the country closest to home, wherever home is for you.  However, there are other efforts winding their way through legislatures, or in large degree already passed, that impact the data you store, what you store, whose data you are storing, and the like. In most cases these initiatives are unrelated to the developing revenue legislation, but taken together they deliver an interesting one-two punch.   For example, many countries are requiring that, for Safe Harbor purposes, all information for any nationals of 'Country X' must be stored in 'Country X', to ensure that its citizenry is properly protected and under the jurisdiction of the law that covers those users.   In a cloud environment, with customers potentially from almost anywhere, how do you ensure that this is the case?  How do you ensure you are compliant?   If you balance this requirement with the 'server as a taxable entity' example above, there is an interesting exposure and liability for companies to prove where and when revenue is derived.     Similarly, there are laws enacted as reactions against legislation in other countries.
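As a rough illustration of what residency-aware storage might look like in code (the policy, region names, and interface are all hypothetical assumptions, not any provider's actual API):

```python
# Hypothetical sketch of residency-aware placement: choose a storage region
# from the residency a record claims, falling back to a default. The policy
# table and region names are illustrative assumptions only.
RESIDENCY_POLICY = {
    "DE": "eu-central",   # e.g. German nationals' data must stay in the EU
    "FR": "eu-west",
    "CA": "ca-east",      # e.g. Canadian data stays in Canada
}
DEFAULT_REGION = "us-east"

def storage_region(nationality: str) -> str:
    """Return the region where this user's data is allowed to live."""
    return RESIDENCY_POLICY.get(nationality, DEFAULT_REGION)

assert storage_region("CA") == "ca-east"
assert storage_region("BR") == "us-east"  # no policy on file -> default
```

The hard part is not the lookup, of course; it is knowing each user's nationality reliably, keeping the policy table current with dozens of moving legislative targets, and proving compliance after the fact.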

In the post-9/11 era, the US Congress enacted a series of laws called the Patriot Act.   Due to some of the information search and seizure aspects of the law, Canada in response forbade Canadian citizens' data from being stored in the United States.   To the best of my knowledge only a small number of companies even acknowledge this requirement and have architected solutions to address it; the fact remains that the rest are not in compliance with Canadian law.  Imagine you are a small business owner, using a cloud environment to grow your business, and suddenly you begin to grow significantly in Canada.  Does your lack of knowledge of Canadian law excuse you from your responsibilities there?  No.  Is this something your infrastructure provider is offering to handle for you? Today, no.

I am only highlighting certain cases here to make the point that there is a world of complexity coming to the cloud space.  Thankfully these impacts have not been completely explored or investigated by most countries of the world, but it's not hard to see a day when this becomes a very real thing that companies and the cloud eco-system in general will have to address.  At its most basic level, these are potential revenue streams for governments, and as such their eventual day in the sun is all the more likely.    I am currently personally tracking over 30 different legislative initiatives around the world (read: pre-de-facto laws) that will likely shape this technology landscape for the big providers and potential cloud adopters some time in the future.

What is to come?

This first article was really just to bring out the basic premise of the conversation and topics I will be discussing, and to lay the groundwork to a very real degree.  I have not even begun to touch on the extra-governmental social and environmental impacts that will likely change the shape even further.  This interaction of Technology, "The Cloud", and political and social issues exists today, and although largely masked by the fact that the eco-system of the cloud is not fully developed or matured, it is no less a reality.   Any predictions made here are extensions of patterns I already see in the market and do not necessarily represent a foregone conclusion, but rather the most likely developments based upon my interactions in this space.  As this technology space continues to mature, the only certainty is uncertainty, modulated against the backdrop of a world where geo-political forces will increasingly shape the technology of tomorrow.

\Mm

Open Sourcing our Operational Scale Tools – Meet Trigger

Not that long ago we made a decision to begin open sourcing some of our internal products and tools into the community at large.   There are some really interesting benefits to open sourcing these internally developed tools, and as a company we have begun to do some very interesting work in this space.  Some of our work has gotten quite a bit of attention, such as SocketStream, a very fast, real-time web framework.

Today I am very pleased to announce that we are open-sourcing one of the tools that we use to manage and maintain our network infrastructure.   We call it Trigger.  

Trigger is a Python framework and suite of tools for interfacing with network devices and managing network configuration and security policy. It was designed internally specifically to increase the speed and efficiency of network configuration management.

Trigger’s core device interaction utilizes the freely available Twisted event-driven networking engine. The libraries can connect to network devices by any available method (e.g. telnet, SSH), communicate with them in their native interface (e.g. Juniper JunoScript, Cisco IOS), and return output. Trigger is able to manage any number of jobs in parallel and handle output or errors as they return.
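To give you a feel for it, here is a rough sketch of usage based on the patterns in Trigger's documentation; consult the docs linked below for the exact, current interfaces, and note that the hostnames here are hypothetical:

```python
# A rough sketch based on Trigger's documentation; check the docs for the
# exact, current interfaces. Hostnames below are hypothetical.
from trigger.netdevices import NetDevices
from trigger.cmds import Commando

# NetDevices is Trigger's metadata store of your network hardware.
nd = NetDevices()
dev = nd.find('test1-abc')           # look up a device by name
print(dev.vendor, dev.deviceType)    # e.g. juniper ROUTER

# Commando runs commands against many devices in parallel via Twisted.
class ShowClock(Commando):
    """Run 'show clock' across a set of devices."""
    commands = ['show clock']

job = ShowClock(devices=['test1-abc.net.aol.com'])
job.run()           # dispatches to all devices, gathers output and errors
print(job.results)  # per-device command output
```

The Commando pattern is where the parallelism described above pays off: one subclass, any number of devices, and the Twisted engine handles the concurrent sessions for you.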

If you think a tool like this would be interesting for you or your company, feel free to give it a try.  The Open Source repository can be found here: https://github.com/aol/trigger

The complete set of documentation is hosted at ReadtheDocs:

http://readthedocs.org/docs/trigger

This is just the first of many internal efforts and tools that we plan to Open Source.  I will announce more tools in the coming months that have helped AOL to scale over the years.  Stay tuned!

\Mm  

Cloud Détente – The Cloud Cat and Mouse Papers


Over the last decade or so I have been lucky enough to be placed in a fairly unique position, working internationally to deploy global infrastructure for cloud environments.  This work has spanned some very large companies with a dedicated focus on building out global infrastructure and managing through its unique challenges.   Strategies may have varied, but the challenges they all faced had some very common themes.   One of the more complex interactions in this process is what I call the rolling cat and mouse interactions between governments at all levels and these global companies.

Having been a primary player in these negotiations and in the development of the measures and counter-measures that result from them, I have come to believe there are some interesting potential outcomes that cloud adopters should think about and understand.   The coming struggle and complexity of managing, regulating, and policing multi-national infrastructure will not solely impact the large global players; in a very real way it will begin to shape how their users think through these socio-political and geo-political realities: the potential impacts on their business, their adoption of cloud technologies, their resulting responsibilities, and just how aggressively they look to the cloud for the growth of their businesses.

These observations and predictions are based upon my personal experiences.  So, for whatever it's worth (good or bad), this is not the perspective of an academic writing from some ivory tower; these are the observations of someone who has been there and done it.  I probably have enough material to write an entire book on my personal experiences and observations, but I have committed myself to writing a series of articles highlighting what I consider the big things being missed in the modern conversation about cloud adoption.

The articles will highlight (with some personal experiences mixed in) the ongoing battle between Technocrats and Bureaucrats.  I will try to cover a different angle on many of the big topics out there today, such as:

  • Big Data versus Big Government
  • Rise of Nationalism as a factor in Technology and infrastructure distribution
  • The long struggle ahead for managing, regulating, and policing clouds
  • The Business, end-users, regulation and the cloud
  • Where does the data live? How long does it live? Why Does it Matter?
  • Logic versus Reality – The real difference between Governments and Technology companies.
  • The Responsibilities of data ownership
    • … regarding taxation exposure
    • … regarding PII impacts
    • … Safe Harbor

My hope is that this series and the topics I raise, while maybe a bit raw and direct, will cause you to think a bit more about the coming impacts on the Technology industry at large, the potential impacts on small and medium-sized businesses looking to adopt these technologies, and the developing friction and complexity at the intersection of technology and government.

\Mm

Panel at Data Center Dynamics – London

On November 10th and 11th I will be speaking on two panels at the Data Center Dynamics event in London.  The theme for the two-day event is Carbon: Risk or Opportunity.   On the morning of Day One, I am speaking on a panel entitled 'The Data Center Efficiency Schism…New Realities in Design' with Ed Ansett from HP/EYP and my old friend Lex Coors from Interxion.   The afternoon has me on another panel with Liam Newcombe of the British Computer Society, entitled 'The Shape of the Cloud to Come', moderated by Data Center Dynamics CTO Stephen Worn.  Liam and I share a passion for this space, and our past conversations on this and other related topics have been quite entertaining (or so I have been told).  To top it off, the panel's moderator Stephen is not known for being timid either, so I am really looking forward to the discussion.

The entire event should be quite super-charged, especially given the recent Carbon Reduction Commitment legislation in the UK.  For those of you keeping a close eye on the emerging impact of carbon legislation across the world, this event is likely to surface a number of lightning rods and thought leaders to watch.

If you have not signed up and will be in London, I would strongly encourage you to do so.   As always if you happen to see me wandering about, please feel free to stop and chat awhile. 

\Mm

A Practical Guide to the Early Days of Data Center Containers

In my current role (and given my past) I often get asked about data center containers by people looking at this unique technology application to see if it's right for them.   In many respects we are still in the early days of this approach, and any answer one gives definitely has a variable shelf life, given the amount of attention the manufacturers and the industry are paying to this technology set.   Still, I thought it might be useful to jot down a few key things to think about when looking at the data center containers and modularized solutions out there today.

I will do my best to balance this view across four different axes: Technology, Real Estate, Financial, and Operational considerations.  A sort of 'Executive's View' of this technology. I do this because containers as a technology cannot and should not be looked at from a technology perspective alone.  To do so is complete folly, and you are asking for some very costly problems down the road if you ignore the other factors.  Many love to focus on the interesting technology characteristics or the efficiency benefits this technology can bring to bear for an organization, but to implement this technology (like any technology, really) you need a holistic view of the problem you are actually trying to solve.

So before we get into containers specifically, let's take a quick look at why containers have come about.

The Sad Story of Moore’s Orphan

In technology circles, Moore's law has come to be applied to a number of different technology advancement and growth trends, and has come to represent exponential growth curves.  The original Moore's law was actually an extrapolation and forward-looking observation based on the fact that 'the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented.'  As my good friend Dileep Bhandarkar, long-time Intel Technical Fellow now with Microsoft, routinely states: Moore has now been credited with inventing the exponential.  It's a fruitless battle, so we may as well succumb to the tide.

If we look across all areas of Information Technology, whether processors, storage, memory, or network bandwidth, the trend has clearly fallen into this exponential pattern; technology has been marching ahead at a staggering pace over the last 20 years.   Isn't it interesting, then, that the place where all of this wondrous growth and technological wizardry has manifested itself, the data center (or computer room, or data hall), has been standing at a near pseudo-evolutionary standstill?  In fact, if one truly looks at the technologies present in most modern data center designs, one would ultimately find small differences from the very first special-purpose data room built by IBM over 40 years ago.

Data centers themselves have a corollary in the beginning of the industrial revolution.   I am positive that Moore's observations would hold true as civilization transitioned from an agriculture-based economy to an industrialized one, and one might say that the current modularization approach to data centers is really just the industrialization of the data center itself.

In the past, each and every data center was built lovingly by hand by a team of master craftsmen and data center artisans.  Each is a one-of-a-kind tool built to solve a set of problems.  Think of the eco-system that has developed around building these modern-day castles: architects, engineering firms, construction firms, specialized mechanical industries, and a host of others all come together to create each and every masterpiece.    So too did those who built plows and hammers, clocks and sextants, and the other tools of the previous era specialize in making each item, one by one.   That is, of course, until the industrial revolution.

The data center modularization movement is not limited to containers, and there is some incredibly ingenious work happening in this space beyond them, but one can easily see the industrial benefits of mass-producing such technology.  This approach simply creates more value, reduces cost and complexity, makes technology cheaper, and simplifies the whole.  No longer are companies limited to wrestling with the arcane forces of data center design and construction; many of these components are being pre-packaged, pre-manufactured, and aggregated, reducing the complexity of the past.

And why shouldn't it?   Data centers live at the intersection of information and real estate.   They are more like machines than buildings, but share elements of both.   All one has to do is look at the financials to see how true this is.   In terms of construction, the cost of a data center breaks down to a simple format: roughly 85% of the total cost to build the facility is made up of the components, labor, and technology to deal with the distribution and cooling of the electrical load.


This of course leaves roughly 15% of the costs for land, steel, concrete, bushes, and the more traditional real estate components of the build.  On a hypothetical $100M facility, that is roughly $85M of electrical and mechanical plant against $15M of actual building.  Obviously these percentages differ market to market, but on the whole they are close enough to get the general idea.  It also raises an interesting question about the big drive for higher density in data centers, but that is a post for another day.

As a result of this incredible growth there has been an explosion, a Renaissance if you will, in data center design and approach, and the modularization effort is leading the way in causing people to think differently about the data centers themselves.   It's a wonderful time to be part of this industry.   Some claim this change is driven by the technology.  Others claim the drivers are financial, a product of the tough economic times.  The true answer (as in all things) is that it's a bit of both, plus some additional factors.

Driving at the intersection of IT Lane and Building Boulevard

From the perspective of technology drivers, the primary force behind this change is the fact that most existing data centers are not designed or instrumented to handle the demands of the changing technology requirements occurring within the data center today.

Data center managers are being faced with increasingly varied redundancy and resiliency requirements within the footprints they manage.   They continue to support environments that rely heavily upon the infrastructure for robust reliability, to ensure that key applications do not fail.  But applications are changing.  Increasingly there are applications that do not require the same level of infrastructure, either because the application is built to be geo-diverse or server-diverse, or because internal business units have deployed test servers or lab/R&D environments that do not need that level of protection. With the number of RFPs out there demanding that software and application developers solve the redundancy issue in software, rather than through large capital spend on behalf of the enterprise, this is a trend likely to continue for some time.  Regardless of the reason, the variability challenges that data center managers face are greater than ever before.

Traditional data center design cannot meet these needs without waste or significant additional expenditure.   Compounding this are the ever-increasing requirements for higher power density and the resulting cooling requirements, complicated by the fact that there is no uniformity of load across most data centers.  You have certain racks or areas driving incredible power consumption and requiring significant density, and other environments, perhaps legacy, perhaps under-utilized, which run considerably less dense.   In a single room you could see rack power densities vary by as much as 8kW per rack! You might have a bunch of racks drawing 4kW each and an area drawing 12kW per rack or even denser.   This can strand valuable data center resources and make data center planning very difficult.
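To see how quickly mixed densities erode a room, here is a toy calculation.  All numbers are hypothetical and the model is deliberately simplistic; real planning must also account for cooling distribution, phase balance, and breaker sizing:

```python
# Illustrative only: a toy model of how mixed rack densities strand capacity
# in a room provisioned for a uniform design density. All numbers hypothetical.

DESIGN_KW_PER_RACK = 8.0  # what the room's power/cooling was built to deliver
racks = [4.0] * 60 + [12.0] * 20  # 60 legacy racks at 4 kW, 20 dense at 12 kW

provisioned = DESIGN_KW_PER_RACK * len(racks)
drawn = sum(racks)

# Low-density racks strand the capacity built above them; high-density racks
# overdraw their slice and must borrow (or force new build) elsewhere.
stranded = sum(max(DESIGN_KW_PER_RACK - r, 0) for r in racks)
overdraw = sum(max(r - DESIGN_KW_PER_RACK, 0) for r in racks)

print(f"Provisioned: {provisioned:.0f} kW, actually drawn: {drawn:.0f} kW")
print(f"Stranded under low-density racks: {stranded:.0f} kW")
print(f"Overdraw above design in the dense area: {overdraw:.0f} kW")
```

In this toy room, a quarter of the provisioned power sits stranded under the legacy racks while the dense area runs 50% past its design slice, which is exactly the planning headache described above.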

Additionally, looming on the horizon is the spectre (or opportunity) of commodity cloud services, which might offer additional resources that could significantly change the requirements of your data center design, or your need for one at all.  This is generally an unknown at this point, but my money is on the cloud significantly impacting not only what you build, but how you build it.   That ultimately drives a modularized approach to the fore.

From a business/finance perspective, companies are faced with some interesting challenges as well.  The first is that the global inventory of data center space (from a leasing or purchase perspective) is sparse at best.    The glut of capacity left over from the dotcom era was consumed in the land grab that occurred after 9/11, with the finance industry chewing up much of the good inventory.    Added to this is a real reluctance to build these costly facilities speculatively, a combination of how the market was burned in the dotcom days and the general lack of availability of, and access to, large sums of capital.   Both of these factors are making data center space a tight resource.

In my opinion, the biggest problem across every company I have encountered is capacity planning.  Most organizations cannot accurately project how much data center capacity they will need next year, let alone three or five years from now.   It's a challenge I have invested a lot of time trying to solve, and it's just not that easy.   This lack of predictability exacerbates the problem: by the time companies realize they are running out of capacity, or need additional capacity, it becomes a time-to-market problem.   Given the inventory challenge I mentioned above, this can put a company in a very uncomfortable place, especially given the all-in industry average of 106 to 152 weeks to build a traditional data center yourself.

The high upfront capital cost of a traditional data center build can also be a significant endeavor and business-impacting event for many companies.   The spending associated with the traditional method of construction could cripple a company's resources and/or force it to focus those resources on something non-core to the business.   Data centers can and do impact the balance sheet, a fact not lost on the finance professionals in any organization looking at this type of investment.

Companies need to remain agile and move quickly, and they are looking for the same flexibility from their infrastructure.    An asset like a large data center built to requirements that no longer fit can create a drag on a company's ability to stay responsive as well.

None of this even acknowledges some basic cost factors beginning to come into play around the construction itself.   The construction industry is already forecasting that for every eight people retiring in the key trades associated with data centers (mechanical, electrical, pipe-fitting, etc.), only one person is replacing them.   This will eventually mean higher construction costs and increased scarcity of construction resources.

Modularized approaches help with all of these issues and challenges and give the modern data center manager a way to solve both the technology and business-level problems. They allow you to move to Site Integration versus Site Construction.    Let me quickly point out that this is not some new whiz-bang technology approach; it has been around in other industries for a long, long time.

Enter the Container Data Center

While it is not the only modularized approach, this is the landscape in which the data center container has made its entry.

First and foremost, let me say that while I am a strong proponent of containment in every aspect, containers can add great value or simply not be a fit at all.  They can drive significant cost benefits or end up costing significantly more than traditional space.  The key is understanding what problem you are trying to solve, and having a couple of key questions answered first.

So let's explore some of the things to think about in the current state of data center containers.

What problem are you trying to solve?

The first question to ask yourself when evaluating whether containerized data center space is a fit is which problem you are trying to solve.   In the past, the driver for me had more to do with solving deployment-related issues.   We had moved the base unit of measure from servers, to racks of servers, and ultimately to containers.    To put it in more general IT terms, it was a move from deploying tens to hundreds of servers per month, to hundreds and thousands of servers per month, to tens of thousands of servers per month.    Some people look at containers as Disaster Recovery or Business Continuity solutions.  Others look at them for HPC clusters or large, uniform batch-processing and modeling requirements.    You must remember that most vendor container solutions out there today are modeled on hundreds to thousands of servers per "box".  Is this a scale that is even applicable to your environment?   If you think it's as simple as dropping a container in place and then deploying servers into it as you will, you will have a hard learning curve in the current state of 'container-world'.   It just does not work that way today.

Additionally, one has to think about the type of 'IT load' to be placed inside a container.  Most containers espouse similar or like machines in bulk; rare to non-existent is the container that can take a multitude of different SKUs in different configurations.  Does your use drive uniformity of load or consistent use across a large number of machines?  If so, containers might be a good fit; if not, I would argue you are better off in traditional data center space (whether traditionally or modularly built).

I will assume for purposes of this document that you feel you have a good reason to use this technology application.

Technical things to think about . . .

For purposes of this document I am going to refrain from getting into a discussion or comparison of particular vendors (except in aggregate), as I will not endorse any vendor over another in this space.  Nor will I get into an in-depth discussion of server densities, compute power, storage, or other IT-specific comparisons for the containers.   I will trust that your organizations have experts, or at least people knowledgeable, in which servers/network gear/operating systems and the like you need for your application.   There is quite a bit of variety out there to choose from, and you are a much better judge of such things for your environments than I.  What I will talk about here, from a technical perspective, are things that you might not be thinking of when it comes to the use of containers.

Standards – What’s In? What’s Out?

One of the first things you need to do when evaluating containers is to make sure your facilities experts take a comprehensive look at the vendors under consideration, specifically the data center aspects of the container.  Why? The answer is simple: there are no set industry standards when it comes to data center containers.   This means each vendor might have its own approach to what goes in, and what stays out of, the container.   This has some pretty big implications for you as the user.   For example, let's take a look at batteries and UPS solutions.   Some vendors provide this function in the container itself (for ride-through or other purposes), while others assume it is part of the facility you will be connecting the container to.   How are the UPS/batteries configured in your container?   Some configurations might have interesting harmonics issues that will not work for your specific building configuration.    It's best to have both IT and Facilities people jointly review the solutions you are choosing, and to know what base services the building will need to provide to the containers, what the containers will provide themselves, and the like.

This brings up another interesting point you should probably consider.  Given the variety of container configurations and the lack of an overall industry standard, you might find yourself locked into a specific container manufacturer for the long haul.  If having multiple vendors is important, you will need to find vendors compatible with a standard that you define, or wait until there is an industry standard.    Some look to the widely publicized Microsoft C-Blox specification as a potential basis for a standard.  This is Microsoft's internal container specification, which many vendors have configurations for, but keep in mind it is based on Microsoft's requirements and might not meet yours.  Until the Green Grid, ASHRAE, or another standards body starts driving standards in this space, it's probably something to be concerned about.   This What's In/What's Out conversation becomes important in other areas as well; in the sections below on finance asset classes and operational items, understanding what is inside has some large implications.

Great Server manufacturers are not necessarily great Data Center Engineers

Related to the previous topic, I would recommend that your facilities people take a hard look at the mechanical and electrical distribution configurations of the container manufacturers you are evaluating.  The lack of standards leaves a lot of room for interpretation, and you may find that the one-line diagrams or the configuration of the container itself will not meet your specifications.   Just because a firm builds great servers does not mean it builds great containers.  Keep in mind that a data center container is a blending of IT and the infrastructure that would normally be housed in a traditional data center.  In many cases the actual data center componentry and design might be new to the vendor. Some vendors are quite good, some are not.  It’s worth doing your homework here.

Certification – Yes, it’s different from Standards

Another thing you want to look for is whether or not your provider is UL and/or CE certified.  It’s not enough that the servers and internal hardware are UL or CE listed; I would strongly recommend that the container itself carry this certification.  This is very important, as you are essentially talking about a giant metal box connected to somewhere between 100 kW and 500 kW of power.   Believe me, it is in your best interest to ensure that your solution has been tested and certified.  Why? Well, a big reason can be found down the yellow brick road.

The Wizard of AHJ or Pay attention to the man behind the curtain…

For those of you who do not know who or what an AHJ is, let me explain.  It stands for Authority Having Jurisdiction.  It may sound really technical, but it really breaks down to the local code inspector wherever you wish to deploy your containers.   This could be one of the biggest things to pay attention to, as your local code inspector can quickly sink your efforts or considerably increase the cost of deploying your container solution, from both an operational and a capital perspective.

Containers are a relatively new technology, and more than likely your AHJ will not have any familiarity with how to interpret this technology in the local market.  Given that there is not a large sample set for them to reference, their interpretation will be very, very important, so it’s important to engage your AHJ early on.   This is where the UL or CE listing can become important.  An AHJ could potentially interpret your container in one of two ways.  The first is as a big giant refrigerator.  It’s a crude example, but what I mean is: a piece of equipment.    UL and CE listing on the container itself will help with that interpretation, which should ultimately be the correct one, but the AHJ can do what they wish.   Alternatively, they might look at the container as a confined work space.    They might ask you all sorts of interesting questions, like how often people will be going inside to service the equipment; if there is no UL/CE listing, they might look at the electrical and mechanical installations and distribution and rule that they do not meet local electrical codes for distances between devices, and so on.   Essentially, the AHJ is an all-powerful force who could really screw things up for a successful container deployment.  It’s important to note that while a UL/CE listing gives you a great edge, your AHJ could still rule against you. If they rule the container a confined work space, for example, you might be required to suit your IT workers up in hazmat/thermal suits, in two-man teams, to change out servers or drives.  Funny?  That’s a real example and interpretation from an AHJ.    Which brings us to how important the IT configuration and its interpretation are to your use of containers.

Is IT really ready for this?

As you read this section, please keep our Wizard of AHJ in the back of your mind. His influence will still be felt in your IT world, whether your IT folks realize it or not.  Containers are really best suited to environments with a high degree of automation in the IT function for the services and applications that will run inside them.   If you have an extremely ‘high touch’ environment, where you do not have the ability to remotely access servers and need physical human beings to do a lot of care and feeding of your server environment, containers are not for you.  Just picture IT folks dressed up like spacemen.    Containers definitely require that you have a great deal of automation and that you think through some key items.

Let’s first look at your ability to remotely image brand-new machines within the container.   Perhaps you have this capability through virtualization, or perhaps through software provided by your server manufacturer.   One thing is a fact: this is an almost must-have technology with containers.   Given that a container can come with hundreds to thousands of servers, you really don’t want Edna from IT inside a container with DVDs, manually loading software images.   Or worse, the AHJ might rule unfavorably and you might have to send in two people in suits with the DVDs, for safety purposes.

So definitely keep in mind that you really need a way to deploy your images from a central image repository.   That, in turn, leads to integration with your configuration management systems (and asset management systems) and your network environment.

Configuration management and asset management systems are also essential to a successful deployment, so that the right images get to the right boxes.  Unless you have a monolithic application, this is going to be a key problem to solve.    Many solutions in the market today are based on the server or device ARPing out its MAC address, with some software layer intercepting that ARP and correlating the MAC address, through a database, to your image repository or configuration management system.   Otherwise you may be back to Edna and her DVDs, and her AHJ-mandated buddy.
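
To make that flow a bit more concrete, here is a minimal sketch of the lookup layer in Python.  To be clear, the MAC addresses, roles, and image URLs below are all invented for illustration; a real implementation would hang off your DHCP/PXE service and your configuration management database, whatever those happen to be in your shop.

```python
# Minimal sketch of the MAC-to-image lookup layer described above.
# All records here are hypothetical; a real system would sit behind
# your DHCP/PXE service and configuration management database.

# Asset database: MAC address -> role recorded at procurement time.
ASSET_DB = {
    "00:1a:2b:3c:4d:5e": "web-frontend",
    "00:1a:2b:3c:4d:5f": "storage-node",
}

# Image repository: role -> OS image to serve over the network.
IMAGE_REPO = {
    "web-frontend": "http://images.internal/web-frontend-v12.img",
    "storage-node": "http://images.internal/storage-node-v7.img",
}

def image_for_mac(mac: str) -> str:
    """Resolve a newly seen MAC address to the image it should boot."""
    role = ASSET_DB.get(mac.lower())
    if role is None:
        # Unknown hardware: park it rather than image it blindly.
        raise LookupError(f"MAC {mac} not in asset database")
    return IMAGE_REPO[role]

if __name__ == "__main__":
    print(image_for_mac("00:1A:2B:3C:4D:5E"))
```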

Of course, the concept of ARPing brings up your network configuration.   Make sure you put plenty of thought into network connectivity for your container.   Will you have one VLAN or multiple VLANs across all your servers?   Can the network equipment you have selected handle the number of machines inside the container? How your container is configured from a network perspective, and your ability to segment out the servers within it, could be crucial to your success.   Everyone always blames the network guys for issues in IT, so it’s worth having the conversation up front with the network teams on how they are going to address connectivity A) to the container and B) inside the container from a distribution perspective.
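
As a conversation starter with those network teams, here is a back-of-the-envelope sizing sketch; every count in it is a hypothetical placeholder, not a recommendation for your environment.

```python
# Back-of-the-envelope sanity check for container network planning.
# All figures are hypothetical examples; substitute your own counts.

import math

servers_in_container = 1200      # hypothetical server count
nics_per_server = 2              # e.g., data + management
hosts_per_vlan = 254             # usable addresses in a /24

ports_needed = servers_in_container * nics_per_server
vlans_needed = math.ceil(ports_needed / hosts_per_vlan)

print(f"Switch ports needed inside the container: {ports_needed}")
print(f"/24 VLANs needed to address them: {vlans_needed}")
```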

As long as I have all this IT stuff, containers are cheaper than traditional DCs, right?

Maybe.  This blends a little with the next section on finance things to think about for containers, but it is really sourced from a technical perspective.   Today you purchase containers in terms of the total power draw of the container itself: 150 kW, 300 kW, 500 kW, and similar denominations.   This ultimately means that you want to optimize your server environment to actually use the power you are buying.  Failing to utilize the entire power allocation could easily and quickly flip the economic benefits of going to containers.    I know what you’re thinking: Mike, this is the same problem you have in a traditional data center, so this should really be a push and a non-issue.

The difference here is that you have a higher up-front cost with the containers.  Let’s say you are deploying 300 kW containers as a standard.    If you never really drive those containers to 300 kW, and let’s say your average is 100 kW, you are only getting 33% of the capacity you paid for.   If you then add a second container and drive it to like capacity, you may find yourself paying a significant premium for that capacity, at a much higher price point than deploying those servers to traditional raised-floor space, for example.    Since we are brushing up against the economic and financial aspects, let’s take a quick look at things to keep an eye on in that space, right after a quick sketch that puts numbers on the underutilization point.
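
The sketch below is illustrative only; the container cost is a made-up figure, and the shape of the math is what matters, not the dollars.

```python
# Illustration of the utilization math above.  The container price is
# hypothetical; the point is how fast the effective cost per kW climbs
# when a fixed-capacity container runs lightly loaded.

container_capacity_kw = 300.0
container_cost = 1_500_000.0     # hypothetical all-in cost per container

def cost_per_kw(actual_load_kw: float) -> float:
    """Effective capital cost per kW of load actually carried."""
    return container_cost / actual_load_kw

full = cost_per_kw(container_capacity_kw)  # fully utilized
light = cost_per_kw(100.0)                 # the 100 kW average example

print(f"Cost per kW at full load: ${full:,.0f}")
print(f"Cost per kW at 100 kW:    ${light:,.0f}")
print(f"Premium for underuse:     {light / full:.1f}x")
```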

Finance Friendly?

Most people have the idea that containers are ultimately cheaper, and that therefore those finance guys are going to love them.   They may actually be cheaper, or they may not; regardless, there are other things your finance teams will definitely want to take a look at.

The first challenge for your finance teams is figuring out how to classify this new asset called a container.   Traditional asset classifications for IT and data center investments typically fall into three categories, from which the rules for depreciation are set.  The first is software; the second is server-related infrastructure such as servers, hardware, racks, and the like; the last is the data center itself.    Software investments might be capitalized over anywhere from one to ten years.   Servers and the like typically range from three to five years, and data center components (UPS systems, etc.) are depreciated over something closer to fifteen to thirty years.

Containers represent a mixed asset class.  The container obviously houses servers that have a useful life (presumably shorter than that of the container housing itself), while the container also contains components that would normally be found in the data center and therefore traditionally carry a longer depreciation cycle.   Remember our What’s in? What’s out? conversation? Your finance teams are going to have to figure out how they deal with a mixed-asset-class technology, and there is no easy answer.  Some finance systems are set up for this; others are not.  An organization could treat it in an all-or-nothing fashion: if the entire container is depreciated over a server life cycle, it will dramatically increase the depreciation hit for the business, and if you opt to depreciate it over the longer-lead-time items, you will need to figure out how to account for the fact that the servers within will be rotated much more frequently.  (A quick numeric sketch of the difference follows below.)  I don’t have an easy answer to this, but I can tell you one thing: if your finance folks are not looking at containers along with your facilities and IT folks, they should be.  They might have some work to do to accommodate this technology.
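
For illustration only, here is a tiny sketch of how differently the depreciation math can land depending on the classification your finance team chooses; the costs and useful lives below are made up.

```python
# Straight-line depreciation sketch for the mixed-asset problem.
# Costs and lifetimes are hypothetical; real schedules depend on your
# finance team's policies and local accounting rules.

def annual_depreciation(cost: float, life_years: int) -> float:
    """Straight-line depreciation: equal expense each year of life."""
    return cost / life_years

# Split the container into its component asset classes...
servers = annual_depreciation(1_000_000, 4)   # server gear, ~4 years
dc_gear = annual_depreciation(500_000, 20)    # UPS/cooling, ~20 years
split_total = servers + dc_gear

# ...versus treating the whole box as one server-class asset.
all_as_servers = annual_depreciation(1_500_000, 4)

print(f"Split classes, yearly hit:  ${split_total:,.0f}")
print(f"All-as-servers, yearly hit: ${all_as_servers:,.0f}")
```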

Related to this, you might also want to think about containers from an insurance perspective.   How is your insurer looking at containers, and how do they allocate cost versus risk for this technology set?  You are likely going to have some detailed conversations just to bring them up to speed on the technology.  You might find they require you to put in additional fire suppression (it’s a metal box; if something catches fire inside, it should naturally be contained, right?  But what about the burning plastics?).  How is water delivered to the container for cooling?  Where and how does electrical distribution take place?   These are all questions that could adversely affect the cost or operation of your container deployment, so make sure you loop your insurers in as well.

Operations and Containers

Another key area to keep in mind is how your operational environment is going to change as a result of the introduction of containers.   Let’s jump back for a second to our insurance examples.   A container could weigh as much as 60,000 pounds (US).  That is pretty heavy.  Now imagine you accidentally smack into a load-bearing wall or column as you try to push it into place.  That is one area where operations and insurance are going to have to work together.   Is your company licensed and bonded for moving containers around?  Does your area have union regulations under which only union personnel are certified and bonded to do that kind of work?   These are important questions and things you will need to figure out from an operations perspective.

Going back to our What’s in? What’s out? conversation: you will need to ensure that you have the proper maintenance regimen in place to make this technology a success.    Perhaps the stuff inside is covered by the contract you have with your container manufacturer.  Perhaps it’s not.   What work will need to take place to properly support that environment?   If you have batteries in your container, how do you service them?  What’s the Wizard of AHJ’s ruling on that?

The point here is that an evaluation of containers must be multi-faceted.  If you only look at this solution from a technology perspective, you are creating a very large blind spot for yourself, one that will likely have a significant impact on the success of containers in your environment.

This document is really meant to be the first installment in an evolving watch on the industry as it stands today. I will add observations as I think of them and repost accordingly over time. Likely (and hopefully) many of these challenges and considerations will get solved over time, and I remain a strong proponent of this technology application.   The key is that you cannot look at containers purely from a technology perspective.  There are a multitude of other factors that will make or break the use of this technology.  I hope this post helped answer some questions, or at least forced you to think a bit more holistically about the use of this interesting and exciting technology.

\Mm