IPvSexy: Yes, you too can get there.

Today we celebrated going live with IPv6 versions of many of our top-rated sites.  The work was done in advance of our participation in IPv6 Launch Day.  For the uninitiated, IPv6 Launch Day is the date on which major sites begin making their websites publicly available under the new Internet numbering scheme; it is currently set for June 6, 2012.  As many of you likely know, the IPv4 address space, which has served the Internet so well since its inception, is running out of unique addresses.  I am especially proud of the fact that we are the first of the largest Internet players to achieve this unique feat.  In fact, three of our sites occupy slots in the Top 25 Sites ranking: www.aol.com, www.engadget.com, and www.mapquest.com.  As with all things, there are some interesting caveats.  For example, Google is IPv6-enabled for some ISPs, but not all; I am specifically highlighting global availability.
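A quick way to verify this kind of availability yourself is to check whether a site publishes AAAA (IPv6) records at all. A minimal Python sketch (the hostnames are simply the examples mentioned above):

```python
import socket

def has_public_ipv6(hostname):
    """Return True if the host resolves to at least one IPv6 (AAAA) address."""
    try:
        infos = socket.getaddrinfo(hostname, 80, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False          # no AAAA record (or the name does not resolve)
    return len(infos) > 0

for site in ["www.aol.com", "www.engadget.com", "www.mapquest.com"]:
    status = "IPv6 enabled" if has_public_ipv6(site) else "IPv4 only"
    print(site, "->", status)
```

Note that a published AAAA record is necessary but not sufficient for reachability; you still need end-to-end IPv6 connectivity from your own ISP.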

 


 

The journey has been an interesting one, and there were many lessons learned along the way.  In many conversations on this topic with other large firms exploring this move, I often hear how difficult the process appears to be, and a general reluctance to even begin.  As the old saying goes, even the longest journey begins with a single step.

This work was far from easy, and our internal team learned a great deal in our efforts to take our sites live, beginning with World IPv6 Day in 2011.  Although our overall IPv6 traffic levels remain pretty tiny (a sustained 4-5 Mb/s), they are likely to grow as more ISPs convert their infrastructure to IPv6.

Perhaps the most significant thing I would like to share is that in migrating to the new numbering system, I was very pleased to find how many options are available to companies in staging their moves to IPv6.  Companies have a host of options short of an outright full-scale renumbering of their networks.  There are IPv4-to-IPv6 gateways, capabilities already built into your existing routing and switching equipment that can ease the way, and even capabilities in external service providers like Akamai that can ease your adoption of the new space.  Eventually everything will need to be migrated and you will need a comprehensive plan to get there, but it's nice to know that firms have a range of options available to assist in this technical journey.
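One of the lowest-friction staging options is simply serving dual-stack: on most operating systems a single IPv6 listening socket can also accept IPv4 clients via IPv4-mapped addresses. A minimal sketch; the setsockopt behavior varies by platform, so treat this as illustrative rather than definitive:

```python
import socket

def make_dual_stack_listener(port):
    """Create one listening socket that accepts IPv6 clients natively and
    IPv4 clients as IPv4-mapped addresses (::ffff:a.b.c.d), where allowed."""
    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # Clearing IPV6_V6ONLY is what enables the dual-stack behavior.
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    sock.bind(("::", port))   # "::" is the IPv6 wildcard address
    sock.listen(5)
    return sock
```

Windows and some BSDs default IPV6_V6ONLY to on, which is why the option is set explicitly here.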

 

\Mm

Patent Wars may Chill Data Center Innovation

Yahoo may have just sent a cold chill across the data center industry at large and begun a stifling of data center innovation.  In a May 3, 2012 article, Forbes did a quick-and-dirty analysis of the patent wars between Facebook and Yahoo. It's a quick read, but it shines an interesting light on the potential impact something like this can have across the industry.  The article, found here, highlights that:

In a new disclosure, Facebook added in the latest version of the filing that on April 23 Yahoo sent a letter to Facebook indicating that Yahoo believes it holds 16 patents that “may be relevant” to open source technology Yahoo asserts is being used in Facebook’s data centers and servers.

While these types of patent infringement cases happen all the time in the corporate world, this one could have far greater ramifications for an industry that has only recently emerged into the light of shared ideas.  While details remain sketchy at the time of this writing, it's clear that the specific call-out of data centers and servers is an allusion to more than just server technology or applications running in their facilities.  In fact, there is a specific call-out of data centers and infrastructure.

With this revelation one has to wonder about its impact on the Open Compute Project, which is being led by Facebook.  It leads to some interesting questions. Has their effort to be more open in their designs and approaches to data center operations and design led them to a position of legal risk and exposure?  Will this open the floodgates for design firms to become more aggressive around functionality designed into their buildings?  Could companies use their patents to freeze competitors out of colocation facilities in certain markets by threatening colo providers with these types of lawsuits?  Perhaps I am reaching a bit, but I never underestimate litigious fervor once the proverbial blood gets in the water.

In my own estimation, there is a ton of “prior art,” to use an intellectual property term, out there to settle this down long term, but the question remains: will firms go through that lengthy process to prove it out, or opt to re-enter their shells of secrecy?

After almost a decade of fighting to open the collective industry up to sharing technologies, designs, and techniques, this is a very disheartening move.  The general glasnost that has settled over the industry has led to real and material change.

We have seen companies' mental shift from measuring facilities purely on “up time” to a view that is focused on efficiency as well.  We have seen more willingness to share best practices and to find like-minded firms to share in innovation.  One has to wonder: will this impact the larger “greening” of data centers in general?  Without that kind of pressure, will people move back to what is comfortable?

Time will certainly tell.  I was going to make a joke that until time proves it out I may have to “lawyer up” just to be safe.  It's not really a joke, however, because I'm going to bet other firms do something similar, and that, my dear friends, is how the innovation will start to freeze.

 

\Mm

DataCentres 2012 – Nice, France


Next month I will be one of the keynote speakers at the DataCentres2012 conference in Nice, France.  This event, produced and put on by the BroadGroup, is far and away the pre-eminent conference for the data center industry in Europe.  As an alumnus of other BroadGroup events, I can assure you that the presentations and training available are of the highest quality. I am also looking forward to re-connecting with some great friends such as Christian Belady of Microsoft, Tom Furlong of Facebook, and others.  If you are planning on attending, please feel free to reach out and say hello.  It's a great opportunity to network, build friendships, and discuss the issues pressing our industry today.  You can find out more by visiting the event website below.

http://www.datacentres2012.com/

 

\Mm

Attacking the Cruft

Today the Uptime Institute announced that AOL won the Server Roundup Award.  The achievement has already gotten some press (at Computerworld, PCWorld, and related sites) and I cannot begin to tell you how proud I am of my teams.  One of the more personal transitions I have made since my experience scaling the Microsoft environments from tens of thousands of servers to hundreds of thousands of servers has been truly understanding the complexity of a problem most larger, established IT departments have been dealing with for years.  In some respects, scaling infrastructure, while incredibly challenging and hard, is in large part a uni-directional problem space.  You are faced with growth and more growth, followed by even more growth.  All sorts of interesting things break when you get to big scale. Processes, methodologies, and technologies all quickly fall by the wayside as you climb ever up the ladder of scale.

At AOL I faced a multi-directional problem space in that, as a company and as a technology platform, we were still growing.  Added to that were 27 years of what I call “Cruft.”  I define “Cruft” as years of build-up of technology, processes, politics, fiscal orphaning, and poor operational hygiene.  This cruft can act as a huge boat anchor and a barrier to an organization driving agility in its online and IT operations.  On top of this cruft, a layer of what can best be described as lethargy, or perhaps apathy, can sometimes develop and add even more difficulty to the problem space.

One of the first things I encountered at AOL was the cruft.  In any organization, everyone always wants to work on the new, cool, interesting things, mainly because they are new and interesting, out of the norm.  Essentially the fun stuff!  But the organization's ability to really drive the adoption of new technologies and methods was always slowed, gated, or in some cases altogether prevented by years of interconnected systems, lost owners, servers of unknown purpose lost to distant historical memory, and the like.  This I found in healthy populations at AOL.

We initially set about building a plan to attack this cruft: to earnestly remove as much of it as possible and drive the organization toward agility.  Initially we called this list of properties, servers, equipment, and the like the Operations $/-\!+ list. As this name was not very user-friendly, it migrated into a series of initiatives grouped under the name Ops-Surdities.  These programs attacked different types of cruft and at a high level were grouped into three main categories:

The Absurdity List – A list of projects/properties/applications that had questionable value, no owner, no direction, or the like, but were still drawing load and resources from our data centers.  The plan here was to develop an action plan for each item that appeared on this list.

Power Hog – An effort to audit our data center facilities, equipment, and the like, looking for inefficient servers, installations, and/or technology and migrating them to newer, more efficient platforms or our AOL Cloud infrastructure.  You knew you were in trouble when a bronze pig trophy appeared on your desk or in your office: you had been marked.

Ops Hygiene – The sometimes tedious task of tracking down older machines and systems that may have been decommissioned in the past, marked for removal, or fully depreciated, but never truly removed.  Pure vampiric load.  You may or may not be surprised how much of this exists in modern data centers; it's a common issue I have discussed with most data center management professionals in the industry.
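At its core, this kind of hygiene hunt is a join between the asset inventory and observed activity. A minimal sketch of the idea; the inventory schema and field names here are made up for illustration, not our actual CMDB:

```python
from datetime import datetime, timedelta

def find_vampiric_load(inventory, last_seen_active, stale_after_days=90):
    """Flag hosts that are decommissioned on paper, have no recorded
    activity, or have been idle past the staleness window.

    inventory:        dict of hostname -> record, e.g. {"status": "active"}
    last_seen_active: dict of hostname -> datetime of last observed activity
    """
    cutoff = datetime.now() - timedelta(days=stale_after_days)
    suspects = []
    for host, record in inventory.items():
        seen = last_seen_active.get(host)
        if record.get("status") == "decommissioned" or seen is None or seen < cutoff:
            suspects.append(host)
    return sorted(suspects)
```

Each flagged host then gets human follow-up: negotiation with the owner, migration, or an actual unplugging.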

So here we are, on a timeline measured in under a year, and after being told all along the way by “crufty old-timers” that we would never make any progress, my teams have decommissioned almost 10,000 servers from our environments. (Actually this number is greater now, but the submission deadline for the award was earlier in the year.)  What an amazing accomplishment.  What an amazing team!

So how did we do it?

As we will be presenting this in a lot more detail at the Uptime Symposium, I am not going to give away all of our secrets in a blog post; instead I will give you a good reason to head to the Uptime event and listen to, and ask, the primary leaders of this effort how they did it in person.  It may be a good use of that travel budget your company has been sitting on this year.

What I will share is some guidelines on approach and some things to be wary of if you are facing similar challenges in your organization.

FOCUS AND ATTENTION

I cannot tell you how many people I have spoken with who have tried to go after ‘cruft’ like this time and time again and failed.  One of the key drivers for success, in my mind, is ensuring that there is focus and attention on this kind of project at all levels, across all organizations, and most importantly from the TOP.  Too often executives give out blind directives with little to no follow-through and assume this kind of thing gets done.  They are generally unaware of the natural resistance to this kind of work in most IT organizations.  Having motivated, engaged, and focused leadership on these types of efforts goes an extraordinarily long way to making headway here.

BEWARE of ORGANIZATIONAL APATHY

The human factors that stack up against a project like this are impressive.  While people may not be openly in revolt over such projects, there is a natural resistance to getting this work done.  This work is not sexy.  This work is hard.  This work is tedious.  It likely means going back and touching equipment and kit that has not been messed with for a long time.  You may have competing organizational priorities which place this kind of work at the bottom of the priority list.  In addition to having executive buy-in and focus, make sure you have some really driven people running these programs.  You are looking for CAN DO people, not MAKE DO people.

TECHNOLOGY CAN HELP, BUT IT'S NOT YOUR HEAVY LIFTER

Probably a bit strange for a technology blog to say, but it's true.  We have an incredible CMDB and asset system at AOL, which was hugely helpful in really getting to the bottom of the list.  However, no amount of technology will be able to perform the myriad tasks required to actually make material movement on this kind of work.  Some of it requires negotiation, some of it requires strength of will, and some of it takes pure persistence in running these issues down: working with the people, understanding what is still required and what can be moved.  This requires people.  We had great technologies in place for knowing where our stuff was, what it did, and what it was connected to.  We had great technologies, like our Cloud, to ultimately move some of these platforms to.  However, you need to make sure you don't go too far down the people trap.  I have a saying in my organization: there is a perfect number of project managers and security people in any organization, the point at which work output and value delivered are highest.  What is that number?  It depends, but you definitely know when you have one too many of each.

MAKE IT FUN IF YOU CAN

From the bronze pigs to the minor celebrations each month as we worked through the process, we ensured that the attention given the effort was not negative. Sure, it can be tough work, but at the end of the day you are substantially investing in the overall agility of your organization.  It's something to be celebrated.  In fact, at the completion of our aggressive goals, the primary project leads involved made a great video (which you can see here) to highlight and celebrate the win.  Everyone had a great laugh and a ton of fun doing what was ultimately a tough grind of work.  If you are headed to Symposium I strongly encourage you to reach out to my incredible project leads.  You will be able to recognize them from the video… without the mustaches, of course!

\Mm

on taking the responsibilities of CTO of the Huffington Post Media Group…


Today I was asked to take over the responsibilities of Chief Technology Officer for the Huffington Post Media Group, and I can tell you I am extremely excited about this opportunity.  This new set of responsibilities will be in addition to my current role as Senior Vice President of Technology at AOL, where I have responsibility for the operations and day-to-day delivery of all AOL products and services.  What is extremely interesting is that the Huffington Post is like no other platform or agency of its kind.  In fact, I am not sure there really is a ‘kind’ to apply.

In observing and participating in the daily operation of the Huffington Post since its integration into AOL, and experiencing first-hand how the organization functions from an editorial-flow perspective, I can say it is unlike any other organization I have ever witnessed.

First, the Editorial, Design, and Technology components of the company are truly three equal and interdependent legs in the overall delivery of the service. Unlike many media companies, where technology plays a secondary role, at the Huffington Post it's an essential and core part of the overall product and delivery strategy.  Technology literally iterates on a daily basis.

In conversations with Arianna Huffington and Tim Armstrong, I have come to truly appreciate the longer-term vision and expansion of this product and how important a role technology will play. My background in rapidly scaling out infrastructure and development capabilities will, I hope, lend considerable capability to these goals for the future.

Some people familiar with this type of industry may think it's nothing more than a simplified website with a custom CMS.  I can tell you that the back-end systems, custom CMS, widget interfaces, and the overall flexibility that these systems operate on and develop to are part of the reason for the platform's overall success.  In a world where ‘Internet time’ generally means an aggressively accelerated rate of time, the Huffington Post platform operates at a faster-than-Internet-time rate.  It's an incredible challenge and one I can't wait to sink my teeth into.

 

\Mm

ATC Ribbon Cutting

[Photo: grand opening]

In my previous post I mentioned how extremely proud I was of the technology teams here at AOL for delivering a truly state-of-the-art data center facility with some incredible groundbreaking technology.  As I mentioned, the facility was actually in production use faster than we could get the ribbon-cutting ceremony scheduled.  I thought I would share a small slice of the pictures from the internal ribbon-cutting event.

[Photo: manos-gounares-cloud]

Alex Gounares, a fellow former Microsoft alum and AOL's CTO, presided over the celebration with me.  In this photo, Alex and I talk over some of the technologies used in our cloud with one of our cloud engineers.  As the facility is based upon pre-racked technologies and modular facility and network build components, it allows for significant cost and capital optimization: we build only when demand and growth dictate the need. All machines in the background are live and have been live for a few weeks.

[Photo: ribbon cutting]

After receiving two very large scissors, which were remarkably sharp and precise for their size, we were ready to go.  A few short words about the phenomenal job our teams performed, and it was time for some ribbon to kiss raised floor.

 

 


At the end of the day, the real reason this project was such a success breaks down to the team responsible for this incredible win.  An effort like this took incredibly smart people from different organizations working together to make it a reality.  The achievement is even more impressive in my mind when you consider that in many cases our 90-day go-live timeframe included design and execution on the go!  My guess is our next one may be significantly faster without all that design time. The true heroes of ATC are below!

[Photo: the team]

 

\Mm

(Special thanks goes out to Krysta Scharlach for the permission and use of her pictures in this post)

Breaking the Chrysalis

What has come before

When I first took my position at AOL I knew I was going to be in for some very significant challenges.  This position, perhaps more so than any other in my career, was going to push the bounds of my abilities: as a technologist, as an operations professional, as a leader, and as someone who would hold measurable accountability for the operational success of an expansive suite of products and services.  As many of you may know, AOL has been engaged in what used to be called internally a “Start-Around”: essentially an effort to fundamentally change the company from its historic roots into the premium content provider for the Internet.

We no longer use this term internally, as it is no longer about forming or defining that vision.  It has shifted to something more visceral, more tangible.  It's a challenge that most companies should be familiar with: it's called execution.  Execution is a very simple word, but as any good operations professional knows, the devil is in the details, and those details have layers and layers of nuance.  It's where the proverbial rubber meets the road.  For my responsibilities within the company, execution revolves 100% around delivering the technologies and services to ensure our products and content remain available to the world.  It is also about fundamentally transforming the infrastructural technologies and platform systems our products and content are based upon, and providing as much agility and mobility as we can to our business lines.

One fact that is often forgotten in the fast-paced world of Internet darlings is that AOL had achieved a huge scale of infrastructure and technology investment long before many of these companies were gleams in the eyes of their founders.  While it may be fun and “new” to look at the tens of thousands of machines at Facebook, Google, or Microsoft, it is often overlooked that AOL had tens of thousands of machines (and still does!) and solved many of the same problems years ago.  To be honest, it was a personal revelation for me when I joined.  There are few companies who have had to grow and operate at this kind of scale, and every approach is a bit unique and different.  It was an interesting lesson, even for one who had a ton of experience doing something similar in “Internet darling” infrastructures.

AOL has been around for over 27 years.  In technology circles, that's like going back almost ten generations: almost three decades of “stuff.”  The stuff was not only gear and equipment from the natural growth of the business, but also the expansion of features and functionality of long-standing services, increased systems interdependencies, and operational, technological, and programmatic “Cruft” as new systems, processes, and technologies were built upon or bolted onto older ones.

This “cruft” adds significant complexity to your operating environment and can truly limit your organization's agility.  As someone tasked with making all this better, it struck me that we actually had at least two problems to solve: the platform and foundation for the future, and a method and/or strategy for addressing the older products, systems, and environments to increase our overall agility as a company.

These are hard problems.  People have asked why I haven't blogged externally in a while.  This is the kind of challenge, with multiple layers of challenges underneath, that can keep one up at night.  From a strategy perspective, do you target the new first?  Do you target the legacy environments to reduce the operational drag?  Or do you try to define a unified strategy to address both?  The last is a lot harder and generally more complex, but the potential payoff is huge.  Luckily I have a world-class team at AOL, and together we built and entered our own cocoon and busily went to work.  We have gone down the path of changing out technology platforms, operational processes, and outdated ways of thinking about data centers, infrastructure, and our overall approach, every inch fighting forward on this idea of unified infrastructure.

It was during this process that I came to realize that our particular legacy challenge, while at “Internet” scale, was more closely related to the challenges of most corporate or government environments than to those of the biggest Internet players.  Sure, we had big scale, we had hundreds of products and services, but the underlying “how to get there from here” problems were more universally like IT challenges than like scaling out similar applications across commoditized infrastructure.  It ties into all the marketing promises, technological snake oil, and other baloney about the “cloud.”  The difference is that we had to quickly deliver something that worked and would not impact the business.  Whether we wanted to or not, we would be walking down roads similar to those facing most IT organizations today.

As I look at the challenges facing modern IT departments across the world, their ability to “go to the cloud” or make use of new approaches is also securely anchored by the “cruft” of their past.  Sometimes that cruft is so thick that the organization cannot move forward.  We were there; we were in the same boat.  We aren't out of it yet, but we have made some developments that I think are pretty significant, and I intend to share those learnings where appropriate.

 


ATC IS BORN

Last week we launched a brand-new data center facility we call ATC.  This facility is fundamentally built upon the work we have been doing around our own internal cloud technologies, shifts in operational process and methodology, and targeting our ability to be extremely agile in our new business model.  It represents a model for how to migrate the old, prepare for the new, and provide a platform upon which to build our future.

Most people ignore the soft costs when looking at adopting different cloud offerings; operational impacts are typically considered as afterthoughts.  What if you built those requirements in from day one?  How would that change your design? Your implementation? Your overall strategy?  I believe that ATC represents that kind of shift in thinking, at least for us internally.

One of the key foundations of our ATC facility is our cloud platform and automation layer.  I like to think about this layer as a little bit country and a little bit rock and roll.  There is tremendous value in the learnings that have come before, and nowhere is this more self-evident than at AOL.  As I mentioned, the great minds of the past (as well as those of the present) invested in many great systems that made this company a giant in the industry.  There are many such systems here, but one of the key ones in my mind is the configuration management system.  All organizations invest significantly in this type of platform.  If done correctly, its uses can extend beyond a rudimentary asset management system to include cost-allocation systems, dependency mapping, detailed configuration and environmental data, and, in some cases like ours, the base foundation for leading us into the cloud.

Many companies I speak with abandon this work altogether or live in a strange split/hybrid model where they treat “Cloud” as different.  In our space, new government regulations, new safe harbor laws, and the like continue to drive the relevance of a universal system acting as a central authority.  The fact that this technology actually sped our development efforts in this automation cannot be ignored.

We went from provisioning servers in days to getting base virtual machines up and running in under 8 seconds.  Want service and application images (for established products)? Add another 8 seconds or so.  Want to roll it into production globally (changing global DNS, load balancing, and security)?  Let's call that another minute.  We used open source products and added our own development glue into our own systems to make all this happen.  I am incredibly proud of my cloud teams here at AOL, because what they have been able to do in a relatively short period of time is roll out a world-class cloud and service provisioning system that can be applied to new efforts and platforms or to our older products.  Better yet, the provisioning systems were built to be universal, so that if required we can do the same thing with stand-alone physical boxes or virtual machines.  No difference.  Same system. This technology platform was recently recognized by the Uptime Institute at its last Symposium in California.
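Those stages (base VM, then an application image, then a global rollout) compose into a simple pipeline. This sketch is illustrative only; the stage names, the timing budgets, and the run_stage callable are hypothetical stand-ins for whatever internal APIs do the real work:

```python
def provision(hostname, run_stage, want_app_image=True, go_global=True):
    """Run the provisioning stages in order; return the rough seconds budgeted."""
    stages = [("base_vm", 8)]                  # bare virtual machine: ~8s
    if want_app_image:
        stages.append(("app_image", 8))        # service/application image: ~8s
    if go_global:
        stages.append(("global_rollout", 60))  # DNS / load balancing / security: ~1 min
    elapsed = 0
    for name, budget in stages:
        run_stage(name, hostname)              # hypothetical hook into real tooling
        elapsed += budget
    return elapsed
```

With all stages enabled, the pipeline budgets roughly 76 seconds end to end, which matches the rough arithmetic above.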


This technology was put to the test recently with the earthquake that hit the East Coast of the United States.  While thankfully the damage was minimal, the tremor of Internet traffic was incredible.  The AOL homepage, along with our news sites, started to get hammered with traffic and requests.  In the past this would have required a massive people effort to provision more capacity for our users.  With the new technology in place we were able to start adding additional machines to take the load extremely quickly, with very minimal impact to our users.  In this particular case the machines were provisioned from our systems in existing data centers (not ATC), but the technology is the same.
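Once provisioning is that fast, absorbing a spike reduces to simple threshold arithmetic. A hedged sketch of the idea, with made-up capacity numbers and a hypothetical provision_vm() callable standing in for the real provisioning system:

```python
import math

def machines_needed(current_rps, capacity_per_machine, headroom=1.25):
    """Machines required to serve current_rps with 25% headroom by default."""
    return math.ceil(current_rps * headroom / capacity_per_machine)

def scale_out(pool, current_rps, capacity_per_machine, provision_vm):
    """Grow the pool until it can absorb the current traffic level."""
    target = machines_needed(current_rps, capacity_per_machine)
    while len(pool) < target:
        pool.append(provision_vm())   # hypothetical fast-provisioning hook
    return pool
```

At 1,000 requests/second with machines rated for 100 each, scale_out grows an empty pool to 13 machines (12.5 rounded up with headroom); a pool that is already large enough is left alone.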

This kind of technology and agility has some interesting side effects too.  It allows your organization to move much more quickly and aggressively than ever before.  I have seen Jevons paradox manifest itself over and over again in the technology world.  For those of you who need a refresher, Jevons paradox is the proposition that technological progress that increases the efficiency with which a resource is used tends to increase (rather than decrease) the rate of consumption of that resource.

It's like when car manufacturers started putting miles-per-gallon (MPG) efficiency ratings on autos: the direct result was not a reduction in driving, but rather an overall increase in travel.

ATC officially launched on October 1, 2011.  It took all of an hour to have almost 100 virtual machines deployed to it once it was “turned on.”  It has since long passed that mark; in fact, this technology uptake is happening faster than we can coordinate executive schedules for our executive ribbon-cutting ceremony this week.

While the cloud development and technology efforts are cornerstones of the facility, it is not this work alone that makes it unique. After all, however slick our virtualization and provisioning systems are, and however deeply integrated they are into our internal tools and configuration management systems, those characteristics in and of themselves do not reflect the true evolution that ATC represents.

ATC is a 100% lights-out facility.  There are absolutely no employees stationed at the facility full-time, contract, or otherwise.  The entire premise is that we have moved from a reactive support model to a proactive, planned-work support model.  If you compare this with other facilities (including some I built myself in the past), there are always personnel on site, even if contractors.  This has fundamentally led to significant changes in how we operate our data centers; in how, what, and when we do our work; and has driven down the overall costs to operate our environments.  Some of these are efficiencies and approaches I have used before (100% pre-racked, vendor-integrated gear and systems integration); others are fundamentally brand-new approaches.  These changes have not been easy, and a ton of credit goes to our operations and engineering staff in the data centers and across the Technology Operations world here at AOL.  It's always culturally tough to be open to fundamentally changing business as usual.

Another key aspect of this facility and infrastructure is that from a network perspective it's nearly 100% non-blocking.  My network engineers, being network engineers, pointed out that it's not completely non-blocking for a few reasons, but I can honestly say that the network topology is the closest to “completely” non-blocking I have ever seen deployed in a real network environment, especially compared to the industry standard of 2:1 oversubscription.

Another incredible aspect of this new data center facility and the technology deployed is our ability to quick-launch compute capacity.  The total time it took to go from idea inception (no data center) to delivering active capacity to our internal users was 90 days.  In my mind this is made even more incredible by the fact that this was the first time all these work-streams came together, including the unified operations deployment model and all of the physical aspects of just getting iron to the floor.  This time frame was made possible by a standardized, modular way to build out our compute capacity in logical segments based upon the infrastructure cloud tier being deployed (low tier, mid-tier, etc.).  This approach has given us a predictability of deployment speed and cost which in my opinion is unparalleled.

The culmination of all of this work is the result of some incredible teams devoted to the desire to effect change, a little dash of renegade engineering, a heaping helping of new perspective, blood, sweat, tears, and vision.  I am extremely proud of the teams here at AOL for delivering this ground-breaking achievement.  But then again, I am more than a bit biased.  I have seen the passion of these teams manifested in some incredible technology.

As with all things like this, it's been a journey and there is still a bunch of work to do. There is still more to optimize, along with deeper analysis and easier aggregation for stubborn legacy environments. We have already set our sights on the next generation of cloud development. But for today, we have successfully built a new foundation upon which even more will be built. For those of you who were not able to attend the Uptime Symposium this year, I will be putting up some videos that give you some flavor of our work driving a low-cost cloud compute and provisioning system from open-source components.


\Mm

Preparing for the Cloud: A Data Center and Operations Survival Guide


This May, I once again have the distinct honor of presenting at the Uptime Institute's Symposium, which will be held in Santa Clara, CA from May 9th through the 12th. My primary topic is entitled 'Preparing for the Cloud: A Data Center Survival Guide.' I am really looking forward to this presentation on two fronts.

First, it will allow me to share some of the challenges, observations, and opportunities I have seen over the last few years and package them up for Data Center Operators and IT professionals in a way that's truly relevant to preparing for the impact on their production environments. The whole 'cloud' industry is now rife with competing definitions, confusing marketing, and a broad spectrum of products and services meant to cure all ills. To your organization's business leaders, the cloud means lower costs, quicker time to market, and an opportunity to streamline IT Operations and reduce or eliminate the need for home-run data center environments. But what is the true impact on the operational environments? What plans do you need to have in place to ensure this kind of move can be successful? Is your organization even ready to make this kind of move? Is the nature of your applications and environments 'Cloud-Ready'? There are some very significant things to keep in mind when looking into this approach, and many companies have not thought them all through. My hope is that this talk will help prepare professionals with the necessary background and questions to ensure they are armed with the correct information to be an asset to the conversation within their organizations.

The second front is to really dig into the types of services available in the market and how to build an internal scorecard to ensure that your organization is approaching the analysis as a true apples-to-apples comparison. So often I have heard horror stories of companies caught up in the buzz of the cloud and pursuing devastating cloud strategies that end up far more expensive than what they had to begin with. The cloud can be a powerful tool and approach to serve the business, but you definitely need to go in with both eyes wide open.
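One simple way to keep such a scorecard apples-to-apples is to rate every option against the same weighted criteria. A minimal sketch follows; the criteria, weights, and ratings here are all hypothetical placeholders your organization would replace with its own:

```python
# Hypothetical internal cloud scorecard: every option is rated 1 (poor) to
# 5 (excellent) against the SAME criteria, so comparisons stay consistent.
CRITERIA_WEIGHTS = {          # illustrative weights, must sum to 1.0
    "monthly_cost": 0.35,
    "time_to_market": 0.25,
    "operational_fit": 0.25,
    "migration_risk": 0.15,
}

def score_option(ratings: dict) -> float:
    """Weighted score for one option across all shared criteria."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

in_house = {"monthly_cost": 3, "time_to_market": 2,
            "operational_fit": 5, "migration_risk": 5}
public_cloud = {"monthly_cost": 4, "time_to_market": 5,
                "operational_fit": 3, "migration_risk": 2}

for name, ratings in [("in-house", in_house), ("public cloud", public_cloud)]:
    print(f"{name}: {score_option(ratings):.2f}")
```

The value of the exercise is less in the final number than in forcing every option, cloud or otherwise, through the identical set of questions before the buzz takes over.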

I will try to post some material in the weeks ahead of the event to set the stage for the talk. As always, if you are planning on attending Symposium this year, feel free to reach out to me if you see me walking the halls.

\Mm

Speaking at CTIA ‘Mobile Business Conference’ Event Oct. 6-8


I will be back in San Francisco to speak at the CTIA's Mobile Business Conference from October 6th through the 8th. I will be on a panel discussing 'Embracing the Cloud'. My hope in these talks is to continue to highlight the coming impacts of the global use of hand-held technologies, their intersection with personal usage, and the ultimate technology challenges this poses. If you are in the area or looking to attend, I would love to connect.


\Mm

Speaking at GIGAOM Mobilize Event Sept 30, 2010


For those of you in the San Francisco area, I will be speaking at the Mobilize Conference on the technology emerging at the intersection of broadband applications and 4G networks.

Increasingly, people are using their phones, tablets, iPads, and other hand-held devices to do more and more. The interaction of users, the growing social interconnectedness, and the location-based services offered with this technology set are definitely making this the next technology battleground for solving some really challenging issues. So excited…

\Mm