Future Forward 2011 / Rise of Dynamic Computing
I was flattered to be asked to present at Scott Kirsner’s Future Forward 2011 conference yesterday at Wellesley College. My talk was on the future of Cloud Computing and I presented a context and set of investment themes that my partner Chip Hazard and I have put together on what we’re calling “Dynamic Computing.” The entire slide deck can be found here on Slideshare, and the gist of my voice-over from the conference is as follows.
At Flybridge, the two dominant sectors we target are enterprise IT and consumer infrastructure. Over the years, venture capitalists have seen and invested in several inflection-point changes in these sectors – changes that have brought with them tremendous opportunities for new companies and innovation – such as the move from mainframes to minicomputers, minicomputers to PCs, then networked PCs, client-server, and ultimately the web. We’re convinced we’re in the midst of another such inflection point right now, but it’s more profound than simply jumping on the cloud-computing or mobile-app bandwagons. While these are certainly important elements, we see them as part of a larger shift we’re calling “Dynamic Computing.”
Legacy Platforms are Monolithic
Looking at the evolution of computing – from mainframes in the 1960s, through the minicomputer, the PC revolution and into the advent of the commercial Internet in the 1990s – remarkably, all these paradigms relied upon a relatively homogenous and monolithic architecture. This made sense at the start, because ubiquitous adoption of computing required a uniform, standard platform that everyone could write to with confidence. And this uniformity, in part, gave rise to the generation of tech behemoths that persist today, such as Microsoft, Intel, Oracle, SAP, et al. But the strengths of this ecosystem – namely uniformity and backward compatibility – have become its Achilles’ heel in the expanding world of the Internet, as these architectures have become bloated, expensive and energy-hungry. Combined with the intense industry consolidation that followed the bursting of the Internet bubble at the turn of the century, innovation and new startup activity in the sector languished.
Change is Here
But more recently, the confluence of several core technology developments has exploded the traditional monolithic platform, creating a fundamental revolution in how computing solutions are created and delivered. This is what we call Dynamic Computing, and it comprises the following components:
Open Source – the movement that began with Linux in the 1990s was the initial catalyst, bringing low cost, high quality software – and most importantly – CHOICE. No longer are individuals or companies bound to the limited menu provided by the usual suspects. And the FUD spread in the early days of open source – that open source applications would be buggier, less secure, unsupported – has long since faded.
Cloud Computing – though the press has forgotten, what began as ASPs, MSPs and SSPs a decade ago has lowered the cost of development and deployment infrastructure by orders of magnitude, creating costs that scale with usage – which is critical for capital-constrained startups.
Dynamic Scripting Languages – such as Python and Ruby – have made development faster, more portable and more scalable.
Virtualization – originally conceived to wring performance out of underutilized Intel chips, it set the foundation for scalability and migration well beyond a single rack within one data center.
And finally, public APIs have permitted two fundamental and ground-breaking changes:
- They allow applications to be dynamically assembled from best-of-breed providers, irrespective of geographic proximity
- They provide the opportunity for unfettered access to previously unavailable data, the so-called “Big Data” firehoses.
Add affordable global broadband, and the result is a set of Dynamic Computing solutions that can be seamlessly split over great distances with no visible performance degradation.
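To make that last point concrete, here is a minimal Python sketch of an application dynamically assembled from independent API providers. The provider names and payloads below are invented stand-ins – a real application would make HTTPS calls to each vendor’s public API – but the shape of the composition is the point.

```python
# A toy sketch of "dynamic assembly": the application owns no data of its
# own; it stitches together best-of-breed services, each of which could be
# running anywhere. Both providers here are hypothetical stand-ins for
# real third-party APIs.

def geocode_provider(address):
    """Stand-in for a third-party geocoding API (invented response)."""
    return {"address": address, "lat": 42.36, "lon": -71.06}

def weather_provider(lat, lon):
    """Stand-in for a separate weather-data API (invented response)."""
    return {"lat": lat, "lon": lon, "forecast": "sunny"}

def build_report(address):
    # Compose the two services; neither knows the other exists.
    loc = geocode_provider(address)
    wx = weather_provider(loc["lat"], loc["lon"])
    return {"address": address, "forecast": wx["forecast"]}

print(build_report("Wellesley College"))
```

Swap either provider for a competitor and the composed application keeps working – that interchangeability is what the public-API layer buys you.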
Investment Themes

Out of this shift, we see a set of investment themes emerging:

- Developer-Driven Business Models – really a basis for adapting the low-cost distribution model of consumer Internet applications like Facebook and LinkedIn. These can provide an inexpensive method of customer acquisition: get developers to embed solutions within their own applications, and they talk about it with their cohort, and so on and so on. News spreads quickly within these circles – which has both pluses and minuses – and it can become a great wedge for upselling future features and solutions.
- Legacy Applications Don’t Translate Well – We will see situations where traditional problems persist, but the traditional solutions just don’t work given the disaggregation of the platform – meaning that new solutions will be needed. Such areas include storage, management and security. Consider security: in the past, if you were the Chief Security Officer for a mid-sized corporation, you bought an array of gear to cover your backside, including firewalls, intrusion-detection devices, data-leakage protection, virus scanners and others. You installed these solutions in your data center and dealt with issues as they arose. But what happens to that CSO when her company has moved to a distributed cloud environment, with various applications running at several different service providers? She still has all the responsibility she had in the past, but none of the authority; she cannot put a fancy data-leakage prevention solution at Amazon EC2. Thus the opportunity exists for a new take on old problems.
- Big Data Analytics – Pulling apart the architecture and publishing APIs creates a firehose of data flows that were not available in the past, and the analysis of these flows is creating entirely new product opportunities across all the sectors we cover. It is also creating new business models, as companies can charge for access to their data – something they hadn’t contemplated in the past. A burgeoning set of problems is brewing around big-data access, including cost, proprietary access and security of the flows.
- New Opportunities Emerge – The corollary to legacy applications no longer working is that entirely unforeseen problems and opportunities will emerge from the exploded platforms. My current favorite is to consider what happens when we get to real-time bidding for the different elements of a distributed stack – imagine an exchange where computes are “traded” based on cost, performance, availability and so on, like commodities.
- Purpose-Built Foundation Technologies – The innovator’s dilemma for incumbents is that while these new dynamic computing offerings are still nascent, their willingness to cannibalize themselves is nil, which opens the window wide for some massive market potential – but these opportunities tend to be more capital intensive because they are foundational, and should largely be seen in semiconductors and systems. This is a pattern that has repeated itself with every platform shift – things tend to start with applications built on whatever is generically available, and as popularity increases, new, more specific solutions arise. The worry in this area is that if you invest too early – for example, in a Hadoop hardware-acceleration platform – and scale never grows to need such performance, the company will starve.
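The compute-exchange idea above can be sketched in a few lines of Python. Everything here is invented for illustration – the provider names, core counts and prices are hypothetical – but it shows the basic matching logic such an exchange would run: take the standing offers and fill a bid with the cheapest one that clears the buyer’s performance floor.

```python
# A toy sketch of an exchange where compute is "traded" like a commodity.
# Offers and prices below are invented for illustration only.

offers = [
    {"provider": "cloud-a", "cores": 8,  "price_per_hour": 0.40},
    {"provider": "cloud-b", "cores": 16, "price_per_hour": 0.90},
    {"provider": "cloud-c", "cores": 16, "price_per_hour": 0.70},
]

def best_bid(offers, min_cores):
    """Match a buyer against the cheapest offer meeting its core floor."""
    eligible = [o for o in offers if o["cores"] >= min_cores]
    if not eligible:
        return None  # no provider can fill the bid right now
    return min(eligible, key=lambda o: o["price_per_hour"])

print(best_bid(offers, 16)["provider"])  # cloud-c, under these invented numbers
```

A real exchange would also weigh availability, latency and trust, and would re-run the match continuously as offers change – but the core mechanic is just this kind of constrained price matching.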