    04/08/2011

    Comments


    Jh

    It’s interesting that you point fingers at companies with massive scale-out applications as being willing to leverage the “cheapest-of-the-breed,” and insinuate these systems are neither new nor reliable. The Facebook engineering team came up with some ingenious inventions [plenty of info on this at www.opencompute.org], while simultaneously reducing cost, improving efficiency and improving reliability. Cost and efficiency are direct measurements. Long-term reliability remains uncertain, given that we’ve only operated this datacenter design for a few months. However, if you read through the materials we released, you’ll note an extra 9 in the datacenter reliability. We’ve also observed a far lower rate of infant mortality and hardware fallout from the Open Compute servers than from industry-standard devices. Once we have operated the datacenter for a longer period of time, we’ll share further reliability statistics.
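
    To put a rough number on what an “extra 9” of availability means in practice, here is a minimal back-of-the-envelope sketch; the availability figures below are illustrative assumptions, not the actual Open Compute or Facebook numbers.

        # Back-of-the-envelope: downtime per year at different availability levels.
        # The availability values are illustrative assumptions, not published figures.
        HOURS_PER_YEAR = 24 * 365

        for availability in (0.999, 0.9999, 0.99999):
            downtime_hours = HOURS_PER_YEAR * (1 - availability)
            print(f"{availability:.3%} available -> {downtime_hours:.2f} hours of downtime/year")
        # 99.900% -> 8.76 h, 99.990% -> 0.88 h, 99.999% -> 0.09 h per year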

    Enterprise IT should absolutely participate in the Open Compute Project. Who doesn’t want to use cheaper, more reliable and more efficient infrastructure? You’re absolutely correct that companies like Facebook build applications to scale horizontally and have deep expertise in managing reliability across a massive fleet of servers. Being a former enterprise IT guy, I admit this doesn’t come as easily to a typical enterprise. However, thanks in large part to the ecosystem built up around hypervisors (cloud management tools, virtualization consoles, packaging systems, etc.), an enterprise can leverage a set of open-source or commercial tools to bridge application development methodologies. Why would enterprises have to rely on expensive, proprietary hardware when Internet companies build applications that scale 100-1000X using commodity hardware in conjunction with a robust software infrastructure layer?

    Jeramiah Dooley

    Jh,

    Thanks for the comment. Yes, you're right that I don't think the model of custom-building bare-bones servers specifically dedicated to a known workload is particularly new or, depending on your viewpoint, reliable. That being said, I don't think that it's a bad practice if you can get away with it. If you spend half as much per server, those servers only need to be more than 50% as reliable and you've made a positive return. Google does this as well; they've publicly stated that they plan for up to half of the available compute resources to be unavailable at any given time. The reliability of the servers is low by design, but the availability of the APPLICATION is fantastic because of the way it's architected. This is really just taking the existing virtualized infrastructure model to the next extreme, right?
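
    To make that cost-versus-reliability trade-off concrete, here is a minimal sketch; the prices and usable-capacity fractions are made-up assumptions purely for illustration, not real server pricing or failure rates.

        # Compare cost per unit of capacity that is actually available to the
        # application. All numbers are invented for illustration.
        def cost_per_usable_unit(price_per_server, fraction_usable):
            """Effective cost of one server's worth of capacity the app can use."""
            return price_per_server / fraction_usable

        enterprise = cost_per_usable_unit(price_per_server=10_000, fraction_usable=0.99)
        commodity = cost_per_usable_unit(price_per_server=5_000, fraction_usable=0.60)

        print(f"enterprise-class: ${enterprise:,.0f} per usable server's worth of capacity")
        print(f"cheap commodity:  ${commodity:,.0f} per usable server's worth of capacity")
        # At half the price, the cheap servers only need to keep more than half as
        # much capacity usable (anything above ~0.495 here) to come out ahead.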

    As to your second point, I'd argue that all enterprises are currently building out their compute resources using commodity hardware. In my opinion, the standard-line servers from Dell, Cisco, HP and IBM all fall into that bucket. There's a general need, in most enterprises, for different kinds of hardware tailored to specific kinds of workloads, even in those cases where everything can be run on an x86 processor.

    Please don't mistake my lack of enthusiasm for the "revolutionary" and "game changing" adjectives being thrown around as a lack of understanding about the importance of the effort. These are the kinds of initiatives that move things forward, but this isn't something brand new that's never been done before. The scope is impressive, and the desire to be somewhat open and transparent is admirable, don't get me wrong. The company I work for is a huge believer in virtualizing workloads and getting the infrastructure out of the way, allowing companies to focus on their apps and users, so I believe in the message overall. I'm not knocking the Open Compute Project at all; I'm just not willing to buy into the hype generated by the pundits that Facebook is the future of IT.
