





    Very cool and very clean. I like that. Wish I were at that stage in my career again to build something like this. As I said before, when we move I'll set something up in the new place and have it for a home system and tinkering.

    Jeramiah Dooley

    The good thing about the lab is that it doesn't have to be big, or cutting edge. You can put something together on your laptop, if you want. Or buy one host and build up from there. When I was asking how to justify the cost of the home lab earlier this year, Chad Sakac told me that everything he'd ever spent on lab gear had been a good investment, and I think he's right. You don't need all of this to get started, just get what you need!


    I think I'm going to start with two of these hosts for my lab, and possibly just one of the Cisco switches. Do you think I can get away without having a dedicated management server, but still be able to test clustering on all three hypervisors? If not, I may go for three, since my computer donating model leaves me without old machines to scavenge.

    Also, I'd read an MSDN blog about making a Hyper-V R2 USB key and it recommended 8GB or larger. If I'm going to get an 8GB for that, I may just pick up three at that size, even if Xen & ESXi don't require that much room.
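    (For anyone following along: guides like that MSDN post generally start by preparing the key with diskpart before copying the image over. A hedged sketch of that prep step follows; the disk number is an assumption you must replace with your own key's number, and the image-copy and boot-sector steps from the specific guide are not reproduced here.)

    ```
    rem Run inside an elevated diskpart session (or save as a
    rem script and run: diskpart /s prep-usb.txt)
    list disk
    rem Assumption: the USB key shows up as disk 2 -- verify by size first!
    select disk 2
    clean
    create partition primary
    active
    format fs=fat32 quick
    assign
    exit
    ```

    After that, the remaining steps in the guide (copying the Hyper-V Server files and making the key bootable) apply as written there.
    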

    The main pain point for me is the NICs, although $160 isn't bad for dual-port NICs from what I could see. I think your effort to stay completely within the HCL is a smart thing.

    With just 2, maybe 3 hosts, do you think I'll be OK with just a single Cisco switch for now?

    Great post - really appreciate the perspective. Oh, and I think the pic of your daughter is helping me convince the CFO, since she thinks it is adorable.


    To answer my own question about doing without a dedicated management server: after waking up, I see the main benefit of having one being the ability to maintain the management VMs independent of which hypervisor I'm using on the other nodes. Without that, I imagine I would need to keep a separate mgmt environment running under each hypervisor on at least one of the cluster nodes.

    Jeramiah Dooley

    You are exactly right, Mike. I have AD, all of my vCenter instances, monitoring and the like inside the single host management cluster. That way everything else can leverage those apps without me having to rebuild/redeploy every time. For some of the lab setups, like multi-tenant vCD, I'll have separate AD instances, but there's still one to rule them all.

    Jon Langemak

    So for starters, great post. There aren't enough people who post their home lab build success stories. I do have a couple of questions for you though...

    -Is this gear compatible with vSphere 5? Any issues?
    -Are there pieces (built-in storage controller on the mobo, built-in NIC on the mobo, etc.) that don't work but you just live with? I assume that's why you use the dual Intel NIC, etc.

    Any other comments or feedback since you started using it? I'm looking to build a home lab and I'm thinking about just building two of these physical hosts and doing a virtual lab infrastructure. I like using VMware myself for a couple of my boxes, so licensing the physical boxes with ESXi and then just using eval licensing with the virtual ESXi boxes seems like a good fit.

    Any other info you could provide would be great. Thanks again!

    Jeramiah Dooley

    Thanks for the comment Jon! Yes, everything is currently running v5.0 with no issues at all. The on-board NIC (only one) is junk, which is why I'm using the Intel NICs, and that was a tradeoff based on the cost of the AMD processor and chipset. The i5/i7 rigs that are also used typically have an Intel NIC or two on-board as well, which would free up that slot while costing a bit more.

    Overall it's been great. I don't regret going with multiple physical servers rather than using nested ESXi at all, and it's been great for setting up two-site designs and testing SRM and the like. The demo ESXi (or even the free versions) work fine, and I've been really happy with the outcome. Good luck with your build and let me know if there's anything I can help with!
