HealthITGuy's Blog

The evolving world of healthcare IT!

Real World UCS: Production Apps Kicking it!

We have been running Cisco UCS for 4 months now and are preparing for a code upgrade and adding more B200 blades to the system for VMware.  So I got to thinking: what do I really have running in production on the system at this point?  It makes sense to have a good handle on this as part of our code upgrade prep work.  I put together the information below and figured others could find it useful for getting a perspective on what is in production on UCS in the real world (note: all of the blades referenced are B200 models running with the Emulex card).

Windows 2008 and 2003 Servers:

I will start with a cool one. Tuesday we went live with our VMware vCenter server loaded bare metal on a UCS blade with boot from SAN. This is W2K8 64-bit, vCenter 2.x with Update Manager, and it runs the SQL 2008 64-bit database used by vCenter. It has one Nehalem 4-core CPU and 12 GB of memory and is running sweet. This is a big show of trust in UCS: the center of my VMware world for the enterprise is running on it!

2 server blades booting from SAN (1 prod and 1 test) running W2K3 64-bit with Oracle 10g for our document management system. Each has one Nehalem 4-core CPU and 48 GB of memory and is running with no issues.

VMware ESX Hosts:

4 production VMware ESX 4.0 hosts with NO EMC PowerPath/VE. All boot from SAN, with 2 x 4-core CPUs and 48 GB of memory each. These 4 ESX servers are configured to optimally support W2K8 64-bit Microsoft clusters. We are currently running 4 two-node MS clusters on these blades. They are using about 37% of the memory and are not really touching the CPU, so we could easily double the number of MS clusters on these blades over time.

10 production VMware ESX 4.0 hosts with EMC PowerPath/VE. All boot from SAN, with 2 x 4-core CPUs and 96 GB of memory each. Today we have 87 guest servers running on our UCS VMware cluster, and that number increases daily. We are preparing for a few application go-lives that use Citrix XenApp to access the applications, so we have another 47 of these servers built and ready to be turned on by the end of the month. We should have well over 127 guest servers running on the UCS VMware cluster by then.

Here is a summary of the types of production applications/workloads that make up the current 87 guest servers:

NOTE: The 10 guest servers listed below for the data warehouse are very heavy on memory (3 with 64 GB each, etc.), and we have hard allocated this memory to the guest servers. That means each guest is assigned and allocated its full 64 GB of memory at boot, even if it is not using it. So these large servers use memory resources in VMware quite differently from VMware's normal shared memory behavior (a quick sketch of how such a reservation can be set follows the list below).

10 servers running the data warehouse app; 5 heavy SQL 2008 64-bit servers, with the rest being web and interface servers.

15 document management servers running W2K3, including IBM WebSphere.

39 W2K3 64-bit servers running Citrix XenApp 4.5 in production, delivering our enterprise applications. Together these servers are probably handling applications for about 400 concurrent production users. This will increase significantly within 21 days with the coming go-lives.

7 W2K8 64-bit servers that provide the core Citrix XenApp database function (SQL 2008) and Citrix Provisioning servers for the XenApp servers.

1 W2K3 server running SQL 2005 for computer-based learning; production for the enterprise.

1 W2K3 server running SQL 2005 for production enterprise staff scheduling system.

3 W2K3 servers running general production applications (shared servers for lower-end apps).

3 W2K3 servers running interface processors for the surgical (OR) application (deals with things like collar-bone surgeries 🙂 )

1 W2K3 server running a key finance application.

1 W2K3 server running a key pharmacy application.

1 W2K8 server running a pilot SharePoint site (the free version).

There are a few other misc guest servers running as well for various lower-end functions, e.g., web servers, etc.
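For anyone curious how the hard allocation mentioned above translates into a setting, here is a minimal sketch using the pyVmomi Python bindings to pin a full 64 GB memory reservation on one of the data warehouse guests. The vCenter hostname, credentials, and VM name are placeholders for illustration, not our actual environment.

    # Minimal sketch (placeholder names/credentials): give a data warehouse
    # guest a full 64 GB memory reservation via the pyVmomi bindings, so the
    # memory is claimed up front instead of being shared/overcommitted.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()            # lab convenience only
    si = SmartConnect(host="vcenter.example.local", user="admin",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == "dw-sql-01")   # placeholder VM name
        spec = vim.vm.ConfigSpec()
        spec.memoryAllocation = vim.ResourceAllocationInfo(reservation=64 * 1024)  # MB
        vm.ReconfigVM_Task(spec=spec)                 # async task; poll it if needed
    finally:
        Disconnect(si)

The same thing can be done by hand in the vSphere client by setting the guest's memory reservation to its full configured size.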

Current VMware Utilization:

Average CPU utilization in the UCS Cluster for the 10 hosts is 8.8%.

Memory usage:

The 3 ESX hosts running guest servers with hard allocated 64 GB memory: 76% average.

The 7 ESX hosts running all other workloads: 41% average.

We still have a good amount of room for growth within our 10-host UCS cluster. I believe I could run this full load on 6 blade servers if I had to for a short period of time; the rough math below shows why.
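A quick back-of-the-envelope check of that claim, using only the averages above (so treat it as an estimate, not a measurement):

    # Rough consolidation math from the utilization figures above.
    hosts = [
        (3, 96, 0.76),   # 3 hosts carrying the hard-allocated DW guests
        (7, 96, 0.41),   # 7 hosts carrying everything else
    ]
    used_gb = sum(count * mem_gb * pct for count, mem_gb, pct in hosts)
    print(f"Memory in use: ~{used_gb:.0f} GB")                     # ~494 GB
    print(f"6 blades = {6 * 96} GB -> ~{used_gb / (6 * 96):.0%}")  # ~86% full
    # CPU is a non-issue at 8.8% average, so memory is the limiting factor.

Roughly 86% memory utilization on 6 blades would be tight, but workable for a short stretch, which matches the statement above.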

There you have it, a good summary of what a production Cisco UCS installation looks like in the real world.


March 18, 2010 - Posted by | Cisco UCS, VMWare

5 Comments

  1. As always Michael thanks for sharing the UCS story.

    Can you speak to what clinical applications you guys run, and whether you are operating under the vendor’s “best effort” support or “guaranteed” support with your setup? I’d imagine you’re getting some pushback for 1) using UCS hardware, 2) using so much Win2008/SQL2008, and 3) doing it mostly virtual. 😉

    We are going to see a UCS demo here shortly in Charlotte. We’re debating bringing some in-house to start working with. For an all-HP shop…it would be a big shift for us. We’ve been hesitant about blades given a few challenges with our HP c7000 chassis and firmware updates (network modules being the root cause).

    Any reason for not using XenApp 5 and XenDesktop 4 on Win2k8 R2 x64? We are building a new farm right now also.

    Comment by JH | March 19, 2010 | Reply

    • I guess your question is really focused on vendors approving their apps to be run on VMware or on UCS servers.
      Regarding VMware, we twist arms with the app vendors and push pretty hard to virtualize first. We have success with most medical-related vendors; however, our GE/Phillips/Stentor PACS system is not changing for us (even though Phillips/Stentor now runs under VMware, GE won’t approve it).
      Regarding UCS blades for workloads, if a vendor asks/requires that their system/app not run in VMware and requires physical hardware, I have not asked whether a Cisco server is OK or not. To me it is just another x86 server, which it really is. So no issues there.
      I say check out the UCS demo; I suspect you will like it. Keep in mind it is not just another blade chassis approach . . . and yes, in my experience the 2nd-gen Dell chassis were a pain to maintain.
      The reason for running XenApp 4.5 on W2K3 is Microsoft licensing; we have a W2K3 domain and W2K3 terminal server CALs. Someday we will get to W2K8 for the domain, etc.; however, that is a big cost to MS.
      Take care.

      Comment by healthitguy | March 19, 2010 | Reply

  2. Awesomeness. Thanks for sharing!

    Comment by Steve Chambers | March 19, 2010 | Reply

  3. Thanks for sharing this Michael.

    Myself being a non-production guy, I really look forward to stuff like this.

    One quick clarification: if I understand correctly, you have 17 UCS blades (1 vCenter + 2 Oracle + 14 ESX)?

    Comment by Ajay | August 11, 2010 | Reply

    • Ajay,
      I currently have 4 chassis with a total of 28 blades. 4 are in test right now. I think 5 are running Windows and the rest are prod ESX 4.0 hosts.

      Mike

      Comment by healthitguy | September 15, 2010 | Reply

