HealthITGuy's Blog

The evolving world of healthcare IT!

UCS 1.4 Code: What is cool so far.

Here are a few things that have jumped out so far that I like with the 1.4 code:

— Support for the new B230 blades; a half-width blade with 2 sockets and 32 DIMM slots (16 cores and 256 GB of memory!!)

— Able to manage the C-Series servers via UCS Manager (the rack servers)

—  Power capping that covers groups of chassis; this is very powerful.  Think about it: you can have 4 chassis in 1 rack all sharing 2 or 4 220 V circuits.  Now you can cap, monitor and manage the amount of power by groups of chassis, not just per blade or per chassis.

—  Software packaging for new server hardware that does NOT require the IO fabric upgrade to use the new servers.  Nice!

—  Bigger and badder!  Support for up to 1024 VLANs and up to 20 UCS chassis (160 servers from 1 management point!!).

—  Fiber Channel connectivity options; you can now do port channeling and FC trunking, as well as some limited direct connection of FC-based storage (no zoning ability . . . yet).

OK the list goes on and on, they have packed a lot into this release.
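To give a feel for what group power capping means in practice, here is a back-of-the-envelope sketch in Python. This is purely illustrative arithmetic, not UCS Manager's actual capping algorithm, and the circuit numbers are assumptions for the example:

```python
# Hypothetical sketch (not UCS Manager's real algorithm): dividing a
# rack-level power budget across a group of chassis, the way 1.4's
# group power capping lets you manage power per rack instead of per blade.
def allocate_power(group_cap_w, chassis_ids):
    """Evenly divide a group power cap across chassis; remainder goes to the first."""
    share = group_cap_w // len(chassis_ids)
    caps = {cid: share for cid in chassis_ids}
    caps[chassis_ids[0]] += group_cap_w - share * len(chassis_ids)
    return caps

# Four chassis sharing two 220 V / 30 A circuits, derated to 80%:
budget = int(2 * 220 * 30 * 0.8)  # 10560 W for the rack
caps = allocate_power(budget, ["chassis-1", "chassis-2", "chassis-3", "chassis-4"])
print(caps)  # each chassis gets 2640 W of the shared budget
```

The point is that the cap follows the rack's electrical reality (the circuits), and the system divides it among the chassis for you.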

Checking out the new items in UCSM, I had to grab a few screen shots of the following:

Power!  You ever wonder how much power a server or chassis is using?  Now you know, check this out!  I am loving this.

This is cool stuff!


For those UCS users out there, it has not always been very clear what impact making various changes to a Service Profile might have on the workload.  They have improved with each release, but this is some great detail now:

Cool stuff in Cisco UCS 1.4 code, I hope to have more time to share with everyone as we continue to maximize our investment.  Time to go home . . .

December 29, 2010 Posted by | Cisco UCS, UCS Manager | , | 1 Comment

Cisco Live 2010 . . .

I am on a plane headed to Las Vegas for this year’s Cisco Live conference. It has been a few years since I last attended Cisco Networkers back before it evolved and expanded just like Cisco and the rest of IT has changed. Back then I was interested in wireless, network security and just getting an understanding of storage area networks and the MDS line of gear.

Today I continue to have a heavy focus on the importance of the datacenter but there is a diverse and wide variety of technologies that all have to come together for a health system to deliver exceptional patient care.

It is time again for our organization to refresh our wireless to support greater bandwidth by using 802.11n, provide for a substantial increase in the number of wireless devices and simplify the management of it.

How we look at our LAN moving forward will need to expand the support of Quality of Service (QoS) to handle more video and voice. How it will all come together will be key as we move forward. Just like we are building in high availability and redundancy in the datacenter we have to focus similar attention on the network. This is the northbound traffic heading out of the datacenter to all of the end node devices no matter what, where and how they are connected. All of this done in a secure manner, of course.

Back in the datacenter, the key technologies that interest me are the new things coming in the Nexus product line and how they will tie in with UCS.  As we expand and scale out our compute capacity on UCS, what is going to be the most efficient and cost-effective way to deliver this high level of service?

I am looking forward to this week and it should be a good time as well. I always like to hear John Chambers' keynote; I wonder what his focus will be this year. Cloud computing?

June 27, 2010 Posted by | General | | Leave a comment

Cisco Unified Computing Advisory Board (UCAB)

Just finished attending the Cisco Unified Computing Advisory Board (UCAB) in San Jose this week.  Great experience to have the opportunity to interact with other production UCS customers from various lines of business, the leadership of the Server Access and Virtualization Business Unit (SAVBU) and many other key Cisco staff focused on the success of the Unified Computing platform.  I am not able to go into many details or specifics on meeting content, however I will try and give you a sense of the what and why for the advisory board.

We spent two solid days focused on customer feedback on our experiences: successes, challenges and what can be improved, as well as hearing about product growth and future directions.  The key takeaway from this part of the event was Cisco's strong commitment to understanding real-world implementations and the desire to continually improve the unified computing experience and product.

As you can imagine, there was also a large focus on educating us on the short-term growth and roadmaps, as well as discussions on longer-term thoughts, designs, etc.  Again, this was framed around taking customer feedback to help shape things moving forward.  On this front, I quickly realized Cisco is not standing still and has an amazing vision for what “unified computing” will mean in the future.  It quickly became clear that the narrow thought of “Cisco is in the server business” misses the point: the server is merely a component of Cisco driving the unified computing business.  I think it is clear from the reactions and responses seen so far from other server vendors that there is a realization the future of compute is not just about a server.  That is why you see others scrambling to have their own “unified compute” platform by quickly cobbling existing technologies together and branding it some form of unified computing.  The overall benefit of this competition is that all compute vendors will get better and continue the push in this direction.

There is no question in my mind that Cisco is in this market as a leader and will be there long term.  I think a lot of organizations are beginning to understand this fact and “get” the benefits and cost savings that UCS brings to the table.

What is also interesting to me is the timing of when Cisco executed the launch and growth of UCS: during the economic turmoil of 2009.  If you think about it, you could not have picked a more challenging economic time to introduce a paradigm shift in computing.  The upside to the timing is that the cost benefits of UCS stood out for us early adopters.  Cisco is continuing to expand its investments in staff, functionality and advancements in technology, which is only strengthening the product.

 Cool stuff!  Look for the next UCS code release 1.3 to happen very soon in June 2010 and an updated ESX 4.x driver for the VIC (Palo card) that works with EMC PowerPath/VE as well.

June 7, 2010 Posted by | Cisco UCS | , | Leave a comment

UCS vs. HP BladeSystem: HP funded Tolly Report

I have had a few readers ask me to comment on a new report from Tolly that was commissioned by HP to compare the network bandwidth scalability between Cisco UCS and the HP BladeSystem c7000.  I have not read the report yet, however, on the Blades Made Simple blog (link listed below), there is a brief explanation of the report findings, link to the full report and then some great comments (you have to check them out).

I encourage you to take a look at the comments; they get pretty detailed about the UCS architecture, comparisons to the HP structure, etc.  I found the comments from Sean McGee (Cisco data center architect and formerly a network architect for the HP BladeSystem BU) and the feedback from Ken Henault (HP Infrastructure Architect) a lot of fun to read.  You can tell both of these guys are passionate about the technology.  Hey, I can’t blame them, this stuff rocks.  (I do find it interesting there are a lot of folks at Cisco formerly with the HP BladeSystem group.)

My two cents (before reading the actual report, mind you):

As a UCS user, I am not too concerned with the oversubscription possibility.  In our current production environment we have not seen any issue with bandwidth.  We currently are using 16 blades over 2 chassis, and within a few weeks we should start using our 3rd chassis and 8 more servers.  I will be mindful to watch our bandwidth usage and see if there are any real-world problems.  I suspect at 24 servers I will not see any issues.
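For anyone who wants to reason about the oversubscription question themselves, the math is simple enough to sketch. The blade and chassis counts below match our environment, but the uplink count per fabric is an assumption for illustration, not a statement of how we are actually wired:

```python
# Rough oversubscription math: total server-facing bandwidth vs. total
# uplink bandwidth out of a chassis. The wiring numbers are assumed.
def oversubscription(blades, gb_per_blade, uplinks, gb_per_uplink=10):
    """Return the server-to-uplink bandwidth ratio for one chassis."""
    server_bw = blades * gb_per_blade    # what the blades could push
    uplink_bw = uplinks * gb_per_uplink  # what the chassis uplinks carry
    return server_bw / uplink_bw

# 8 blades per chassis, each blade with dual 10 Gb ports (20 Gb total),
# and an assumed 4 x 10 Gb uplinks from the chassis:
ratio = oversubscription(blades=8, gb_per_blade=20, uplinks=4)
print(f"{ratio:.0f}:1")  # 4:1 under these assumptions
```

A 4:1 ratio only matters if the blades actually drive their links hard at the same time, which is exactly what I will be watching for as we grow.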

If you want to check out the report, here is the link:

March 2, 2010 Posted by | Cisco UCS | , | 1 Comment

UCS and the “Palo Card” or Virtual Interface Card (VIC):

Well, the day finally came when I received my first group of UCS blades with the new UCS M81KR Virtual Interface Card (VIC), or what has been known as the Palo card.  This is the cool CNA built by Cisco specifically to add a great deal of flexibility to the I/O needs of virtual host servers (OK, mainly focused on VMware ESX 4.x, where all the cool virtualization is happening!).

I should have taken a picture of it!  Gone is the Emulex or QLogic name stamped on the mezzanine card.  The VIC provides all of the I/O function to the server blade.  It is a single card with two 10 Gb FCoE ports to the northbound switches and then up to 128 virtual I/O interfaces facing the server/host side.
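To make the "two physical ports, up to 128 virtual interfaces" idea concrete, here is a toy model in Python. The class and method names are mine for illustration, not anything from Cisco's tooling:

```python
# Toy model (my own names, not Cisco's API) of how the M81KR presents I/O:
# two physical 10 Gb FCoE uplinks, with up to 128 vNICs/vHBAs carved out
# on the host-facing side.
class VirtualInterfaceCard:
    MAX_VIFS = 128          # host-facing virtual interface limit
    PHYSICAL_PORTS = 2      # 10 Gb FCoE uplinks to the fabric

    def __init__(self):
        self.vifs = []

    def add_vif(self, name, kind):
        """Carve a virtual interface; kind is 'vNIC' or 'vHBA'."""
        if len(self.vifs) >= self.MAX_VIFS:
            raise RuntimeError("VIC virtual interface limit reached")
        self.vifs.append((name, kind))

vic = VirtualInterfaceCard()
for i in range(4):                      # e.g. four vmnics for an ESX host
    vic.add_vif(f"eth{i}", "vNIC")
vic.add_vif("fc0", "vHBA")              # one vHBA per fabric
vic.add_vif("fc1", "vHBA")
print(len(vic.vifs), "virtual interfaces presented to the host")
```

The flexibility is that one ESX host can look like it has as many NICs and HBAs as your design needs, all riding on the same two converged uplinks.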

To be able to manage and build your own customized I/O world for an ESX host or guest machines you have to perform a code upgrade to your UCS system.  Once that code upgrade is complete, you see Cisco has added an additional tab in the UCS Manager to be used for configuring the new virtual I/O functions.  Note, I have not seen this new tab yet, we are currently planning our code upgrade process.  I am interested to see how it goes upgrading the firmware, etc. on a production UCS system.  I am sure I will blog about it!

So what does my world currently look like?  I have two 6120 Fabric Interconnects, 3 chassis and 25 B200 M1 blade servers (yes, I need to get a 4th chassis to house my 25th blade).  19 of the B200 blades contain the Emulex CNA and 6 contain the new VIC CNA.  I currently have my new “VIC” blades in the chassis but not in use.  UCS Manager sees the new blades, can tell me about the VIC and displays the interfaces differently (no vNICs or vHBAs have been created yet).

Stay tuned for an update on the code upgrade and screen shots of the new Virtual Tab, etc.

Here is the link at Cisco for details:

February 25, 2010 Posted by | Cisco UCS | , , | 6 Comments

Sharing UCS with others . . .

I have contact with several other healthcare organizations that have similar growth and change occurring.  As part of the idea sharing that we do, I hosted several of my peers a few weeks back to give them an overview of Cisco UCS: why we chose it, how we implemented it and how we manage the environment.  A few things came up with most of the other organizations during these sessions, so I thought I would share some of them.

Code Updates:  The question and concern that came up was: what happens when you have to apply a new BIOS or code to the components of UCS, do you have to take down 8-16 servers?

The comparison of UCS to a SAN comes in handy for this question.  Currently there are 2 infrastructure components that can get updates: the code on the fabric extenders (the IO modules inside the chassis) and the Fabric Interconnects (6120 switches running UCS Manager).

For the blade itself, you can update the firmware on the IO mezzanine card, which requires a reboot of that server at a time you select.  In addition, the BMC (Baseboard Management Controller) has firmware that can be updated, and the last update did not require a reboot.

For the fabric extenders and the Fabric Interconnects, the redundancy built into the system means you do not need a downtime.  As with a SAN array upgrade, you perform your upgrade on the A side, then reboot it while everything is functioning on the B side.  Then perform the upgrade and reboot on the B side while everything is functioning on the A side.  I would still perform this type of activity during second shift, but the system would not require a downtime.
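The A/B sequence above can be sketched as a simple ordered checklist. This is just the ordering logic described in the paragraph, not actual UCS firmware commands:

```python
# Sketch of the A/B rolling-upgrade sequence: upgrade one fabric while
# the other carries all traffic, then swap. Illustrative only.
def rolling_upgrade(sides=("A", "B")):
    """Return the ordered steps for a no-downtime fabric upgrade."""
    steps = []
    for side in sides:
        other = "B" if side == "A" else "A"
        steps.append(f"verify fabric {other} is healthy and carrying traffic")
        steps.append(f"upgrade and reboot fabric {side}")
        steps.append(f"wait for fabric {side} to rejoin before continuing")
    return steps

for step in rolling_upgrade():
    print(step)
```

The key invariant is the first step of each half: never touch a side until you have confirmed its partner can carry the full load.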

What if a chassis fails?

When you look at the chassis, there is not any component I could see failing that would take out 8 servers.  The fans, power supplies, blades and fabric extenders are all redundant.  Outside of those items the chassis is just a box.  I am sure someone familiar with the chassis internals could add more detail, but I do not see it as a likely concern.

Concern that UCS is a Generation 1 Product:

My thoughts are: yes, it is a gen 1 system; however, not entirely.

Yes, the Cisco blade server is “new”; however, it is using all of the same industry-standard components that other server vendors are using: same memory, CPU, etc.  For the IO cards, the Cisco server is using mezzanine cards which have chips and drivers provided by Emulex and QLogic.

From the infrastructure standpoint the Fabric Interconnect is built on the Cisco Nexus 5000 platform which has been in production for at least 18 months.  The Nexus platform has been using converged networking for all of this time.

The major generation 1 component is the added functionality of the UCS Manager on the Fabric Interconnect.

So yes, it is gen 1 but it is taking what has existed and bringing it to the next level.


During one of my meetings, a peer asked his co-worker, “It is cool, but would we want to take a risk on a gen 1 product?”  I had to jump in and answer his question with “Yes”; it was too compelling to me not to move forward with UCS.  At this point, Cisco UCS has streamlined our processes and the way we handle x86 servers to the point that I cannot see going back.

December 31, 2009 Posted by | Cisco UCS | , | 1 Comment

UCS Manager: Key Components Part 2

To continue on with the foundational concepts of the UCS Manager . . .


vNIC Template General Tab

You find templates available under the Server tab (Service Profile Templates), the LAN tab (vNIC Templates) and the SAN tab (vHBA Templates).  When creating a new template you select the type to be Initial or Updating; the difference being that an Updating template will “update” any changes to objects using that template.  An Updating vNIC Template that was changed from native VLAN 2 to native VLAN 100 will apply that change to all objects using that template.  An Initial template will maintain the settings defined at the time of creation and not change.  Which type of template you use, or combination of types, will depend on your workflow and change management.

In our case we chose to use Updating vNIC and vHBA Templates and Initial Service Profile Templates.  This seems to give us flexibility for possible changes in the future.  For example, we created an Updating vNIC Template for ESX servers with 12 trunked VLANs and the service console VLAN set as native.  As we add additional VLANs in the future, we only have to make the change to the Updating vNIC Template and it will populate to all of our ESX hosts (I think this change would not require a reboot).
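The Updating vs. Initial distinction can be modeled in a few lines of Python. This is my own minimal model of the behavior described above, not the actual UCSM object model:

```python
# Minimal model (my own classes, not Cisco's) of Updating vs. Initial
# template behavior: an Updating template pushes later changes to every
# vNIC created from it; an Initial template does not.
class VnicTemplate:
    def __init__(self, native_vlan, updating):
        self.native_vlan = native_vlan
        self.updating = updating
        self.bound = []               # vNICs that track this template

    def create_vnic(self):
        vnic = {"native_vlan": self.native_vlan}
        if self.updating:
            self.bound.append(vnic)   # only Updating templates keep the link
        return vnic

    def set_native_vlan(self, vlan):
        self.native_vlan = vlan
        for vnic in self.bound:       # propagate to bound vNICs, if any
            vnic["native_vlan"] = vlan

upd = VnicTemplate(native_vlan=2, updating=True)
init = VnicTemplate(native_vlan=2, updating=False)
a, b = upd.create_vnic(), init.create_vnic()
upd.set_native_vlan(100)
init.set_native_vlan(100)
print(a["native_vlan"], b["native_vlan"])  # 100 2
```

The vNIC from the Updating template follows the change to VLAN 100; the one from the Initial template keeps the settings it was born with.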

Service Profile Templates are one of the ways to generate a new Service Profile.  A “working server” is made up of a physical blade being associated with a Service Profile.  When creating a Service Profile Template you define various functions using the values created in polices, pools and templates.  Meaning you select which boot policy, local storage policy, vHBA setting, vNIC settings and blade assignment you want to use for this Service Profile Template.  The other ways to create a Service Profile are to clone an existing one or create it from scratch. 

Servers Service Profile Template Listing.

The Service Profile is the key to the stateless function of the UCS system.  This is what abstracts the physical identifiers from the hardware and allows you to move a “server” between physical blades.  We were demoing this concept/process and the person immediately said, “Oh, that is like vMotion but at the physical level”.  Please note, your “server” has to be powered down to perform this move to a different physical blade; however, I think it is only a matter of time before that changes.

How to Name something in UCS, get it right the first time:

You know how in most applications you are able to give friendly names to objects and items, and you can change them later?  For example, in EMC Navisphere you can name a LUN anything you want and rename it at any time.  Well, UCS was designed to use the name as the value that identifies an object, meaning once you give something a name you cannot change it.  You learn this pretty early in your configuration process; there were many times when we had to delete a pool, template, etc. because we did not follow the correct naming convention or did not have something named correctly.  Because of this we had a defined step in our process to go through and clean up all the “junk” from the first few days of building and testing (we said goodbye to the “foo” service profile :)).
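Because names are permanent, it pays to validate them before creating anything. Here is a sketch of the kind of check we could have used; the naming convention in the regex is my own example, not a Cisco requirement:

```python
# Validate names before creating UCSM objects, since a misnamed object
# has to be deleted and recreated. The convention here is an example:
# <role>-<identifier>-<objecttype>, e.g. "esx-prod01-vnic".
import re

NAME_RULE = re.compile(r"^(esx|sql|web)-[a-z0-9]+-(vnic|vhba|sp)$")

def valid_name(name):
    """Return True if the name follows our (example) convention."""
    return bool(NAME_RULE.match(name))

print(valid_name("esx-prod01-vnic"))  # True
print(valid_name("foo"))              # False: would need delete-and-recreate
```

A two-minute check like this up front beats a cleanup day deleting “foo” objects later.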

LAN tab Concepts:

The LAN Cloud refers to the northbound LAN, the connection of the 6120 to the rest of the LAN.  The Internal LAN refers to the southbound LAN connections to the chassis and blades.

Pin Groups:

With a vNIC or vHBA, when the blade comes up it will be assigned to a 6120 northbound I/O port automatically by the system.  UCS gives you the ability to “pin” a MAC or WWPN to a specific I/O port (Ethernet or Fiber Channel).  Let’s say you have a blade running Microsoft SQL and you want to make sure that blade always has a dedicated 4 Gb fiber channel port to the SAN fabric.  You can define a SAN Pin Group to always use FC port 2/4 on Fabric A and FC port 2/3 on Fabric B.  I think the power of Pin Groups will come more into play once you can use the Palo CNA adapter and pin a VM guest to specific ports.  We have not used pin groups in our configuration; not sure if we will.
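Conceptually a SAN pin group is just a named lookup from fabric to uplink port. A minimal sketch (the group name is hypothetical; the ports are the ones from the SQL example above):

```python
# A SAN pin group as a simple lookup: for a given pin group and fabric,
# which FC uplink port carries the traffic. Group name is hypothetical.
san_pin_groups = {
    "sql-dedicated": {"A": "fc2/4", "B": "fc2/3"},
}

def uplink_for(pin_group, fabric):
    """Return the pinned FC uplink port for a pin group on a fabric."""
    return san_pin_groups[pin_group][fabric]

print(uplink_for("sql-dedicated", "A"))  # fc2/4
print(uplink_for("sql-dedicated", "B"))  # fc2/3
```

Any vHBA assigned to the "sql-dedicated" group always exits the same uplink on each fabric, instead of whatever port the system picks on its own.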

SAN Cloud:

SAN FC Ports in Red.

The color Cisco picked to represent the fiber channel ports in the GUI is interesting: red.  It took a few days to get used to seeing red FC ports and not wanting to figure out what was wrong with them.  I do not know if that was the best color choice.  The LAN ports are done in a Carolina blue.

November 20, 2009 Posted by | UCS Manager | , | 1 Comment

UCS Manager: Introduction

The Cisco UCS Manager is the Java tool that is used to manage all aspects of the system. The UCSM resides on both 6120 Fabric Interconnects which are clustered.  You can either connect directly to a specific 6120 or the virtual IP (VIP) which connects you to the primary 6120.  If the primary 6120 fails it will automatically connect you to the backup.

UCS Manager is organized into five main tabs: Equipment, Server, LAN, SAN and Admin.  Each is navigated using a tree structure.

UCS Manager Tabs.

View of the tabs in UCS Manager.

Equipment tab displays all of the hardware in a tree structure that can be branched out for greater detail.

Server tab is focused on the service profiles and all items specific to the configuration of the blades.  This is where a lot of the configuration happens.

LAN tab is focused on the Ethernet network side to define global settings like VLANs, internal network (connections southbound to the chassis fabric extenders) and external network (connections to northbound 6500 LAN) port settings.

SAN tab is focused on the fiber channel related items like VSANs, uplinks, etc.

Admin tab is where you configure all of the various global administration activities like authentication, LDAP, monitoring, alerts, etc.

I will go into details on the various sections in future posts.

November 12, 2009 Posted by | UCS Manager | , , | Leave a comment

UCS configuration

We accomplished a lot today, I do not have much time to detail it now.

Quick Summary:

—  Defined all pools
—  Reviewed all policies for Server, LAN, SAN tabs
—  Created VSAN global setting (the VLAN setting is interesting)
—  Created vHBA updating templates
—  Created vNIC updating templates
—  Created an Initial Service Profile template

Once all of that work was completed we were able to pre-zone our SAN based on our pools.

Next we were successfully able to build an ESX vSphere host using our new UCS processes.

More details to come later, but a very good day.

November 5, 2009 Posted by | Cisco UCS | , | Leave a comment

Cisco UCS vs. Cisco UC

More than one person I have told about us implementing the new Cisco UCS blade/server system has automatically thought Unified Communications or VoIP.
It takes them a few minutes to shift their mindset away from VoIP to Unified Computing System.
I think this will be a challenge for Cisco: getting the message out about UC vs. UCS . . .

November 5, 2009 Posted by | Cisco UCS | , , | Leave a comment