HealthITGuy's Blog

The evolving world of healthcare IT!

UCS Network terms to live by . . .

We ended yesterday being confused about how to make the LAN connection between our Cisco 6500 core switch and the 6120 Fabric Interconnects. The physical layout: the 6500 has 2 – 4 port 10 G modules using the XENPAK-10GB-SR model (there are many XENPAK options, so make sure you pick the correct one for your environment). We installed 50 micron laser optimized fiber jumpers between the gear, with an LC connector for the 6120 side and an SC connector for the 6500 side (again, know which connectors you need for both ends; having the wrong one can be a drag). We set up an EtherChannel using 2 – 10 G connections for each fabric interconnect. The physical connections were the easy part.

The 6500 is a classic Ethernet switch with a lot of functionality. The 6120 Fabric Interconnect is a layer 2 switch (based on the Nexus 5000 line) plus other things, like a fiber channel switch and all the management and smarts for the UCS system. So is the 6120 handled like an Ethernet switch? But servers or blades do not connect directly to the 6120; they plug into the fabric extenders contained in the chassis. We were trying to wrap our heads around this new way of approaching LAN connectivity while using what we normally do as a frame of reference. Here is what we figured out (others who were there can help correct me if I am off base with some of what I am saying . . .)

6500:

The 6500 is the core layer 3 switch and we want to trunk many VLANs to the 6120 system. We also use VLAN 1 on our LAN as a valid network, which is a little different. 6500 terms: On a trunk port the native VLAN is untagged. All other VLANs that are trunked are tagged. native = untagged
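To make this concrete, here is a rough sketch of what the 6500 side of one of these uplinks could look like. The interface numbers, port-channel number and VLAN list are made up for illustration (our real config trunks 15 VLANs), so treat it as a starting point, not our exact config.

! Hypothetical 6500 IOS config for one EtherChannel uplink to a 6120
interface Port-channel10
 description EtherChannel to UCS 6120 Fabric Interconnect A
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 1
 switchport trunk allowed vlan 1,100,200,778
 switchport mode trunk
!
interface range TenGigabitEthernet1/1 - 2
 description 2 x 10 G members of the EtherChannel to 6120-A
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 1
 switchport trunk allowed vlan 1,100,200,778
 switchport mode trunk
 channel-group 10 mode on

The key line is "switchport trunk native vlan 1"; whatever you put there is what leaves the 6500 untagged. The channel-group mode (static "on" vs. LACP "active") needs to match how the uplink ports are defined on the 6120 side.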

6120:

LAN Tab

The UCS LAN tab listing the globally defined VLANs.

The 6120 has a physical connection to the LAN via the northbound 6500 switch. Southbound, the 6120 is connected to the chassis via the fabric extenders, and the blades are connected to the fabric extenders. 6120 terms: In the LAN (global) settings you define all the VLANs that have been trunked from the 6500 to the 6120, and you define one VLAN as “default”. Note, “default” can be any VLAN ID number; it does not have to be 1. Whatever you have defined as “default” will be handled as untagged traffic; all other VLANs will be passed with a tag. In the LAN tab global settings, “default” = untagged
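For anyone who would rather type than click through the LAN tab, the same global VLANs can also be defined from the UCS Manager CLI. This is a from-memory sketch with example VLAN names and IDs, so check it against the CLI configuration guide for your UCSM release; the “default” VLAN designation itself we handled in the GUI shown above.

UCS-A# scope eth-uplink
UCS-A /eth-uplink # create vlan ESX-SC 778
UCS-A /eth-uplink/vlan* # exit
UCS-A /eth-uplink* # create vlan Prod-100 100
UCS-A /eth-uplink/vlan* # commit-buffer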

 Blades/Service Profiles:

Each blade in our configuration has an I/O interface card (CNA) with 2 Ethernet and 2 HBA ports. We use a service profile, created via UCS Manager (running on the 6120), to define vNICs (as well as other things associated with a blade). Service Profile terms: In the Service Profile’s vNIC settings, if you want to trunk many VLANs (like you would for a VMware ESX host), you can select any number of VLANs that have been defined in the global settings and then select one VLAN to be “native”. In the Service Profile, the VLAN set to “native” is the VLAN the blade will boot on, and the blade will send that traffic untagged. In the Service Profile vNIC, the VLAN selected as “native” = untagged

vNIC settings

vNIC settings showing the VLAN selection options.

How to Configure the LAN:

After some trial and error (strange results) with different combinations of “native”, “default”, etc. on each piece of equipment, and after talking with Cisco engineers, we determined the following correct VLAN configuration. On the 6500 we trunk 15 VLANs and set VLAN 1 as “native” (there is real traffic on VLAN 1 in our environment). On the 6120 we define all 15 VLANs that are trunked from the 6500 and set VLAN 1 to be “default”. In the Service Profile for a VMware ESX host, we select the 15 VLANs that are needed for guest servers and then select the “ESX Service Console” VLAN as “native”.

How does this work?

When the VMware ESX host boots using the above Service Profile, it boots on whatever is defined as the “native” VLAN. The 6120 knows this “native” VLAN as VLAN 778 and knows to trunk it northbound to the 6500 switch. The 6500 is then able to route that traffic appropriately, and the return traffic comes back via VLAN 778.
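If you want to sanity check this from the 6500 side, a couple of standard IOS show commands will confirm the trunk is doing what you expect (the port-channel number here matches the hypothetical sketch above):

6500# show interfaces port-channel 10 trunk
6500# show etherchannel 10 summary

In the trunk output, VLAN 1 should appear as the native VLAN on the uplink and VLAN 778 should be in the allowed and active list; if 778 is missing on either the 6500 or the 6120 side, the ESX Service Console traffic has nowhere to go.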

Summary:

6500 “native” = 6120 “default” = Service Profile “native”


November 4, 2009 | Cisco UCS

UCS implementation continues . . .

I put together a Picasa Web Album of about 50 pictures of the various components of the hardware.  It will give you a good visual of the system.  Check them out at:

http://picasaweb.google.com/healthITGuy68/CiscoUCSPictures#

Yesterday, during our day 2, we mapped out our plan for implementing, testing the hardware, etc. We jumped in creating our various pools and some test service profiles. We did discover we need a better understanding of how the 6120 functions as an Ethernet “switch” compared to a traditional 6500. We should nail down those details today.

In addition, we will get to setting up the fiber channel side of the I/O later this afternoon. 

Working on UCS implementation in our makeshift conference room

We ran out of conference room space, so we set up our own UCS implementation room; it has a nice warehouse feel to it! Photo credit D. Bregman 🙂

I plan to provide more details as we go on and I get time . . . enjoy the pics!

November 4, 2009 | Cisco UCS

UCS Implementation, Day 1

Our Cisco Advanced Services engineer arrived today and we started our implementation . . . all good so far.

Some of what I learned:

—  When talking about the abstraction of the hardware (i.e., MAC address, WWN, etc.), it gets interesting when you start talking about connecting to the “KVM” of a blade. The question is: are you connecting to a specific blade or to a slot in a chassis? I will have to get a good grasp of it and try to explain what I figure out . . . stay tuned.

—  We configured the 2 – 6120 fabric interconnects with IP addresses and a VIP. Very straightforward; however, we ended up wiping the config and doing it a second time, because when you name a fabric interconnect it puts a -A or -B on the end. So the first time we ended up with xxxx-ucs-A-A; a little confusing. It took all of 15 minutes to erase the config and re-do it with a new name.

—  Cisco UCS Manager uses Java and launches via most web browsers; I have used IE so far.

—  We plan to use AD authentication to validate to UCS Manager; there are a few ways to do it. We plan to use RADIUS, which points to our AD. Within UCS Manager we will define roles, etc.

—  We found out the first code update was released last Friday; we have decided to upgrade before moving forward.

Our UCS Manager

This is our UCS Manager right after giving it some IP addresses.

November 2, 2009 | Cisco UCS

We made a few UCS-related videos

Check out the YouTube videos we took today in the datacenter, posted on the Varrowinc channel.

http://www.youtube.com/watch?v=7H4ppd1VoT0

To see more, search for the varrowinc channel; there are about 8 out there.

October 30, 2009 | Cisco UCS

Cisco UCS has Arrived . . .

UCS chassis are here!

Went to the dock and brought them to the datacenter myself.

Cisco made good on their shipping commitment and all the Cisco UCS gear has arrived; it is cool!

The Cisco 6120 Fabric Interconnects: These are the “controllers” which contain all the I/O; converged network, 10 G Ethernet and fiber channel connectivity for the system. Meaning we have a 10 G converged network connection to the Cisco chassis. The 6120 then splits the converged network out into 10 G Ethernet and 4 Gb fiber channel, which connect to my LAN and 4 Gb fiber channel switches. In addition, all of the management for the system is done in the 6120 Fabric Interconnect.

Cisco 6120 Fabric Interconnect

The central "controller" of the UCS system, used to manage and handle all I/O.

Greenfield vs. brownfield. Most people will see UCS fitting well into a new datacenter; however, our install is a great example of how it can go into an existing datacenter environment. This gets to where and how someone plans for the UCS location in an existing datacenter. For us, I chose a location with 3 racks next to each other which contain other existing servers. I placed the 6120s near the top of the middle rack. This allows me to install UCS chassis within all 3 racks and stay within the 3 meter distance provided by the 10 G copper cables. When a friend saw my pictures of the placement he wondered why I would not put the 6120s next to where the chassis will go, which may be a normal thought for people implementing UCS. By using this location I have rack capacity for at least 10 chassis, if not 11, all using the 3 meter cables (lower cost than fiber).

6120 Impressions:

Pros:

—  Packaging is good; everything was contained in 1 box (SFPs, 3 meter cables, rails, etc.).
—  The 6120 is sized just like a 1U rack server; people are familiar with this fit and size (familiar feel for server people).
—  The 6120 has everything well labeled (fans, power, etc.), it is straightforward in appearance, and the ports are just like a standard switch (very familiar to network people).
—  The 6120 has many indicator lights that are straightforward, and it is clear what each one represents.
—  The I/O end of the 6120 is angled on the top to allow for greater airflow through the box; an elegant touch.

6120 Fabric Interconnects in the rack.

Installed with HP and Dell servers, and everything still worked.

Con:  (nothing really significant)

—  Rails. This was my biggest con. I do not think server people will like them; for a network person they are the norm. It would have been a real pain to install my 2 – 6120s if I did not have access to the sides to get them lined up with the back rail support. Because I have used this type of rail in the past for the Cisco MDS switches, I was familiar with them. However, all my server people are familiar with the rapid rail systems used by Dell, HP, etc. On the other hand, once you have the 6120s in the rack you are really not going to need to pull them out.

Suggestions:

—  The 6120 should probably come with a 1 page “quick install” document (page 1-4 of the Cisco UCS Site Preparation Guide would be a good start for what it should look like). It should tell the user to install it with the I/O ports and power connections facing the back of the rack, where the rail items should be connected to the 6120, etc. It should also mention installing the 6120s with the 3 meter cable lengths to the chassis in mind.

—  The SFP modules all came in 1 box, which was nice. However, the 10 G and fiber channel SFPs were all mixed together, and you have to find the small text on the SFP itself to know which is for Ethernet or fiber channel. Having something to make the 10 G stand out would be a good thing (a color-coded sticker or label on the outside of the static bag).

Overall:  I am pleased with the design and layout of the physical 6120. 

Chassis Impressions:

It was cool when the 2 big Cisco boxes that contained the chassis and servers arrived. This is a 6U, 90 lb. chassis with 4 power supplies, 8 fans and 2 fabric extender devices for 10 G converged connectivity. You can get up to 80 G of I/O to a chassis (2 fabric extenders x 4 ports x 10 G).

Pros:

Blade servers packaged on top of the chassis.

8 blade servers on the top of the chassis in the box.

—  The packaging was great; each big box contained 8 blade servers on top, and then you lift that box out to find the chassis on the bottom. Everything is there and straightforward.

—  Racking the chassis was straightforward: the quick-install (no tool) rails drop in, you slide the chassis onto the rails in the rack, and you are done.

—  4 power cords with screws to secure them; the power supplies are hot swappable and replaced via the front.

—  Installing a blade server is a no brainer.

—  Cabling. This is the big difference, and it is huge. I am using 4 copper 3 meter 10 G cables to connect the chassis to the 6120 fabric interconnects. 4 cables and I am done for all my Ethernet, fiber channel and management. I will say it again, 4 cables! Have I said this is sweet yet?

—  Complexity. The chassis is not complex; it is just a box for the servers to plug into and to provide big I/O pipes. Plus cabling: yes, only 4 to 8 cables! (Depending on your I/O needs you can use as few as 2 and up to 8 ports per chassis.)

Cons:

— Nothing stands out at this point, I will let you know.

The front of an empty chassis.

Fresh out of the box.

Overall:  The chassis is just a blade chassis, smaller than the latest generation of chassis from the other vendors. It does not need the complexity required by a traditional blade chassis, meaning no need for 2 separate management cards, 2 Ethernet switches and 2 fiber channel switches. It is straightforward and elegant.

Time:  OK, this was a big deal. It took me and one other person about 90 minutes to unbox, rack, power, install 16 servers and cable them to the 6120s. It took a little longer because we were taking pictures :). For me to rack 16 – 2U servers, power them, and cable Ethernet and fiber channel would easily be 2 to 3 days. This is huge. If I did 1 traditional blade chassis with 16 servers, maybe that would take 1 day; however, there would be a lot more cabling involved.

Back of a racked chassis.

Those 8 fans are variable and can push a lot of air.

Stay tuned; next time I will talk about the blade servers and implementation . . .

Oh, here is my Halloween costume for this year!

Halloween costume 2009

October 30, 2009 | Cisco UCS

Cisco Unified Computing System: UCS

Server virtualization has been great for our organization since we implemented VMware 3.x over 2 years ago. We have grown our VMware environment to 24 hosts with over 300 guest servers and recently upgraded to vSphere 4.0.

The server hardware we have been using for our ESX hosts has been 2U rack-mounted servers with 2 quad-core CPUs, 32 GB or 96 GB of memory, 2 HBAs, and 6 to 8 – 1 Gb Ethernet NICs.

Recently we had a project that required adding 16 ESX hosts to our environment. At the same time I began to learn about Cisco’s new blade server system, the UCS (Unified Computing System). My first impression was confusion, from what I had been reading in the trade rags. Then I had a presentation from Cisco on the topic, and I was intrigued when it was compared to a SAN, but for compute capacity. Meaning you have 2 “controllers” and you add blades and chassis for additional capacity.

At VMworld 2009 I was able to dig deeper into UCS, which was center stage at the conference. When you entered the Moscone Convention Center, the datacenter that supported the conference, 512 Cisco UCS blades, was located in the main lobby for all to see. Cisco had the right people on hand to educate me on what the UCS system is and how and why it was built.

VMWorld 2009 Cisco UCS Datacenter

512 Cisco blade servers handled all the compute power for the conference.

So for my project that requires 16 more ESX hosts, I did a deep comparison of my traditional approach with a converged 10 G network (2U rack mount) vs. a Cisco UCS configuration. Looking at the cost differences, what it takes to grow each approach, and the amount of required infrastructure (Ethernet and fiber channel cabling and ports), Cisco UCS made the most sense. There is some risk in going with a new server “system”; however, my previous experience with Cisco has always been very positive.

We have jumped in with both feet with Cisco UCS . . . I will bring you along for the ride.

October 30, 2009 | Cisco UCS