HealthITGuy's Blog

The evolving world of healthcare IT!

Cisco UCS has Arrived . . .

UCS chassis are here!

Went to the dock and brought them to the datacenter myself.

Cisco made good on their shipping commitment and all the Cisco UCS gear has arrived. It is cool!

The Cisco 6120 Fabric Interconnects:  These are the “controllers” which contain all the I/O: converged network, 10 G Ethernet and Fibre Channel connectivity for the system.  In other words, we have a 10 G converged network connection to the Cisco chassis.  The 6120 then splits the converged network out into 10 G Ethernet and 4 Gb Fibre Channel, which connect to my LAN and 4 Gb Fibre Channel switches.  In addition, all of the management for the system is done in the 6120 Fabric Interconnect.

Cisco 6120 Fabric Interconnect

The central "controller" of the UCS system, used to manage and handle all I/O.

Greenfield vs. brownfield.  Most people will see UCS fitting well into a new datacenter; however, our install is a great example of how it can go into an existing datacenter environment.  This comes down to where and how someone plans the UCS location in an existing datacenter.  For us, I chose a location with 3 racks next to each other that contain other existing servers.  I placed the 6120s near the top of the middle rack.  This allows me to install UCS chassis within all 3 racks and stay within the 3 meter reach of the 10 G copper cables.  When a friend saw my pictures of the placement he wondered why I would not put the 6120s next to where the chassis will go, which may be a normal thought for people implementing UCS.  By using this location I have rack capacity for at least 10 chassis, if not 11, all using the 3 meter cables (lower cost than fiber).

6120 Impressions:

Pros:

—  Packaging is good; everything was contained in 1 box (SFPs, 3 meter cables, rails, etc.).
—  The 6120 is sized just like a 1U rack server; the fit and size will be a familiar feel for server people.
—  The 6120 has everything well labeled (fans, power, etc.) and is straightforward in appearance; the ports are just like a standard switch (very familiar to network people).
—  The 6120 has many indicator lights that are straightforward, and it is clear what each one represents.
—  The I/O end of the 6120 is angled on the top to allow for greater airflow through the box, an elegant touch.

6120 Fabric Interconnects in the rack.

Installed with HP and Dell servers, and everything still worked.

Cons:  (nothing really significant)

—  Rails.  This was my biggest con.  I do not think server people will like them; for network people they are the norm.  It would have been a real pain to install my two 6120s if I did not have access to the sides to get them lined up with the rear rail support.  Because I have used this type of rail in the past for Cisco MDS switches, I was familiar with them.  However, all my server people are used to the rapid rail systems from Dell, HP, etc.  On the other hand, once you have the 6120s in the rack you are really not going to need to pull them out.

Suggestions:

—  The 6120 should probably come with a one-page “quick install” document (pages 1-4 of the Cisco UCS Site Preparation Guide would be a good start for what it should look like).  It should tell the user to install the unit with the I/O ports and power outlets facing the back of the rack, where the rail items should be connected to the 6120, etc.  It should also mention installing the 6120s with the 3 meter cable length to the chassis in mind.

—  The SFP modules all came in 1 box, which was nice.  However, the 10 G and Fibre Channel SFPs were all mixed together, and you have to find the small text on the SFP itself to know which is for Ethernet and which is for Fibre Channel.  Having something to make the 10 G SFPs stand out would be a good thing (a color-coded sticker or label on the outside of the static bag).

Overall:  I am pleased with the design and layout of the physical 6120. 

Chassis Impressions:

It was cool when the 2 big Cisco boxes that contained the chassis and servers arrived.  This is a 6U, 90 lb. chassis with 4 power supplies, 8 fans and 2 fabric extender modules for 10 G converged connectivity.  You can get up to 80 G of I/O to a chassis (2 fabric extenders with 4 x 10 G uplinks each).

Pros:

Blade servers packaged on top of the chassis.

8 blade servers on the top of the chassis in the box.

—  The packaging was great; each big box contained 8 blade servers on top, and then you lift that box out to find the chassis on the bottom.  Everything is there and straightforward.

—  Racking the chassis was straightforward: the quick-install (no tools) rails drop in, you slide the chassis onto the rails in the rack, and you are done.

—  There are 4 power cords with screws to secure them, and the power supplies are hot swappable and replaced from the front.

—  Installing a blade server is a no-brainer.

—  Cabling.  This is the big difference, and it is huge.  I am using 4 copper 3 meter 10 G cables to connect the chassis to a 6120 Fabric Interconnect.  4 cables and I am done for all my Ethernet, Fibre Channel and management.  I will say it again, 4 cables!  Have I said this is sweet yet?

—  Complexity.  The chassis is not complex; it is just a box for the servers to plug into and a way to provide big I/O pipes.  Plus cabling: yes, only 4 to 8 cables!  (Depending on your I/O needs you can use as few as 2 and up to 8 ports per chassis; the sketch below shows how that link count ties into the chassis discovery setup.)
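For anyone curious how that per-chassis link count shows up in the configuration, here is a rough sketch of setting the chassis discovery policy from the UCS Manager CLI, in this case telling UCS Manager to expect 2 links per fabric extender (options like 1-link, 2-link or 4-link).  I am writing the scopes and keywords from memory, so the exact syntax may differ by UCS Manager version; treat it as an illustration of the idea, not a reference:

    UCS-A# scope org /
    UCS-A /org # scope chassis-disc-policy
    UCS-A /org/chassis-disc-policy # set action 2-link
    UCS-A /org/chassis-disc-policy* # commit-buffer

After re-cabling, you would also re-acknowledge the chassis (acknowledge chassis 1, then commit-buffer) so UCS Manager rediscovers it with the new link count.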

Cons:

— Nothing stands out at this point; I will let you know.

The front of an empty chassis.

Fresh out of the box.

Overall:  The chassis is just a blade chassis, smaller than the latest generation of chassis from the other vendors.  It does not need the complexity required by a traditional blade chassis, meaning no need for 2 separate management cards, 2 Ethernet switches and 2 Fibre Channel switches.  It is straightforward and elegant.

Time:  Ok, this was a big deal.  It took me and one other person about 90 minutes to unbox, rack, power, install 16 servers and cable them to the 6120.  It took a little longer because we were taking pictures :).  For me to rack 16 2U servers, power them, and cable Ethernet and Fibre Channel would easily be 2 to 3 days.  This is huge.  A single traditional blade chassis with 16 servers might take 1 day, but there would be a lot more cabling involved.

Back of a racked chassis.

Those 8 fans are variable speed and can push a lot of air.

Stay tuned, next time I will talk about the blade servers and implementation . . .

Oh, here is my Halloween costume for this year!

Halloween costume 2009


October 30, 2009 | Posted in Cisco UCS | 1 Comment