HealthITGuy's Blog

The evolving world of healthcare IT!

We made a few UCS related videos

Check out the YouTube videos we shot today in the datacenter, posted on the Varrowinc channel.

To see more, search for the Varrowinc channel; there are about 8 videos out there.

October 30, 2009 | Cisco UCS

Cisco UCS has Arrived . . .

UCS chassis are here!

Went to the dock and brought them to the datacenter myself.

Cisco made good on their shipping commitment and all the Cisco UCS gear has arrived. It is cool!

The Cisco 6120 Fabric Interconnects:  These are the “controllers” that contain all the I/O: converged network, 10 G Ethernet and Fibre Channel connectivity for the system.  We have a 10 G converged network connection to the Cisco chassis.  The 6120 then splits the converged traffic into 10 G Ethernet and 4 Gb Fibre Channel, which connect to my LAN and 4 Gb Fibre Channel switches.  In addition, all of the management for the system is done in the 6120 Fabric Interconnect.

Cisco 6120 Fabric Interconnect

The central "controller" of the UCS system, used to manage and handle all I/O.

Greenfield vs. brownfield.  Most people will see UCS fitting well into a new datacenter; however, our install is a great example of how it can go into an existing datacenter environment.  This gets to where and how someone plans the UCS location in an existing datacenter.  For us, I chose a location with 3 racks next to each other that contain other existing servers.  I placed the 6120s near the top of the middle rack.  This allows me to install UCS chassis within all 3 racks and stay within the 3 meter reach of the 10 G copper cables.  When a friend saw my pictures of the placement he wondered why I would not put the 6120s next to where the chassis will go, which may be a normal thought for people implementing UCS.  By using this location I have rack capacity for at least 10 chassis, if not 11, all using the 3 meter cables (lower cost than fiber).
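A quick sanity check on placement like mine can be sketched with some rough arithmetic.  All the dimensions below are my own assumptions (standard 1U height, a nominal rack-to-rack distance, some routing slack), not Cisco numbers:

```python
# Back-of-the-envelope check: can a chassis reach the 6120s with a
# 3 meter copper cable?  Dimensions are assumptions, not Cisco specs.
RACK_WIDTH_M = 0.6    # assumed horizontal distance between adjacent racks
U_HEIGHT_M = 0.04445  # one rack unit in meters (1.75 in)
CABLE_M = 3.0         # the 3 meter copper cable length

def cable_run(rack_offset: int, u_distance: int) -> float:
    """Rough cable run: across racks, plus up/down within the rack,
    plus an assumed 0.5 m of slack for routing."""
    slack = 0.5
    return rack_offset * RACK_WIDTH_M + u_distance * U_HEIGHT_M + slack

# 6120s near the top of the middle rack; a chassis 30U lower in an
# adjacent rack is well within reach:
run = cable_run(rack_offset=1, u_distance=30)
print(f"estimated run: {run:.2f} m, fits: {run <= CABLE_M}")
```

Under these assumptions, even a chassis near the bottom of an adjacent rack stays under 3 meters, which is why one set of 6120s in the middle rack can serve chassis in all three racks.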

6120 Impressions:


—  Packaging is good; everything was contained in 1 box (SFPs, 3 meter cables, rails, etc.).
—  The 6120 is sized just like a 1U rack server; people are familiar with this fit and size (familiar feel for server people).
—  Everything on the 6120 is well labeled (fans, power, etc.) and straightforward in appearance; the ports look just like a standard switch (very familiar to network people).
—  The 6120 has many indicator lights that are straightforward, and it is clear what each one represents.
—  The I/O end of the 6120 is angled on the top to allow for greater airflow through the box, an elegant touch.

6120 Fabric Interconnects in the rack.

Installed with HP and Dell servers, and everything still worked.

Cons:  (nothing really significant)

—  Rails.  This was my biggest con.  Server people will not like them; for network people they are the norm.  It would have been a real pain to install my 2 6120s if I had not had access to the sides to line them up with the back rail support.  Because I have used this type of rail in the past with the Cisco MDS switches, I was familiar with them.  However, all my server people are used to the rapid rail systems from Dell, HP, etc.  On the other hand, once the 6120s are in the rack you are really not going to need to pull them out.


—  The 6120 should probably come with a 1-page “quick install” document (pages 1-4 of the Cisco UCS Site Preparation Guide would be a good start for what it should look like).  It should tell the user to install it with the I/O ports and power outlets facing the back of the rack, where the rail pieces connect to the 6120, etc.  It should also mention placing the 6120s within 3 meter cable reach of the planned chassis locations.

—  SFP modules all came in 1 box, which was nice.  However, the 10 G Ethernet and Fibre Channel SFPs were all mixed together, and you have to find the small text on the SFP itself to know which is which.  Something to make the 10 G modules stand out would be a good thing (a color-coded sticker or label on the outside of the static bag).

Overall:  I am pleased with the design and layout of the physical 6120. 

Chassis Impressions:

It was cool when the 2 big Cisco boxes containing the chassis and servers arrived.  Each chassis is 6U and 90 lb., with 4 power supplies, 8 fans and 2 fabric extenders for 10 G converged connectivity.  You can get up to 80 G of I/O to a chassis.
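The 80 G figure is just per-link arithmetic.  A tiny sketch of my own, using the 2-to-8 uplink port range per chassis mentioned later in this post:

```python
# Per-chassis I/O: each fabric-extender uplink is a 10 Gb converged
# link, and a chassis can use 2 to 8 uplinks total (per this post).
LINK_GBPS = 10

def chassis_io_gbps(uplinks: int) -> int:
    """Total converged I/O to one chassis for a given uplink count."""
    if not 2 <= uplinks <= 8:
        raise ValueError("a chassis uses between 2 and 8 uplinks")
    return uplinks * LINK_GBPS

for n in (2, 4, 8):
    print(n, "cables ->", chassis_io_gbps(n), "Gb/s")
# 8 cables -> 80 Gb/s, which is the "up to 80 G" figure
```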


Blade servers packaged on top of the chassis.

8 blade servers on the top of the chassis in the box.

—  The packaging was great; each big box contained 8 blade servers on top, and you lift that box out to find the chassis on the bottom.  Everything is there and straightforward.

—  Racking the chassis was straightforward: the tool-less quick-install rails drop in, you slide the chassis onto the rails in the rack and you are done.

—  The 4 power cords have screws to secure them, and the power supplies are hot-swappable and replaced from the front.

—  Installing a blade server is a no brainer.

—  Cabling.  This is the big difference, and it is huge.  I am using 4 copper 3 meter 10 G cables to connect the chassis to a 6120 fabric interconnect.  4 cables and I am done for all my Ethernet, Fibre Channel and management.  I will say it again, 4 cables!  Have I said this is sweet yet?

—  Complexity.  The chassis is not complex; it is just a box for the servers to plug into that provides big I/O pipes.  And the cabling is only 4 to 8 cables!  (Depending on your I/O needs you can use as few as 2 and up to 8 ports per chassis.)


— Cons: nothing stands out at this point; I will let you know.

The front of an empty chassis.

Fresh out of the box.

Overall:  The chassis is just a blade chassis, smaller than the latest generation of chassis from the other vendors.  It does not need the complexity of a traditional blade chassis, meaning no need for 2 separate management cards, 2 Ethernet switches and 2 Fibre Channel switches.  It is straightforward and elegant.

Time:  OK, this was a big deal.  It took me and one other person about 90 minutes to unbox, rack, power, install 16 servers and cable them to the 6120s.  It took a little longer because we were taking pictures :).  For me to rack 16 2U servers, power them and cable the Ethernet and Fibre Channel would easily take 2 to 3 days.  This is huge.  A traditional blade chassis with 16 servers might take 1 day, but there would be a lot more cabling involved.

Back of a racked chassis.

Those 8 fans are variable and can push a lot of air.

Stay tuned, next time I will talk about the blade servers and implementation . . .

Oh, here is my Halloween costume for this year!

Halloween costume 2009

October 30, 2009 | Cisco UCS

Cisco Unified Computing System: UCS

Server virtualization has been great for our organization since we implemented VMware 3.x over 2 years ago.  We have grown our VMware environment to 24 hosts with over 300 guest servers and recently upgraded to vSphere 4.0.

The server hardware we have been using for our ESX hosts has been 2U rack-mount servers with 2 quad-core CPUs, 32 GB or 96 GB of memory, 2 HBAs, and 6 to 8 1 Gb Ethernet NICs.

Recently we had a project that required adding 16 ESX hosts to our environment.  At the same time I began to learn about Cisco’s new blade server system, the UCS (Unified Computing System).  My first impression, from what I had been reading in the trade rags, was confusion.  Then I saw a presentation from Cisco on the topic and was intrigued when it was compared to a SAN, but for compute capacity: you have 2 “controllers” and you add blades and chassis for additional capacity.

At VMworld 2009 I was able to dig deeper into UCS, which was center stage at the conference.  When you entered the Moscone Center, the datacenter that supported the conference, 512 Cisco UCS blades, was located in the main lobby for all to see.  Cisco had the right people on hand to educate me on what, how and why the UCS system was built.

VMWorld 2009 Cisco UCS Datacenter

512 Cisco blade servers handled all the compute power for the conference.

So for my project requiring 16 more ESX hosts, I did a deep comparison of my traditional approach with a converged 10 G network (2U rack mounts) vs. a Cisco UCS configuration.  Looking at the cost differences, what it takes to grow each approach and the amount of required infrastructure (Ethernet and Fibre Channel cabling and ports), Cisco UCS made the most sense.  There is some risk in going with a new server “system”; however, my previous experience with Cisco has always been very positive.
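The infrastructure difference can be sketched with a back-of-the-envelope cable count, using the numbers from this post (6 to 8 1 Gb NICs plus 2 HBAs per rack server, 4 converged cables per 8-blade UCS chassis).  This is my own rough arithmetic, not a formal comparison, and it excludes power cords:

```python
# Rough data-cable count for 16 ESX hosts: traditional 2U rack
# mounts vs. UCS chassis, using the per-host counts from this post.
def rack_mount_cables(hosts: int, nics: int = 8, hbas: int = 2) -> int:
    """Each rack server needs its own Ethernet and Fibre Channel runs."""
    return hosts * (nics + hbas)

def ucs_cables(hosts: int, blades_per_chassis: int = 8,
               uplinks_per_chassis: int = 4) -> int:
    """UCS cables only the chassis, not each blade."""
    chassis = -(-hosts // blades_per_chassis)  # ceiling division
    return chassis * uplinks_per_chassis

print("traditional:", rack_mount_cables(16))  # 160 data cables
print("UCS:", ucs_cables(16))                 # 8 data cables
```

Two orders of magnitude fewer cables is the kind of difference that made the comparison so one-sided for me.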

We have jumped in with both feet with Cisco UCS . . . I will bring you along for the ride.

October 30, 2009 | Cisco UCS

How will ARRA work: American Recovery and Reinvestment Act of 2009

As I talk to people about the increase in IT projects related to the “Economic Stimulus” package (i.e. ARRA: the American Recovery and Reinvestment Act of 2009), I always get asked “are you getting funds from the government now?”.  The answer is no; the government wrote the ARRA law using a carrot-and-stick approach.

The goal of the legislation is to encourage healthcare organizations to implement electronic health records to improve patient care.  This means things like dispensing prescriptions electronically, computerized physician order entry (CPOE), closed-loop medication dispensing using bar codes on the meds, patients and caregivers, etc.  The goal is improved patient outcomes and care.

Once you have all records and processes (e.g., dispensing medication) in an electronic format, you have a lot of data and information that can be used to improve treatment protocols, determine trends and measure the effectiveness of treatment (e.g., which physician, hospital, region, state, etc. has the best outcomes for treating specific diseases).  This in turn should further improve the delivery of patient care, resulting in better outcomes and reduced costs over time.

To encourage healthcare organizations to move forward, the government is providing greater reimbursement for steps implemented by 2011, 2012, 2013, etc.  Meaning the sooner a healthcare organization has systems in place, the more reimbursement it will receive.  Starting in 2015, the penalty phase begins for organizations that have not met the goals defined in ARRA.

The ability to meet the deadlines depends on many factors, one major factor being whether your system provider (the vendor you purchase your electronic medical record, radiology, lab, etc. systems from) will modify or build the system to meet the government’s requirements.  This all translates into a lot of growth coming to the healthcare information systems field.

I will soon start to get into more technical topics, but I wanted to provide a basic foundation for why I think technology is so cool and useful in healthcare.  We are not just making a widget . . . we have a positive impact on patients.

October 26, 2009 | Healthcare IT General

The Start of a Blog . . .

I have been working in healthcare information systems for 14 years in North Carolina.  My role has changed over the years as technology, systems, requirements, etc. have evolved.  We have been through the early Internet, the dot-com boom, Y2K, the dot-com bust, HIPAA, the Electronic Medical Record, virtualization and now the new requirements of the American Recovery and Reinvestment Act of 2009 (ARRA).  The last item, ARRA, is a huge game changer for American healthcare.

Electronic health record (EHR) technology is changing the way healthcare organizations meet the needs of their patients, physicians and employees.  According to HIMSS officials:

“EHR technology is “meaningful” when it has capabilities including e-prescribing, exchanging electronic health information to improve the quality of care, having the capacity to provide clinical decision support to support practitioner order entry and submitting clinical quality measures – and other measures – as selected by the Secretary of Health and Human Services.” (from Healthcare IT News, April 28, 2009)

This translates to increased requirements for overall system availability.  Clinical systems need to be architected to provide electronic health record services at all times, in a flexible and secure manner.  Achieving this level requires heavy investment in advanced technical infrastructure.  The underlying technologies used to build a redundant, resilient datacenter are the network, storage, computing capacity, power, cooling and adequate space.

All healthcare organizations will be required to provide “meaningful” EHR technology.  Due to the technical complexity, this will be a financial and technical challenge for many.  The options available to organizations are to build and maintain their own technology infrastructure, purchase applications from a provider (the Software as a Service model), merge or consolidate with other organizations, and/or lease technology capacity.

We are at an evolutionary point for healthcare IT, one that will change the way healthcare is delivered in America.  I am excited to be a part of it, shaping how we move forward.


My plan is to use this blog as a way to lay out my thoughts on healthcare IT with a focus on the underlying technology infrastructure.  Key pieces are the datacenter, compute capacity, storage, virtualization and application delivery.





October 23, 2009 | General, Healthcare IT General