Three years is a common timeframe for replacing servers at many enterprises.
If organizations wait more than 36 months, they may miss out on productivity-boosting innovations from new technologies or risk increased upkeep costs if support and service agreements lapse. But if organizations pull the refresh trigger any sooner, they may not see the full return on their existing server investments.
That means that considerations other than timing should be part of the mix when thinking about an upgrade, say tech executives and analysts. Chief among the considerations should be how manageable a refresh will be for the organization — and that can hinge on a variety of factors:
Ancillary expenses: Physical server hardware often represents only a small part of the total refresh cost. IT departments must also account for consulting and implementation services and related expenses, says Christopher Nowak, chief technology officer at Anthony Marano, a produce supplier based in Chicago.
One of the most time-consuming implementation steps is server configuration. The reason? The IT staff may have to inventory systems throughout the data center to gather all the necessary settings. Some may be associated with network switches and others with enterprise storage systems. Important configuration data includes Media Access Control (MAC) addresses, World Wide Name (WWN) or World Wide Identifier (WWID) values, Universally Unique Identifiers (UUIDs), firmware versions and Basic Input/Output System (BIOS) data.
Stateless computing can ease the process by enabling IT administrators to gather all the important variables in a single package that they can assign to any physical machine via an administrative console.
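The idea behind stateless computing can be pictured as a data-structure problem: everything that makes a server *this* server is bundled into a portable identity package. The sketch below is illustrative only; the class names, fields and sample values are hypothetical, and real implementations (for example, vendor service-profile features) are product-specific.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ServerProfile:
    """A hypothetical 'stateless' identity package: the network,
    storage and firmware settings decoupled from physical hardware."""
    name: str
    mac_addresses: List[str]          # network identities (MACs)
    wwns: List[str]                   # storage identities (WWN/WWID)
    uuid: str                         # system UUID
    firmware_version: str
    bios_settings: Dict[str, str] = field(default_factory=dict)

@dataclass
class PhysicalServer:
    asset_tag: str
    profile: Optional[ServerProfile] = None

def assign(profile: ServerProfile, machine: PhysicalServer) -> PhysicalServer:
    """Bind the identity package to any physical machine, as an
    administrative console would."""
    machine.profile = profile
    return machine

# Example: move a web server's identity onto a spare blade.
web01 = ServerProfile(
    name="web01",
    mac_addresses=["00:25:b5:00:00:0a"],
    wwns=["20:00:00:25:b5:00:00:0a"],
    uuid="c6e2c8a0-0000-0000-0000-444455556666",
    firmware_version="4.2(1b)",
)
spare = assign(web01, PhysicalServer(asset_tag="blade-07"))
```

Because the identity travels with the profile rather than the hardware, swapping a refreshed machine in means reassigning the package, not re-entering every setting by hand.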
The IT roadmap: Many of the technology initiatives that enterprises undertake hinge on the performance and reliability of the server infrastructure. This includes planning and paving the way for emerging trends such as client virtualization, which relies on backend servers to deliver the data and applications to end users’ systems.
As a result, server refresh strategies need to accommodate the timing of other IT projects, as well as anticipate any heightened workload demands that these new endeavors might create for the data center.
ROI: With each generation of server hardware, engineers shrink transistors and other internal components that make up the underlying microprocessors. Innovations in chip designs translate into more power for the servers and less power consumption per computing unit. When factored in across an entire data center, these improvements can add up to significant return-on-investment figures, which can help justify the expense of any new technology.
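To see how per-server efficiency gains compound across a data center, the back-of-the-envelope calculation below walks through the arithmetic. Every figure is an assumption chosen for illustration, not vendor data.

```python
# Illustrative power-savings math; all inputs are assumed values.
OLD_WATTS_PER_SERVER = 450   # assumed draw of an aging server
NEW_WATTS_PER_SERVER = 300   # assumed draw of a current-generation server
CONSOLIDATION_RATIO = 2      # assume each new server replaces two old ones
OLD_SERVER_COUNT = 200
KWH_COST = 0.12              # assumed electricity cost, dollars per kWh
HOURS_PER_YEAR = 24 * 365
PUE = 1.8                    # assumed facility overhead (cooling, etc.)

old_kwh = OLD_SERVER_COUNT * OLD_WATTS_PER_SERVER / 1000 * HOURS_PER_YEAR * PUE
new_count = OLD_SERVER_COUNT // CONSOLIDATION_RATIO
new_kwh = new_count * NEW_WATTS_PER_SERVER / 1000 * HOURS_PER_YEAR * PUE

annual_savings = (old_kwh - new_kwh) * KWH_COST
print(f"Annual power savings: ${annual_savings:,.0f}")
# → Annual power savings: $113,530
```

Under these assumptions, the refresh cuts the fleet's annual electricity bill by roughly six figures, which is the kind of number that can offset a meaningful share of new hardware costs in an ROI case.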
Decommissioned servers: Organizations also need an established plan for decommissioning equipment that's being replaced. The first step is to consider whether there's an alternative use for the old hardware. Possible options include reprovisioning formerly front-line machines as web servers or as additional capacity for test and development teams.
When it comes time to part with any hardware, check with the original supplier about trade-in credits that can go toward new purchases. Some manufacturers have a recycling program to dispose of old gear in an ecologically appropriate way, and there are also third-party companies that provide the service.