
Cost Reduction Strategies: Areas that every IT organization should be evaluating

November 25, 2008

To achieve systematic cost reduction, IT organizations need to be aware of the levers available and then examine each major product or service to see whether applying those levers is appropriate. For each major IT service, there are a number of levers, or approaches, that can be modulated to lower costs. Typical cost reduction levers include the following:
• Renegotiating or changing suppliers
• Adjusting service level agreements (SLAs)
• Increasing automation
• Re-engineering business processes
• Tracking consumption with a feedback loop
• Considering outsourcing alternatives
• Shifting resources to lower cost locations
• Substituting lower cost or disruptive technology
• Managing the IT portfolio—especially replacing and retiring older systems
• Designing for operations (DFO) to minimize future operations cost while the solution is being designed
Each IT service should be analyzed to see whether one or more of these cost reduction levers can be applied. Ideally, the best approach is to design low-cost solutions and services in the first place. However, most IT organizations are obliged to manage a suite of legacy solutions as well. What follows is a deeper discussion of each of the cost reduction levers, with examples as appropriate.

Renegotiating or Changing Suppliers
On an ongoing basis, firms should negotiate new terms with IT suppliers. The core competency for many IT organizations is the ability to integrate products and services from a number of suppliers in order to deliver internal and external IT services to the firm. Getting the best value for money from suppliers is a crucial part of minimizing cost.
A great example of a cost saving opportunity is the ability to enter into multi-year deals with telecommunication service providers for international private-leased circuits. Telecommunication companies will usually offer significant discounts over multi-year terms. While a firm can lose some flexibility with multi-year deals, the cost savings often justify this risk.
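As a rough illustration of that trade-off, here is a back-of-envelope sketch in Python. Every figure in it is an assumption for illustration only; the circuit price, discount, and expected market price drift are not taken from any real contract.

```python
# Toy comparison of a 3-year committed circuit price vs. annual renewals.
# All figures below are assumptions for illustration only.
annual_list_price    = 240_000   # one international private-leased circuit, per year
multi_year_discount  = 0.25      # discount offered for a 3-year commitment
expected_price_drift = -0.05     # prices may fall ~5% per year if you stay flexible

cost_multi_year   = 3 * annual_list_price * (1 - multi_year_discount)
cost_year_by_year = sum(annual_list_price * (1 + expected_price_drift) ** y for y in range(3))

print(f"3-year committed deal: ${cost_multi_year:,.0f}")
print(f"Annual renewals:       ${cost_year_by_year:,.0f}")
```

Even with falling market prices built in, the committed deal comes out ahead in this toy case; the point is simply to compare both paths on total cost rather than on the headline discount alone.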
For some companies, it may make sense to consolidate to fewer suppliers. Firms that have taken a relatively unmanaged approach to acquiring new WAN capacity, for example, may find that over the years they have accumulated a long list of suppliers. This is particularly likely for global firms that operate in countries or regions with strict regulation of telecommunication services. As the trend toward deregulation continues to unfold in many regions, there is an opportunity for firms to consolidate their telecommunications services to one or two vendors.
Consolidation is typically best done by issuing a request for proposal (RFP): the firm issues a document to suppliers defining the scope of the services required. Based on the proposals received, the RFP may be followed by a request for quotation (RFQ), which typically draws responses from vendors in a position to deliver the services requested. This process can quickly enable a firm to arrive at a short list of suppliers with whom contract negotiations can then commence.

Adjusting Service Level Agreements
Many IT organizations use service level agreements (SLAs) as a basis for delivering a particular product or service. An SLA defines the boundaries for providing a service to a customer or set of customers. For example, for the IT help desk function, there may be an SLA that specifies the speed with which telephone calls are answered: that is, the average speed of answer (ASA). In specifying an SLA, there is typically a trade-off between performance or quality and cost. Renegotiating SLAs can therefore cut costs while sharing control of IT spending with IT's customers.
In the case of the IT help desk, there is usually a direct correlation between the number of personnel and the ASA. To reduce cost, a firm may decide that it is acceptable for users of IT systems to wait a few minutes longer for their help calls to be answered. Relaxing the ASA target from 30 seconds to 2 minutes may yield a significant IT head-count saving while only minimally impacting users. This trade-off is primarily a business decision.
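To make the staffing/ASA relationship concrete, here is a minimal Python sketch using the standard Erlang C queueing formula (the post itself does not prescribe any particular model). The call volume and handle time are invented purely to show the shape of the curve.

```python
from math import factorial

def erlang_c(traffic, agents):
    """Probability that a call has to wait, for an offered load of `traffic` erlangs."""
    if agents <= traffic:
        return 1.0  # understaffed: the queue grows without bound, every call waits
    top = (traffic ** agents / factorial(agents)) * (agents / (agents - traffic))
    bottom = sum(traffic ** k / factorial(k) for k in range(agents)) + top
    return top / bottom

def average_speed_of_answer(calls_per_hour, handle_time_min, agents):
    """Expected queue wait in seconds for a given staffing level."""
    traffic = calls_per_hour * handle_time_min / 60.0   # offered load in erlangs
    p_wait = erlang_c(traffic, agents)
    return p_wait * (handle_time_min * 60.0) / (agents - traffic)

# Assumed volumes: 300 calls/hour, 6-minute average handle time (30 erlangs of load).
for agents in range(31, 43):
    print(f"{agents} agents -> ASA ~ {average_speed_of_answer(300, 6, agents):.0f} seconds")
```

Walking the agent count down until the ASA drifts from the 30-second target toward 2 minutes shows roughly how many positions a relaxed SLA frees up.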
Another performance measurement for call centers is the percentage of close on first call (%COFC), which is the proportion of problems solved during the first conversation with a user. The %COFC performance is typically related to the competence of the call center staff and the quality of the information and knowledge systems maintained within the call center. Investing in call center competence or in better systems for the call center can improve %COFC, which means fewer follow-on support calls and hence further cost reduction.
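A quick illustration of that effect, with every figure assumed rather than taken from the post:

```python
# Rough model (all figures assumed): each problem not closed on the first call
# generates, on average, one follow-up call.
monthly_problems = 10_000
cost_per_call = 12.0   # assumed blended cost of staff time and systems per handled call

def monthly_calls(problems, cofc_rate):
    return problems * (1 + (1 - cofc_rate))

for cofc in (0.60, 0.75, 0.85):
    calls = monthly_calls(monthly_problems, cofc)
    print(f"%COFC {cofc:.0%}: {calls:,.0f} calls/month, ~${calls * cost_per_call:,.0f}")
```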

Increasing Automation
The old saying that the cobbler’s children have no shoes often applies to IT organizations. While IT organizations are typically expert at automating the firm’s business processes, the use of automation within IT organizations is often lagging. Automation can save significant sums of money by improving performance and productivity and by strengthening budget management in an IT organization.
Opportunities to use IT within IT organizations must be evaluated on an equal footing with opportunities to use IT within the business at large. Recent successful automation initiatives include "wired for management" projects that introduce remote control technology to improve the productivity of system administrators by enabling remote PC and server management. Remote management takes the costs associated with travelling out of the equation and dramatically increases the productivity of support engineers. At Intel, most of our servers worldwide are supported from a single operations support center, and this has resulted in significant cost savings for our company.
Many IT organizations use hand-crafted Unix scripts for remote monitoring. However, IT organizations are increasingly using vendor tools for asset and performance management. At a glance, these tools can show the current status and performance history of the firm's worldwide computing infrastructure.
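For context, the kind of hand-crafted monitoring script referred to here is often nothing more elaborate than the following Python sketch; the host names and ports are hypothetical and would normally come from an asset inventory.

```python
import socket
from datetime import datetime

# Hypothetical host list; in practice this would come from an asset inventory.
HOSTS = [("app01.example.com", 22), ("db01.example.com", 1433), ("web01.example.com", 80)]

def is_reachable(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in HOSTS:
    status = "UP" if is_reachable(host, port) else "DOWN"
    print(f"{datetime.now():%Y-%m-%d %H:%M} {host}:{port} {status}")
```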
The use of computer-aided software engineering (CASE) tools has helped with software development. At the same time, it appears to me that software engineering has some distance to go before it moves from being a craft to an engineering discipline. The reuse of code and automated testing tools have significantly reduced the cost and time spent testing and shortened the time until benefits are realized.
Today’s IT organizations often favor the purchase of packaged software rather than taking on the risk of software development. Forecasting costs and schedules for software development projects is known to be risky and often unreliable. A well-tried vendor solution, while perhaps not an exact fit to the firm’s specification, can be implemented at a known cost for both implementation and support.

Re-Engineering Business Processes
IT-enabled business process re-engineering provides a great opportunity for IT organizations to use IT to transform how the firm’s products and services are delivered and supported. As Michael Hammer wrote many years ago in the Harvard Business Review (1990), “Don’t automate, obliterate.” Hammer’s imperative was to rethink business processes before mindlessly automating them with computing systems. Many business processes can be streamlined, or even eliminated. Thus, a business process re-engineering (BPR) analysis is a necessary precursor to looking for automation alternatives.
ServiceXen IT used BPR to significantly reduce the workload associated with delivering laptops to ServiceXen employees and hence to drive down the total cost of ownership (TCO). While ServiceXen IT had a continuous improvement process in place, in 1999 ServiceXen engineers realized that the more stable environment provided by Windows 2000 and future Windows operating systems would allow IT to implement new components more easily than previous operating systems had.
When ServiceXen IT began monitoring TCO for personal computing resources in 1995, estimates showed that it was not cost-effective for ServiceXen staff to be issued notebook computers compared with desktop systems. Preparing and maintaining a notebook PC for end-user distribution took too long and caused PC delivery to fall behind schedule. A change was needed in the way that notebook PC operating environments were standardized and distributed before TCO could reach parity with desktop machines. As notebook technology evolved and new computing guidelines were implemented within ServiceXen, the lower cost and improved value of issuing and supporting notebook computers emerged.

Variable               1997             2001
Build time             2 hours          1 hour
Development time       8 person-weeks   2 person-weeks
Testing time           2 person-weeks   4 person-days
User base              30,000           70,000
Number of platforms    4 per year       5 per quarter

The net result of continuous improvement and IT-enabled change was that, over a six-year period, the TCO of a laptop decreased significantly and the capability to handle change increased.

Tracking Consumption with a Feedback Loop
When a resource is offered for free, or perceived to be free, it may well be squandered or overused. Implementing a chargeback system for IT resources, also known as consumption-based tracking, can be a significant incentive for behavior change among IT consumers.
For example, many IT organizations manage the cell/mobile phone programs for the enterprise, and in some cases users have limited visibility into the actual cost of the service. If users have no cost measures available to them, they cannot manage their usage. Users will typically become much more sparing in their use of cell phones when they are presented with a monthly report that shows just how expensive international roaming can be. Introducing consumption-based billing can be achieved at a low cost and can have a significant and immediate impact on company IT spending. Chargeback strategies are discussed in greater detail in another post.
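The mechanics of that feedback loop can be as simple as the sketch below: aggregate usage records into a per-user monthly statement. The records, users, and per-minute rates are all hypothetical.

```python
from collections import defaultdict

# Hypothetical call-detail records: (user, call_type, minutes, rate_per_minute)
records = [
    ("asmith", "domestic",      42, 0.05),
    ("asmith", "intl_roaming",  18, 1.90),
    ("bjones", "domestic",     120, 0.05),
    ("bjones", "intl_roaming",   3, 1.90),
]

monthly_cost = defaultdict(float)
for user, call_type, minutes, rate in records:
    monthly_cost[user] += minutes * rate

# The feedback loop: each user (or their cost center) sees their own spend.
for user, cost in sorted(monthly_cost.items()):
    print(f"{user}: ${cost:,.2f} this month")
```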

Considering Outsourcing and Insourcing Alternatives
IT outsourcing is the transfer of an IT function from the firm to an external supplier. The function may be as small as contracting with an outside party to develop a few modules of code, or outsourcing may engulf the entire IT function. Outsourcing is today's term for subcontracting, often on a large scale.
The decision to outsource is linked to an economic theory postulated by Ronald Coase (1937). In its simplest form, Coase's Law says that a firm will expand the scope of its operations until it discovers that it is cheaper to buy a particular product or service in the open market than to build it within the firm. Coase was observing the nascent automotive industry, in which Ford and General Motors initially made electrical components, batteries, tires, and other products that are now seen as peripheral to the automobile.
In considering IT outsourcing, a firm needs to take into account the cost of agency. If a firm delegates acquiring a service to an agent, there will be an agency cost because of the inevitable divergence between the goals of the firm and the goals of the agent. Forecasting the impact of agency cost can be difficult. Agency costs can be minimized in outsourcing scenarios that include pay-for-performance clauses in their contracts.
Outsourcers take advantage of economies of scale; that is, they provide the same or similar services to a number of different companies so as to be cost competitive for each firm served. Thus, outsourcing can be an attractive method of cost reduction. As with any alternative source of services, however, firms need to recognize that there may be costs associated with the transition and risks associated with depending upon external suppliers of any kind. Thus, outsourcing agreements are best negotiated over multi-year terms with trusted partners.
When internal IT resources are constrained, firms can use outsourcing as a method of moving their resources up the IT value chain, while outsourcing the utility-like functions to an external company. Moving up the value chain means that IT employees who were previously delivering utility-like functions can be refocused on higher value-add tasks such as new solutions development. When outsourcing occurs, typically some fraction of the host company's IT organization is retained to manage the outsourcing vendor.
Insourcing is an alternative to outsourcing. Insourcing means contracting a service to a specialist group within the firm. Some firms, including ServiceXen, have created flexible internal workforces that compete with external resources on a project-by-project basis. Insourcing is a strategy that is particularly useful when the external IT labor market is constrained and when external IT resources are expensive.

Shifting Resources to Lower Cost Locations
As firms come under increasing cost pressure, outsourcing and insourcing to lower-cost geographical locations are gaining popularity. However, firms need to look at the total costs and benefits before blindly committing to this route. Be sure to include increased travel and communication costs. Consider quality of service, and weigh how effective staff can be when they are not co-located with the rest of the team. These expenses and risks can quickly eat into the benefits delivered by cheaper labor and lower facilities costs in a lower-cost geography.
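A simple way to keep this honest is a total-cost model along the following lines. Every figure here is an assumption for illustration and should be replaced with local data.

```python
# Back-of-envelope total-cost comparison for a 20-person team.
# All inputs are assumptions for illustration only.
team_size = 20
onshore_cost_per_head  = 120_000   # fully loaded annual cost, local staff
offshore_cost_per_head = 45_000    # fully loaded annual cost, lower-cost geography

# Overheads that can erode the headline labor savings.
extra_travel_per_year       = 150_000   # periodic on-site visits
extra_comms_and_tools       = 60_000    # network links, collaboration tooling, coordination
productivity_penalty_factor = 0.10      # effectiveness lost when teams are not co-located

gross_saving = team_size * (onshore_cost_per_head - offshore_cost_per_head)
overheads = (extra_travel_per_year + extra_comms_and_tools
             + productivity_penalty_factor * team_size * offshore_cost_per_head)

print(f"Gross labor saving: ${gross_saving:,.0f}")
print(f"Added overheads:    ${overheads:,.0f}")
print(f"Net annual saving:  ${gross_saving - overheads:,.0f}")
```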

Substituting Disruptive Technologies
Computing infrastructure in an enterprise can be described as an integration of a variety of processing, network, and storage components. IT organizations hope to optimize these components to deliver the best overall performing platform at the lowest cost.
While few empirical studies exist, IT architects certainly try to choose, over time, the most cost-effective mix of IT components that can provide reliable and flexible IT services. The challenge is similar to solving a collection of simultaneous equations on an ongoing basis. IT planners have only limited algorithmic techniques in place with which to estimate demand and model the suitability of various configurations to meet that demand. While incremental substitution of IT system components can provide modest ongoing cost savings, breakthroughs in cost reduction can be achieved when so-called disruptive technologies are introduced.
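One way to picture the "simultaneous equations" analogy is as a small linear program: choose how many units of each component to buy so that demand is covered at minimum cost. The component types, costs, capacities, and demand figures below are all made up, and the post does not prescribe this or any other technique.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical component options and the capacity each unit provides.
cost = np.array([4000, 9000, 1500])   # incumbent server, large server, low-cost "disruptive" node
capacity = np.array([
    [10, 30, 6],    # compute units delivered per box
    [ 2,  8, 4],    # storage units delivered per box
])
demand = np.array([600, 200])         # compute and storage the firm must cover

# Minimize total cost subject to capacity @ x >= demand (written as -capacity @ x <= -demand).
result = linprog(cost, A_ub=-capacity, b_ub=-demand,
                 bounds=[(0, None)] * len(cost), method="highs")
print("units of each component:", np.round(result.x, 1))
print("total cost:", round(result.fun))
```

Dropping in a cheaper "disruptive" column and re-solving shows how quickly the optimal mix, and the total cost, can shift.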
According to Clayton Christensen, disruptive technologies are those technologies that appear in the marketplace as low-cost, low-performance alternatives and grow in capability to displace incumbent technologies. The IBM PC based on an Intel microprocessor is, of course, a prime example. Disruptive technologies cause turmoil in existing markets and effectively change the rules of the game for a class of products and a collection of suppliers. I believe that IT organizations should proactively seek out, identify, and rapidly exploit the opportunities offered by lower cost disruptive technologies.

Design for Operations
Our battle-against-cost approach was predicated on a key insight: we can influence costs most before a technology is deployed. Once it is deployed, the options for cost reduction, and their potential impact, are much more limited.
This is what I would call a design-for-operations (DFO) approach: designing solutions to minimize operating costs before the solutions are deployed. The DFO approach is based on more widely known engineering techniques, particularly design for manufacturing (DFM). The DFM approach encourages engineers to design products so that manufacturing complexity and cost are minimized. Similarly, the DFO approach for IT solutions and infrastructure means that IT architects should aim to minimize maintenance and operational costs at the outset, when designing IT architectures and solutions.
Addressing structural issues is primarily about changing the architecture of the infrastructure and solutions. One example is using wireless LANs instead of wired LANs in both new and old buildings. Other examples include using the Internet and virtual private network (VPN) technology to offer secure, low-cost wide area networking in place of dial-up access.
Another way to reduce structural costs is to increase device densities in data centers. Increased density can be achieved by using blade servers rather than standalone or rack-mounted servers. Greater density reduces facilities costs and can also enable data-center consolidation. The move to high-density began with a migration to rack-mount servers, which is still a common approach. Blade servers are the next step in the evolution of dense rack-mounted processors.
Increasing density in the data center is an excellent example of the design-for-operations approach. When a server blade fails, it can easily be replaced. The reduced need for large facilities space, combined with maintenance and operations savings, can add up to significant savings for firms with multiple large data centers. Although experiences will vary, it is reasonable to expect that the repackaging of server farms into blade servers could save an organization between $500,000 and $1 million per rack. As I shall continue to emphasize, reducing operations costs allows funds to be diverted to other investment areas.
