

Quantitative IT Portfolio Management

C. Verhoef
Free University of Amsterdam, Department of Mathematics and Computer Science,

De Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands
x@cs.vu.nl

Abstract:

We present a quantitative approach for IT portfolio management. It is an approach that CMM level 1 organizations can use to obtain a corporate-wide impression of the state of their total IT portfolio, of how IT costs spent today project into the budgets of tomorrow, of how to assess important risks residing in an IT portfolio, and to explore what-if scenarios for future IT investments. Our quantitative approach enables assessments of proposals from business units, risk calculations, cost comparisons, estimations of the TCO of entire IT portfolios, and more. Our approach has been applied to several organizations, each with annual multibillion-dollar IT budgets, and has been instrumental for executives in coming to grips with the largest production factor in their organizations: information technology.

Introduction

It is known from extensive research conducted by the former CIO of the US Department of Defense, Paul A. Strassmann [119,120,123,124], that there is no relation between information management spending per employee and return on shareholder equity. There is also no relation between profits and annual IT spending. In short, he shows that there is no direct relation between spending on computers and profits or productivity. Indeed, there are companies--in the same industry--each spending about the same on IT, of which one makes high profits and the other makes huge losses [125]. This leads to shotgun patterns showing the absence of correlations between any kind of return and the intensity of IT investments. The only vague correlation Strassmann ever found was that when, of two comparable enterprises, one spends slightly less than the other, the less-spending organization does slightly better. This loose correlation leads one to suspect that governance of IT investments helps to create value with IT instead of destroying profits. Indeed, a continuous stream of reports on value destruction eventually led to the so-called Clinger Cohen Act [46], which makes the CIO's job critical in ensuring that the mandates of the Act are implemented. This includes that IT investments (we quote from [46]):

Reflect a portfolio management approach where decisions on whether to invest in IT are based on potential return, and decisions to terminate or make additional investments are based on performance much like an investment broker is measured and rewarded based on managing risk and achieving results

The US Government has to come to grips with IT portfolio management: any acquisition program for a mission-critical or mission-essential IT system for the US Government must be developed in accordance with the Clinger Cohen Act of 1996. To that end, the US General Accounting Office proposed a framework for initiating and maturing IT investment management [90]. But also outside the public sector there is increasing interest in deploying IT portfolio management. In a 2002 survey among 400+ top IT executives, 60% reported an increase in the pressure to prove ROI on IT investments. But 70% believe their metrics do not fully capture the value of IT, and nearly half lack confidence in their ability to accurately calculate ROI on IT investments [71]. The Federal CIO Council summarized the first lessons learned for IT portfolio management in 2002. That report does not mention quantitative approaches to manage IT portfolios [24], and should be seen as complementary to our work: the lessons learned provide a first insight into implementing the qualitative aspects of IT portfolio management in organizations. This paper deals with quantitative IT portfolio management. In particular, we consider quantitative aspects of IT development, operations, maintenance, enhancement, and renovation for bespoke software systems. For other possible contents of an IT portfolio, such as license management for COTS systems, processing hardware infrastructure, network equipment, and so on, tools and techniques are available to deal with them. For instance, several companies specialize in license management, and hardware/network infrastructure investment issues are better understood than software cost issues. In addition, Strassmann indicates that a focus on hardware costs is misplaced: on average hardware accounts for 5% of the life-cycle cost of information management, so it is not the dominating factor [122, p. 409]. Bespoke software is our main focus. To the best of our knowledge, quantitative IT portfolio management--the subject of this paper--is terra incognita.

Executives of large organizations with substantial IT budgets learned the hard way that spending more is not the winning strategy. Some of them realized, after a long string of staggering IT investments plus their challenges, that they must start to control their IT portfolios. Most executives consider IT spending a black hole: no matter how many resources are thrown at IT, there is no clear justification of returns--IT is on top of the executives. We consulted executives about a variety of investment-related issues. For obvious reasons this consultancy was done under nondisclosure agreements. To give you an idea of the work, think of company-wide IT portfolio analyses, assessing current IT investments, projecting probable consequences of major IT investments for the IT budget in the coming years, quantifying IT portfolio risks (failure risks, cost overruns, underbudgeting risks), assessing the likelihood of added value of IT investments, assessing the IT portfolio of a potential target for a merger or acquisition, calculating minimal ROI thresholds for an IT portfolio, etc. Based on these experiences, we developed and used formulas that form a mathematical underpinning of quantitative IT portfolio management. In other words: formulas that help in getting on top of IT. The examples we treat to explain these formulas are composed for that purpose, and do not relate to specific companies.

The bad news for many executives is that the area of software development is fairly immature. The failure rates of software projects are high: about 30% of software projects fail, 50% are twice as expensive, take twice as much time, and deliver half the functionality, and only 20% of software projects are on time, within budget, and with the desired functionality [59,47,48,49]. In absolute figures, the Standish Group estimated in 1995 that this cost the USA $81 billion in failed projects, and another $59 billion in serious cost overruns. Immaturity is further illustrated by the fact that 75% of the organizations worldwide are still at the lowest level of the Capability Maturity Model (CMM), a five-point scale developed by the SEI [93] indicating the maturity of an organization's software process.


 
Table 1: Distribution of organizations over CMM levels.
CMM level Meaning Frequency of Occurrence (%)
1 = Initial Chaotic 75.0
2 = Repeatable Marginal 15.0
3 = Defined Adequate 8.0
4 = Managed Good to Excellent 1.5
5 = Optimizing State of the art 0.5

In Table 1, taken from [66, p. 30], we show a recent distribution of organizations over CMM levels. As you can see, the majority of organizations have chaotic processes to build and maintain software. Level 1 means that no repeatable process is in place; in particular, there is no overall metrics and measurement program. In a 2001 survey, 200 CIOs from Global 2000 companies were asked about their measurement programs. More than half (56%) said they did high-level reporting on IT financials and key initiatives. Only 11% said they had a full program of metrics to represent IT efficiency and effectiveness, and the rest (33%) said they had no measurement program at all [109]. This shows that the number of organizations without a proper metrics program is huge.

This implies that there is no substantial IT-related historical data to build a corporate governance structure on. No wonder it is hard to detect relations between IT spending and return on investment. Imagine the consequences for an enterprise whose financial department is anarchic. The problem for executives of level 1 organizations who want to get in control of their software portfolios now becomes very compelling: how do you measure anything at the corporate level if there is no relevant data available from which to accumulate management information?

Our approach to quantitative IT portfolio management provides you with insight into an IT portfolio, even in the case of level 1 organizations. A level 1 organization has to compensate for the lack of historical information by deploying benchmark information. We developed a set of mathematical formulas based on public benchmark information to quantitatively manage IT portfolios. When you have historical data and can establish internal benchmarks, you can use these to instantiate our formulas.

The simplest formulas can be handled by spreadsheets or a pocket calculator. For the more involved analyses, a scientific calculator like the HP49G or the TI89 can be used. For advanced issues, it is convenient to use statistical/mathematical packages ranging from spreadsheets with plugins to packages like SAS [31], SPSS [114], SPlus [129,73], Matlab [81], Maple [135], or Mathematica [132].

The set of formulas has proven to be a useful armature for executives to analyze, assess, and control IT portfolio issues, and to counteract bombardments of jargon-ridden empty promises from the trade press, software vendors, consultants, or proposals from their own internal IT departments. These formulas comprise relevant benchmarked IT-project information, which is often not provided to executives, if only because the IT jargoneers have no clue how to come up with the information themselves. To use these formulas successfully you do not need to acquire extensive IT knowledge: we have completely hidden this aspect (but it is incorporated via benchmark information). The typical academic qualification of the executive staff that we dealt with is an MBA combined with an MSc or PhD in an exact science: think of economics, econometrics, astrophysics, experimental and theoretical physics, chemistry, biochemistry, mathematics, financial mathematics, business mathematics, statistics, biostatistics, electrical engineering, and so on. We hardly ever encountered educational backgrounds with a strong focus on IT, and when executives had such a background, it was combined with degrees in other exact sciences. Executive staff who were exposed to our formulas could understand them, could work with them, and were helped by them in that they lost their naivety with respect to IT decision making.

Does IT portfolio management pay off? In one organization the initial investment in research and development, plus the inevitable errors made during this endeavor, was about $250,000. An additional $120,000 was needed for training. In this organization, the effort to keep the IT portfolio database up to date costs about $50,000 per annum. The return depends on the size of the IT portfolio measured in dollars. For a 500 million dollar IT portfolio, direct cost savings per annum of 3-5% of the total portfolio value were established by better decision making: killing badly performing projects, mitigating risks, abandoning negative-ROI investments, removing system redundancies, etc. As reported in CIO Magazine: just compiling an IT portfolio database saved one company $3 million and another company $4.5 million, because the holistic IT portfolio view enabled them to spot redundancies [5]. Those redundancies could be eliminated, reaping the benefits.
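To get a feel for these orders of magnitude, the following back-of-the-envelope sketch compares the one-off and recurring costs mentioned above with the reported savings range. It is an illustration with the figures quoted in this paragraph, not an exact business case:

\begin{verbatim}
# Back-of-the-envelope payoff check, using the figures quoted above.
initial = 250_000 + 120_000        # R&D plus training (one-off)
recurring = 50_000                 # keeping the portfolio database up to date
portfolio = 500_000_000            # a 500 million dollar IT portfolio
for rate in (0.03, 0.05):          # reported savings of 3-5% per annum
    savings = rate * portfolio
    print(rate, savings, savings / (initial + recurring))
# First-year savings of $15M-$25M against roughly $0.42M of costs,
# i.e. a return of several dozen times the outlay.
\end{verbatim}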

For organizations above level 1, the accuracy of the underlying benchmarks can be significantly improved, and the underlying mathematics as presented in this paper can be instantiated accordingly; the principles of quantitative IT portfolio management remain unchanged. For the rest of this paper we will assume the level 1 situation, so that the majority of current organizations can apply our results, yet where appropriate we point out how our work applies to CMM 2+ levels.

How to tell a program from a security

In the 1970s, it was hard for many to grasp the difference between hardware and software maintenance, and Leslie Lamport explained this in a short note [74] entitled How to tell a program from an automobile. We borrowed this title to explain that you cannot simply apply security portfolio management to IT portfolios.

As far as we know there is no related work reported in the realm of quantitative IT portfolio management, although many of us think that security portfolio management is strongly related. The reason is that the goals of security portfolio management and IT portfolio management are largely identical. We will show that the means to reach this common goal do not coincide.

A lot of important work has been done in quantitative security portfolio management. At first sight it seems a promising idea to support the quantitative part of IT portfolio management with the theory that 1990 Nobel Laureate Harry Markowitz developed on security portfolio management [80]. We heard this from several executives who were exposed to his work, but also the US Federal CIO Council seems to play with this idea [24]. Moreover, the trade press quotes people who think that applying this so-called modern portfolio theory is the noble endgame of IT portfolio management [5]. But what is this theory about? In the words of Markowitz [80, p. 205]:

Thus, in large part, this monograph is a discussion of the relationships and procedures by which information about securities is transformed into conclusions about portfolios.

The current paper is also a discussion of the relationships and procedures by which information about IT systems is transformed into conclusions about IT portfolios.

Markowitz investigated the available information about securities and the questions that need an answer, and found the appropriate mathematical techniques to provide sensible answers to the problems. In the current paper we do exactly the same: we gather and investigate the data that is commonly available on IT projects, we investigate the questions that need an answer, and by working on the answers, we relate the problems to the appropriate mathematical techniques to provide sensible answers.

Given these parallels, it may be hard for the financial expert, who might never have been exposed to information technology (by building, maintaining, operating, enhancing, or retiring IT), to tell the difference between a program and a security. Likewise, for the IT expert, who might never have been exposed to the nature of securities (by selling them, buying them, composing them into portfolios, or advising others about these issues), it might be hard to tell the difference as well. As one executive put it [5]:

The cancellation rate of the largest IT projects exceeds the default rate on the worst junk bonds. And the junk bonds have lots of [portfolio management tools] applied to them. IT investments are huge, risky investments. It's time we do [Markowitz's portfolio management].

So they both are high risk investments, and another seeming parallel is implied. In the same article an IT portfolio manager said she had learned that some people argue against using modern portfolio theory since it cannot be overlaid exactly, and that the math is too difficult [5]. But what kind of math is being used in Markowitz's book [80]? As Markowitz states it [80, p. 186]:

The problem of maximizing expected return subject to linear constraints, [..] is a linear programming problem. [..] the problem of finding the portfolio whose smallest [return at time t] is as large as possible can be formulated as a linear programming problem.

The nature of securities is that you can select them, you can calculate their historical return, and based on these data you can use linear programming to calculate the optimal selection that maximizes expected return while minimizing variance through diversification. The underlying mathematics is elementary linear algebra, plus elementary statistics, and the use of standard linear programming techniques such as the simplex method. While this is not immediately digestible for the uninitiated, the math is not inherently complex. So, we agree that the complexity of the math should not be an argument against using modern portfolio theory as proposed by Markowitz. This is obviously also supported by the author of [5], given the supportive title of the article: Do the MATH.
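To illustrate how little machinery a basic portfolio selection requires, the sketch below computes a minimum-variance portfolio in closed form for three hypothetical securities. This is a simplified textbook variant (no short-sale or target-return constraints), not Markowitz's full linear-programming formulation, and the expected returns and covariances are invented purely for illustration:

\begin{verbatim}
# Minimum-variance portfolio sketch: weights proportional to C^{-1} 1,
# normalized to sum to one. All data below is invented for illustration.
import numpy as np

mu = np.array([0.08, 0.05, 0.11])          # assumed expected returns
C = np.array([[0.04, 0.01, 0.00],          # assumed covariance matrix
              [0.01, 0.02, 0.00],
              [0.00, 0.00, 0.09]])

ones = np.ones(len(mu))
w = np.linalg.solve(C, ones)
w /= w.sum()                               # portfolio weights
print(w, w @ mu, np.sqrt(w @ C @ w))       # weights, expected return, risk
\end{verbatim}

The point is not that this is the right tool for IT portfolios--the rest of this section argues that it is not--but that the mathematics itself poses no barrier.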

But what about this overlay? The nature of securities is that you can invest and disinvest in them and implement a decision without prohibitively large costs, often in a matter of seconds. Modern portfolio theory essentially assumes that investing and disinvesting is possible at all times. The heart of security portfolio management is to monitor, control, and optimize the security selection process.

The Y2K problem and the Euro conversion problem have clearly shown that you cannot simply junk IT systems, as is possible with underperforming securities. If you had applied modern portfolio theory to IT systems suffering from the Y2K problem, you would have had to sell them and buy better performing ones. For a start, to whom would you sell an IT system, especially an IT system whose best-before date is due? Maybe the competitor wants to buy it, if only to reverse engineer it to reveal your successful business rules, or the features that you cannot implement, so they can get ahead of you. Obviously, this is not an option, but even throwing such systems away and restarting from scratch with new systems is not an option. For IT embodies business knowledge, often accumulated over many years. Converting such business knowledge into IT is a laborious, error-prone, and painful process, so abandoning all that valuable knowledge is like burning an entire profitable security portfolio. Moreover, implementing such IT systems takes years. So the whole idea that there would be a free selection process, that companies are totally free to choose in which IT they want to invest, that organizations can abandon systems like junking bonds, is not in accord with the realities of large software portfolios.

Let us give a typical example showing the nature of IT investments, and the freedom to choose. Suppose you have a good idea, like a credit card, or an ATM. Both inventions are IT intensive, and both ideas were implemented by leading financial corporations with deep pockets. In the beginning, they created value both for themselves and for their customers. But then the competitors united and answered with commoditized shared credit cards and a shared ATM network--not shared with the initiator, of course. While still creating value for the customers, this competitive answer destroyed not only the profits for the initiators, but also the margins for the entire market. The discretionary IT investment thus turned into a commodity, and now you cannot operate a financial corporation without credit cards or ATMs. They have become must-haves while the profit margins are gone--in some cases you even make a loss because of them. So you are no longer free to disinvest in credit cards or ATMs, since the innovation has spread throughout the industry and customers insist on these services. This is sometimes called creating value while destroying profits, as explained in detail for the banking sector in [118]. But according to Markowitz's portfolio theory, these are portfolios to disinvest in: they cost money and do not deliver any positive return. Again, Markowitz's theory is a bad advisor here, because these types of projects are really non-discretionary.

Let us give another example to make this apparent. Once you have opted for a particular operating system and accompanying languages, you can no longer easily switch. Once you select a technology for your IT, you cannot easily abandon it. In the words of Michael Porter: there are low entry barriers, but high exit barriers [97, p. 22]. IT systems often make essential use of the idiosyncrasies of the underlying operating system, which explains the prohibitively high switching costs. This is not the case with securities. Also, people work with IT systems and have to learn a new one, which is not the case for a security. So the switching costs for users of the IT systems are high as well. Moreover, switching to a different computer language or even another dialect is hardly possible for existing software assets [126]. When attempting to convert to another language, you also need to convert your IT personnel successfully. A 2001 Gartner report shows that converting a Cobol developer into a professional Java developer has a failure chance of about 60% [37, p. 22]. Also, a change of technology often implies a change in business process. This has proven to be a challenge in itself, with a high failure rate [113]. So information technology becomes illiquid after a first selection, as opposed to securities or other financial assets that do have liquidity. Changing information technology carries large risks, is extremely hard both technically and organizationally, and takes huge amounts of time. So again in this situation, Markowitz's portfolio theory is not an adequate tool to support IT decision making.

Also the issue of diversification does not simply transpose to the IT situation. In order to mitigate risks and stabilize expected portfolio return, a diverse security portfolio is a good idea, and Markowitz gives the underlying mathematical tools to optimize this situation. What does diversification mean in the realm of IT? Should we refrain from many similar systems and make the IT systems more different? For instance, by using more languages, different operating systems, a host of development and maintenance tools? For IT you often try to do the opposite: consolidate different but similar systems into a single overall system--a product line that deals successfully with the variation points of the similar systems [10,22]. Also, standardization is a keyword in the IT branch: use only a few languages, a limited number of operating systems, and a few support tools. So it is not a good idea to diversify in a technical sense, because of knowledge investment, transfer, complexity, etc. Apart from these technical aspects, there is also a business aspect. If you are a company building integrated radar systems for warships, then you are building those, and not enterprise resource planning systems for the automotive industry. To implement both types of systems, you need a lot of domain knowledge of both areas; e.g., for the warships you need to know about developing software under military regulations, whereas for the automotive industry entirely different issues are at stake. It is simply not very productive to combine uncorrelated IT domains. So here, too, you see that notions that are natural and relevant for security portfolio management are without rhyme or reason for IT portfolio management.

Another aspect is that for securities there is a rich body of historical information. In contrast, many IT systems lack all historical information, and an entire branch of software engineering--reverse engineering--is devoted to coping with this problem [20]. The information that is around is often out of date, since deployed IT systems are in continuous flux, and IT developers are not stars at writing documentation. While the value of a security may fluctuate on the stock exchange, the object itself does not change a bit. So the nature of the available information is different, the nature of the objects is totally different, but also the relevant questions are different. Markowitz's modern portfolio theory focuses on selection as a tool to minimize risk and maximize return, while IT portfolio management is not at all about selecting and abandoning. So modern portfolio theory as proposed by Markowitz is not at all applicable to IT portfolio management. Therefore, we are not surprised to read in the same article that advocates the use of Modern Portfolio Theory (MPT) [5]:

Who's using Markowitz's Modern Portfolio Theory in IT? Not many.

Simply because it is not applicable. In paper [3] it was also observed that there are problems with applying Markowitz's modern portfolio theory:

However, we have not yet seen any example of a substantial software project actually using these techniques to help in their decision making process. We attribute this to the fact that obtaining economic data in software projects is much harder than in financial markets from where these techniques have been borrowed.

The idea that the data is the problem is erroneous, as we will see later on. The reason is that the nature of software does not resemble the nature of a security.

Not only at the portfolio level but also at the single system level, people are thinking of using modern portfolio theory. As an example, we quote a paper [16] investigating the potential of using modern portfolio theory to guide decisions in software engineering:

We view each software activity as an investment opportunity (or security), the benefit from the activity as the return on investment, and the allocation problem as one of selecting the ``optimal'' portfolio of securities.

The paper claims that many decisions are ad hoc, and that portfolio selection could improve that situation. But then the subjects of selection should comply with modern portfolio theory. According to modern portfolio theory, you can lower risks with large numbers of uncorrelated securities. We quote Markowitz [80, p. 102]:

We see that diversification is extremely powerful when outcomes are uncorrelated. [..] To understand the general properties of large portfolios we must consider the averaging together of large numbers of highly correlated outcomes. We find that diversification is much less powerful in this case. Only a limited reduction in variability can be achieved by increasing the number of securities in a portfolio.

This leaves us with two questions to answer: Is the number of activities large? And are these activities uncorrelated? According to the activity-based cost estimation literature, there are at most 25 main activities in software development and deployment [66]. Moreover, many of these activities are correlated, and if they are not correlated, there is no free choice. Analysis, design, coding, testing, and operations: they are all strongly correlated. While you can drop analysis and design, it is well-known that this does not lead to the best possible IT systems. You do not need MPT to decide on these issues. So MPT does not transpose that easily to the software world. The authors of paper [16] admit in a subsequent paper [17] that applying MPT did not work out:

We have been attempting to apply financial portfolio analysis techniques to the task of selecting an application-appropriate suite of security technologies from the technologies available in the marketplace. The problem structures are sufficiently similar that the intuitive guidance is encouraging. However, the analysis techniques of portfolio analysis assume precise quantitative data of a sort that we cannot realistically expect to obtain for the security applications. This will be a common challenge in applying quantitative economic models to software engineering problems, and we consider ways to address the mismatch.

The authors seem to think that MPT is the solution, and the problem is missing data. This is not true. But their findings confirm the fact that 75% of organizations are not mature with respect to their software development and deployment [66]. The number of main activities is so low and their correlation so high that applying MPT is senseless. Moreover, there are alternative approaches to select activities or technologies. For instance, in [82] an extensive list of best practices is presented, each with an indication of entry barriers and of the risk of applying the technology. In [65] a table is presented providing the approximate return on investment of deploying certain software technologies. In [63,66] an elaborate treatise of success and failure factors for development and deployment of software in various categories is given, backed with quantitative data.

There are many more differences that come to mind, but we hope you get the idea by now: a program is not a security, it will never become one, and better data is not going to help. The goal of this paper is not merely to argue that security portfolio management does not map one-to-one onto IT portfolio management, but to provide you with the mathematical underpinning that does enable quantitative IT portfolio management. It all starts with collecting the available information, in the hope that this information can be used to answer questions relevant for quantitative IT portfolio management.

Gathering information

Since level 1 organizations have chaotic software development and deployment, not a lot of relevant IT portfolio information is readily available, let alone uniformly accessible. We list a few possibilities for information availability, ranging from the ideal situation to the worst possible case.

Except for the ideal situation, we have experienced all of the above cases--or combinations of them--in every substantial IT portfolio. Mostly, we were able to somehow derive the single most important IT-specific key indicator underlying the quantitative IT portfolio management formulas: the number of function points [1] for each application in the IT portfolio. We will indicate shortly how we do this.

Function points

A function point is a synthetic measure expressing the amount of functionality of a software system. It is programming language independent, and it is very well suited for cost estimation, comparisons, benchmarking, and therefore also a suitable tool for developing quantitative methods to manage IT portfolios [108,66]. In a textbook on function point analysis [40, p. xvii, 28] we can read:

As this book points out, almost all of the international software benchmark studies published over the past 10 years utilize function point analysis. The pragmatic reason for this is that among all known software metrics, only function points can actually measure economic productivity or measure defect volumes in software requirements, design, and user documentation as well as measuring coding defects. [..] Function points are an effective, and the best available, unit-of-work measure. They meet the acceptable criteria of being a normalized metric that can be used consistently and with an acceptable degree of accuracy.

Function points have proven to be a widely accepted metric, both by practitioners and by academic researchers [70, p. 1011]. For executives it is important to know how reliable these metrics are. In [33, p. 132] an accuracy of 15-20% is mentioned, as well as a 2300% variance in productivity when other metrics were used; the latter was due solely to extremely wide variations in 7 definitions of the number of source lines of code (SLOC) of a computer program. In addition, you need to know the so-called interrater reliability of accredited function point analysts. Interrater reliability is the consistency between raters: if the variation is high, the method is not as good as when the variation between raters is low. Moreover, since there are several methods to count function points, what is the intermethod reliability? Empirical research reports that the median difference in function point counts from pairs of raters using Albrecht's function point counting method was about 12%, and the correlation across the two different methods used in this field study was 0.95 [69, p. 88]. So function points can be seen as an objective measure, and the intermethod reliability is sufficiently high to allow comparing function point totals that resulted from more than one counting method. Apart from people, you can also use tools to count function points for existing software assets. One method, called backfiring, has an accuracy of about 20% [66, p. 79]. One of our students developed for a large financial enterprise a function point counting tool that automatically counts function points with a maximal deviation of 3% from manually counted function points by accredited function point analysts [86]. This is a language-specific tool and only counts function points for information systems in Cobol with SQL and/or CICS on a mainframe.

To reassure you at this point: there is no need to understand the details of function points in order to use our results. We use them for several purposes: to recover missing management information, to derive some of our formulas, and to check our projections against the actual number of function points of an application (if the function point totals are known already). In this paper we mostly encapsulate the IT-specific function point metric and derive formulas solely expressed in the language of management: cost, project duration, staff size, risk, return, etc. What is important to know for now is that function points are a suitable basis for quantitative IT portfolio management.

Recovering management information

For some systems, neither management information nor up-to-date documentation is available. In that case we have to resort to the IT artifacts themselves to infer the management information that we minimally need for quantitative IT portfolio management. For the majority of these systems, the source code is available. The number of function points can then be estimated using backfiring [62]: with a tool we count the number of logical computer statements, and depending on the language used, the number of function points can be estimated via a conversion table. For instance, a system with 100,000 Cobol statements is 937 function points according to benchmark: via extensive benchmarks it has been empirically found that 1 function point of software is equivalent to 106.7 Cobol statements. For the roughly 500 languages in use worldwide there is a list of such conversion factors, and there are tools available that implement backfiring. Backfiring has a somewhat larger margin of error than other techniques; nevertheless it suffices for recovering function point totals for the often small part of the IT portfolio lacking all management information. If the number of systems without any kind of management information is large, we need to resort to more accurate tools that scan the source code to calculate the number of function points (e.g., the tools in [86]).
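As a minimal sketch of backfiring, the fragment below converts logical statement counts into approximate function point totals. Only the two conversion factors quoted in this paper (Cobol and IBM mainframe assembler) are filled in; factors for other languages come from the published backfiring tables:

\begin{verbatim}
# Backfiring sketch: estimate function points from logical statement counts.
# Only the conversion factors quoted in the text are included here.
STATEMENTS_PER_FP = {
    "cobol": 106.7,            # 1 function point ~ 106.7 Cobol statements
    "assembler_370": 320.0,    # IBM mainframe assembler
}

def backfire(statements, language):
    return statements / STATEMENTS_PER_FP[language]

print(backfire(100_000, "cobol"))          # ~937 function points
print(backfire(300_000, "assembler_370"))  # ~937 function points as well
\end{verbatim}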

Usually about 5% of an IT portfolio lacks not only all management information but also the source code itself [65, p. 129]. Then we use a tool called a disassembler, which turns the binary code into a list of assembler instructions. For assembler instructions we can again use the backfiring approach: for a variety of assembly languages conversion factors are available. So, for example, a load module on an IBM mainframe lacking the sources, which consists of 300,000 assembler instructions after disassembly, is also about 937 function points (the conversion factor for IBM mainframe assembler/3X0 is 320). If the amount of lost source code is substantial, then you need more sophisticated tools. They are available, in case you wondered [21,38]. But when you are missing the majority of the source code, you have other priorities than quantitative IT portfolio management. For example, in one case an enterprise-wide inventory to set up quantitative IT portfolio management revealed a huge exposure: we detected a business unit where more than half of the IT systems could no longer be recompiled, due to lacking source code. The exposure was unacceptable since the systems needed a Y2K update, which was almost impossible without source code. Executive management immediately acted to mitigate the risks.

So if there is only source code or an executable, we can recover function point information, and from that we can infer management information, as we will show later on. As an aside, you can already see why function points are so well suited for IT portfolio management: we can compare assembler projects to Cobol projects without any problem. The conversion factors 106.7 and 320 in fact express that one Cobol statement is about 2.99 assembler instructions. Just imagine function points to be a universal IT-currency converter between different projects.

Enriching management information

Suppose that the burdened daily compensation rate of IT personnel is $1,000, with 200 working days per year. If only the total development costs are known, then we can infer more information using our formulas (we discuss them shortly). Namely, for a 5 million dollar IT project it is likely that the project took 20 calendar months with about 15 people involved. Of course, we cannot be sure about this: it is derived using public benchmarks. We can check it with an additional function point count. For instance, if we counted about 3000 function points, we can check that this should have cost 22 calendar months, for 15 people. But if it was only a 1000 function point project, the costs were probably too high. It would be ideal to have function point totals for the entire IT portfolio, but since this implies physical access to source code, it is not a short-term viable option for globally operating companies wishing to jump-start quantitative IT portfolio management. In the long term, collecting function point totals for large parts of the IT portfolio is feasible.
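A sketch of such a plausibility check is shown below. It uses the MIS schedule benchmark and formula 3 that are derived later in this paper, instantiated with the example rates of $1,000 per day and 200 working days per year; the function point counts are the illustrative ones from this paragraph:

\begin{verbatim}
# Plausibility check sketch: compare the duration implied by a project's
# known cost with the duration implied by a counted function point total.
r, w = 1000.0, 200.0

def duration_from_cost(c):          # formula 3, derived later in the paper
    return (1800.0 * c / (w * r)) ** 0.28

def duration_from_fp(f):            # MIS schedule benchmark f**0.39 = d
    return f ** 0.39

print(round(duration_from_cost(5e6)))       # ~20 months for a $5M project
print(round(duration_from_fp(3000), 1))     # ~22.7 months for 3000 FP: plausible
print(round(duration_from_fp(1000), 1))     # ~14.8 months for 1000 FP: $5M looks high
\end{verbatim}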

In most organizations we have encountered the following situation: both project duration and development costs are accessible without too much trouble. With this information we can calculate the cost according to benchmark and compare it with the actual cost. As a rule of thumb, you need to cross-check in a few cases:

Of course, the more management information is available, the less information you have to infer, the more you can use the formulas to validate the provided data, and the better you can infer company-specific internal benchmarks and formulas. The more accurate the data underlying our mathematics becomes, the more accurate your quantitative IT portfolio management, up to the point where you can continuously control and monitor past, present, and future IT costs and benefits in your IT portfolio.

Discounting costs

If the cost of a project was $100,000 twenty years ago, then this project is completely different from a $100,000 project today. Obviously, inflation is a factor that should be taken into account when dealing with costs over time. As a side remark, deflation and currency reforms should be taken into account as well. For instance, in some South American countries, depending on the age of a project, you may have to divide the numbers by 1,000 due to a currency reform.

Current dollars, then-dollars, future dollars, currency conversions, and notions like (risk-adjusted) discounted cash flows are well-known issues within economics. They deal with correcting for the difference in dollars over time. If you eliminate this aspect by converting all our formulas to effort-time analyses--formulas in terms of staff instead of costs--you exchange discounting IT dollars for discounting IT productivity. This is more accurate than discounting cash flows. But CMM level 1 organizations do not know their own productivity, do not have historical data on past productivity, and cannot predict future productivity, so they cannot discount effort using IT productivity. Higher-level CMM organizations can discount in this way, and have the potential to eliminate the additional problem of discounting cash flows. CMM level 1 organizations can use as a surrogate the appropriate corrective transformations known from economics. In this paper we abstract from these conversions, although they are necessary for practical long-term estimates. The reason is that we want to unravel and expose the unknown territory of IT portfolio management. Once this is unraveled, we can apply the known economic tools and techniques to discount the cash flows.
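For completeness, the standard correction itself is a one-liner. The sketch below converts a historical project cost into today's dollars under an assumed constant annual inflation rate; the rate and amounts are invented for illustration, and in practice you would use the actual price indices for the relevant years and currencies:

\begin{verbatim}
# Discounting sketch: express a historical cost in today's dollars, assuming
# a constant annual inflation rate (illustrative figures only).
def to_current_dollars(cost, years_ago, inflation=0.03):
    return cost * (1.0 + inflation) ** years_ago

print(round(to_current_dollars(100_000, 20)))   # ~181,000 today's dollars at 3% p.a.
\end{verbatim}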

For large global IT portfolios you sometimes use a mix: for those countries where the discounted cash flows are well known, with larger accuracy than IT productivity, it is better to discount cash flows. But if it is problematic to discount the cash flows, then taking IT productivity constant over time is a viable alternative: given Brooks' observation that there is no single technology that can boost programmer productivity by an order of magnitude [13], and given a constant stream of data, programmer productivity does not change wildly from one year to another in CMM level 1 organizations.

Compiling an IT portfolio database

We showed a few methods to recover function points from software projects lacking all or almost all management information. Now we show how, with often recoverable but still rather limited management information, you can compile and check a corporate-wide IT portfolio database. It is very useful to collect the following information in a simple database for IT projects in the entire organization:

This is not much as a source of information, but it will be the best you can do in a level 1 organization for the majority of the completed and proposed projects. Of course, in some cases more data is available, such as staff size. Needless to say, all additional relevant information is welcome: staff size, how many internal and external staff, how many working hours, and so on. You can use this information to cross-check the data and obtain an impression of its accuracy. However, rich project information will often be lacking. So we show how to proceed with the above three pieces of information only.

Often large enterprises are not transparently web-enabled, so that software project information is mostly paper-based [125]. It is not possible to study all these project documents, but it is mandatory to collect as many as possible. Since this easily tops thousands of documents, it is unavoidable that nonexperts enter the abovementioned data, and we know they make errors. We also experienced that the project information itself contains irregularities and is far from uniform. The first step after compilation of the database is therefore to thoroughly validate its contents, for it will be the source of information on which to base your IT portfolio management strategy. We use quantitative methods to check the database.

Relating cost, duration, and staff size

For the compiled IT portfolio database containing data for project duration and total development cost we want to check whether the entries make sense at all. To do this, we will derive in this section the first eight formulas for quantitative IT portfolio management.

We explained that for CMM level 1 organizations we have to somehow compensate for the lack of historical data and the lack of an overall metrics program, and in this section you will see how we do this. We use benchmarked relations between cost, duration, and staff size to develop our formulas. Consider the benchmark taken from [64, p. 202]:


\begin{displaymath}f^{0.39} = d \end{displaymath}

Here f stands for the number of function points [1] and d is the project duration in calendar months (that is, elapsed time measured in months). Recall that it suffices to imagine f to be a universal IT-currency converter. The power 0.39 is specific for MIS software projects, which are part of all businesses and omnipresent in the financial services and insurance industry.


 
Table 2: Distribution of Jones' project database around 1999.
Size (FP)  End User    MIS  Outsource  Commercial  Systems  Military  Total
1                50     50         50          35       50        20    255
10               75    225        135         200      300        50    985
100               5   1600        550         225     1500       150   4030
1,000             0   1250        475         125     1350       125   3325
10,000            0    175         90          25      200        60    550
100,000           0      0          0           5        3         2     10
Total           130   3300       1300         615     3403       407   9155
Percent        1.42  36.05      14.20        6.72    37.17      4.45    100


  
Figure 1: Visualizing the schedule powers of Table 3.

How much belief should we have in such a benchmark? There is no established tradition in software benchmarking, and therefore the number of projects subject to benchmarking is relatively low. For instance, the latest benchmark release of the ISBSG (International Software Benchmarking Standards Group) is based on the 789 projects that were submitted to their repository in early 2000. We base our work mostly on Jones' database, probably the largest knowledge base on software projects in the world: in 1996 it contained 6753 projects [62, p. 161], and by 1999 this had grown to more than 9000 projects. We provide the distribution of the projects over size and type in Table 2. The database contains data on software comprising over 10 million function points. To compare, a large international bank owns approximately 450,000 function points of software, and a large life insurance company possesses 550,000 function points [62, p. 51]. Other distributions partitioning Jones' project database over age range, over new, enhancement, and maintenance projects, over selected programming languages, and over a number of technologies used can be found in [62, p. 162-164]. The overall set of projects used in producing the benchmarks is biased: the database presumably contains more large projects than occur in reality [62, p. 159-160]. But a CMM level 1 organization has no historical data to derive its own internal benchmarks, so we resort to Jones' work as a surrogate for the lacking historical information. Note that the schedule powers vary for different code sizes and for different industries. We give a few such benchmarked powers that you can use depending on the industry you are in, or depending on the type of project. Table 3 contains a few of them and is taken from [64, p. 202]. For ease of explanation we will mostly use the power 0.39. See Figure 1 to get an idea of what the various benchmarks look like when we plot them for the schedule powers of Table 3.


 
Table 3: Various powers derived by benchmarking.
power projects within range
0.36 OO Software
0.37 Client/Server Software
0.38 Outsourced Software
0.39 MIS Software
0.40 Commercial Software
0.40 Mixed Software
0.43 Military Software

In addition to the schedule power benchmarks, we mention two other benchmarks, also taken from [64, p. 202-203]:


\begin{displaymath}{f\over 150} = n, ~~~~~ {f\over 750} = n \end{displaymath}

where n is the number of staff necessary to do the project. These are overall benchmarks, not specific to the MIS industry. The left-hand formula applies to software development projects, and the constant 150 is specific for those. The right-hand formula estimates the staff needed to keep an application operational, and the constant 750 is specific for this type of work. This excludes large functional enhancements, for which there are other benchmarks.
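As a worked instance of these two benchmarks (with an illustrative system size of our own choosing): for a system of f = 3000 function points,

\begin{displaymath} {3000\over 150} = 20 {\rm ~staff~for~development}, ~~~~~ {3000\over 750} = 4 {\rm ~staff~to~keep~it~operational.} \end{displaymath}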

We stress that you should not use these formulas as the sole decision-making means for individual software project contracting purposes, to decide on individual resources for large software projects, or to decide on upcoming individual projects with potentially large business impact. For these applications the above estimation formulas are not specific enough, and might even be harmful. But they can be used for sanity checking on individual projects, and they are accurate enough for decision making on entire IT portfolios. In [64] it is stated about these benchmarks (originally intended for manual cost estimation):

Manual methods are quick and easy, but not very accurate. Accurate estimating methods are complex and require integration of many kinds of information. When accuracy is needed, avoid manual estimating methods.

We want to support decision making for IT portfolios and not for individual projects, and the benchmarks plus their derived formulas have proven to be an excellent vehicle for that purpose. Note that optimal accuracy is not feasible at level 1, since the required data richness is simply lacking. Of course, you can use our formulas for sanity checking on a per-project basis as well.

From the above benchmarks we derive our first quantitative IT portfolio management formula, which we will use to check the database. We recall that for CMM levels higher than 1 the principles of our approach stay the same; only the benchmarks instantiating our formulas change. Since more and better data is available at those levels, it becomes feasible to infer company-specific benchmarks, leading to more accurate instantiations of our first formula. We discuss the instantiation using public benchmarks, so that level 1 organizations can use it.

We show a simple algebraic derivation to illustrate how you can derive your own instantiation of our first formula in case you want to apply our results. First we calculate how many function points the application comprises according to benchmark:


\begin{displaymath}f = d^{1\over 0.39} = d^{2.564} \end{displaymath}

We made f explicit using elementary algebraic arithmetic. With the second benchmark, we calculate the number of people for the development project:


\begin{displaymath}n = {1\over 150} d^{2.564} \end{displaymath}

So the total effort m, measured in person-months, needed to accomplish the project is $ m = d \cdot n$. This amounts to:


\begin{displaymath}m = {d\over 150} d^{2.564} \end{displaymath}

We assume that there are w working days in a year, and for a given daily burdened compensation rate r we can now calculate the total cost of development  ${\it tcd}(d)$ for a given project duration d:


\begin{displaymath}{\it tcd}(d) = r \cdot {w\over 12} \cdot {d\over 150} \cdot d^{2.564} = {rwd\over 1800} \cdot d^{2.564} = {rw\over 1800}\cdot d^{3.564} \end{displaymath}

So the first formula for quantitative IT portfolio management is:


 
(1)   ${\it tcd}(d) = {rw\over 1800}\cdot d^{3.564}$

Completely analogously, we derived a formula calculating the total cost of a maintenance project for a given duration d (for the cost of keeping a system operational you do not know the duration per se, and we will derive other formulas for that, see formula 11). For this second formula we used the 750 benchmark for maintenance staff size, everything else being equal. Formula 2 for quantitative IT portfolio management then becomes:


 
(2)   ${\it tcm}(d) = {rw\over 9000} \cdot d^{3.564}$

Of course the rates will vary per organization but also per task: we have used daily rates ranging from 500 to 4000 US dollars. Certain programmers for ERP packages can be more expensive, but also experts in high-performance transaction processing at large banks and in the airline industry can be quite expensive (think of TPF experts [56]). We experienced that daily burdened compensation rates for both internal staff and external specialists were always available. Also the number of working days varies per organization and per country, and can range from fewer than 200 to 230+ days per year. You have to use your own company-specific rates to instantiate formulas 1 and 2 for quantitative IT portfolio management.

If you only know the total cost of an IT project, you can calculate the project duration according to benchmark. We can do this with the dual versions of the first and second formulas. Without bothering you with the details of their (mathematically trivial) inference, we formulate formulas 3 and 4 for quantitative IT portfolio management:


  
(3)   ${\it dd}(c) = \left({1800 c\over wr}\right)^{0.28}$
(4)   ${\it md}(c) = \left({9000 c\over wr}\right)^{0.28}$

where ${\it dd}$ is development duration, ${\it md}$ is maintenance duration and c is the known total cost of either a development or a maintenance project.

We also experienced that actual staff size is sometimes available in a number of business units. It is very useful to collect this information in the IT portfolio database as well. Formulas 5 and 6 provide you with a benchmarked relation between staff size and project duration:


  
(5)   ${\it nsd}(d) = {d^{2.564}\over 150}$
(6)   ${\it nsm}(d) = {d^{2.564}\over 750}$

where ${\it nsd}$ is the number of staff for development, and ${\it nsm}$ is the number of staff for maintenance projects. With formulas 5 and 6 you can detect differences between actual staff size and benchmarked staff size. In the same way, you can detect differences between actual cost and benchmarked cost. Likewise, we can calculate for a given staff size the benchmarked duration, leading to formulas 7 and 8 for quantitative IT portfolio management (QIPM):


  
(7)   ${\it dd}(n) = (150n)^{0.39}$
(8)   ${\it md}(n) = (750n)^{0.39}$

where n is the actual staff size, ${\it dd}$ is development duration, and ${\it md}$ is maintenance duration.

We already made some example calculations, and we will make some more to illustrate the use of these formulas in support of quantitative IT portfolio management. Throughout the paper we assume for all example calculations a fictitious company with a daily burdened rate of $1000, for development and maintenance, internal and external staff alike, and 200 working days per year. This leads to the following instantiations of formulas 1-4 for QIPM:


\begin{eqnarray*}
{\it tcd}(d) & = & {1000\over 9} \cdot d^{3.564} \\
{\it tcm}(d) & = & {200\over 9} \cdot d^{3.564} \\
{\it dd}(c) & = & \left({9c\over 1000}\right)^{0.28} \\
{\it md}(c) & = & \left({9c\over 200}\right)^{0.28}
\end{eqnarray*}


For example, a 36 month development project costs ${\it tcd}(36) = \$39.1$ million according to benchmark, and an 18 month maintenance project costs according to benchmark ${\it tcm}(18) = \$0.66$ million. We announced earlier that when you know the costs, you can calculate the duration. For a $5M development project ${\it dd}(5M) = 20$, indicating that such a project should take about 20 months. The number of staff is ${\it nsd}(20) = 15$, leading to 300 person-months, or 25 man-years, which is indeed $5M. Likewise, a $5M maintenance project takes ${\it md}(5M) = 31.5$ calendar months according to benchmark, and requires ${\it nsm}(31.5) = 9.3$ maintenance staff. This is 293 person-months, leading to $4.9 million, which accurately approximates the actual financials.
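For readers who want to reproduce these numbers, the sketch below implements formulas 1-8 instantiated with the example rates of $1000 per day and 200 working days per year. It is our own illustrative code, not part of the benchmark sources:

\begin{verbatim}
# Formulas 1-8, instantiated with the example rates used in the text.
r, w = 1000.0, 200.0   # burdened daily rate, working days per year

def tcd(d):   # formula 1: total cost of development for duration d (months)
    return r * w / 1800.0 * d ** 3.564

def tcm(d):   # formula 2: total cost of a maintenance project of duration d
    return r * w / 9000.0 * d ** 3.564

def dd(c):    # formula 3: development duration for known total cost c
    return (1800.0 * c / (w * r)) ** 0.28

def md(c):    # formula 4: maintenance duration for known total cost c
    return (9000.0 * c / (w * r)) ** 0.28

def nsd(d):   # formula 5: development staff size for duration d
    return d ** 2.564 / 150.0

def nsm(d):   # formula 6: maintenance staff size for duration d
    return d ** 2.564 / 750.0

def dd_n(n):  # formula 7: development duration for staff size n
    return (150.0 * n) ** 0.39

def md_n(n):  # formula 8: maintenance duration for staff size n
    return (750.0 * n) ** 0.39

print(round(tcd(36) / 1e6, 1))                     # ~39.1 ($M), 36-month development
print(round(tcm(18) / 1e6, 2))                     # ~0.66 ($M), 18-month maintenance
print(round(dd(5e6)), round(nsd(dd(5e6))))         # ~20 months, ~15 staff
print(round(md(5e6), 1), round(nsm(md(5e6)), 1))   # ~31.5 months, ~9.3 staff
\end{verbatim}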

Cleaning the IT portfolio database

Using formulas 1-8 you can check and clean your freshly compiled IT portfolio database by carrying out the process outlined below. Note that the goal is not to comply with our formulas as closely as possible; the goal is to spot the differences, so that you can locate and correct erroneous IT portfolio database entries, after which the real deviations from benchmarked management information are revealed. These deviations need the attention of the IT portfolio manager or IPM, which can be the CIO, a board member, or someone directly reporting to the executive board.
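A minimal sketch of such a cross-check is shown below, with invented project entries: it flags database records whose actual cost deviates by more than a factor of two from the cost that formula 1 predicts for the reported duration. Both the threshold and the data are illustrative assumptions, not benchmarks:

\begin{verbatim}
# Cross-check sketch with invented data: flag entries deviating strongly from
# the benchmarked cost for their reported duration (formula 1).
r, w = 1000.0, 200.0

def tcd(d):                       # formula 1, instantiated with example rates
    return r * w / 1800.0 * d ** 3.564

projects = [                      # (identifier, duration in months, cost in $)
    ("P-017", 24, 9.5e6),
    ("P-042", 12, 8.0e6),
]

for pid, d, cost in projects:
    ratio = cost / tcd(d)
    if ratio > 2.0 or ratio < 0.5:
        print(pid, "deviates from benchmark by a factor", round(ratio, 1),
              "- check the data entry, or flag it for the IT portfolio manager")
\end{verbatim}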

The above process leads in a relatively short time to a reasonably accurate IT portfolio database. Now we are ready to analyze its contents.

Analyzing an IT portfolio database

In Figure 2 we have composed a random excerpt of several IT portfolio databases from organizations that went through the above process--let us call this sample S for later reference. We converted the daily compensation rates and the variations in working days per year to the example rates: a $1000 daily rate and 200 working days per annum. Moreover, to simplify the explanations we did not use different rates for ERP, CRM, maintenance, off-shore outsourcing, and other deviating projects.


  
Figure 2: Sample S from an IT portfolio database

This figure represents about 200 projects, at a total cost of a little under a billion US dollars, with an average project duration of about 18 months. The excerpt contains completed projects with actual financials, and proposed IT projects with estimated cost and duration. The curve in Figure 2 is a plot of formula 1. A few outliers remain after the error correction process; further analysis of these outliers is necessary.

IT projects above the line

There can be several causes for outliers. One of the most frequent causes is that development schedules are so crammed that they approach the so-called impossible region [28,103]. This is sometimes due to a sudden business opportunity, or a reaction to a competitor that demands a full focus on speed to market, but it can also be due to a lack of governance by executive management. Whatever the reason: such projects have a high price tag, are risky, and too many of them can decrease IT performance considerably, leading to value destruction. The project costs and failure risks increase dramatically when the development schedule is compressed towards the impossible region. Plotting formula 1 against the development projects in an IT portfolio gives you a quick overview of the outliers, which are probably death march projects [136]. For these projects you can analyze whether the need for speed was really that urgent, and if so, whether the incurred initial costs plus the high risk of failure were justified by a surefire business opportunity. For when the costs balloon due to this type of development, the returns should be obvious, the payback period should be relatively short, and performance should be measurable.

You will also run into high-risk low-reward projects in the portfolio: death march projects where the hurry is not justified by the potential returns. The two outliers around 20 million dollars and the one at 25 million dollars were non-discretionary projects. Executive management does not consider these projects strategic, since they must be done irrespective of the strategy. But you can do them the smart way or the stupid way. Usually, non-discretionary projects are initiated too late and therefore expose the enterprise to high costs and unnecessary risks. Better governance of these non-discretionary projects leads to better IT performance, by cost avoidance.

So non-discretionary projects also need timely executive attention, especially when they potentially affect large parts of the organization. Some frequent examples of unnecessarily costly non-discretionary projects that we spotted in various IT portfolio studies are: operating system updates, platform migrations, Y2K projects, Euro conversions, programming language or dialect conversions, conversions to 10-digit bank account numbers (as required by the European Central Bank), and more. The nature of these non-discretionary projects is not special in the sense that they are extremely expensive by nature; they become unnecessarily costly through negligence. Non-discretionary projects should not be high-risk no-reward projects destroying value, but sometimes they are. For instance, Y2K costs consumed up to 30% of the total IT budgets in 1999 [125, p. 30]. For non-discretionary projects, careful and timely planning utilizing as much automation as possible is key, so that costs are brought to the absolute minimum. Total costs decrease when the schedules are relaxed, that is, when the projects are planned well ahead of the deadline [103]. Moreover, the projects are done more rapidly and accurately when automated tools are used. Gartner Group advised the use of so-called software renovation factories [130] for Y2K updates when the code volume exceeds a few million lines of code [51,67]. This advice remains valid for other modifications that potentially impact large parts of single systems or an unknown number of systems in an entire IT portfolio.

Apart from these unnecessary costs, there will always be death march projects that can be justified by proper business reasons. Executive management should consciously decide on acceptable risks, and put a threshold on death march projects for an IT portfolio. We recall that for this kind of analysis modern portfolio theory [80] is not adequate: you cannot abandon information technology the way you dump junk bonds. You have to live with a lot of suboptimal IT. Setting this threshold is just a way to keep IT spending within borders, not to select the optimal IT portfolio. One of the additional things that executive management can do is to rank death march projects by potential added value for the business.

Note that an IT portfolio database not only contains completed projects, but also proposals for new projects and projects in progress. The management data provided by the IT staff are estimates. Especially discretionary projects with a visionary scent can have gigantic budgets, while no indications of returns, net present value, payback period, or risks are given. The most prominent examples we encountered were e-business initiatives, enterprise-wide CRM or ERP implementations, corporate intranet projects, and complete system overhauls. Often, executive management committed themselves to these huge IT investments without knowing what the consequences are for the IT budget in the coming years. Of course, you have to explore the new in order to innovate, and surely you will have to allocate costs for such endeavors, no matter how risky. Therefore, you need to set a threshold on such initiatives so that you are consciously in control of the costs, the risks, and the amount of potential loss. Depending on the type of enterprise and the depth of its pockets, executives can set thresholds to assure that the IT costs will not balloon to the point where normal IT spending is in danger. Later on we will give an example of the consequences of large IT investments, and quantify the necessary minimal ROI and some important risks.

IT projects below the line

Outliers in the low range are also important to analyze in more detail. IT staff are not good at estimating software costs [28,63] and severe underestimations are common practice [61] in level 1 organizations. A typical situation is illustrated in Figure 2: as can be seen, a 33 month project proposal with an estimated budget of $7.1 million was approved. Inspection of the available project documentation showed that this was a cost reduction project (CR project) where savings of about $12 million in 5 years were projected. When you compare this to the expected development cost based on formula 1, you will find ${\it tcd}(33) = \$28.7$ million, about four times the estimate; in other words, the development cost was underestimated by roughly 75%. It is not hard to realize that this project is going to cost money even when the cost savings are fully realized. In fact, implementation of the CR project will presumably cost more than it saves. It should not have been approved at all, and based on our analysis it was assessed internally and aborted. So our formulas can be used to calculate the exposure of an IT portfolio due to underbudgeting the proposed IT investments. We will deal with different kinds of IT portfolio exposure in more detail later on.

Executives should realize that a formal approval and cancellation policy for IT projects can be based on quantitative IT portfolio management. There is often no official cancellation process, and if it is in place, it is often not adequate, which can be measured by the number of restarts. This situation improves when approval and abortion are based on benchmarked thresholds so that erroneous cost saving operations can be routinely spotted, pruned when underway, and cancelled once and for all. Obviously, such preventive measures increase IT performance, by simple cost avoidance.

Synonyms, homonyms, redundancies

As soon as you compile an IT portfolio database, you will find multiple occurrences of the same project under a different name (synonyms) and multiple occurrences of the same name but describing different projects (homonyms). Synonyms can be removed from the database, and homonyms should be renamed so that you can differentiate between them.

In addition you will find existing and proposed systems that are redundant. Recall that in CIO Magazine it is reported that just compiling an IT portfolio database saved one company $3 million and another company $4.5 million, because the IT portfolio view enabled them to spot redundancies [5]. While we spotted redundancies as well, and could at times prevent unnecessary spending, you have to be careful not to be overly enthusiastic about removing redundancies. There are two types of redundancies: similar proposed systems and similar existing systems.

First we deal with proposals. An often seen effect after an IT portfolio is compiled is that similar proposals are put together and carried out as one combined joint project. Only if all the envisioned systems are going to be exactly the same is it likely that you reap benefits from removing redundancies, by cancelling all but one of the proposals. If another approach is taken towards redundancy, this leads to an increase in stakeholders, an increase in necessary flexibility, more variation points, increased organizational complexity, multiple ownership adding to the complexity, increased feature creep, and a larger size of the new system. Larger size implies more time to develop and more risk. Let us make a calculation to illustrate the possible consequences of removing seeming redundancies (a worked check follows the quotation below). Suppose you find two 12 month projects that are fairly similar. Using formula 1, each project will cost $780,000 according to benchmark. So two of them cost $1.56 million. Using formula 3 we can calculate the development time of a single $1.56 million project. This is 14.5 months. It is highly unlikely that the variation points, the increased number of stakeholders, and the other issues just mentioned will be solved in 2.5 months. So the synergy that looks good on management charts is very unlikely to be accomplished. Moreover, this synergistic strategy easily leads to a grand IT project, something that is strongly discouraged in the Clinger Cohen Act [46], since this is known not to work out properly. We quote:

Reduce risk and enhance manageability by discouraging ``grand'' information system projects, and encouraging incremental, phased approaches.
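The worked check announced above is straightforward, assuming formula 3 is simply the inverse of formula 1 with the MIS schedule power 0.39, so that ${\it tcd}(d)=rw/1800\cdot d^{1.39/0.39}$, with $r=\$1000$ and $w=200$ days:

\begin{displaymath}
{\it tcd}(12) = {1000\cdot 200\over 1800}\cdot 12^{1.39/0.39} \approx \$0.78\;\mbox{million},\qquad
{\it dd}(1.56\cdot10^6) = \left({1800\cdot 1.56\cdot 10^6\over 1000\cdot 200}\right)^{0.39/1.39} \approx 14.5\;\mbox{months}.
\end{displaymath}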

Now let us have a look at existing systems that seem redundant at first sight. While it can be extremely frustrating, it is not uncommon that an organization has multiple similar systems. You cannot get rid of them without significant investments, if at all. For instance, the US Department of Defense owns more than 700 payroll systems, over 100 personnel training systems, and myriads of intranets [121]. How hard it is to remove such redundancies is also illustrated by the failure of GTE Corporation (then a leading telecommunications provider) to consolidate an overall medicare system. The US Secretary of Health and Human Services announced in 1994 that: ``We're going to move from the era of the quill pen to the era of the superelectronic highway''. This was meant to provide timely payment and to reduce fraudulent claims. GTE spent $100 million on a unified medicare system and learned that they had to integrate many separate information systems. In 1997 the project was cancelled. According to GTE this project was far more complicated than anyone anticipated.

Solving this kind of problem is much more involved than removing redundant automobiles, buildings, or other tangible assets. The problem is that these systems look like redundancies from the executive viewpoint, but are more like homonyms. There are multiple variation points, and to consolidate those into one overall system takes sophisticated software engineering technology. You have to migrate the similar systems to a software product line [32]. CMM level 1 organizations are most likely not equipped to initiate, migrate, and deploy such complex software artifacts. Apart from that, redundancy in IT is not necessarily a bad thing. In [32] we call this the relativistic effects of software engineering, meaning that the classical way of thinking about software breaks down when the size increases, just like Newton's laws of physics break down when speed increases. For small systems and small user communities redundancy can be avoided entirely. But as soon as variation points are necessary, the ideal situation is that changes important for one user only should not affect another user. If redundancy is weeded out completely, users will inevitably have to accept new releases of the software even when the modification was not meant for them. The release effort for the business will then be higher than the cost of dealing with redundancy at the development site. See [32] for quantitative data supporting our arguments in detail. So redundancy is not necessarily a bad thing, and focusing on its removal should not be the prime task of an IPM.

While many business assets can be truly redundant, this is hardly ever the case for IT. In article [5], where the cost savings from removing redundancy are reported, it is stated that IT is not special, that it operates like any other part of the business. With the above arguments we have illustrated that IT is special. Apart from our arguments, the special status of IT has been subject to discussion for decades. For provocative enlightenment, we refer to a classical paper we mentioned earlier: the note by Lamport [74] explaining that maintaining software is entirely different from maintaining an automobile.

The black hole: hidden costs

Apart from the high costs of death march projects, high-risk visionary projects, and other costly efforts, there are also hidden costs that you need to be aware of. Let us continue with the CR project example to illustrate what we mean by this. Apart from the fact that the total development costs of the underbudgeted CR project are in the order of $30 million according to benchmark, there will also be an additional minimal cost of operation during the entire deployment phase of this system, excluding separate enhancement projects. But how do we quantify such costs? In this section we develop more formulas to calculate them according to benchmark. For instance, we can calculate with these formulas that for the coming 5 years the costs to keep the CR project operational are benchmarked at $10.4 million. This alone almost annihilates the projected cost reduction of $12 million in 5 years. Moreover, there will be more operational costs if the system is not retired after 5 years of deployment. The minimal total cost of ownership is around $48.3 million according to benchmark, so even when the cost savings more than double over the rest of the deployment phase, to say $30 million, there is a net loss in the order of $20 million over the entire lifetime of the system. So, even if the project had been estimated correctly, the operational costs are high and prolong the payback period significantly, making the entire investment questionable.

It is our experience that the above calculations are not made when assessing IT proposals. However, they show that IT investments can easily lead to profit destruction instead of the expected value creation, not only by underbudgeting but also by not taking the operational costs over time into account. The accumulation of operational costs of IT projects, plus the existence of profit-destructing IT projects that were not anticipated, gives many executives the feeling that IT is a black hole. This feeling is supported by facts. Many large companies suffer from high operational and maintenance costs; 60% or more of the total IT budget is no exception. The US average in 1994 was that about 45% of the total budget was spent on maintenance [61]. This implies that almost half the annual IT budget was spent on operations and maintenance. These costs are increasing, given the fact that more and more IT personnel are working in maintenance [64,65]. These alarming figures are beginning to attract the attention of corporate management. The findings are no freak accidents, but have been reported over and over again for decades [35,8,26,78,106,6,52,64,103,82]. In those publications, percentages between 50 and 80% devoted to maintenance costs are reported.

The cost per unit of work also differs between development and deployment work on software. This was found empirically as early as the seventies. In [127] a ratio of 1 : 50 is reported, and in one US Air Force study Barry Boehm found that the development cost per instruction was $30 while the cost per instruction for maintenance was $4000--a ratio of 1 : 133 [7]. For executives it is not clear what maintainers do, since to their understanding the software was running in the first place. Therefore, to obtain control over your IT portfolio it is crucial to know about these hidden costs. Only then does it become possible to control your IT budget for the existing portfolio, and to project the operational costs for future systems that are proposed to be added to your IT portfolio. We will develop formulas that take operational costs into account.

Minimal total cost of ownership

Of course, it is hard to predict maintenance effort, since a lot of maintenance is not just keeping software running. It is more than that: enhancement to align systems better with new business needs. Those costs are often in your IT portfolio database, denoted as development projects (on existing systems). There are, however, substantial costs connected to the operational phase of software systems that are often not described in IT projects and therefore do not end up in your IT portfolio. For those hidden costs we will develop more formulas supporting quantitative IT portfolio management. Using the formulas it becomes much clearer what the real thresholds for ROI should be (we come back to this later on). In effect, we need to compare the potential value creation with the potential total cost of ownership in order to decide rationally on IT investments.

To be able to say something about total cost of ownership, we need to know how long the software will be operational. With the Y2K problem we have seen that this can be much longer than expected. Or as Strassmann put it [122, p. 253]:

Software is a new form of immortality.

In level 1 organizations where no measurement history is in place, there is also no lifetime data available, so we need to compensate for this lack of information by using a public benchmark taken from [61, p. 419]:


$f^{1/4} = y$

where y is the number of calendar years the software will be deployed (f stands for function points again). From this benchmark, and the benchmark for MIS software ($f^{0.39}=d$), we can immediately derive two new QIPM formulas 9 and 10:


  
(9) $y(d) = d^{0.641}$
(10) $d(y) = y^{1.56}$
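For readers who want to retrace this step: eliminating the function point size f from the two benchmarks $f^{1/4}=y$ and $f^{0.39}=d$ gives

\begin{displaymath}
y = f^{1/4} = \left(d^{1/0.39}\right)^{1/4} = d^{1/1.56} = d^{0.641},
\qquad d(y) = y^{4\cdot 0.39} = y^{1.56}.
\end{displaymath}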

Note that for other industries you have to adapt the schedule power using Table 3. For example, for a 34 month MIS development project we can calculate that according to benchmark this software will be deployed y(34) = 9.6 years. We used this formula in the previous section to see whether the cost reduction project would really save costs.

Formula 10, which is the dual of formula 9, can be used to estimate how much effort is reasonable for projects of which you know how long they will be needed. For instance, a company needed a conversion tool to automatically convert Cobol 85 back to an older dialect, Cobol 74 [15], for the coming 37 months at most. This is a systems software project, and because of the implementation methods that are going to be used, it is acceptable to use the power for OO software (0.36 in Table 3). The instantiation of formula 10 for the power 0.36 then leads to $d(y)=y^{1.44}$. Its development time should not be longer than $d(37/12) = 5.1$ months. We use the OO schedule power to instantiate formula 1 for this project, leading to ${\it tcd}(d)=rw/1800\cdot d^{3.7778}$. We keep $r=\$1000$ and $w=200$ days. We calculate that ${\it tcd}(5.1) = \$50,800$, which seems a reasonable price to develop the tool, given the limited time you need it in the first place. If the costs deviate considerably, it is time to assess the price setting of the tool supplier.
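A minimal sketch of this check, using the OO instantiations of formulas 10 and 1 exactly as above (r and w are the example rates):

\begin{verbatim}
# Sketch: is a fixed-lifetime tool worth its price? (formulas 10 and 1,
# instantiated with the OO schedule power 0.36; example rates assumed)
R, W, POWER = 1000.0, 200.0, 0.36

def dev_duration(years_needed):
    """Formula 10 instantiated for OO software: d(y) = y^(4*0.36)."""
    return years_needed ** (4.0 * POWER)

def tcd(d):
    """Formula 1 instantiated for OO software: tcd(d) = rw/1800 * d^3.7778."""
    return R * W / 1800.0 * d ** (1.0 + 1.0 / POWER)

months_needed = 37.0
d = dev_duration(months_needed / 12.0)           # about 5.1 months
print(f"development duration: {d:.1f} months")
print(f"benchmarked cost:     ${tcd(d):,.0f}")   # about $50,800
\end{verbatim}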

With formula 9 we can derive another formula that for a given project duration d measured in calendar months, gives us the minimal costs to keep the developed system operational. We will show later on that this is indeed a lower bound, so the actual costs could be higher, but presumably not lower than our formula calculates. We are interested in estimating the minimal cost of operation during the entire deployment time of a software system, based on the development time of the IT project. This is in fact:


\begin{displaymath}{\it mco}(d) = y(d) \cdot wr \cdot {\it nsm}(d) \end{displaymath}

For, during y(d) calendar years, ${\it nsm}(d)$ people each work w days per year at daily compensation rate r. Combining formulas 9 and 6 then leads to formulas 11 and 12:


  
(11) $\displaystyle {\it mco}(d)$ = $\displaystyle {wr\over 750}\cdot d^{3.205}$
(12) $\displaystyle {\it dd}(o)$ = $\displaystyle \left({750\over wr}\cdot o\right)^{0.312}$

Again, ${\it dd}$ stands for development duration. We used formula 11 for the earlier mentioned 33 month CR project: the minimal cost of operation is ${\it mco}(33)=\$19.6$ million, and the deployment phase is, according to formula 9, y(33)=9.4 years. So after 5 years, $10.4 million of operational cost reduces the estimated $12 million in savings to about $1.6 million, which is easily annihilated by the roughly 75% underestimation of the development cost.

Formula 12 is useful for rough estimates in the context of mergers, acquisitions, and outsourcing. Let us give an example of the latter. You can use formula 12 as an indicator of whether outsourced operational costs make sense at all. Often you know the total contractual operational cost o for a number of years to keep a system operational. Suppose there is a contract with an outsourcer to keep a system running for $10 million a year for the coming 10 years. Then, using formula 12, we can calculate that the development time of this project was probably ${\it dd}(\$100M) = 54.8$ calendar months. And this implies that the deployment phase is approximately y(54.8) = 13 years. So depending on the actual development time (which could be present in your IT portfolio database) you can get an impression of whether the outsourcer is too expensive, or, if the software is really that large, you may want to rethink the contract and prolong it. The latter also depends on the added value for the business. The same type of calculation can be done when you want to acquire a company and you know its total operational costs. Of course, this is an indicative estimate.
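The same outsourcing sanity check can be scripted; the sketch below assumes the example rates and uses formulas 12 and 9 verbatim:

\begin{verbatim}
# Sketch: sanity check of an outsourcing contract with formulas 12 and 9.
# Assumptions: r = $1000/day, w = 200 working days/year (the example rates).
R, W = 1000.0, 200.0

def dd_from_operations(total_operational_cost):
    """Formula 12: development duration implied by total operational cost."""
    return (750.0 / (W * R) * total_operational_cost) ** 0.312

def deployment_years(d):
    """Formula 9: benchmarked deployment time (years) for a d-month project."""
    return d ** 0.641

contract = 10e6 * 10                      # $10M per year for 10 years
d = dd_from_operations(contract)          # about 54.8 calendar months
print(f"implied development time: {d:.1f} months")
print(f"implied deployment phase: {deployment_years(d):.0f} years")
\end{verbatim}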

We can derive from formulas 1 and 11 the minimal total cost of ownership. Formula 13 supporting QIPM is:


 
(13) $\displaystyle {\it mtco}(d)$ = $\displaystyle {\it tcd}(d) + {\it mco}(d)$

So, for a given project duration you can calculate the minimal total cost of ownership  ${\it mtco}(d)$ of a system over the entire life cycle according to benchmark.

Let us give an example to show that the above formula makes sense. Suppose there is a project proposal that takes 36 calendar months. The total cost of development according to benchmark is ${\it tcd}(36) = 39.1$ million dollars. The minimal cost of operation according to benchmark is ${\it mco}(36) = 25.9$ million dollars. So, the minimal total cost of ownership ${\it mtco}(36)$ of this software system is 65.0 million dollars. In the first three years 39.1 M$ is spent, and over the subsequent y(36) = 9.9 years another 25.9 M$ is needed to keep it running. Indeed, 60.1% is spent on development, and 39.9% is needed to keep the system operational. Recall that our estimates use a minimal scenario: keep the systems running without large enhancements. So, our findings converge with the empirical data that have been reported for several decades [35,8,26,78,106,6,52,64,103,82].
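The example can be reproduced with a few lines of code; the sketch below assumes the MIS form of formula 1 used throughout this section and the example rates:

\begin{verbatim}
# Sketch: minimal total cost of ownership for a 36 month MIS project
# (formulas 1, 9, 11, and 13, with the example rates r = $1000, w = 200 days).
R, W = 1000.0, 200.0

tcd  = lambda d: R * W / 1800.0 * d ** (1.0 + 1.0 / 0.39)   # formula 1 (MIS)
mco  = lambda d: R * W / 750.0 * d ** 3.205                 # formula 11
y    = lambda d: d ** 0.641                                 # formula 9
mtco = lambda d: tcd(d) + mco(d)                            # formula 13

d = 36
print(f"tcd({d})  = ${tcd(d)/1e6:5.1f}M")    # about $39.1M
print(f"mco({d})  = ${mco(d)/1e6:5.1f}M")    # about $25.9M over y(36) = 9.9 years
print(f"mtco({d}) = ${mtco(d)/1e6:5.1f}M")   # about $65.0M
print(f"development fraction: {tcd(d)/mtco(d):.1%}")   # about 60%
\end{verbatim}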

We will derive these percentages for a given project duration. To that end we need the ratio between operational and development costs. Given formulas 1 and 11 formula 14 for quantitative IT portfolio management is:


 
(14) r(d) = $\displaystyle {12\over5}\cdot d^{-0.359}$

where r(d) is the ratio of minimal cost of operation to development cost. Using this ratio we can easily infer two QIPM formulas 15 and 16 providing you with the development fraction, and the operational fraction of the minimal total cost of ownership:


  
(15) $\displaystyle {\it df}(d)$ = $\displaystyle {1\over 1 + r(d)}$
(16) $\displaystyle {\it of}(d)$ = $\displaystyle {r(d)\over r(d) + 1}$


  
Figure 3: Development and operational fractions as a function of project duration
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/parts.ps,width=10cm}
\end{center}\end{figure}

We plotted formulas 15 and 16 in Figure 3. As can be seen, the development fraction increases slightly for larger projects, while the operational fraction decreases just as slowly. This does not imply that operational costs will be smaller for larger projects. It just means that when the initial investment is larger, the operational costs are amortized over a longer period of time. Indeed, if the project duration approaches infinity, the development fraction of the total costs approaches 1, and since an infinite length project will never finish, the operational fraction indeed approaches 0.
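For completeness, the coefficient 12/5 in formula 14 can be retraced from formula 11 and the MIS instantiation of formula 1 (which we take to be ${\it tcd}(d)=wr/1800\cdot d^{3.564}$, analogous to the OO instantiation given earlier):

\begin{displaymath}
r(d) = {{\it mco}(d)\over{\it tcd}(d)} = {wr/750\cdot d^{3.205}\over wr/1800\cdot d^{3.564}}
     = {1800\over750}\cdot d^{-0.359} = {12\over5}\cdot d^{-0.359}.
\end{displaymath}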

From project to portfolio level

At the executive level, not only the accumulated minimal TCO per project and the TCO of an entire portfolio play a role, but the dimension of time is also of strategic importance. For instance, executives may want to know how much minimal operational IT cost was spent by a business unit in 1Q99, and how that compared to other business units. Or if they decide to invest next year in new systems, what will be the consequences for the coming years in terms of total costs? All this within the constraints of the current IT budget and the ongoing expenses necessary to keep the existing IT portfolio running. Maybe the plans will lead to unacceptable increases of the total IT budget. What-if scenarios can also be supported by quantitative data: what if the new systems are introduced in phases? It is clear that for such projections we need to make the step from individual IT projects to entire software portfolios. Therefore, we should be able to superimpose IT project data. We do this by giving our formulas an absolute time dimension. Up to now we have discussed costs in relative time, that is, in terms of durations. The absolute time dimension enables sensible accumulation of IT project indicators. Then we can accumulate costs and benefits of IT projects in all phases: initial, development, deployment, enhancement, retirement, and so on.

Level 1 organizations do not exploit connections between projects. Think of reusing high-quality artifacts, such as a software architecture, or reusable requirements. So we can treat the information for each software project independently. This does not lead to an incorrect quantitative model: if there are connections, they are often ad hoc, preventing significant cost savings over time as could be achieved with software product lines. From the IT portfolio management level it looks like two independent projects, each with its own costs. We just need each project's start date and delivery date. Using the formulas we developed so far, we can easily infer additional formulas with an absolute time dimension. We focus now on formulas that, for a given set of systems in the portfolio, return the cost allocation at any given time.

Cost allocation formulas

We consider a software system s as a tuple in a database. In a level 1 organization a realistic assumption is to base yourself on two-dimensional tuples (is,ds), where is is the initialization date of system s, and ds the delivery date of system s. For the sake of explanation, we abstract from the fact that project names, organizational unit, and so on should also be in the IT portfolio database. In case more information is available we extend the tuple accordingly. For instance, if the actual financials for development are present, this extends the tuple to a three-dimensional one. In practical implementations of our formulas for quantitative portfolio management, this amounts to adding another column in a spreadsheet, extending the schema of a database, or adding another column in a statistical package. For now we assume the minimal scenario and use the two-dimensional tuple containing only the absolute data on initialization and delivery. The function that is relevant to lift from the project to the IT portfolio level is ${\it ca}_s(t)$: the cost allocation for system s at time t. We divide the cost allocation in two parts. Formula 17 for quantitative IT portfolio management is as follows:


(17)  \begin{displaymath}{\it ca}_s(t) = \left\{ \begin{array}{ll}\displaystyle
{\it cad}_s(t) + {\it cao}_s(t), & \mbox{if $i_s\le t\le r_s$ } \\
0, & \mbox{otherwise}
\end{array} \right.
\end{displaymath}

Here rs is the retirement date of the system according to benchmark. The retirement date of a software project that started at is and was delivered to the business on ds can be calculated using formula 9. The absolute-time counterpart of formula 9 is formula 18:


 
(18) $r_s = d_s + 12\,y(d_s - i_s)$

For example, a software project that started in February 1993 and was delivered in August 1994 took 18 months and will presumably retire in y(18) = 6.3 years. So its retirement date rs is expected to be around December 2000. Note that the + in formula 18 stands for date addition, not addition of real numbers.


  
Figure 4: Example plot of formula 17.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/crude.ps,width=10cm}
\end{center}\end{figure}

In formula 17 we used two other cost allocation functions. We depicted an example plot of formula 17 in Figure 4. The abbreviation ${\it cad}$ is the cost allocation for development, and ${\it cao}$ is the cost allocation for operation. In mature organizations these could be Rayleigh curves [6,103], but for the majority of organizations this is wishful thinking. We provide you with the level 1 versions of these functions, but we stress that when you really have more sophisticated curves at your disposal, the formulas we depict below can stay the same, except that the cost allocation over time is calculated by integration instead of using external benchmarks (we will discuss this issue later on). Formula 19 supporting quantitative IT portfolio management is:


(19)  \begin{displaymath}{\it cad}_s(t) = \left\{ \begin{array}{ll}\displaystyle
{{\it tcd}(d_s-i_s)\over d_s-i_s}, & \mbox{if $i_s\le t\le d_s$ } \\
0, & \mbox{otherwise}
\end{array} \right.
\end{displaymath}

So we calculate the total cost of development for the duration of the project in calendar months, and divide that by the duration, leading to a constant monthly amount for the duration of the entire development effort. Before initialization and after delivery the cost allocation is zero. Thus, for a given system and point in time, the above function returns the cost allocation in the month containing that time. This function is the first part of the plot displayed in Figure 4.

Similarly, we provide a formula for the cost allocation of the minimal operational costs (viz. Figure 4). This is formula 20:


(20)  \begin{displaymath}{\it cao}_s(t) = \left\{ \begin{array}{ll}\displaystyle
{{\it mco}(d_s-i_s)\over 12\,y(d_s-i_s)}, & \mbox{if $d_s\le t\le r_s$ }\\
0, & \mbox{otherwise}
\end{array}
\right.
\end{displaymath}
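The level 1 cost allocation functions are simple enough to implement directly. The sketch below is illustrative only: dates are represented as absolute month numbers rather than calendar dates, and formula 1 is assumed to have the MIS form used earlier.

\begin{verbatim}
# Sketch of formulas 17-20, with dates as absolute month numbers (a
# simplification; the text uses calendar dates). MIS benchmarks assumed:
# tcd(d) = rw/1800 * d^3.564, mco(d) = rw/750 * d^3.205, y(d) = d^0.641.
R, W = 1000.0, 200.0

tcd = lambda d: R * W / 1800.0 * d ** (1.0 + 1.0 / 0.39)
mco = lambda d: R * W / 750.0 * d ** 3.205
y   = lambda d: d ** 0.641

def retirement(i_s, d_s):                 # formula 18
    return d_s + 12.0 * y(d_s - i_s)

def cad(t, i_s, d_s):                     # formula 19: $/month during development
    return tcd(d_s - i_s) / (d_s - i_s) if i_s <= t <= d_s else 0.0

def cao(t, i_s, d_s):                     # formula 20: $/month during deployment
    if d_s <= t <= retirement(i_s, d_s):
        return mco(d_s - i_s) / (12.0 * y(d_s - i_s))
    return 0.0

def ca(t, i_s, d_s):                      # formula 17
    return cad(t, i_s, d_s) + cao(t, i_s, d_s)

# The February 1993 - August 1994 example: an 18 month project.
i_s, d_s = 0.0, 18.0
print(f"retirement after {retirement(i_s, d_s):.1f} months")  # ~94.5 (Dec 2000)
print(f"monthly allocation during development: ${cad(9, i_s, d_s):,.0f}")
print(f"monthly allocation during deployment:  ${cao(30, i_s, d_s):,.0f}")
\end{verbatim}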

One of the ways to obtain insight into how you actually invest in IT is to monitor IT investments over time. This reveals trends in spending, maybe trends that you would have wanted to avoid if only you had known. We believe that the time dependency of IT cost allocation is crucial for executives in order to decide on strategic IT investments in a realistic manner. We can accumulate costs over time, which is done via integration. Formulas 21-23 for QIPM express the accumulated (minimal) total costs (${\it atc}$), the accumulated development costs (${\it adc}$), and the accumulated operational costs (${\it aoc}$) over a given time interval T for a certain software system s.


   
(21) $\displaystyle {\it atc}_s(T)$ = $\displaystyle \int_{t\in T} {\it ca}_s(t) dt$
(22) $\displaystyle {\it adc}_s(T)$ = $\displaystyle \int_{t\in T} {\it cad}_s(t) dt$
(23) $\displaystyle {\it aoc}_s(T)$ = $\displaystyle \int_{t\in T} {\it cao}_s(t) dt$

where ${\it ca}_s$, ${\it cad}_s$, and ${\it cao}_s$ are cost allocation functions. They can be the ones that we defined in formulas 17, 19, and 20, but they can also be more sophisticated formulas, like Rayleigh curves or other more sophisticated curves (we explain them later on). Now we can go from the project level to the portfolio level. This is done via summation, since there are only finitely many systems in a portfolio. For a given portfolio P and a given time interval T we can calculate the accumulated (minimal) total costs for a portfolio, the accumulated development costs for a portfolio, and the accumulated operational costs for a portfolio. Formulas 24-26 are:


   
(24) $\displaystyle {\it atc}_P(T)$ = $\displaystyle \sum_{s\in P} {\it atc}_s(T)$
(25) $\displaystyle {\it adc}_P(T)$ = $\displaystyle \sum_{s\in P} {\it adc}_s(T)$
(26) $\displaystyle {\it aoc}_P(T)$ = $\displaystyle \sum_{s\in P} {\it aoc}_s(T)$

We can use these formulas to answer questions like the ones we posed earlier. How do we compare the operational costs of, say, 5 business units in the first quarter of 1999? For five business units $\mbox{BU}_1,\ldots,\mbox{BU}_5$ you can calculate the total minimal operational IT cost for 1Q99 using the following simple formula:


\begin{displaymath}\sum_{s\in \mbox{BU}_i} \int_{t \in 1Q99} {\it cao}_s(t) dt,
~~~~ i = 1,\ldots,5 \end{displaymath}

We assume that for these business units we have their systems in the IT portfolio database, and that we have the initialization and delivery dates of the projects. With this data, we can calculate for each system the operational cost allocation for the first quarter of 1999, and accumulate these figures for all the systems in the portfolio of a specific business unit. In this way you can compare operational cost per business unit and zoom in on differences, signaling trends that may need attention. For instance, if business unit 1 is now at CMM level 2, does this lead to lower operational costs? And if so, is the saving more than the costs of obtaining CMM level 2 and maintaining that level? And if so, how much could we save if we did this for other business units as well? Note that it is not at all obvious whether operational costs will go down with better approaches to develop systems. The costs tend to become higher [27]; this is not too much of a problem if the systems create value. It can also be the case that one business unit consumes the majority of the total operational budget in the corporate IT portfolio. Depending on the criticality and profitability of such a business unit, corporate executives can decide on different strategies: phasing out, selling, reengineering, and so on.

Hitting the innovation borders

Already in the early nineties some Fortune 500 companies found themselves trapped in a situation where their entire IT budget was gobbled up by updating, repairing, and enhancing their aging legacy applications [61]. This need not be a dangerous situation if you have reached exactly the level of automation that you wanted. But it is more likely that when you are in such a situation, you are exposed to unacceptable business risks. The environment changes fast and unpredictably, so to stay competitive, you have to innovate. This means that you must have the supporting budget. But since all new development adds to the operational pressure, you cannot innovate without limits. So, you need to keep track of how much of the IT budget is spent on operational costs at all times, in order to know in advance whether operational costs start to hinder the amount of innovation that executive management deems appropriate for the company. This implies that to initiate new development you probably first need to retire existing systems, reduce operational costs, or increase the total IT budget. The minimal operational costs are significant, as we already indicated with formula 11. We will now derive what this means in terms of absolute time.

We calculate the ratio between operational costs and development costs per unit of time. If you have sophisticated data available you can calculate this using actual values, but in level 1 organizations this information is usually absent. So we use our cost allocation formulas to calculate this ratio. Note that we in fact calculate the ratio between the heights of the two rectangles in Figure 4. The height of the first rectangle is:


\begin{displaymath}{\it tcd}_s(d_s-i_s)\over d_s-i_s\end{displaymath}

and the height of the second rectangle is:


\begin{displaymath}{\it mco}_s(d_s-i_s)\over 12y(d_s-i_s)\end{displaymath}

Division yields:


\begin{eqnarray*}{{\it mco}_s(d_s-i_s)\over {\it tcd}_s(d_s-i_s)}\cdot
{d_s-i_s\over 12\,y(d_s-i_s)}
& = & {12\over5}(d_s-i_s)^{-0.359}\cdot{(d_s-i_s)^{0.359}\over 12}\\
& = & {1\over5}
\end{eqnarray*}


using formula 14 for the relative ratio, and formula 9. This results in the fixed ratio equation 27:


(27)  \begin{displaymath}{\it cad}_s : {\it cao}_s = 5 : 1
\end{displaymath}

So the operational cost allocation per time unit is 20% of the cost allocation per time unit for building the system. In other words, investing a dollar per time unit in IT development conservatively leads to 20 cents per time unit of operational costs for an extended period of time. This phenomenon is counterintuitive for many business executives: the operational costs of IT are much more significant than they expect for a product that has been delivered to the business. So we now see, in absolute time as well, the cost magnet in the IT budget that attracts large amounts of hidden resources to keep the delivered systems operational.

Operational cost tsunamis

Let us give an example of the dynamics of such hidden costs over time. Suppose a corporation merges with other parties, and to consolidate the merger a lot of IT intensive projects have to be carried out, ranging from an enterprise-wide CRM system, a few large ERP systems, several enterprise integration projects, HRM overhauls, e-business projects, and Internet related projects, to a large number of relatively small ones. In Table 4 we summarized an impulse of $3.15 billion divided over 72 projects of varying size. We call this IT portfolio M, short for Merger. If you think $3.15 billion is exceptional, consider this quote about IT improvements, dating from as early as 1984 [54]:

Max Hopper, Armacost's technology expert, planned to announce that BankAmerica would spend at least $5 billion to improve its computer systems over the next few years.

Surveys on annual IT spending also show that billion dollar investments are not uncommon. For instance, in 1998 the hundred top IT-spending European companies together invested 53.7 billion dollars, which is half a billion per company on average [125]. Furthermore, the federal government of the United States of America plans to invest $52 billion in 2003 [24].


 
Table 4: IT investment impulse.
# tcd ${\it dd}(c)$ y(d) rs cad cao $\sum{\it cad}$ $\sum{\it cao}$
(# = number of projects; tcd in $M per project; ${\it dd}(c)$, y(d), and rs in months; cad, cao, and the sums in $M per month)
50 15 27 99 126 0.6 0.11 27.5 5.49
10 30 33 113 146 0.9 0.18 9.0 1.81
6 75 43 133 176 1.8 0.35 10.5 2.10
3 150 52 151 203 2.9 0.58 8.6 1.73
2 300 63 171 234 4.8 0.95 9.5 1.90
1 600 77 188 260 7.8 1.56 7.8 1.56

The first column in Table 4 gives the number of projects, the second their estimated cost per project in millions of dollars. We used formula 3 to calculate their presumable development durations from these estimated costs. We calculated their deployment time and total lifetime (rs), both in months. We calculated the cost allocation for development per month in millions of dollars using formula 19, and with equation 27 we calculated the operational cost per month according to benchmark. We accumulated these costs over all projects in each row, leading to the last 2 columns. We supposed that all 72 projects start shortly after the merger. Some of them are outsourced, others are done internally, the IT workforce is extended with myriads of people, and so on. It is not hard to calculate the accumulated costs for the entire portfolio for development and operations in absolute time, as summarized in Table 5.


 
Table 5: Accumulation of cost allocation data.
development                     operations
time  cost        time  cost        time  cost
(time in months; cost in $M per month)
27 72.9 27 5.49 127 14.6
33 45.4 33 7.3 146 9.1
43 36.4 43 9.4 176 7.1
52 25.9 52 11.1 203 5.2
63 17.3 63 13.0 234 3.5
77 7.8 77 14.6 271 1.6
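For readers who want to redo the accumulation, a script along the following lines reproduces the figures of Tables 4 and 5 up to rounding. It uses formulas 3, 9, and 19 and equation 27 as in the text, again assuming the MIS form of formula 3.

\begin{verbatim}
# Sketch: accumulate development and operational cost allocations for
# portfolio M (Table 4) in absolute time; small differences with Table 5
# are rounding artifacts.
R, W = 1000.0, 200.0

dd = lambda c: (1800.0 * c / (R * W)) ** (0.39 / 1.39)   # formula 3 (MIS)
y  = lambda d: d ** 0.641                                # formula 9 (years)

# (number of projects, estimated cost per project in $M)
impulse = [(50, 15), (10, 30), (6, 75), (3, 150), (2, 300), (1, 600)]

def allocation(t):
    """Portfolio-wide development and operational allocation ($M/month) at month t."""
    dev = ops = 0.0
    for n, cost in impulse:
        d = dd(cost * 1e6)
        retire = d + 12.0 * y(d)
        cad_month = cost / d           # formula 19, in $M per month
        cao_month = cad_month / 5.0    # equation 27
        if t <= d:
            dev += n * cad_month
        elif t <= retire:
            ops += n * cao_month
    return dev, ops

for t in (10, 30, 60, 120, 180, 250):
    dev, ops = allocation(t)
    print(f"month {t:3d}: development {dev:5.1f}, operations {ops:5.1f} M$/month")
\end{verbatim}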


  
Figure 5: Seismic IT costs induce an operational cost tsunami.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/combined.ps,width=10cm}
\end{center}\end{figure}

We visualized these data points in Figure 5. As you can see there is a significant cost impulse in the first 25 months that rapidly declines after about 50 months.

Next we use the accumulated data to derive the cost allocation function for IT portfolio M. The accumulated cost allocation function for development is the peaky one. We use standard parametric statistical techniques to infer a smooth curve from these data. Using a nonlinear least squares regression algorithm [19,55,129,96] as implemented in the statistical system SPlus [129,73], the data points can be fitted to the following curve:


\begin{displaymath}{\it cad}_M(t) = 7.514287t^{1.258007}e^{-0.07304098t} \end{displaymath}

Recall that M is the IT portfolio consisting of the 72 projects in Table 4. Before we continue, a few words on the large number of digits in the above formula. The data has a certain precision, of course, but we try to fit this data as well as possible, which leads to the large number of digits. If we rounded these coefficients, an entirely different curve would show up. This is partly due to the exponential function: a slight change in its exponent causes a huge change in the behavior of the curve. So we keep the coefficients very precise, so that we do not deviate from the input data. Second, we will use such precision in the outcomes of the formulas that readers who redo the calculations can convince themselves that they made the right calculation. Of course, in practical applications you have to round the output of such formulas in accordance with the precision of the input data (but not the coefficients used in those formulas).
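Such a fit can be reproduced with any nonlinear least squares routine; the sketch below uses scipy as a stand-in for the SPlus routine. The sampled data points are an illustrative reading of Table 5, so the resulting coefficients will only approximate the ones reported above.

\begin{verbatim}
# Sketch: fitting a smooth curve of the form a*t^b*exp(-c*t) to the accumulated
# development allocation, with scipy standing in for the S-Plus routine.
# The sampled step-function data below is illustrative; the exact input used in
# the text may differ, so the coefficients will only be close to those reported.
import numpy as np
from scipy.optimize import curve_fit

def cad_model(t, a, b, c):
    return a * t ** b * np.exp(-c * t)

# (month, development allocation in $M/month), read off Table 5
t = np.array([  10,   27,   33,   43,   52,   63,  77,  90])
v = np.array([72.9, 72.9, 45.4, 36.4, 25.9, 17.3, 7.8, 0.0])

params, _ = curve_fit(cad_model, t, v, p0=[7.5, 1.26, 0.073], maxfev=10000)
print("fitted a, b, c:", params)
\end{verbatim}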

The other curve represents the operational costs. It is a curve with a long wavelength and a cost plateau lasting about 100 months. This plateau starts right after the IT investment impulse is over. The accumulated operational cost allocation function can also be fitted to a curve:


\begin{displaymath}{\it cao}_M(t) = 0.02643959t^{1.692713}e^{-0.003407055t^{1.317413}} \end{displaymath}

When the IT portfolio M is turned into reality, and most of the systems have become operational, the IT investment should start to generate added value. But now the operational costs start to rise. They can easily annihilate returns, since the operational costs represent a long lasting significant expense. We call such a sudden investment a seismic IT investment, since it causes an operational cost tsunami, just like geological seismic events can cause tsunamis (a great sea wave produced by submarine earth movement or volcanic eruption). Operational cost tsunamis are often responsible for the black hole of IT that many executives experience, but cannot reveal. These hidden costs can significantly influence the pace of new development.

By quantitative IT portfolio management you can reveal existing operational tidal waves, but also prevent new tsunamis, by astutely timing the rate of innovation. This implies that it is useful to analyze IT investments over a long period of time to uncover cost waves that are still dominating your IT budget. If you look at Figure 5 again, you can see that many years after implementation of the $3.15 billion IT portfolio, significant operational costs of the seismic IT investment are still influencing the IT costs of the business unit owning the portfolio.

With the accumulated cost allocation functions we have a potentially powerful weapon to forecast future costs. First we calculate the accumulated development cost function for the entire portfolio. We use formulas 25 and 22 for that:


\begin{eqnarray*}{\it adc}_M(T) & = & \sum_{s\in M} {\it adc}_s(T) \\
& = & \sum_{s\in M} \int_{t\in T} {\it cad}_s(t)\, dt \\
& = & \int_{t\in T} 7.514287t^{1.258007}e^{-0.07304098t}\, dt
\end{eqnarray*}


We take the interval T=[0,t]:


\begin{eqnarray*}{\it adc}_M(T) & = & \int_0^t 7.514287t^{1.258007}e^{-0.07304098t}\, dt \\
& = & 7.514287(419.087 - 368.192\Gamma(2.258007, 0.07304098t))
\end{eqnarray*}


You can use a computer algebra system like Maple [135] or Mathematica [132] for this evaluation (but we used formula 39 that we discuss later on). The function $\Gamma$ is a special mathematical function called the upper incomplete gamma function. This function satisfies the following equation:


\begin{displaymath}\Gamma(a,x) = \int_x^\infty t^{a-1} e^{-t} dt \end{displaymath}

The accumulated development cost function approximates the total IT investment accurately: if we take  $T=[0,\infty)$ we should get the total development budget of the portfolio back. Indeed, $\lim_{x\rightarrow\infty}\Gamma(a,x)=0$, so:


\begin{displaymath}{\it adc}_M(0,\infty) = 7.514287\cdot 419.087 = 3149.14 \end{displaymath}


  
Figure 6: Accumulated minimal TCO over time.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/acplot.ps,width=10cm}
\end{center}\end{figure}

million dollars. This outcome has a 0.02% difference with the actual investment of $3.15 billion. For the total minimal cost of operation we cannot do such a ``regression test'', since those costs were never envisioned in the first place. But let us calculate the accumulated operational cost for M as well. We used formula 39, which we discuss later on, but you can also use a computer algebra system like Mathematica [132] or Maple [135] for this.


\begin{eqnarray*}{\it aoc}_M(0,t) & = &
\int_0^t 0.02643959t^{1.692713}e^{-0.003407055t^{1.317413}}\, dt \\
& = & 2262.23 - 2219.22\Gamma(2.043939903, 0.003407055t^{1.317413})
\end{eqnarray*}


so now we can see the total impact of the operational costs for this portfolio over its entire life cycle. We find ${\it aoc}_M(0,\infty)= 2262.23$ million dollars. The total cost of ownership of this portfolio thus amounts to 5411.37 million dollars. So, 58.2% of the costs are devoted to development, and 41.8% is necessary for operations, which is in accord with the many empirical findings we quoted earlier. Note that this IT investment impulse is now $2.3 billion short. Not all this money needs to be present from the start, but it should become available sometime in the future.

When does this future start? When the initial IT investment is fully consumed by implementing it. We can calculate when this is the case. We know that the accumulated total cost allocation for M is as follows.


\begin{eqnarray*}{\it atc}_M(0,t) & = & {\it adc}_M(0,t) + {\it aoc}_M(0,t) \\
& = & 5411.37 - 2219.22\Gamma(2.043939903, 0.003407055t^{1.317413}) - \\
& & 2766.70 \Gamma(2.258007, 0.07304098t)
\end{eqnarray*}


We calculated this formula simply by adding the two derived formulas above. We plot the three accumulated cost functions in Figure 6. This is an insightful graph, giving you an indication of the probable spending situation over the forthcoming years. We can already see that somewhere between month 50 and 60 the money will probably run out. We use this rough estimate as an initial value for root finding. With the computer algebra system Mathematica [132] we found the root of the following equation (but we could have used Maple [135] or Matlab [81] as well):


\begin{displaymath}{\it atc}_M(0,t) - 3150 = 0 \end{displaymath}

yielding t=57.3123 months. So, the money runs out after 57 months. After that time stamp we really need a positive return from the IT investment to subsidize the missing 2.3 billion. Not immediately, but in due time. When these returns should become available is our next subject.
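A numerical cross-check of these results, with scipy standing in for the computer algebra system, is sketched below; it reproduces the development and operational totals and the break-even month approximately.

\begin{verbatim}
# Sketch: numerical cross-check of the accumulated cost functions for
# portfolio M, using quadrature and root finding instead of closed forms.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

cad_M = lambda t: 7.514287 * t**1.258007 * np.exp(-0.07304098 * t)
cao_M = lambda t: 0.02643959 * t**1.692713 * np.exp(-0.003407055 * t**1.317413)

adc_total, _ = quad(cad_M, 0, np.inf)      # about 3149 ($M): the $3.15B impulse
aoc_total, _ = quad(cao_M, 0, np.inf)      # about 2262 ($M): hidden operations
print(f"development total: {adc_total:7.1f} M$")
print(f"operations total:  {aoc_total:7.1f} M$")

def atc(t):                                 # accumulated total cost up to month t
    return quad(cad_M, 0, t)[0] + quad(cao_M, 0, t)[0]

# When is the original 3.15 billion consumed?  (root near month 57)
t_star = brentq(lambda t: atc(t) - 3150.0, 40, 80)
print(f"budget consumed after {t_star:.1f} months")
\end{verbatim}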

ROI threshold quavering


  
Figure 7: Minimal ROI threshold over time.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/mrtplot.ps,width=10cm}
\end{center}\end{figure}

After about 50 months, most of the systems in the example IT portfolio have become operational, that is, when ${\it atc}_M(50) = \$2900$ million is spent. Over the next hundred months, we need another:


\begin{displaymath}{\it atc}_M(150) - {\it atc}_M(50) = 1838.87 \end{displaymath}

million dollars, to finalize development and keep the portfolio running. Suppose that the investment plan for our example IT portfolio projected an annual return of 10%, starting after 50 months (which is more than 4 years). Then in the first year after these 50 months the portfolio should add a value of 315 million. However, in that year you also have to spend:


\begin{displaymath}{\it atc}_M(62) - {\it atc}_M(50) = 387.077 \end{displaymath}

million dollars on the IT portfolio. So you will make a net loss of 72 million if you just set the ROI threshold at 10%. When we take all the costs into account we get a different picture of the ROI threshold. We calculate the actual minimal ROI threshold, abbreviated ${\it mrt}$, that you need in order to achieve a net 10% ROI at all.

The first 50 months are the investment period: no ROI is expected. From that moment on, a net 10% ROI annually over the entire investment of 3.15 billion is projected. This amounts to 26.25 million dollars per month. The 3.15 billion is spent at time stamp 57.3, so until that time the ROI does not need to compensate for IT portfolio costs. But after that time stamp, the ROI should pay for the ongoing costs in addition to the 10% bottom line. The minimal ROI threshold of our example portfolio M is as follows.


\begin{displaymath}{\it mrt}_M(t) = \left\{ \begin{array}{ll}
0, & \mbox{if $t < 50$} \\
10, & \mbox{if $50 \le t < 57.3$} \\
\displaystyle
{26.25 + {\it cad}_M(t) + {\it cao}_M(t)\over2.625}, & \mbox{if $ t\ge 57.3 $}
\end{array} \right.
\end{displaymath}

We plotted the minimal ROI threshold in Figure 7. We call this curve an ROI threshold quaver, after the shape of an eighth note in music notation; we sometimes use the (somewhat awkward) term iso-net-ROI line. As soon as the IT investment budget is consumed, you need a return of about twice as much to achieve the 10% bottom line. This is not a short term tremble in the necessary ROI, but a long lasting one. Only decades after the IT investment impulse does the quaver-shaped curve approach 10%. Obviously, when you do not take hidden costs into account, it is very likely that you will never have a positive return on investment, if the actual profits of the IT investment cannot compensate for the ROI quaver.

IT portfolio exposure

Apart from the fact that IT spending is much more costly than many of us envision, and that projected returns are not always met (e.g., if ROI threshold quavering is not taken into account), there is also another dimension to quantitative IT portfolio management that can seriously hinder achieving added value: IT risks. McFarlan [83] proposed as early as 1981 that risks of IT projects should be assessed not only separately, but also in the aggregate--as a portfolio. While McFarlan proposed a risk assessment questionnaire, we quantify risk based on benchmark data. We believe that risk assessment will benefit from a combination of quantitative and qualitative information.

In the introduction we already indicated that the risks of software projects are high. As the Standish Group found, 50% of IT projects are challenged, and 30% fail. But can you project these numbers directly onto your own IT portfolio? The answer is no. Still, executives need an indication of the ratios of successful, challenged, and failed projects. Since there is no historical data on such topics in CMM level 1 organizations, and, understandably, this information is usually hidden from executives, we need to compensate for this by using public benchmarks. In this section we show how you can obtain an indication of IT portfolio exposure based on project durations. This is not necessarily the ideal method to quantify risk, but usually it is the only way for level 1 organizations to get an indication at all.

Adding to the complexity of risks is that some executives think that you can simply overhaul IT systems; this is not true. However, it is not a surprise that people think like this: our calculations with respect to IT portfolios revealed that even in the unlikely case that your systems do not need enhancements, the costs to keep them operational are huge. Moreover, enhancing these systems makes things worse: the costs increase. So it makes sense to ask yourself, why not renew these costly systems?

Once these often costly systems are up and running, their failure exposure is lower than in a new situation, where childhood diseases and infant mortality are not uncommon [59,47,48,49,43,42,45]. You could see a deployed system as a set of executable requirements needing continuous debugging, refinement, and extension. While this can be a frustrating task, cherishing existing business-critical systems often pays off much more than replacing these systems with new ones. Strassmann noticed this as well, given that he writes [122, p. 258, 257]:

For an enterprise with a large accumulation of legacy systems--which includes all established organizations--there are no technical strategies other than evolutionary migration strategies. Defining the path of such migration requires placing limited objectives along the way. The managerial skill in coming up with such a plan and then making it happen will be the ultimate test which only superior information management teams will pass. [..] In the future, information political contests will be fought over issues that concern managing software assets. [..] Whoever accepts that conservation of software assets is now the key to all information politics will end up as a leader.

Indeed the software assets of an enterprise may have their moments, but the bottom line is that they are relatively mature, and often the cash cows of the company. Scrupulous quantitative IT portfolio management, containing calculations like the ones we have shown thus far, will support you in obtaining the appropriate justifications for investing, or disinvesting, in such existing assets.

Failure rates for IT projects

Benchmark data indicates a very strong correlation between the size of software and the chance of failure. The relation is also strong for the chance that a project is challenged, meaning huge cost and effort overruns while much less than the originally requested functionality is delivered. Based on public benchmark data we inferred simple formulas indicating risks. Like the other formulas, you should not use them as a basis for individual IT project contracts, but again they are an excellent means to get an idea of IT portfolio exposure. Table 6 summarizes schedule adherence benchmark data for information system projects [66, p. 192].


 
Table 6: Information system schedule adherence (1999).
Size      early     on-time   late      cancelled
(FP)      projects  projects  projects  projects
          (%)       (%)       (%)       (%)

1         6.00      92.00      1.00      1.00
10        8.00      89.00      2.00      1.00
100       7.00      80.00      8.00      5.00
1,000     6.00      60.00     17.00     17.00
10,000    3.00      23.00     35.00     39.00
100,000   1.00      15.00     36.00     48.00

Average   5.17%     59.83%    16.50%    18.50%

Based on these benchmarks we fit a curve that can be used to quantify the risk of failure as a function of project size. The six observations above are based on many projects, so we consider this a strong benchmark. We assume that IT project failure grows logistically with increasing size. A statistical fit based on the observations gives us formula 28. It is the chance of failure of an information system project given its size in function points (we use the subscript i to indicate the information systems industry):


 
(28) $\displaystyle {\it cf}_i(f)$ = $\displaystyle 0.4805538\cdot\left( 1 -
\exp\left(-0.007488905\cdot f^{0.587375}\right)\right)$

We note that formula 28 cannot be used for systems above 100,000 function points: the asymptotic behavior of formula 28 is that the chance of failure approaches 50% for large sizes, whereas we believe that when the size of software reaches infinity, the chance of failure goes to one. However, for a pragmatic indication for the majority of the projects in your IT portfolio, formula 28 can be used. We could have fitted a curve with more appropriate asymptotic behavior, but such a curve is less accurate below 100,000 function points. There are two reasons for not using this alternative: firstly, the majority of the systems are in the range that formula 28 covers, and secondly, for systems that exceed 100,000 function points, we recommend a full function point analysis.

Recall that in CMM level 1 organizations you usually only have elapsed time and not the function point size, so we have to make another calculation using the benchmark taken from [65]: $f^{0.39} = d$. This leads to formula 29.


 
(29) $\displaystyle {\it cf}_i(d)$ = $\displaystyle 0.4805538\cdot\left( 1 -
\exp\left(-0.007488905\cdot d^{1.506090}\right)\right)$

As an example, the risk of failure for a 36 month MIS project is ${\it cf}_i(36) = 0.39$. So 39% according to benchmark. In Figure 8 we plotted formula 29 to indicate that the chance of failure increases rapidly for longer project durations.
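Formulas 28 and 29 are easy to encode directly; the following sketch reproduces the 39% example.

\begin{verbatim}
# Sketch: the chance-of-failure benchmark for information systems
# (formulas 28 and 29) as plain functions, reproducing the 39% example.
import math

def cf_i_fp(f):       # formula 28: failure chance, size in function points
    return 0.4805538 * (1.0 - math.exp(-0.007488905 * f ** 0.587375))

def cf_i_months(d):   # formula 29: failure chance, duration in calendar months
    return 0.4805538 * (1.0 - math.exp(-0.007488905 * d ** 1.506090))

print(f"36 month MIS project: {cf_i_months(36):.0%} chance of failure")  # ~39%
\end{verbatim}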


  
Figure 8: Chance of failure as a function of project duration.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/cfplot.ps,width=10cm}
\end{center}\end{figure}

In the MIS industry it is customary to outsource certain parts of an IT portfolio. For instance, 43% of all outsourcers are working on MIS software [66, p. 264]. To that end it is useful to make comparisons with respect to cost and risk (we elaborate on make-commission decisions later on). We derive outsource software risk formulas by using available public benchmarks. Table 7 summarizes schedule adherence benchmark data for outsourced software projects [66, p. 275].


 
Table 7: Outsource software schedule adherence (1999).
Size      early     on-time   late      cancelled
(FP)      projects  projects  projects  projects
          (%)       (%)       (%)       (%)

1         5.00      93.00      1.00      1.00
10        8.00      90.00      1.00      1.00
100       7.00      85.00      6.00      2.00
1,000     8.00      67.00     15.00     10.00
10,000    1.00      38.00     34.00     27.00
100,000   1.00      26.00     40.00     33.00

Average   5.00%     66.50%    16.17%    12.33%

Similarly to the derivation of formula 28, we carried out a statistical fit based on the observations summarized in Table 7. This leads to formula 30 for quantitative IT portfolio management. It is the chance of failure for a given outsourced project as a function of its size in function points (the subscript o refers to the outsource industry):


 
(30) $\displaystyle {\it cf}_o(f)$ = $\displaystyle 0.3300779\cdot\left( 1 -
\exp\left(-0.003296665\cdot f^{0.6784296}\right)\right)$

If outsourcers use the function point metric, you can use formula 30. If they don't, you can use the productivity benchmark taken from [65] for outsourcers (that we tabulated in Table 3):  f0.38 = d. This leads to formula 31.


 
(31) $\displaystyle {\it cf}_o(d)$ = $\displaystyle 0.3300779\cdot\left( 1 -
\exp\left(-0.003296665\cdot d^{1.7853411}\right)\right)$

Challenge rates for IT projects

It is convenient to have formulas indicating the chance of late projects. Similarly to the failure rate formulas, we can easily infer a curve using the benchmark for late projects depicted in Table 6. Again we assume that the chance of late projects grows logistically with the size of IT systems. This leads to formula 32 expressing the chance of late projects in the information systems industry for a given function point size.


 
(32) $\displaystyle {\it cl}_i(f)$ = $\displaystyle 0.3672107\cdot\left( 1 -
\exp\left(-0.01527202\cdot f^{0.5535625}\right)\right)$

The instantiation for the MIS industry using benchmark  f0.39=dleads to formula 33.


 
(33) $\displaystyle {\it cl}_i(d)$ = $\displaystyle 0.3672107\cdot\left( 1 -
\exp\left(-0.01527202\cdot d^{1.4193910}\right)\right)$

So, the risk on cost overruns for a 36 month MIS project is ${\it cl}_i(36)
= 0.31$. According to benchmark there is a 31% chance that this project will suffer from cost serious overruns. In Figure 9 we plotted formula 33.


  
Figure 9: Chance on late projects as a function of project duration.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/clplot.ps,width=10cm}
\end{center}\end{figure}

Likewise, we can do the same for outsource software. Using the data of Table 7 we easily find formula 34 for quantitative IT portfolio management:


 
(34) $\displaystyle {\it cl}_o(f)$ = $\displaystyle 0.4018422\cdot\left( 1 -
\exp\left(-0.009922029\cdot f^{0.5657454}\right)\right)$

If the outsourcers work with function points, you can use formula 34 immediately, and if not, you can use formula 35:


 
(35) $\displaystyle {\it cl}_o(d)$ = $\displaystyle 0.4018422\cdot\left( 1 -
\exp\left(-0.009922029\cdot d^{1.488804}\right)\right)$

where we used the schedule power 0.38 as listed in Table 3.

Challenge and failure rates for portfolios

Knowing how to calculate prominent exposures on a per project basis, we can make the step from individual projects to the portfolio level. We accumulate the project exposures to obtain the portfolio exposure. You can then answer questions like: which business unit has the highest exposure to failed projects? For a given portfolio P the failure exposure of P is formula 36:


(36)  \begin{displaymath}{\it fe}(P) = {1\over\vert P\vert}\cdot \sum_{s\in P} {\it cf}(d_s)
\end{displaymath}

where |P| is the number of systems in the portfolio, s is a system, ds is its project duration, and ${\it cf}$ is some derived chance of failure function, for instance formula 29 or 31. In this way you can calculate the average failure rate of the entire IT portfolio. It is up to executive management to set a threshold on the overall failure exposure of an IT portfolio.

Similarly, for a given portfolio P the late exposure of Pis formula 37:


(37)  \begin{displaymath}{\it le}(P) = {1\over\vert P\vert}\cdot \sum_{s\in P} {\it cl}(d_s)
\end{displaymath}

For instance, for the sample portfolio that we depicted in Figure 2 we can calculate both exposures: ${\it fe}(S) = 0.13$ and ${\it le}(S) = 0.14$. The sample portfolio has a chance of failure of 13%, and a 14% chance of serious cost overruns. For our seismic IT impulse depicted in Table 4 we can likewise calculate that  ${\it fe}(M) = 0.126$ and ${\it le}(M) =
0.135$. The percentages that Standish Group found, (30% cancelled, 50% challenged, 20% okay), are not found in many business unit level IT portfolios. This is due to the fact that per portfolio, only a fairly small number of very large projects are present. At the corporate level the percentages can be a bit higher, but still not approaching the Standish Group findings so closely, that you can use their benchmark to calculate your IT portfolio exposures. This is due to the fact that large companies often have a lot of smaller business units. When only very large business units are present, also larger projects are undertaken, with their risks. The Standish Group findings are accumulated at the country level (for the USA). Maybe only the largest projects in the surveyed companies were taken into account.

Depending on the nature of the company and the deepness of its pockets, such IT portfolio exposures give you an indication whether you are within the exposure zone that you consider acceptable. If not, it is time to mitigate those risks, and identify the carriers of large exposures; they are almost always large IT projects (as McFarlan also pointed out [83]).

Comparison with higher CMM levels

A natural question is whether the accuracy of our approach explained so far will drastically increase when the underlying mathematics is not based on level 1 formulas, but on formulas available to organizations with CMM levels that are higher than 1. It is not easy to make comparisons, since there are hardly any published cost allocation curves (which might be due to the fact that 75% of the organizations are at CMM level 1). Apart from that, many cost estimation techniques were traditionally based on lines of code, instead of function point-like metrics, such as the various versions of function points [1,62], or Tom DeMarco's bang metric [28]. It is known that different definitions for lines of code can lead to an uncertainty of 500% in the software productivity literature, rendering comparisons of different estimates based on lines of code often useless [60, p. 16]. Recall that in [33, p. 132] even a 2300% variance was found for different definitions for lines of code. Therefore, it is not a surprise that in a review article on software cost estimation techniques [85], huge differences were found when about 15 cost estimation techniques were applied to a single project.

Nevertheless we found an example curve in a textbook on software cost estimation. With this published example we illustrate that the results might not lead to radically different decision making than in the level 1 situation. One argument why this is the case, is that we are not using the actual relations between cost, effort, and duration over time, but rather their mean values, expressed by the area under cost allocation curves. If the areas are of the same order of magnitude, all calculations based on the areas under these curves will be of the same order of magnitude as well, and therefore the decision making will not drastically change. Of course, CMM level 2+ organizations have historical data, which enables the derivation of internal benchmarks. They are more precise than the external benchmarks that we use now. So the quality of the decisions will improve, based on the input data that you can instantiate our formulas with, but our method can still be used.

We give an example supporting the fact that more involved cost estimation formulas usually do not change the outcome of IT portfolio decision making. Note that in general, it is a good idea to estimate software costs as accurately as possible.

In Boehm's book on software engineering economics [6, p. 68] a Du Bridge Chemical software development project is used as a running example. Its distribution is as follows.


\begin{displaymath}{\it ead}(t) = {m t \over p^2}\cdot \exp-\left({t^2\over2p^2}\right) \end{displaymath}

The above function is called a Rayleigh curve; ${\it ead}$ is the effort allocation for development, m is again short for man-months, and p represents the month at which the project achieves its peak effort. For the Du Bridge Chemical project, Boehm used the following data points: m =91, and p =7 months. Boehm also gives a rule of thumb for p: it is half the estimated development time. This implies that the total effort for 14 months is the area under the curve plotted in Figure 10. We integrate over the effort allocation formula and find:


\begin{displaymath}\int_0^{14}{\it ead}(t)dt = 78.7 \end{displaymath}


  
Figure 10: Rayleigh effort curve and our cost allocation formulas.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/rayleigh.ps,width=10cm}
\end{center}\end{figure}

You can use calculus, a scientific calculator, Matlab [81], Maple [135], or Mathematica [132] to calculate the integral (but we used formula 39 that we discuss later on). After the 14 months, Boehm applies another model. Boehm calculated the effort allocation for the first year of maintenance. He did not use a Rayleigh curve for it, but used a fraction of the total development effort that is exactly the same as our inferred ratio in equation 27. However, his ratio was not inferred from general arguments like our ratio, but calculated using actual data. He calculated the so-called annual change traffic: he measured the exact amounts of added instructions and modified instructions (but deleted code was not taken into account). Based on that a fraction of m was found: yielding in the first year of maintenance $0.20\cdot m
= 18$ months. In Figure 10 this is expressed by the horizontal line at a height of 1.5 from the 14th to the 26th month. So the total effort for the first 26 months of the project according to Boehm is 96.7 man-months.

Let us compare this to our calculations for the level 1 case. The described project is an MIS project: it is a raw material inventory project. So we can use formula 5, which is instantiated with an MIS schedule power. The number of staff necessary for development is  ${\it nsd}(14) = 5.8$. Since it is a 14 month project, m = 81man-months. We plotted our effort allocation function in Figure 10 with a dashed line. Using formula 6, we can calculate the required staff for maintenance:  ${\it nsm}(14)=1.16$. This is the lower horizontal line that lasts for the entire life cycle of the system.

To compare our calculation to Boehm's work, we take only the first year of maintenance: m=14 months. So the total effort for development plus the first year of operation is 95 man-months. This is less than 2% difference with Boehm's method. His method is clearly meant for for CMM levels higher than 1. For, it is not feasible for a level 1 organization to measure the correct staff increase and decrease over time, the peak effort allocation, the number of added instructions, deleted instructions, modified instructions, and so on.

From one example you cannot draw far reaching conclusions, but our approach makes sense. Let's have a second look at the general Rayleigh curve from Boehm's book. The area under a Rayleigh curve [103, p. 46] is exactly the total number of man-months (we use formula 39 for this):


\begin{eqnarray*}\int_0^\infty{\it ead}(t)dt
& = & \int_0^\infty {mt \over p^2}\cdot e^{-t^2\over2p^2}dt \\
& = & m
\end{eqnarray*}


In our formulas, we abstract immediately from the staff variations, and treat the number of man-months for development as a constant over time. So the above derivation shows that in general CMM level 2+ organizations will have more accurate data over shorter time frames: with the Rayleigh curve you can predict for each moment in time, the exact effort. While in our case, we use the average from the start, which coincides with the area under the Rayleigh curve. The benefit of having Rayleigh curves is that you then find a better average: not based on an external benchmark, but on actuals.

Towards full transparency

When you have the accumulated cost allocation curves for development and deployment both for groups of projects and individual projects, you know the how, the when, and the how many of your IT-dollar expenditure. This adds to the badly needed transparency in IT performance and investments. In the previous section we have seen a few such curves already: a seismic IT impulse and an operational cost tsunami at the portfolio level. In all portfolios we analyzed, we encountered these two extremes plus curves in between these extremes.

Cost allocation equation

We note that our experience is limited to large companies: it may be the case that different models are necessary in other cases. Based on our experience, we found that IT portfolio costs over time can be accurately approximated by the following cost-time function c(t)returning for a given time t the corresponding cost. We conjecture that this will also hold for IT portfolios that we have not assessed. Formula 38 for QIPM is:


(38)  \begin{displaymath}c(t) = a\cdot t^{\alpha}\cdot e^{-b\cdot t^{\beta}}
\end{displaymath}

In our formula  $a,\alpha,b,\beta$ are constants idiosyncratic for the environment in which the work is carried out. Useful relations for the coefficients can be inferred, as we will see later on. Cost could be seen either as effort, its corresponding financial remuneration, or another cost dimension (see Figure 11 for several plots of formula 38).

It is already known for a long time that for $\alpha=0$ and $\beta=2$the above equation is used to estimate effort allocation for research and development projects as shown by Norden [87,88,89]. Recall that for these $\alpha$ and $\beta$ our cost allocation equation reduces to the Rayleigh curve. After Norden, Putnam applied Rayleigh curves to software projects [101,104,102]. Also Boehm based his COCOMO model in part on Rayleigh curves. However, he noted that [6, p. 68]:

It is evident that the shape of the Rayleigh distribution in Fig. 5-5 is not a close approximation to the shape of the labor distribution curves for any of the organic-mode software projects shown in Fig. 5-4. This is largely because an organic-mode software project generally starts with a good many of the project members at work right away, instead of the slower buildup indicated by the Rayleigh distribution. However, the central portion of the Rayleigh distribution provides a good approximation to the labor curves of organic-mode software projects.

Boehm thought that Rayleigh curves were not in accord with the actual cost allocation of a certain type of project. So, Boehm used Rayleigh curves only around the peak effort p: between 0.3pand 1.7p. He could have used a so-called decentralized Rayleigh curve. If you need non-zero man power at the start of the project, you should use an additional location parameter, to shift the Rayleigh curve to the left (use $t-\mu$ instead of t). It is not necessary to invent another distribution for that. So by removing such a constraint it is possible to approximate reality much better. We did not display a decentralized version of our cost allocation function, but when we need it, this does not add any difficulty to using our results. Parr invented an alternative distribution for the same reason: a non-zero man power at the start of the project [91]. Again, a decentralized Rayleigh distribution would have solved Parr's issue satisfactorily (we discuss his distribution later on).

There are also cases where a Rayleigh curve is really not a good fit for the given data, including a decentralized version. Then the limitations of Rayleigh curves should not function as a procrustean bed preventing accurate modeling of reality--when that reality is just no Rayleigh curve. Indeed this was also found in a 2001 US Air Force study, where a generalization of a Rayleigh curve was necessary to model funding curtailment to R&D programs [98]:

The Rayleigh function has the shape parameter set to a constant of 2. This makes the model somewhat rigid in its ability to model programs. The Rayleigh function forces a proportionate tail using the peak expenditure point as the start. In actuality there are programs where a proportionate tail is not derived from the point of peak expenditures. For example, a program may have a peak expenditure during one time period and a very short tails--program expenditures stop shortly thereafter. The Rayleigh function would not provide an accurate model of reality in this case because of its rigidity tied to the constant shape parameter.

So Rayleigh curves are not the universal solution to the effort allocation problem. More flexibility is necessary especially when you lift from the project to the portfolio level. To give you an idea of the drastic variation you can achieve by varying the four coefficients of formula 38, we plotted several variations of Boehm's example Rayleigh curve:


\begin{displaymath}{91 t\over 49}\cdot \exp-\left({t^2\over98}\right)\end{displaymath}


  
Figure 11: Varying the four coefficients of formula 38.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/allvar.ps,width=13cm}
\end{center}\end{figure}

The original Rayleigh curve plus variants is plotted in each quadrant of Figure 11. The variation that is possible with formula 38 is richer than those of a Rayleigh curve, and it is essentially needed for the purpose of quantitative IT portfolio management. Below we give the ranges of the four coefficients. Figure 11(a) and (b) are all Rayleigh curves, whereas in 11(c) and (d) we relax the fixed powers characterizing the Rayleigh curve: we vary both powers.

If we look back at the statistically fitted seismic IT impulse to characterize the development cost allocation for our example portfolio M, we see that it is an instantiation of formula 38. It has an  $\alpha=1.25$, which is larger than allowed for a Rayleigh curve. There is a much faster effort buildup than a Rayleigh curve can accommodate. Indeed $\beta=1$, which is small compared to a Rayleigh curve (where $\beta=2$). And a smaller $\beta$ leads to smaller wave lengths. This instantiation of formula 38 corresponds to a rapid staff build-up that cannot be fitted into a Rayleigh curve. Looking at the operational cost tsunami, it is obvious that this is also an instance of formula 38. While  $\alpha=1.69$ is large, the small a=1/41 is functioning as a shock absorber that dampens the peak. While  $\beta=1.32$is relatively small, the tiny b=1/294 smooths the decay of the wave considerably. The good news is therefore, that you can fit more realistically IT portfolio costs using our cost allocation equation. The bad news is that with more degrees of freedom the curve fitting becomes more involved (more on tools and techniques to support the mathematics later on).

Cumulative cost allocation equation

It is insightful to partition a portfolio P into sets of IT projects that are somehow related. This relation could be a business unit, or the set of corporate wide systems, or systems in a similar phase: operations, development, retirement, outsourced, dormant, and so on. In this way, accumulating the individual projects does not lead to information loss about the type of investment. Moreover, P is then divided into sensible IT investment chunks just like an asset portfolio is partitioned into sensible categories. You can obtain the corporate view, by adding all these parts into one large function that describes the accumulated costs of the enterprise. In order to do so, we need accumulated cost functions for these groups of systems. From formula 38 we can infer formula 39:


 
(39) a(t) = $\displaystyle \int_0^t c(t)dt$
    = $\displaystyle \int_0^t a\cdot t^{\alpha}\cdot e^{-b\cdot t^{\beta}}dt$
    = $\displaystyle {ab^{-{(\alpha+1)\over\beta}}\over \beta}
\left(\Gamma({\alpha+1\over\beta}) -
\Gamma({\alpha+1\over\beta},b\cdot t^\beta)\right)$

where $\Gamma(x)$ is the $\Gamma$ function extending the factorial on natural numbers to real (and complex) numbers. The function name a stands for accumulated cost function. In our case, we can express $\Gamma$ using Euler's identity:


\begin{displaymath}\Gamma(x) = \int_0^\infty t^{x-1} e^{-t} dt \end{displaymath}

An interesting property of Euler's $\Gamma$ is that  $\Gamma(n+1)=n!$for all $n\ge0$. The 2-adic  $\Gamma(a,x)$ is the upper incomplete $\Gamma$ function we already introduced to calculate the accumulated costs for the seismic IT impulse and the operational cost tsunami. We note that for  $(\alpha+1)/\beta = 0, -1, -2, -3, \ldots$the $\Gamma$ function is not defined, and therefore also formula 39 is not defined.

Change in cost equation

Although at CMM level 1 organizations staff buildup on a per project basis is not a feasible metric to collect corporate wide, we can use formula 38 to project staff size globally. We simply take the derivative of formula 38, which leads to formula 40 for quantitative IT portfolio management:


(40)  \begin{displaymath}
{\it cc}(t) = (\alpha -b\beta t^\beta)at^{\alpha-1}e^{-bt^\beta}
\end{displaymath}

The function name ${\it cc}$ stands for change in cost function. We can calculate the peak time for the entire IT investment by solving the equation  ${\it cc}(t)=0$. This leads to formula 41:


(41)  \begin{displaymath}
{\it pt}= \left({\alpha\over b\beta}\right)^{1/\beta}
\end{displaymath}

In Figure 5, we depicted the seismic IT impulse, and the resulting operational cost tsunami. With formula 41 we can calculate that the development peak load is at 17.2233 months, whereas the peak load for operations is at 90.30533 months--more than a factor 5 later than the development peak load. We can also calculate the peak costs, by simply calculating  $c({\it pt})$. This leads to formula 42:


(42)  \begin{displaymath}
\pc = a\left({\alpha\over b\beta e}\right)^{\alpha/\beta}
\end{displaymath}

For our example portfolio, we find peak efforts of 76.66294 million dollar for development, and 14.95207 million dollar top cost for operations--a factor 5 less than the development peak costs.

Putting it all together

The abstract formulas 3839 and 40 can be used to obtain a corporate view of your IT portfolio. For a start, we can calculate total cost of ownership for the class of cost allocation functions we defined in formula 38. Formula 43 of quantitative management of IT portfolios is:


(43)  \begin{displaymath}
{\it tco}= {ab^{-{(\alpha+1)\over\beta}}\over \beta}\Gamma({\alpha+1\over\beta})
\end{displaymath}

For our example portfolio, we summarized the development and operations coefficients in Table 8. These coefficients are used to calculate total cost of ownership with formula 43.


  
Figure 12: Superposition of various functions for our example portfolio M.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/corporate.ps,width=10cm}
\end{center}\end{figure}


 
Table 8: Statistically fitted constants for development and minimal cost of operation for the IT portfolio M.
constants development deployment
a 7.514287 0.02643959
$\alpha$ 1.258007 1.692713
b 0.07304098 0.003407055
$\beta$ 1 1.317413

So, using formula 43 we find 3149.142 million dollar for development and 2262.23 million dollar for minimal cost of operation. If you look closer at formula 39, you can easily see that tco is the first part of the formula which is indeed independent of time. So, you could see the formula as follows: the first term is the price you will eventually have to pay for that part of the IT portfolio that is described by the cost equation. The second term is time dependent: it is the repayment rate ensuring that the IT portfolio is developed and deployed. Compare this to building and living in a house: the bank pays the sum that you cannot afford to pay instantly. The mortgage is the time dependent part that tells you when and how much installment is due to ensure that you can build and inhabit the house. So in fact, formula 39 gives you the TCO plus your debt to build and deploy the IT portfolio over time. This leads to the repayment factor expressed by formula 44:


(44)  \begin{displaymath}
{\it rf}(t) =
{ab^{-{(\alpha+1)\over\beta}}\over \beta}
\Gamma({\alpha+1\over\beta},b\cdot t^\beta)
\end{displaymath}


  
Figure 13: Typical patterns when you accumulate IT investment costs over time.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/portfolio.ps,width=11cm}
\end{center}\end{figure}

Given an IT portfolio P, that is partitioned in  $P_1,\ldots,P_n$, for which cost functions of the class defined in formula 38 are known, then the corporate cost of ownership for the portfolio at a given time t is given by formula 45:


(45)  \begin{displaymath}
{\it cco}(t) = \sum_{i=1}^n{\it tco}_i -\sum_{i=1}^n{\it rf}_i(t)
\end{displaymath}

In Figure 12, we superimposed for the example portfolio Mthe accumulated cost equation, the current cost equation, and the change in cost equation for the seismic IT impulse and the ensuing operational cost tsunami. This gives you a graphical view of the corporate cost of ownership of IT portfolio M. The accumulation of cost functions for many business units does not look as regular as Figure 12, so in Figure 13 we plot an accumulation of a variety of IT investments over time. These plots show typical patterns you can expect to find.

Total IT spending comprises IT investments started on various time stamps, with varying intensity, and varying start times of development within such investments. The current situation in many organizations is that they only have insight in the annual total IT spending cost, but not in how these costs are partitioned in related IT investments. By grouping the IT investments over time, and analyzing these partitions, you can try to recover the cost allocation functions for major development, operations, and enhancement efforts. An IT portfolio database is a necessary--but not a sufficient--condition for this. In Figure 13 we composed an example to illustrate this: we superimposed a number of major IT investments of our fictious company, their ensuing operational costs plus their enhancement costs. Some of them can be recognized by their peaks, others are faded out by the more dominating waves. The left-hand plot in the upper row of Figure 13 shows the accumulation of all the development costs over time for the IT investments. The middle plot represents accumulated operational costs over time, and the right-hand plot is their sum: the total accumulated costs. The lower row plots the cost allocations for the IT investments. They are the cost allocation for development, operational and enhancement costs, and their sum. These accumulations give less insight in how the costs are build-up than if you would have had the cost allocation functions from the onset. The challenge is to uncover the cost allocation formulas from the data that is collected in the IT portfolio database. This is not an easy task, and there is no guarantee that you can completely recover the actual cost allocation formulas that belong to the IT investments of the past. The major investments leave so many traces that their recovery is often within reach. They are the cost waves from the past that are still dominating the current budgets.

Quantitative support for decision making

Next we turn our attention to how quantitative information can support decision making. Of course, the entire decision making process comprises of many factors, of which quantitative input is one aspect. Currently, not much quantitative data supports strategic decisions on IT investments. We agree with Strassmann who writes on this topic [122, p. 261]:

Credible financial analyses are necessary before top management can act with an understanding of the consequences of any decision.

We elaborate on quantitative support for a decision that many executives face when IT investments are due: outsourcing or not? For many organizations owning serious sized IT portfolios it is not possible to construct and maintain all the software in-house. So this is partly an in-house matter, and partly taken care of externally. At the executive level, decisions should be made to that end. And quantitative support will help in making the most effective decisions. There are the following possibilities for IT systems or IT portfolios:

You can obtain quantitative information to support decision making by using various instantiations of formula 1, and if entire portfolios are considered for outsourcing, these formulas suffice to support decision making. As we noted earlier, these formulas have the proviso that you should not use them as the only means for single system decisions for contracting purposes. And because make-commission decisions are not only taken at the portfolio level, but also at the project level, we will develop some new formulas supporting decision making for contracting purposes. For these formulas we need richer information than estimated development time (or estimated total cost).

The smaller the amount of systems that are subject to make-commission decisions, the more realistic it becomes that you have to know the amount of function points involved. This amount can be obtained by carrying out a function point analysis. So we assume for the moment that for the part of the IT portfolio that is subject to make-commission decisions we know for each IT project its size in function points. We recall it is not necessary to know exactly what function points are except that it is a synthetic measure indicating the size of IT systems.

In-house development

First we need to obtain an idea of the productivity for in-house development of MIS systems. In CMM level 1 organizations there is no historical data around to infer productivity rates, so we use benchmarks to compensate for that. In [66, p. 184,189] six MIS development benchmarks are present that illustrate the relation between the productivity and size (based on many projects). Five of the benchmarks are derived by us from a graph (using a ruler), and one of them was stated in a table. Based on this, we fit a curve through these benchmarks. Formula 46 for quantitative IT portfolio management, expressing the productivity for MIS development (measured in number of function points per staff month) for a given size in function points, is as follows:


(46)  \begin{displaymath}
p_i(f) = 1.627 - 38.373\cdot e^{-0.06222733f^{0.424459}}
\end{displaymath}

In Figure 14, the six dots are representing the benchmarks taken from [66] and the plot through the benchmarks is formula 46. Recall that the subscript i stands for information systems development. As an example, the productivity for in-house staff doing a 1000 function point MIS project is 13.6function points per staff month (according to benchmark). We note that the asymptotic behavior of formula 46 is not in accord with reality: $p_i(\infty)=1.627$. But for projects approaching infinite size, the productivity approaches to zero. So formula 46 should not be used for projects larger than 100.000 function points.


  
Figure 14: Productivity for MIS development projects.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/pif.ps,width=10cm}
\end{center}\end{figure}

Using formula 46, we derive alternative formulas of earlier derived formulas supporting quantitative IT portfolio management. But we can also infer an alternative for the benchmark  f0.39=d. For this we use another benchmark taken from [66, p. 185] that is called the assignment scope for in-house MIS development. An assignment scope for a certain activity is the amount of software (measured in function points) that you can assign to one person for doing that particular task. Note that the assignment scope is relatively size independent. We have seen two such assignment scopes: 150 as the assignment scope for average development over all sorts of IT systems and 750 for average operational costs. Indeed, depending on the task, the assignment scope can be different. For all activities that are usually done while developing MIS systems, the average assignment scope is 175 function points. Formula 47 for quantitative IT portfolio management calculates the amount of calendar months an in-house MIS development project takes.


(47)  \begin{displaymath}
d_i(f) = {175\over p_i(f)} = {175\over 1.627 - 38.373\cdot
e^{-0.06222733f^{0.424459}}}
\end{displaymath}


  
Figure 15: Comparing two different benchmarks for in-house MIS development.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/dif.ps,width=10cm}
\end{center}\end{figure}

For example, a 1000 function point development project takes  di(1000)=12.9 calendar months. We plotted the earlier used benchmark ( d=f0.39) against its alternative formula 47 to give you an idea of the deviations (viz. Figure 15). According to the earlier used benchmark, development takes 14.8 months. The plot with an asymptote around the horizontal line at 107.6is formula 47, the dotted curve is Jones' benchmark  d=f0.39. As you can see there are deviations so there is no single correct formula in CMM level 1 organizations. Fortunately, the formulas for quantitative IT portfolio management often provide enough quantitative data to help in deciding on portfolio investments.

We derive a formula similar to formula 1 calculating total cost of development for MIS projects using the just derived formula 47. Formula 48 calculates total cost of development for MIS projects for a given function point size.


(48)  \begin{displaymath}
{\it tcd}_i(f) = {rw\over12}\cdot{f\over p_i(f)} = {rw\over12}\cdot{f\over 1.627
- 38.373\cdot e^{-0.06222733f^{0.424459}}}
\end{displaymath}

As before, r is the fully loaded compensation rate and w is the number of working days in a calendar year. We derived formula 48 as follows. Using formula 47, we know the schedule in months. Then using the assignment scope, we know that the staff necessary to do this project must be f/175. So the total effort is  $f/175\cdot 175/p_i(f)$ which amounts to f/pi(f)calendar months. The monthly compensation for this is rw/12, thus their product yields formula 48. So a 1000 function point project will cost in our fictious company  ${\it tcd}_i(1000)=\$1.2$ M ( $r=\$1000, w=200$).

Outsourced development

As in the previous section, we derive the same formulas but then specific for the outsource industry. We distinguish them from the in-house formulas by using the subscript o which is short for outsourcing. We start with the productivity for outsourced software projects.

Analogously to the in-house situation we found in [66, p. 267,271] six outsource development productivity benchmarks. Again, five of the benchmarks stem from a graph, and one of them was stated in a table. We fit a curve through the benchmarks. Formula 49, expressing the productivity for outsource development (measured in number of function points per staff month) for a given size function points, is as follows:


(49)  \begin{displaymath}
p_o(f) = 2.63431 - 21.36569\cdot e^{-0.01805819f^{0.5248877}}
\end{displaymath}

As an example the productivity of a 1000 function point project done by outsourcers is benchmarked on  po(1000)=13.8 which is a bit higher than in-house development productivity for 1000 function point projects ( pi(1000)=13.6). The asymptotic behavior of formula 49 is not in accord with our experience. Very large projects do not have a lower bound of  $p_o(\infty)=2.63431$ for productivity. So formula 49 should not be used for projects larger than 100.000 function points.


  
Figure 16: Productivity for outsource development projects.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/pof.ps,width=10cm}
\end{center}\end{figure}

In Figure 16, the six dots are representing the benchmarks taken from [66] and the plot through the benchmarks is formula 49. We use a benchmark taken from [66, p. 269]: the assignment scope for outsource development. For all activities that are common in the outsource industry, the average assignment scope is 165 function points. Using this we can infer formula 50 expressing the amount of calendar months for an outsource development project, given its size in function points.


(50)  \begin{displaymath}
d_o(f) = {165\over p_o(f)} = {165\over 2.63431 - 21.36569\cdot
e^{-0.01805819f^{0.5248877}}}
\end{displaymath}


  
Figure 17: Comparing two different benchmarks for outsourcing.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/dof.ps,width=10cm}
\end{center}\end{figure}

Also for outsourcing there is an earlier benchmark that relates function point size to project duration in calendar months:  d=f0.38. We plotted this benchmark against formula 50 to indicate the deviations (viz. Figure 17). The solid plot is formula 50, the dotted curve is Jones' benchmark  d=f0.38.

Now we can derive total cost of development, similarly to formula 48. Formula 51 calculates total cost of development for outsource projects for a given function point size.


(51)  \begin{displaymath}
{\it tcd}_o(f) = {rw\over12}\cdot{f\over p_o(f)} = {rw\over1...
...{f\over
2.63431 - 21.36569\cdot e^{-0.01805819f^{0.5248877}}}
\end{displaymath}

So, a 1000 function point system costs  ${\it tcd}_o(1000)= 13.5$ million dollars (we took  $r=\$1000, w=200$).

Quantitative comparison

With the just derived formulas, we can compare development costs of IT systems done in-house with outsourcing such systems. Of course, there will be different daily rates, and in case of off-shore outsourcing also different working days per year. These different numbers are not hard to obtain when you are discussing contracts with an outsourcer.

As an example, suppose you need a 10000 function point information system and you want to explore the possibilities for outsourcing. Of course, competitive issues play a role in such decisions. For a start, externals tend to share their knowledge obtained in your project by doing similar jobs for others. By way of anecdotal evidence consider a quote taken from [45, p. 61], clearly showing that your trade secrets are not always safe when you ask others to implement a discretionary effort:

He showed Michalik a technology that an engineering friend had built for the Swiss bank UBS; [he] told Michalik that he had a killer app on his hands.

So when the IT system is a discretionary effort, you may not want to outsource it, even if the development cost are markedly lower. For instance, the expected return in combination with being the first in your branch, could potentially reap more benefits than lower costs, and the danger of being imitated as soon as returns are apparent for the competitors. Or, if there is no other option than to outsource, consider to protect vital parts of the innovation by one or more patents. Such considerations are outside the scope of this paper. We solely provide the decision maker with quantitative data forming one ingredient of the decision making process.

Let $r_i=\$666$ be the fully loaded daily rate for in-house MIS development. We assume  wi=wo=w=200 working days per year. This leads to a burdened monthly compensation of 11100 dollar. Let  $r_o=\$1000$ be the daily rate of an outsourcer, which leads to a monthly compensation of 16666 dollar. The monthly compensation we took is not a contrived difference, but in accordance with 2002 compensation rates.


 
Table 9: Several indicators to compare in-house development with outsourcing.
indicator dimension make commission
r $ 666 1000
w days 200 200
p(f) FP/SM 3.35 4.84
d(f) CM 52.24 34.10
${\it tcd}(f)$ M$ 33.13 34.43
${\it cf}(f)$ % 39.0 27.0
${\it cl}(f)$ % 33.7 33.7


  
Figure 18: Comparing MIS development to outsource development productivity.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/pifpof.ps,width=10cm}
\end{center}\end{figure}


  
Figure 19: Comparing MIS development to outsource development schedules.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/difdof.ps,width=10cm}
\end{center}\end{figure}


  
Figure 20: Comparing MIS development to outsource development costs.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/tcdiftcdof.ps,width=10cm}
\end{center}\end{figure}


  
Figure 21: Comparing MIS development to outsource development chance of failure.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/cfifcfof.ps,width=10cm}
\end{center}\end{figure}


  
Figure 22: Comparing MIS development to outsource development chance of late projects.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/clifclof.ps,width=10cm}
\end{center}\end{figure}

In Table 9 we summarized some important indicators to support decision making. We used some abbreviations as well: FP/SM stands for function points per staff month, CM is short for calendar months, M$ stands for a million dollar US. In both cases the initial development costs are equal, since the outsourcers are faster with larger projects (according to benchmark). They both have a 33+% chance of being late. Note that schedule slips of in-house development is less expensive than schedule slips of outsourcers. The chance of failure is 12% lower, though. If speed to market is important, and information leaks to the competitor are not too much of a problem, then the quantitative data supports an outsourcing decision. If you expect the system to be mission-critical during its 10+ years of deployment, then it may be better to maintain and enhance it in-house. If the system is planned well in advance, the longer development schedule is not too much of a problem. As you can see, the final decision depends on more than data such as summarized in Table 9.

To get an idea of such comparisons for various software sizes, we plot Figure 18. The in-house productivity expressed by formula 46 is the solid curve, and the dotted curve is formula 49 calculating the productivity of the outsource industry. The productivity of smaller projects is better for in-house projects than by outsourcers. But larger projects are more productively done by outsourcers. One of the reasons for this higher productivity is unvoluntary unpaid overtime (so not necessarily better skills). This also clarifies why the chance of late delivery is not that different, and that the schedule in calendar months is much shorter. We depicted the schedule as function of size in Figure 19. Formula 47 is the solid curve in Figure 19, and formula 50 is the dotted curve. The development schedule of outsourcers is much shorter when the system-size increases. In Figure 20 we depict how this translates into development budgets. The solid curve is formula 48 and the dotted curve is formula 51. The comparisons for the 1000 function point example showing that the costs are not dramatically different is an overall trend, both the solid cost curve in Figure 20 and the dotted outsource variant are less deviating than the development schedules might have insinuated.

Another interesting comparison is the risk dimension. We plot in Figure 21 formulas 28 (the solid curve) and 30 (the dotted curve). The chance of failure for outsource projects is smaller than for in-house development of MIS applications. This is not true for the exposure of being late. In Figure 22 we depicted formulas 32 and 34 expressing the change on late projects in-house and by an outsourcer respectively. The dotted curve expressing late outsourced projects is above the solid curve for late MIS projects done in-house. So although the chance of failure is smaller, the chance of being late is larger. This might be due to the fact that when you sign a contract with an outsourcer, not delivering is an obvious contract violation. So outsourcers deliver, but suffer from more time/effort overruns than the in-house case. In-house development fails more often, but if they do not fail, they deliver less late.

This type of quantitative input supports strategic decision making, but other factors are as important: the business goal of the system, its criticality for the business, the deepness of the stakeholders' pockets, the competitive landscape, etc.

Cost-time analysis and lifetime analysis

We could develop many more formulas for quantitative IT portfolio management, supporting strategic decision making for IT portfolios and sanity checking on It projects. But at this point we think it is worthwhile to turn our attention to a more fundamental issue. It is the issue whether it is possible to incorporate our empirically found formulas within an existing body of mathematical and statistical knowledge. For, if we are able to connect our work to established theory, we can benefit from findings in that area, and insights from these areas could lead to insights in the formulas we developed thus far. Others have tried to connect quantitative IT portfolio management to modern portfolio theory, and we showed that this correspondence is not as promising as it seemed at first. This was also found in [17].

After a careful study of our empirically found formulas we are confident to have found this existing body of knowledge. It is called lifetime analysis or failure time analysis as is applied to cancer research, cure rate estimation, reliability analysis, and other areas ranging from returns on the NYSE to the physical laws that the crushing of coal are subject to. This analysis will follow shortly, and it made us think of our work as cost-time analysis.

In the practical software engineering area, there is no established tradition of mathematically describing important phenomena in order to control the engineering process, and the ensuing artifacts [77,44]. In part this is due to the fact that the mathematics is not closely connected to obvious applications that are of immediate use in practice. For fields where this is obvious, such as software cost estimation, not many people relate their work to common practice in mathematics or statistics. Let us illustrate this. Recall Boehm [6] and Parr [91] who go to great length to infer alternatives to Rayleigh distributions whereas they could have used a decentralized version. Another indication is the ongoing discussion how useful mathematics is for the software practitioner. We refer to Glass who in IEEE Software [44] writes:

If these mathematics-based [approaches] ...are truly important to the field, then it is in some ...application of software that I have not yet encountered.

We assume Glass thinks of formal methods, and indeed, it is not obvious how (and when) to apply formal methods in industry. But software cost estimation is omnipresent, and cannot be done without mathematical and statistical support. So the practical software area that Glass apparently never encountered where math is staring you in the face is software cost estimation, and the related area of quantitative IT portfolio management--the subject of study in this paper.

Due to the traditional lack of applying mathematics successfully in practical software engineering, the mathematics that is being used is not related to the rich body of standard mathematics and statistics. It seems that progress in research on software cost estimation is bogged down by the assumption that you first have to theoretically justify how knowledge accumulation and problem solving processes are modeled mathematically, in order to analytically derive software cost estimation formulas. In other areas where mathematics and statistics are necessary, people explicitly refrain from such practices. For, this only leads to Mickey mouse mathematics3 lacking general applicability, since it is based on too idiosyncratic assumptions reflected by personal or otherwise limited experience, which is a bad advisor when it comes to mathematics.

As another illustration, consider this: in a textbook by Londeix on cost estimation for software development [79, p. 90] a learning function 2atn is considered in an exercise for which the manpower distribution and cumulative manpower cost are to be calculated. Londeix rejects this particular distribution on the following grounds [79, p. 195]:

We can verify that when n increases the time scale is reduced. An increase of n gives a sharp rise of the peak manning relative to its value for n=1. Therefore, a non-linear learning curve would give an early more peaky model which would not be helpful to represent the reality of software development.

This is further justified in the textbook by a reference to Norden's model [89]: since Norden is using a linear learning curve, the nonlinear learning curve cannot be correct. Needless to say that this line of reasoning is erroneous. A learning curve can never be inappropriate because someone else is using a different curve. Irrespective of his line or reasoning, Londeix is mistaken: we know from empirical research reported on in [98] that nonlinear learning curves accurately model IT intensive programs. The only limitation that Londeix should have questioned is that the assumption that n is a natural number is too restrictive.

So the general tendency in software cost estimation is to first restrict one selves and within those limitations try to model cost estimation phenomena. This is the wrong ordering as we will argue below.

Ready, fire, aim

In general it is not smart to fix a specific (theoretical) cost-time model in advance--and then see if this assumption fits reality. Unless the theoretical derivation is conclusive, this should always be done the other way around: you decide on a--hopefully general enough--family of distributions, and by curve fitting the most likely distribution for that particular effort will turn up as the result of statistical analysis.

The fixed-model approach is not the way to go when you just want to control things accurately, but this ready-fire-aim approach seems the norm in software engineering. It is much better to skip the motivational part all together, and instead do an educated guess that the relation you wish to describe probably fits a very general family of distributions like our family of cost allocation functions 38 is fortuitously doing. Or discriminate among several parametric models, to see which one should be used at all [100]. To quote Lawless [76, p. 13] who touches upon this issue in the realm of lifetime analysis:

Extensive motivation is not provided for the various models. To do this would require a thorough discussion of aging and failure processes and would take us outside the book's intended subject area. Indeed, the motivation for using a particular model in a given situation is often mainly empirical, it having been found that the model satisfactorily describes the distribution of lifetimes in the population under study. This does not imply any absolute ``correctness'' of the model.

Kalbfleisch and Prentice go a step further, when they write about failure time data analysis [68, p. 3]:

In many situations it is also important to develop nonparametric and robust procedures since there is frequently little empirical or theoretical work to support a particular family of failure time distributions.

They even indicate that you might question the assumption that there would be an analytic relation at all. We are in complete agreement with the observations done by Lawless, Kalbfleisch, and Prentice.

Also consider this: there can be some time between empirically found mathematical tools and their formal underpinning. For instance, the so-called Weibull distribution was empirically found in 1933 by Rosin and Rammler to describe the crushing of coal [107] (it was later attributed to Weibull [131]). However, the first full theoretical derivation based on physical principles stems from 1995 [14]. In the mean time, the distribution has been very instrumental in lifetime analysis [76].

The statistical ready-fire-aim practice is not unique to software engineering. This issue is also noted in other areas. For instance, in the realm of cure rate estimation in clinical trials for diseases such as lymphoma and breast cancer, similar critical remarks, eloquently expressing the viewpoints of [68, p. 67], are made [95]:

In the last decade, mixture models under different distributions, such as exponential, Weibull, log-normal and Gompertz, have been discussed and used. However, these models involve stronger distributional assumptions than is desirable and inferences may not be robust to departures from these assumptions. In this paper, a mixture model is proposed using the generalized F distribution family. Although this family is seldom used because of computational difficulties, it has the advantage of being very flexible and including many commonly used distributions as special cases. The generalised F mixture model can relax the usual stronger distributional assumptions and allow the analyst to uncover structure in the data that might otherwise have been missed.

Indeed, using fixed or restricted families of distributions also restricts the possibility to fit your data, and as the authors of [68,95] correctly observe, using a general distribution can uncover structure in your data that using restricted models can be missed. The generalized F distribution was originally intended as a selection process to detect which known less general distribution fits data most appropriately [100].

So, instead of trying to unravel why a particular curve is appropriate to model a correlation you can better skip that part for later and readily begin to fit curves that are approximating the relation accurately. Any formula, satisfying your purpose is by definition useful--the answer why this formula does fit is just a nice-to-have. As you will see shortly, it turns out to be utterly useful to relate our cost-time formulas to established practice in statistical modeling.

The Generalized $\Gamma$ distribution

In order to do so, it is necessary to turn formula 38 into a so-called probability density function. We normalize formula 38 so that the area under the curve becomes 1, by dividing formula 38 by the area under it. We use formula 39 for this. Formula 52 is the probability density function variant of formula 38:


(52)  \begin{displaymath}
f(t) = {\beta b^{{\alpha+1\over \beta}}\over\Gamma({\alpha+1\over
\beta})}\cdot t^\alpha e^{-bt^\beta}
\end{displaymath}

Formula 53 is the cumulative distribution function belonging to formula 52:


(53)  \begin{displaymath}
F(t) = 1- {\Gamma({\alpha+1\over \beta},bt^\beta)\over\Gamma({\alpha+1\over
\beta})}
\end{displaymath}

Indeed normalization means that  $F(\infty)=1$, which is not hard to check. Now we can easily relate formulas 52 and 53 to established statistical tools.

Theorem 1   Formula 52 is equivalent to the generalized $\Gamma$ distribution [115].

The generalized $\Gamma$ distribution is defined as follows:


\begin{displaymath}g(t) = {\beta\over\Gamma(k)\cdot\theta}\left({t\over\theta}\right)^{k\beta-1}
e^{-\left({t\over\theta}\right)^\beta}
\end{displaymath}

If we take  $\alpha = \beta k-1$ and  $b=1/\theta^\beta$, we can easiliy find that formula 52 turns into the generalized $\Gamma$ distribution. Vice versa, if we take  $k=(\alpha+1)/\beta$and  $\theta=(1/b)^{1/\beta}$, this reduces the generalized $\Gamma$ distribution to formula 52. So, both distributions are the same.

Obviously then also all the other artifacts coincide, e.g., their cumulative distribution functions are the same.

Lifetime data analysis

So now we have related our normalized empirically found cost-time family of relations to an existing family for which there are tools and techniques available that we can borrow to effectively deal with the practical side of applying the necessary statistical analyses. This field is not modern portfolio theory, but lifetime data analysis.

In the 1930s functions similar to the generalized $\Gamma$ distribution were used to the analyze the distribution of economic income [2,25]. But also in 2001, it was shown that stock returns on the New York Stock Exchange can be approximated by a generalized log gamma distribution [11]. So there are relations between quantitative IT portfolio management and the returns on stock options, but as argued, these relations are not what you would expect, namely that security portfolio theory founded by Markowitz corresponds in some natural way to issues relevant for IT portfolios.

A major application field of the generalized $\Gamma$ distribution is in so-called lifetime data analysis [76]. This branch of statistics is also referred to as survival time, or failure time analysis [68] and is widely used in engineering to support reliability analysis, and in the biomedical sciences. While the notion of lifetime should be taken literally, e.g., in biomedical science, in other fields it merely indicates a non-negative-valued variable. Our application comprises cost-time analysis for IT portfolios. There is a one-to-one correspondence with concepts from lifetime analysis, and the field of software cost estimation, in particular quantitative IT portfolio management.

Let us first review a few of the most basic concepts of lifetime data analysis (this information can be found in any book on lifetime data analysis), so that we can illustrate the strong correspondence with IT cost issues. Suppose T is a nonnegative random variable representing a lifetime, e.g., of individuals in a population. Let f(t) be the probability density function of T. Then the distribution function F(t) is defined as follows:


\begin{displaymath}F(t) = P(T\le t) = \int_0^t f(x)dx\end{displaymath}

where $P(T\le t)$ denotes the chance that the lifetime T is between zero and t. The survival function is defined as:


\begin{displaymath}S(t) = P(T\ge t) = \int_t^\infty f(x)dx\end{displaymath}

As we can see, when  $t\rightarrow\infty$, then the distribution function F will approach 1, whereas the survival function will approach zero:  $S(\infty)=0$. In other words, it becomes harder and harder to survive when time proceeds. An important notion in lifetime analysis is the so-called hazard function, also known as the hazard rate, the age-specific failure rate, or the more poetic name: force of mortality:


\begin{displaymath}h(t) = {f(t)\over S(t)}\end{displaymath}

The hazard function expresses the instantaneous death rate at time t given that there is survival up till t. The hazard rate thus describes the way in which the instantaneous death of an individual (or the failure of some device) changes with time. So when you follow subjects from birth to death the hazard rate can be a bathtub curve: right after materialization there can be childhood diseases, then a relatively constant rate, and then the rate will go up again. The cumulative hazard function is easily defined:


\begin{displaymath}H(t) = \int_0^t h(x)dx\end{displaymath}

With these basic definitions it is possible to derive a number of fundamental relations between the various notions. For a start, the probability density function can be written as a product of two intuitive functions:


\begin{displaymath}f(t) = h(t)\cdot S(t)\end{displaymath}

And since f = -S', it is not hard to find that


\begin{displaymath}S(t) = e^{-\int_0^t h(x)dx}\end{displaymath}

Combining the above two formulas:


\begin{displaymath}f(t) = h(t)\cdot e^{-\int_0^t h(x)dx}\end{displaymath}

So, the hazard function can serve as a means to derive the probability density function. For instance, suppose that the hazard function is constant:  $h(t)=\lambda$. Then the probability density function is  $f(t)=\lambda e^{-\lambda t}$, which is the exponential distribution (the waiting-time distribution of a Poisson process) that is often used in reliability and lifetime analysis.

Correspondence to software cost estimation

Let us give a second example, to start showing the one-to-one correspondence. Norden presupposed that the effectiveness of a group of engineers increases progressively during the life cycle of a project. He represented this by a function p(t), where the p is an abbreviation for the problem function, indicating the level of skill available to solve the problems. He assumed this function to be linear. In fact, the function p is what in lifetime analysis is called the hazard function. Norden thus assumes a linear hazard rate. With the basic formulas of lifetime analysis, finding the probability density function is immediate:


\begin{displaymath}
f(t) = at e^{-at^2/2}
\end{displaymath}

This is the Rayleigh distribution that Norden derived, using differential calculus. In lifetime analysis, the Rayleigh distribution is also known as the linear hazard rate distribution [4,72].
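The two examples above are easy to reproduce numerically. The sketch below builds $f(t)=h(t)\,e^{-\int_0^t h(x)dx}$ directly from a given hazard rate and checks it against the exponential and Rayleigh densities; the numerical values are hypothetical.

\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

def density_from_hazard(h, t):
    # f(t) = h(t) * exp(-H(t)), with H the cumulative hazard
    H = cumulative_trapezoid(h(t), t, initial=0.0)
    return h(t) * np.exp(-H)

t = np.linspace(0.0, 10.0, 2001)

lam = 0.5                                    # constant hazard -> exponential
f_exp = density_from_hazard(lambda x: np.full_like(x, lam), t)
assert np.allclose(f_exp, lam * np.exp(-lam * t), atol=1e-4)

a = 0.3                                      # linear hazard -> Rayleigh (Norden)
f_ray = density_from_hazard(lambda x: a * x, t)
assert np.allclose(f_ray, a * t * np.exp(-a * t**2 / 2), atol=1e-4)
\end{verbatim}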

In the paper [91], Parr proposed an alternative to the Rayleigh distribution to estimate software costs. This model is sometimes called the sech square model due to its formulation:


\begin{displaymath}V(t) = (1/4){\rm sech}^2((\alpha t+ c_3)/2) \end{displaymath}


  
Figure 23: The shape of logistic growth.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/logistic.ps,width=10cm}
\end{center}\end{figure}


  
Figure 24: The damped sine model.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/damped.ps,width=10cm}
\end{center}\end{figure}


  
Figure 25: The survival rate S, hazard rate h, and its product f: the damped sine model.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/dsh.ps,width=10cm}
\end{center}\end{figure}

We note that  ${\rm sech}(x)=2/(e^x+e^{-x})$, and that sech stands for the hyperbolic secant (secans hyperbolicus). Parr goes to great lengths in deriving this formula using differential calculus. We think that, without knowing it, he simply proposes a logistic hazard rate modeling his ideas on the rate at which problems are solved in software development. When you assume logistic growth for problem solving, the sech square formula follows immediately using the basic relations of lifetime analysis. The hazard function Parr actually assumes is as follows (viz. Figure 23):


\begin{displaymath}h(t) = {\alpha\over 1 + ae^{-\alpha t}} \end{displaymath}
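That the sech square curve indeed follows from this logistic hazard rate is easy to verify numerically. In the sketch below the coefficients are hypothetical, and the offset is taken as $c_3=-\log a$, which is what the lifetime-analysis derivation suggests; the resulting density is then a constant multiple of Parr's curve.

\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

alpha_, a_ = 1.2, 5.0                        # hypothetical coefficients
t = np.linspace(0.0, 20.0, 4001)

h = alpha_ / (1.0 + a_ * np.exp(-alpha_ * t))        # logistic hazard rate
H = cumulative_trapezoid(h, t, initial=0.0)          # cumulative hazard
f = h * np.exp(-H)                                   # f = h * S

c3 = -np.log(a_)                                     # assumed offset
parr = 0.25 / np.cosh((alpha_ * t + c3) / 2.0)**2    # (1/4) sech^2(.)

ratio = f / parr
assert np.allclose(ratio, ratio[0], rtol=1e-3)       # constant ratio
\end{verbatim}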

Looking at Figure 23 you can see that the hazard function initially resembles a linear hazard rate, so the beginning resembles a Rayleigh curve, and later on the hazard rate becomes constant, so there it approaches an exponential density function. So it is just a smooth mix of linear and constant hazard rates. Maybe some IT projects do have a problem solving curve that resembles the one of Figure 23. Then again, maybe others don't. For instance, in [58,57] the work of Parr is used to derive an alternative to Parr's work for the phasing of resources: the damped sine model. These authors also use differential calculus to theoretically derive an effort distribution function based on their ideas of problem solving rates. It is the damped sine model (see Figure 24):


\begin{displaymath}f(t) = c\cdot e^{-at}\sin(bt)\end{displaymath}

The shape of the damped sine model could very well be an effort distribution, albeit that this function oscillates around zero and thus can become negative, which is not intuitive for cost-time analysis. To improve our intuition for this model we calculated the survival rate and the hazard rate (see Figure 25 for plots of f, S, and h):


\begin{eqnarray*}S(t) & = & {c\cdot e^{-at}\over a^2+b^2}\cdot(b\cos(bt)+a\sin(bt))\\
h(t) & = & {a^2+b^2\over a+b\cot(bt)}
\end{eqnarray*}
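The closed forms above are easy to check numerically; the sketch below compares the stated survival function with a direct numerical integration of $f$ (the coefficients are hypothetical, and the upper integration limit is chosen so large that the neglected tail is negligible).

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

a, b, c = 0.4, 0.9, 1.0                       # hypothetical coefficients
f = lambda x: c * np.exp(-a * x) * np.sin(b * x)

def S_closed(t):
    # survival function of the damped sine model, as stated above
    return c * np.exp(-a * t) * (b * np.cos(b * t) + a * np.sin(b * t)) / (a**2 + b**2)

for t0 in (0.5, 1.0, 2.0, 3.0):
    S_num, _ = quad(f, t0, 60.0, limit=200)   # exp(-a*60) makes the tail negligible
    assert np.isclose(S_num, S_closed(t0), rtol=1e-4)
\end{verbatim}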


Looking at Figure 25, the hazard rate seems linear at first, and then it tends asymptotically to infinity. You could say that this hazard rate is roughly the inverse of the logistic hazard rate. So this is a smooth mix of a Rayleigh distribution at the beginning and an infinite hazard rate at the end. The latter is rather nonintuitive, and may be due to the following. One of the things that is modeled in [58,57] is that a project has an endpoint, where the effort model should be exactly zero. This can only be modeled when the model produces a zero at some positive point in time. The sine curve has this property. This then leads to the unnatural hazard rate. We think that trying to model this type of assumption does not help in coming to grips with cost-time analysis. It only complicates matters considerably. Apart from that, the assumption that effort should be exactly zero does not make much sense for high-momentum work. As is reported in [30, p. 63], it requires fifteen minutes or more of concentration to reach the state of flow that is necessary to do engineering, design, development, writing, or similar work. This implies that it is not necessary to model such efforts at a granularity smaller than fifteen minutes. As you can see, modeling the fact that a project ends leads to problems that are better avoided. Therefore, it is better to avoid premature assumptions on how problems are solved, and to fit data using families of functions that are as general as possible, so that you do not miss trends that a too restrictive family of curves cannot capture. It is illustrative that in [58,57] a more flexible family of curves is abandoned:

The beta curve provides great flexibility; however, a theoretical justification for use of the curve is lacking.

Although we appreciate the efforts of the authors of [58,57], we are convinced that it is better to adhere to flexible families of curves than to restrict yourself to theoretically derived curves that can easily be too restrictive to model reality. We prefer pragmatic flexibility over theoretical foundations that may be hard to justify after all.

In the next example we will see that a too restrictive model broke down, and that a more flexible model solved the problems. Moreover, it illustrates the strong relations between lifetime analysis and software cost estimation. Recall the earlier cited US Air Force study by Porter. He used the Weibull distribution for recalibrating the costs and lifetime of R&D programs [98]. Indeed, the linear hazard rate did not give Porter enough freedom to fit the experimental data to Rayleigh distributions. Apparently, the hazard rate, which he calls the performance rate for R&D programs, is not linear but follows some different pattern. Without realizing this, he assumes the following hazard rate:  $\beta t^{\beta-1}$. Also in Porter's paper differential calculus is used for inference of the cost-time function. But again, using the basic concepts of lifetime analysis the probability density function is immediate:  $\beta t^{\beta-1}e^{-t^\beta}$. And this is indeed the Weibull distribution that is often used in lifetime data analysis.
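As a small check, the density obtained from the hazard rate $\beta t^{\beta-1}$ indeed reproduces the Weibull density available in scipy (the shape value below is hypothetical):

\begin{verbatim}
import numpy as np
from scipy.stats import weibull_min

beta = 1.8                                    # hypothetical shape parameter
t = np.linspace(0.1, 4.0, 50)

# density from the hazard rate beta*t^(beta-1): f = h * exp(-t^beta)
f = beta * t**(beta - 1) * np.exp(-t**beta)
assert np.allclose(f, weibull_min.pdf(t, c=beta))
\end{verbatim}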

As another example, recall the learning function in the earlier mentioned textbook by Londeix [79]. In that book, a hazard rate of $2at^n$ is considered. To derive the distribution, the textbook also uses differential calculus. Again, using the basics of lifetime data analysis, it is trivial to infer the answer.

So you could say that for estimating the costs of software projects, R&D programs, and IT portfolios, the hazard function in lifetime analysis corresponds to the problem solving rate, the survival function corresponds to the percentage of work remaining, that is, the residual investment, the distribution function F corresponds to the accumulated cost function, and the density f corresponds to the manpower buildup function. Therefore, you can see cost-time analysis as lifetime analysis.

This is in accord with the interpretation of the Rayleigh curve: the linear part stands for the learning curve to overcome problems one at a time, which is the hazard function; and the quadratic exponential factor represents the velocity with which you can solve those problems once the solutions are known, which is the survival function.

Hazard rate and survival function

The hazard and survival functions provide central intuitions for a cost-time analysis--just as they justify the use of specific models in lifetime analysis. For any cost-time distribution you can calculate these functions, as we did for a number of known ones, to better understand the distribution and its feasibility. Vice versa, if you know for a fact what the problem solving rate of an R&D-like project is, then you can immediately infer the correct distribution. But knowing this rate implies that you presumably understand the problem in the first place, so then it is not an R&D project anymore. This paradox shows that trying to theoretically infer a cost-time distribution seems feasible only for well-understood problems. Measuring problem solving rates seems infeasible for CMM level 1 organizations, which lack an overall metrics program.

To gain insight into the empirically found formula 52 it is therefore worthwhile to calculate the survival and hazard functions. The survival function is easily inferred from formula 53. Formula 54, the survival function for quantitative IT portfolio management, is:


(54)  \begin{displaymath}
S(t) = {\Gamma({\alpha+1\over\beta},bt^\beta)\over
\Gamma({\alpha+1\over \beta})}
\end{displaymath}

In cost-time analysis, we sometimes call the survival function the percentage of work remaining, or the residual investment. Hazard functions, too, appear under different names in cost-time analyses: we have come across problem function, performance function, skill function, etc. Using the basic concepts of lifetime analysis, it is easy to derive the hazard function that belongs to formula 52. This is formula 55:


(55)  \begin{displaymath}
h(t) = {\beta b^{{\alpha+1\over \beta}} t^\alpha e^{-bt^\beta}\over
\Gamma({\alpha+1\over \beta}, bt^\beta)}
\end{displaymath}

While constant, linear, logistic and simple power hazard rates can be found using intuition or by modeling problem solving processes, the family of hazard rates described in formula 55 is beyond the imaginative powers of many of us. In order to appreciate formula 55, let's see what happens when we take $\alpha=1$ and $\beta=2$ in formula 55. For the upper incomplete $\Gamma$ function we have  $\Gamma(1,x)=e^{-x}$. Using this,  $\Gamma(1,bt^2)=e^{-bt^2}$, so formula 55 reduces to $b\beta t$, which is a linear hazard rate. Thus, the probability density function for $\alpha=1$ and $\beta=2$ becomes the well-known Rayleigh curve again. Likewise, if you take  $\alpha=\beta-1$, you immediately reduce formula 55 to the Weibull hazard rate (again using that  $\Gamma(1,x)=e^{-x}$). This implies that you can use this family of hazard functions to fit a large variety of differently shaped cost-time relations. So the advantage is that you do not need to pick one particular model in advance, and then see if it fits. If the models are general enough, whatever fits best will come out of a statistical analysis.
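Formulas 54 and 55 are straightforward to evaluate with standard special-function libraries; the regularized upper incomplete $\Gamma$ function gives the survival function directly. The sketch below (with a hypothetical value for b) also reproduces the linear hazard rate of the Rayleigh special case $\alpha=1$, $\beta=2$ discussed above.

\begin{verbatim}
import numpy as np
from scipy.special import gammaincc, gamma as Gamma

def S54(t, alpha, beta, b):
    # formula 54: survival function (residual investment);
    # gammaincc is the regularized upper incomplete Gamma function
    return gammaincc((alpha + 1) / beta, b * t**beta)

def h55(t, alpha, beta, b):
    # formula 55: hazard (problem solving) rate, computed as f/S
    f = (beta * b**((alpha + 1) / beta) / Gamma((alpha + 1) / beta)
         * t**alpha * np.exp(-b * t**beta))
    return f / S54(t, alpha, beta, b)

t = np.linspace(0.1, 5.0, 100)
b = 0.3                                       # hypothetical coefficient
# special case alpha=1, beta=2: the linear (Rayleigh) hazard rate 2*b*t
assert np.allclose(h55(t, 1.0, 2.0, b), 2 * b * t)
\end{verbatim}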


  
Figure 26: The hazard rates, survival rates and their products for the seismic cost impulse and operational cost tsunami.
\begin{figure}
\begin{center}
\leavevmode \psfig{file=pix/combined-analysis.ps,width=12cm}
\end{center}\end{figure}

To illustrate formulas 54 and 55 further, we depicted six related plots in Figure 26. The left-hand column contains the hazard rate, the survival function, and their product: this is the seismic cost impulse that we discussed earlier (and depicted in Figure 5). Recall that the cost allocation function f can be found by taking the product $f(t)=h(t)\cdot S(t)$. The right-hand column contains the hazard, survival, and cost allocation function for the operational cost tsunami that was the consequence of the seismic IT impulse (also depicted earlier in Figure 5). Let us compare these rates.

The first row of Figure 26 shows the hazard rates, or problem solving rates, for both development and operations of the example IT portfolio. We used the coefficients depicted in Table 8 to instantiate formula 55; the result forms the two hazard rates. Although both curves stem from a single family, they show rather different characteristics, not resembling any problem solving rate known to us. The seismic hazard rate is very steep and then approaches an almost constant rate. This is reached when most of the projects in the IT portfolio are implemented. The tsunami hazard rate, on the other hand, shows a much slower, and not linear, growth. As can clearly be seen, none of the theoretically inferred models (Rayleigh, Parr, or damped sine), nor generalizations of these (think of Weibull curves), would have approximated our cost allocation function accurately. The reason is that the hazard rate (which determines the cost allocation function) is too far off the known models.

The second row of Figure 26 gives both survival rates. Again, we used the coefficients in Table 8 and instantiated formula 54 to plot both survival rates. As can be seen, the investment for the seismic IT impulse is spent fast: the left-hand plot rapidly drops to zero. The spending rate for the ensuing operational cost tsunami is much slower: after a long time it is still necessary to invest. The latter rate clearly indicates that the initial IT expense to develop the example portfolio is followed by a long period of further investments.

Finally, in the third row we plotted the product of the first and second row, giving us the cost allocation functions. Note that we normalized the total cost to 1 in both cases, so the shape of the operational cost tsunami deviates from how it is depicted in Figure 5. But if you look at the scale used on both sides of the third row, you will see that the peak of the operational cost tsunami is approximately a fifth of the peak of the seismic cost impulse, which is also the case in Figure 5.

Inference procedures

Despite the generality of the formulas, they are of limited use if there is no sound inference procedure for our cost allocation function. So there is another question that needs attention: how easy is it to infer the coefficients for formula 38, or equivalently for formula 52?

The parameterization as given in the paper so far is relatively intuitive for human beings, but has limited value when it comes to inference procedures, especially when you are uninitiated in statistical analysis. With inference procedures like maximum likelihood estimation you can easily run into trouble.

After Stacy's publication in 1962 on the generalized $\Gamma$ distribution, the problems with estimating the coefficients were reported by several authors [92,117,53,50]. Even with fairly large samples of hundreds of observations, convergence problems occurred with maximum likelihood estimates. Very different sets of parameters lead to very similar distributions. Looking back at Figure 11 it is not hard to imagine that for a given set of data, similar curves can be found by varying the pairs  $(a,\alpha)$ and $(b,\beta)$, leading to totally different coefficients. This does not ease inference, and several papers addressed these problems [116,99,36,75,133,134]. Prentice [99] reparameterized the distribution, and took away most inference problems: now it is easy to fit curves. First we redisplay the generalized $\Gamma$ distribution:


\begin{displaymath}g(t) = {\beta\over\Gamma(k)\cdot\theta}\left({t\over\theta}\right)^{k\beta-1}
e^{-\left({t\over\theta}\right)^\beta}
\end{displaymath}

Prentice's alternative parameters are:


\begin{eqnarray*}\mu & = & \log(\theta) - {2\over\beta}\cdot\log(\lambda)\\
\sigma & = & {1\over\beta\sqrt{k}} \\
\lambda & = & {1\over\sqrt{k}}
\end{eqnarray*}


where  $-\infty<\mu, \lambda < \infty$, and $\sigma>0$. The new probability density function becomes:


\begin{displaymath}
f(t) = \left\{ \begin{array}{ll}\displaystyle
{\vert\lambda\vert\over\sigma t\,\Gamma(\lambda^{-2})}
\left(\lambda^{-2}e^{\lambda w}\right)^{\lambda^{-2}}
e^{-\lambda^{-2}e^{\lambda w}}, & \lambda\neq 0\\[2ex]
\displaystyle
{1\over\sigma t\sqrt{2\pi}}\,
e^{-{1\over2}\left({\log t-\mu\over\sigma}\right)^2}, & \mbox{otherwise}
\end{array} \right.
\end{displaymath}

where $w=(\log t-\mu)/\sigma$.

This formula is not intended for human interpretation, but it is more appropriate for computer manipulation. There are tools around that use the above formula to carry out curve fitting for the generalized $\Gamma$ distribution. A tool called Weibull++, especially designed for lifetime analysis, contains the above formula [105]. SAS [31] contains it as well [111, Sec. 30.32]. For users of free software: there is a toolbox for describing ocean wave distributions that contains the generalized $\Gamma$ distribution [12]--it is not a coincidence that we associate the long operational cost waves with tsunamis. This is GNU licensed software and the Matlab [81] sources are available, which is handy when you want to tweak the code. Recall that we fit the curves using nonlinear regression as implemented in Splus [19,55,129,96]. Indeed, it is not always trivial to find good starting values, and we used techniques similar to [133,134] to find them. Using tools especially geared towards this type of analysis surely improves the ease with which you can carry out your own cost-time analysis.
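For readers who prefer free software outside the Splus/SAS/Matlab routes mentioned above, the following sketch shows a maximum likelihood fit of the generalized $\Gamma$ distribution with scipy on synthetic data; the true parameter values are made up for illustration, and for noisy real portfolio data good starting values remain as important as discussed above.

\begin{verbatim}
import numpy as np
from scipy.stats import gengamma

rng = np.random.default_rng(1)
k_true, beta_true, theta_true = 1.6, 2.1, 3.0          # made-up parameters
data = gengamma.rvs(a=k_true, c=beta_true, scale=theta_true,
                    size=2000, random_state=rng)

# fix the location at zero: cost-time data starts at t = 0
k_hat, beta_hat, loc_hat, theta_hat = gengamma.fit(data, floc=0)

# recover the coefficients of formula 52 via Theorem 1
alpha_hat = beta_hat * k_hat - 1
b_hat = 1.0 / theta_hat**beta_hat
print(alpha_hat, beta_hat, b_hat)
\end{verbatim}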

The generalized F distribution

While many people resort to restricted families of distributions for software cost estimation, this somewhat rigid practice breaks down completely when moving from the project to the portfolio level. One of the reasons is that the projects can be quite heterogeneous. For instance, the productivity of individual programmers can vary widely. The variation in error detection (or debugging) for small programming efforts has been found to be 26 to 1, even though the subjects had the same programming experience [110]. These findings have been confirmed in many studies. Another study measured a 20 to 1 ratio in development time for programmers with the same experience [84,29,18,9,128]. Usually there will be variation in experience as well. Comparing low complexity efforts done by capable programmers with high complexity efforts by less capable programmers can lead to a productivity ratio of 1:400 [23, p. 256]. In [23, p. 240] it was found that for large programming efforts this effect is less pronounced, although a variation in productivity ratios of $\mbox{2--4}:1$ is still measured. It will be clear that very heterogeneous cost-time functions on a per project basis will not be uncommon. The decreasing variation is in accord with Markowitz's work on risk diversification: since the variance of individual productivity has an upper bound, the variance of the average productivity of the entire programming team will approach zero if the team size approaches infinity [80, p. 107]. And when projects become large, you need more programmers. So for selecting programmers for a team, in theory you could use modern portfolio theory. But given the enormous shortage of programmers, in practice there is not much choice.
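The diversification effect referred to above is easy to illustrate: under the simplifying assumption of uncorrelated programmers with wildly varying but bounded-variance productivity, the variance of the team-average productivity shrinks roughly as 1/n. The numbers below are made up.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
# 10000 simulated teams, up to 64 programmers each, heavy-tailed productivity
productivity = rng.lognormal(mean=2.0, sigma=0.8, size=(10000, 64))

for n in (1, 4, 16, 64):
    team_avg = productivity[:, :n].mean(axis=1)
    # the variance of the team average drops roughly as 1/n
    print(n, round(float(team_avg.var()), 2))
\end{verbatim}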

Although the generalized $\Gamma$ family is fairly general, we would like to point out in this section that there are even more general distributions, especially designed to deal with heterogeneous data. They are additionally used to test which less general distribution fits the data best. Therefore, we sometimes use another family of distributions that also accurately approximates cost-time data for portfolios. This family is more flexible than the generalized $\Gamma$ distributions, and inference procedures for it have been described. The disadvantage of this family of distributions is that it is relatively unknown outside the realm of failure time data analysis. We briefly discuss the generalized F distribution as proposed by Prentice [99]. Prentice worked on this subject supported by a grant of a cancer research center [99, p. 614]. The primary use of his distribution was to incorporate all the known failure time distributions, so that a test could discriminate among them. You could say that he developed mathematical tools to aim before firing. In cancer research heterogeneous populations occur as well: some will die, some will be cured, some will not get the disease. We loosely compare this to the heterogeneity of programmer productivity: some are no programmers at all, some will never finish a program, and some will spread code like a cancerous organism. The failure time could be seen as the time it takes to complete the IT development project.

A positive random variable T is said to have a generalized F distribution with $\mu$ and $\sigma$ as location and scale parameters and s1, s2 as shape parameters, if  $W=(\log T - \mu)/\sigma$ is the logarithm of a random variable having an F distribution with 2s1 and 2s2 degrees of freedom. The probability density function of W, which is more flexible than formula 52, is:


(56)  \begin{displaymath}
f(w) = {(e^w s_1/s_2)^{s_1}(1+e^w s_1/s_2)^{-(s_1+s_2)}
\over
B(s_1,s_2)}
\end{displaymath}

where  $-\infty<\mu<\infty$, $\sigma,s_1,s_2>0$ and B is the beta function defined as:


\begin{displaymath}B(s_1,s_2) = {\Gamma(s_1)\Gamma(s_2)\over\Gamma(s_1+s_2)}\end{displaymath}

The generalized F distribution contains many other distributions: for $\sigma=1$ formula 56 reduces to the F distribution, if  $s_i\rightarrow\infty$, for i=1 or 2 formula 56 becomes the generalized $\Gamma$ distribution, if s1=s2 it reduces to the generalized log-logistic distribution, for s2=1 it reduces to the Burr type III, and for s1=1 to the Burr type XII distributions. Burr distributions are used in environmetrics to estimate the concentrations of chemicals such that a given percentage of species will survive [112]. Furthermore, the $\Gamma$, $\chi^2$, Poisson, Rayleigh, Log-normal, Log logistic, Pearson type III, Maxwell-Boltzmann and many other distributions are special cases of the generalized F distribution.
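Formula 56 can be checked against the ordinary F distribution: by definition $e^W$ has an F distribution with 2s1 and 2s2 degrees of freedom, so the density of W is the F density transformed by the logarithm. The sketch below, with hypothetical shape values, confirms this.

\begin{verbatim}
import numpy as np
from scipy.stats import f as f_dist
from scipy.special import beta as Beta

def f56(w, s1, s2):
    # formula 56: density of W, the log of an F(2*s1, 2*s2) variate
    z = np.exp(w) * s1 / s2
    return z**s1 * (1.0 + z)**(-(s1 + s2)) / Beta(s1, s2)

w = np.linspace(-4.0, 4.0, 200)
s1, s2 = 2.5, 4.0                             # hypothetical shape parameters
check = f_dist.pdf(np.exp(w), 2 * s1, 2 * s2) * np.exp(w)
assert np.allclose(f56(w, s1, s2), check)
\end{verbatim}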

Just as with the generalized $\Gamma$ distribution, formula 56 is of limited interest unless there are tools and techniques to infer the coefficients. There is an Splus package to fit both the generalized $\Gamma$ and the generalized F distributions. It is called GFCURE, referring to cure rate estimation using the generalized F distribution, and stems from cancer research [95,94].

Software cost estimates are censored

In clinical trials it is often desirable to analyze the data before all the individuals have died. Therefore, it is common to work with so-called censored data. An observation is censored (or right censored) if the exact value of the observation is not known, only that it is greater than or equal to the observed value. For instance, when a software project is estimated to cost half a million dollars, this can be seen as censored data: the final cost will most likely be at least $500,000. It seems an established idea in software engineering that it is not possible to say anything about estimates if the data is censored, that is, if no actual data is available [28]. But often we do have censored data at our disposal, especially in IT portfolios, where not only finished IT projects, but also projects in progress and IT project proposals are present. Finished IT projects provide you with complete, uncensored data; ongoing and proposed ones are characterized by censored data. Note that you cannot use censored-data analysis if the deviations are arbitrary, that is, sometimes too high, sometimes accurate, and sometimes too low. There is no tradition whatsoever of IT projects being too early, or less costly than anticipated in advance [59,47,48,49,43,42,45]. This implies that IT portfolio data is right censored. Both in lifetime and failure time analysis a lot of research has been done to support analysis in the presence of censored data sets, so we can deploy the results of this work to our advantage and analyze IT portfolios more accurately, even when IT projects are not finished. We have not yet applied censored-data analysis in practice, since for that you need uncensored data as well. This would imply access to a body of historical data, which is hardly present in CMM level 1 organizations.

Estimating an empirical survival function

Something that is easily measurable is the spending rate for a set of correlated IT investments over time. This is used in cost uncertainty analysis for systems engineering projects (containing a software component) [41] to capture the cost distribution over time at an early stage. In the book [41] it is found that in many cases the total cost distribution of a systems engineering project comprises large numbers of uncorrelated cost items, so their accumulation approaches the normal distribution. This assumption enables you to estimate the accumulated cost function at an early stage. As mentioned earlier, empirical evidence shows that cost allocation functions for R&D projects, software projects, and IT portfolios are not normally distributed. This is shown in [87,88,89,101,104,102,103,98] and by us. So you have to do something different.

If you measure accumulated costs, you equivalently have the residual investment, which is the survival function. In lifetime data analysis, it is also not hard to measure how the survival of a population develops over time. Techniques are available to estimate the empirical survival function from such observations. If you have the survival function, you can infer the hazard rate and the cost allocation function. Product-limit estimates, also known as Kaplan-Meier estimates, are used in lifetime analysis to estimate empirical survival functions. Since you do not always want to wait until the entire investment is made, your data will be censored. This implies that you need to estimate the empirical survival function with censored data. For details on these methods we refer the interested reader to [68,76]. If there is enough accounting data, but other IT-related management data is lacking, you can use the residual investment rate to estimate the survival function. Also, once quantitative IT portfolio management is consolidated within your organization, you can track the residual investment rate to check whether the survival function that was originally proposed for the IT investment is consistent with the real spending rate. This type of analysis can then signal possible problems at an early stage. In this stage, you are really getting on top of IT.
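A minimal sketch of the product-limit (Kaplan-Meier) estimate, under the assumption that for each set of correlated IT investments we record how many months it took until the budgeted money was spent, and mark still-running investments as right censored (all numbers below are made up):

\begin{verbatim}
import numpy as np

def kaplan_meier(time, completed):
    # product-limit estimate of the survival (residual investment) function
    time = np.asarray(time, dtype=float)
    completed = np.asarray(completed, dtype=bool)
    event_times = np.unique(time[completed])
    S, surv = 1.0, []
    for t in event_times:
        at_risk = np.sum(time >= t)            # still spending just before t
        events = np.sum((time == t) & completed)
        S *= 1.0 - events / at_risk
        surv.append(S)
    return event_times, np.array(surv)

# months until the budgeted money was fully spent; False = still running (censored)
months = [14, 20, 20, 27, 33, 41, 41, 52]
done   = [True, True, False, True, True, False, True, False]
print(kaplan_meier(months, done))
\end{verbatim}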

Conclusions

The Clinger Cohen Act of 1996 enforced the use of IT portfolio management but did not explain how this should be done in operational terms. As far as we know, this paper is the first one to describe quantitative IT portfolio management in depth. We hope that organizations in general, and the ones subject to the Clinger Cohen Act in particular, will benefit from the material presented in this paper. Most organizations are CMM level 1 organizations. We argued that CMM level 1 organizations--which probably need such tools most urgently--can jump start quantitative IT portfolio management by compensating their lack of historical data with external benchmarks. With examples composed from actual quantitative IT portfolio management projects we illustrated our approach. Based on our findings, we were able to relate our work to an existing body of knowledge, from which we could borrow methods, insights, techniques, and tools to solve relevant problems in quantitative IT portfolio management. Comparing the advancements made in these other areas with the gap in the software field, we feel that much work, some of which has been outlined in this paper, is necessary to nurture and mature quantitative IT portfolio management. When better benchmarks and more historical data become available, the numerical values in our formulas will change, but presumably not their generic form. In our opinion, we developed the first iteration of a collection of formulas that form a useful basis for getting started with quantitative IT portfolio management.

Acknowledgements

This paper is the result of many fruitful interactions with people from industry to whom we are indebted. In particular, we would like to express our gratitude to Dr John F.A. Spangenberg (Head of IT Performance and Investment Management at ING Group) for his encouragement to report on our work on quantitative IT portfolio management. We thank ING Group for underscoring the relevance of our research on quantitative IT portfolio management by a partial sponsorship. Furthermore, we thank David Faust (Director, Core Services - Global Equities Technology and Operations, Deutsche Bank AG) for his valuable comments, discussions and suggestions for improvements.

Bibliography

1
A.J. Albrecht.
Measuring application development productivity.
In Proceedings of the Joint SHARE/GUIDE/IBM Application Development Symposium, pages 83-92, 1979.

2
L. Amoroso.
Ricerche intorno alla curva dei redditi.
Ann. Mat. Pura Appl., 21(4):123-159, 1932.

3
J. Asundi and R. Kazman.
A Foundation for the Economic Analysis of Software Architectures.
In K. Sullivan, editor, Proceedings of the 3rd Workshop on Economics-Driven Software Engineering Research (EDSER-3), 2001.
Available via: www.cs.virginia.edu/~sullivan/edser3/asundi.pdf.

4
L.J. Bain.
Analysis for the linear failure rate distribution.
Technometrics, 16:551-559, 1974.

5
S. Berinato.
Do the MATH.
CIO Magazine, October 2001.
Available via: www.cio.com/archive/100101/math.html.

6
B. Boehm.
Software Engineering Economics.
Prentice Hall, 1981.

7
B.W. Boehm.
The high cost of software.
In Horowitz E., editor, Practical Strategies for Developing Large Software Systems. Addison-Wesley, Reading, MA, USA, 1975.

8
B.W. Boehm.
Software engineering.
IEEE Transactions on Computers, C-25:1226-1241, 1976.

9
B.W. Boehm and P.N. Papaccio.
Understanding and Controlling Software Costs.
IEEE Transactions on Software Engineering, SE-14(10):1462-1477, 1988.

10
J. Bosch.
Design and Use of Software Architectures - Adopting and Evolving a Product-Line Approach.
Addison-Wesley, 2000.

11
K. Brännäs and N. Nordman.
Conditional Skewness Modelling for Stock Returns.
Technical Report 562, Department of Economics, Umeå University, 2001.

12
P.A. Brodtkorb, P. Johannesson, G. Lindgren, I. Rychlik, J. Rydén, and E. Sjö.
WAFO - a Matlab toolbox for analysis of random waves and loads.
In Proceedings of the 10th International Offshore and Polar Engineering conference, pages 343-350, 2000.

13
F.P. Brooks Jr.
The Mythical Man-Month - Essays on Software Engineering.
Addison-Wesley, 1995.
Anniversary Edition.

14
W.K. Brown and K.H. Wohletz.
Derivation of the Weibull Distribution Based on Physical Principles and its Connection to the Rosin-Rammler and Lognormal Distributions.
Journal Applied Physics, 78(4):2758-2763, August 1995.

15
J. Brunekreef and B. Diertens.
Towards a user-controlled software renovation factory.
In P. Nesi and C. Verhoef, editors, Proceedings of the Third European Conference on Maintenance and Reengineering, pages 83-90. IEEE Computer Society Press, 1999.

16
S. Butler, P. Chalasani, S. Jha, O. Raz, and M. Shaw.
The potential of portfolio theory in guiding software decisions.
In K. Sullivan, editor, Proceedings of the 1st Workshop on Economics-Driven Software Engineering Research (EDSER-1), 1999.
Available via: www.cs.virginia.edu/~sullivan/EDSER-1/PositionPapers/jha.pdf.

17
S. Butler, S. Jha, and M. Shaw.
When Good Models Meet Bad Data - Applying Quantitative Economic Models to Qualitative Engineering Judgements.
In K. Sullivan, editor, Proceedings of the 2nd Workshop on Economics-Driven Software Engineering Research (EDSER-2), 2000.
Available via: http://www.cs.virginia.edu/~sullivan/edser2/shaw.pdf.

18
D. Card.
A Software Technology Evaluation Program.
Information and Software Technology, 29(6):291-300, 1987.

19
J.M. Chambers and T.J. Hastie, editors.
Statistical Models in S.
Wadsworth & Brooks/Cole, Pacific Grove, CA, 1992.

20
E.J. Chikofsky and J.H. Cross.
Reverse engineering and design recovery: A taxonomy.
IEEE Software, 7(1):13-17, 1990.

21
C. Cifuentes and K.J. Gough.
Decompilation of Binary Programs.
Software--Practice and Experience, 25(7):811-829, July 1995.

22
P. Clements and L.M. Northrop.
Software Product Lines - Practices and Patterns.
Addison-Wesley, 2002.

23
S.D. Conte, H.E. Dunsmore, and V.Y. Shen.
Software Engineering Metrics and Models.
The Benjamin/Cummings Publishing Company, Inc., 1986.

24
Federal CIO Council.
A Summary of First Practices and Lessons Learned In Information Technology Portfolio Management.
Technical report, Federal CIO Council, Best Practices Committee, 2002.
Available via: http://cio.gov/Documents/BPC_Portfolio_final.pdf.

25
R. D'Addario.
Intorno alla curva dei redditi di Amoroso.
Riv. Italiana Statist. Econ. Fnanza, anno 4(1), 1932.

26
E.B. Daly.
Management of Software Engineering.
IEEE Transactions on Software Engineering, SE-3(3):229-242, 1977.

27
S.M. Dekleva.
The Influence of the Information Systems Development Approach on Maintenance.
MIS Quarterly, 16(3):355-372, September 1992.

28
T. DeMarco.
Controlling Software Projects - Management Measurement & Estimation.
Yourdon Press Computing Series, 1982.

29
T. DeMarco and T. Lister.
Programmer performance and the effects of the workplace.
In Proceedings of the 8th International Conference on Software Engineering, ICSE-8, pages 268-272. IEEE Computer Society, 1985.

30
T. DeMarco and T. Lister.
Peopleware - Productive Projects and Teams.
Dorset House, 1987.

31
G. Der and B.S. Everitt.
Handbook of Statistical Analyses Using SAS.
CRC Press, 2001.

32
J. Doe and C. Verhoef.
Software Product Line Migration and Deployment, 2002.
Unpublished manuscript.

33
J.B. Dreger.
Function Point Analysis.
Prentice Hall, 1989.

34
A.W. Rathe, editor.
Gantt on Management: Guidelines for Today's Executive.
American Management Association, New York, 1961.

35
J.L. Elshoff.
An analysis of some commercial PL/I programs.
IEEE Transactions on Software Engineering, SE-2(2):113-120, 1976.

36
V.T. Farewell and R.L. Prentice.
A study of distributional shape in life testing.
Technometrics, 19:69-75, 1977.

37
J. Feiman and N. Frey.
Migrating Legacy Developers to Java: Costs, Risks and Strategies.
Technical report, GartnerGroup, Stamford, CT, USA, 2001.

38
L.G. Freeman and C. Cifuentes.
An industry perspective on decompilation.
In H. Yang and L. White, editors, International Conference on Software Maintenance, 1999.
Printed in the short paper appendix.

39
H.L. Gantt.
Organizing for Work.
Harcourt, Brace & Howe, New York, 1919.

40
D. Garmus and D. Herron.
Function Point Analysis - Measurement Practices for Successful Software Projects.
Addison-Wesley, 2001.

41
P.R. Garvey.
Probability Methods for Cost Uncertainty Analysis - A Systems Engineering Perspective.
Marcel Dekker Inc., 2000.

42
R.L. Glass.
Computing Calamities - Lessons Learned From Products, Projects, and Companies that Failed.
Prentice Hall, 1998.

43
R.L. Glass.
Software Runaways - Lessons Learned from Massive Software Project Failures.
Prentice Hall, 1998.

44
R.L. Glass.
A New Answer to ``How Important is Mathematics to the Software Practitioner?''.
IEEE Software, 17(6):135-136, November/December 2000.

45
R.L. Glass.
ComputingFailure.com - War Stories from the Electronic Revolution.
Prentice Hall, 2001.

46
United States Government.
Clinger Cohen Act of 1996 and Related Documents, 1996.
Available via: www.c3i.osd.mil/org/cio/doc/CCA-Book-Final.pdf.

47
The Standish Group.
CHAOS, 1995.
Retrievable via: standishgroup.com/visitor/chaos.htm (Current February 2001).

48
The Standish Group.
CHAOS: A Recipe for Success, 1999.
Retrievable via: www.pm2go.com/sample_research/chaos1998.pdf.

49
The Standish Group.
EXTREME CHAOS, 2001.
Purchase via: https://secure.standishgroup.com/reports/reports.php.

50
H.W. Hager and L.J. Bain.
Inferential procedures for the generalized gamma distribution.
Journal of the American Statistical Association, 65:1601-1609, 1970.

51
B. Hall.
Year 2000 tools and services.
In Symposium/ITxpo 96, The IT revolution continues: managing diversity in the 21st century. GartnerGroup, 1996.

52
M. Hanna.
Maintenance Burden Begging for a Remedy.
Datamation, pages 53-63, April 1993.

53
H.L. Harter.
Maximum-likelihood estimation of the parameters of a four-parameter generalized gamma population for complete and censored samples.
Technometrics, 9:159-165, 1967.

54
G. Hector.
Breaking the Bank - the Decline of BankAmerica.
Little, Brown & Company, 1988.

55
S. Huet and M.-A. Gruet.
Statistical Tools for Nonlinear Regression - A Practical Guide with S-Plus Examples.
Springer Verlag, 1996.

56
IBM Corporation.
Transaction Processing Facility - General Information, 4.1 edition, 1993.
Available via: www-3.ibm.com/software/ts/tpf/images/gtpgim00.pdf.

57
W. Jarvis.
New Approaches to Phasing of Resources.
In Proceedings of the 34th Annual Department of Defense Cost Analysis Symposium, 2001.
Available via: www.ra.pae.osd.mil/adodcas/Presentations%202001/RSCAllocation.PDF.

58
W. Jarvis and E. Pohl.
Program Execution Efficiency and Resource Phasing.
In Proceedings of the 32nd Annual Department of Defense Cost Analysis Symposium, 1999.
Available via: www.ra.pae.osd.mil/adodcas/slides/jarvis33.pdf.

59
J. Johnson.
Chaos: The dollar drain of IT project failures.
Application Development Trends, 2(1):41-47, 1995.

60
C. Jones.
Programming Productivity.
McGraw-Hill, 1986.

61
C. Jones.
Assessment and Control of Software Risks.
Prentice-Hall, 1994.

62
C. Jones.
Applied Software Measurement: Assuring Productivity and Quality.
McGraw-Hill, second edition, 1996.

63
C. Jones.
Patterns of Software Systems Failure and Success.
International Thomson Computer Press, 1996.

64
C. Jones.
Estimating Software Costs.
McGraw-Hill, 1998.

65
C. Jones.
The Year 2000 Software Problem - Quantifying the Costs and Assessing the Consequences.
Addison-Wesley, 1998.

66
C. Jones.
Software Assessments, Benchmarks, and Best Practices.
Information Technology Series. Addison-Wesley, 2000.

67
N. Jones.
Year 2000 market overview.
Technical report, GartnerGroup, Stamford, CT, USA, 1998.

68
J.D. Kalbfleisch and R.L. Prentice.
The Statistical Analysis of Failure Time Data.
Wiley & Sons, 1980.

69
C.F. Kemerer.
Reliability of Function Points Measurement - A Field Experiment.
Communications of the ACM, 36(2):85-97, 1993.

70
C.F. Kemerer and B.S. Porter.
Improving the Reliability of Function Point Measurement: An Empirical Study.
IEEE Transactions on Software Engineering, SE-18(11):1011-1024, 1992.

71
T.A. Kirkpatrick.
Research: CIOs Speak on ROI.
CIO Insight, 1(11), March 2002.
Available via: www.cioinsight.com, results of questionnaire available via: common.ziffdavisinternet.com/download/0/1396/0110_rio_research.pdf.

72
D. Kodlin.
A new response time distribution.
Biometrics, 23:227-239, 1967.

73
A. Krause and M. Olson.
Basics of S and S-Plus.
Springer Verlag, 2nd edition, 2000.

74
L. Lamport.
How to Tell a Program from an Automobile.
In J. Tromp, editor, A Dynamic and Quick Intellect - Liber Amicorum in honor of Paul Vitanyi's 25-year jubilee, pages 77-79. CWI, 1996.
Available via: research.microsoft.com/users/lamport/pubs/automobile.pdf.

75
J.F. Lawless.
Inference in the generalized gamma and log gamma distributions.
Technometrics, 22:409-419, 1980.

76
J.F. Lawless.
Statistical Models and Methods for Lifetime Data.
Wiley & Sons, 1982.

77
T.C. Lethbridge.
Priorities for the Education and Training of Software Engineers.
Journal of Systems and Software, 53(1):53-71, July 2000.

78
B.P. Lientz and E.B. Swanson.
Software Maintenance Management - A Study of the Maintenance of Computer Application Software in 487 Data Processing Organizations.
Reading MA: Addison-Wesley, 1980.

79
B. Londeix.
Cost Estimation for Software Development.
Addison-Wesley, 1987.

80
H.M. Markowitz.
Portfolio Selection - Efficient Diversification of Investments, volume 16 of Cowles Foundation For Research in Economics and Yale University.
John Wiley & Sons, 1967.
3rd printing.

81
W.L. Martinez and A.R. Martinez.
Computational Statistics Handbook with MATLAB.
CRC Press, 2001.

82
S. McConnell.
Rapid Development.
Microsoft Press, 1996.

83
F.W. McFarlan.
Portfolio approach to information systems.
Harvard Business Review, 59(5):142-150, September - October 1981.

84
H. Mills.
Software Productivity.
Little Brown, 1983.

85
S.N. Mohanty.
Software Cost Estimation: Present and Future.
Software--Practice and Experience, 11:103-121, 1981.

86
M. Mustatevic.
Automation of function point counting.
Master's thesis, University of Amsterdam, Programming Research Group, 2000.
In Dutch.

87
P.V. Norden.
Curve fitting for a model of applied research and development scheduling.
IBM Journal of Research and Development, 2(3), June 1958.

88
P.V. Norden.
Useful Tools for Project Management.
In B.V. Dean, editor, Operations Research in Research and Development. Wiley & Sons, 1963.

89
P.V. Norden.
Useful Tools for Project Management.
In M.K. Starr, editor, Management of Production, pages 71-101. Penguin Books, 1970.

90
United States General Accounting Office.
Information Technology Investment Management - A Framework for Assessing and Improving Process Maturity, 2000.
Available via: www.gao.gov/special.pubs/10_1_23.pdf.

91
F.N. Parr.
An Alternative to the Rayleigh Curve Model for Software Development Effort.
IEEE Transactions on Software Engineering, SE-6(3):291-296, 1980.

92
V.B. Parr and J.T. Webster.
A method for discriminating between failure density functions used in reliability predictions.
Technometrics, 7:1-10, 1965.

93
M.C. Paulk, C.V. Weber, B. Curtis, and M.B. Chrissis.
The Capability Maturity Model: Guidelines for Improving the Software Process.
Addison-Wesley Publishing Company, Reading, MA, 1995.

94
P.Y. Peng.
GFCURE - An S-PLUS Package for Parametric Analysis of Survival Data with a Possible Cured Fraction, 1999.
Available via: www.math.mun.ca/~ypeng/research/gfcure/.

95
Y. Peng, K.B.G. Dear, and J.W. Denham.
A Generalized F Mixture Model for Cure Rate Estimation.
Statistics in Medicine, 17:813-830, 1998.

96
J.C. Pinheiro and D.M. Bates.
Mixed-Effects Models in S and S-PLUS.
Springer Verlag, 2000.

97
M.E. Porter.
Competitive Strategy - Techniques for Analyzing Industries and Competitors.
The Free Press, New York, 1980.

98
P.H. Porter.
Revising R&D Program Budgets when Considering Funding Curtailment with a Weibull Model.
Master's thesis, Air University, Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio, USA, March 2001.

99
R.L. Prentice.
A log gamma model and its maximum likelihood estimation.
Biometrika, 61:539-544, 1974.

100
R.L. Prentice.
Discrimination among some parametric models.
Biometrika, 62(3):607-614, 1975.

101
L.H. Putnam.
A macro-estimation methodology for software development.
In Proceedings IEEE COMPCON 76 Fall, pages 138-143. IEEE Computer Society Press, 1976.

102
L.H. Putnam.
A General Empirical Solution to the Macro Software Sizing and Estimating Problem.
IEEE Transactions on Software Engineering, SE-4(4):345-361, 1978.

103
L.H. Putnam and W. Myers.
Measures for Excellence - Reliable Software on Time, Within Budget.
Yourdon Press Computing Series, 1992.

104
L.H. Putnam and R.W. Wolverton.
Quantitative management: Software cost estimating.
In Proceedings of the IEEE Computer Society First Computer Software and Applications Conference (COMPSAC 77), pages 8-11. IEEE Computer Society Press, 1977.

105
ReliaSoft Corporation.
Life Data Analysis Reference, rev. 2001 edition, 2001.
www.weibull.com/lifedatawebcontents.htm.

106
J. Reutter.
Maintenance is a management problem and a programmer's opportunity.
In A. Orden and M. Evens, editors, 1981 National Computer Conference, volume 50 of AFIPS Conference Proceedings, pages 343-347. AFIPS Press, Arlington, VA, 1981.

107
P. Rosin and E. Rammler.
The Laws Governing the Fineness of Powdered Coal.
J. Inst. Fuel, 7(31):29-36, 1933.

108
H. Rubin.
Worldwide Benchmark Report for 1997.
Pound Ridge, NY: Rubin Systems, Inc., 1997.

109
H. Rubin and M. Johnson.
What's Going On in IT? - Summary of Results from the Worldwide IT Trends and Benchmark Report, 2002.
Technical report, Metagroup, 2002.
Available via: metricnet.com/pdf/whatIT.pdf.

110
H.W. Sackman, W.J. Erikson, and E.E. Grant.
Exploratory experimental studies comparing online and offline programming performance.
Communications of the ACM, 11(1):3-11, 1968.

111
SAS Institute Inc.
SAS/QC User's Guide, Version 8, 8.2 edition, 2000.

112
Q. Shao.
Estimation for hazardous concentrations based on NOEC toxicity data: an alternative approach.
Environmetrics, 11(5):583-595, 2000.

113
H.M. Sneed and C. Verhoef.
Business process reengineering.
In J.J. Marciniak, editor, Encyclopedia of Software Engineering, pages 83-95. Wiley Inc., 2 edition, 2001.

114
SPSS.
SPSS 10.0 Syntax Reference Guide.
Prentice Hall PTR, 1st edition, 1999.

115
E.W. Stacy.
A Generalization of the Gamma Distribution.
Annals of Mathematical Statistics, 33(3):1187-1192, 1962.

116
E.W. Stacy.
Quasimaximum likelihood estimators for two-parameter gamma distributions.
IBM Journal of Research and Development, 17:115-124, 1973.

117
E.W. Stacy and G.A. Mihram.
Parameter estimation for a generalized gamma distribution.
Technometrics, 7:349-358, 1965.

118
T.D. Steiner and D.B. Teixeira.
Technology in Banking - Creating Value and Destroying Profits.
Irwin/McGraw-Hill, 1990.

119
P.A. Strassmann.
Information Payoff - The Transformation of Work in the Electronic Age.
The Information Economics Press, New Canaan, Connecticut, USA, 1985.

120
P.A. Strassmann.
The Business Value of Computers - An Executive's Guide.
The Information Economics Press, New Canaan, Connecticut, USA, 1990.

121
P.A. Strassmann.
The Policies and Realities of CIM - Lessons Learned.
In Proceedings of the 4th Armed Forces Communications and Electronics Association Conference, pages 1-19. AFCEA, Fairfax, VA, USA, 1993.

122
P.A. Strassmann.
The Politics of Information Management - Policy Guidelines.
The Information Economics Press, New Canaan, Connecticut, USA, 1995.

123
P.A. Strassmann.
The Squandered Computer - Evaluating the Business Alignment of Information Technologies.
The Information Economics Press, New Canaan, Connecticut, USA, 1997.

124
P.A. Strassmann.
Information Productivity - Assessing the Information Management Costs of US Industrial Corporations.
The Information Economics Press, New Canaan, Connecticut, USA, 1999.

125
P. Tate.
The Big Spenders.
Information Strategy, pages 30-37, December-January 1998-1999.

126
A.A. Terekhov and C. Verhoef.
The realities of language conversions.
IEEE Software, 17(6):111-124, November/December 2000.
Available at http://www.cs.vu.nl/~x/cnv/s6.pdf.

127
W.L. Trainor.
Software: From Satan to saviour.
In Proceedings of the National Aerospace and Electronics Conference, 1973.

128
J. Valett and F.E. McGarry.
A summary of Software Measurement Experiences in the Software Engineering Laboratory.
Journal of Systems and Software, 9(2):137-148, 1989.

129
W.N. Venables and B.D. Ripley.
Modern Applied Statistics with S-PLUS.
Springer Verlag, 3rd edition, 1999.

130
C. Verhoef.
Towards Automated Modification of Legacy Assets.
Annals of Software Engineering, 9:315-336, March 2000.
Available at http://www.cs.vu.nl/~x/ase/ase.html.

131
W. Weibull.
A statistical theory of the strength of materials.
In Proc. No. 151. The Royal Swedish Institute of Engineering Research, Stockholm, 1939.

132
S. Wolfram.
Mathematica Book.
Cambridge University Press, 4th edition, 1999.

133
J.Y. Wong.
Simultaneously estimating the three parameters of the generalized gamma distribution.
Microelectronics and Reliability, 33(15):2225-2232, 1993.

134
J.Y. Wong.
Simultaneously estimating with ease the three parameters of the generalized gamma distribution.
Microelectronics and Reliability, 33(15):2233-2242, 1993.

135
F. Wright.
Computing with MAPLE.
Chapman & Hall/CRC, 2001.

136
E. Yourdon.
Death March - The Complete Software Developer's Guide to Surviving 'Mission Impossible' Projects.
Prentice-Hall, 1997.

List of Abbreviations

In this section we give an alphabetical list of the abbreviations used, each with a brief clarification. Moreover, we refer to formulas, tables, and figures whenever appropriate. In the list below you will find all but one formula. The exception is the fixed ratio equation 27, for which no abbreviation was introduced. It expresses that the operational cost allocation per time unit is 20% of the cost allocation per time unit for building the system.

a: accumulated cost function. This is the integral of formula 38. See formula 39 for details.

adc: accumulated development costs. The formula for  ${\it adc}_s$ calculates the accumulated development costs, for a given system s over a given time frame. See formula 22. Likewise, the formula for  ${\it adc}_P$ calculates the accumulated development costs for a given IT portfolio P over a given time frame. See formula 25.

aoc: accumulated operational costs. The formula for  ${\it aoc}_s$ calculates the accumulated (minimal) operational costs, for a given system s over a given time frame. See formula 23. Likewise, the formula for  ${\it aoc}_P$ calculates the accumulated (minimal) operational costs for a given IT portfolio P over a given time frame. See formula 26.

atc: accumulated total costs. The formula for  ${\it atc}_s$ calculates the accumulated (minimal) total costs, for a given system s over a given time frame. See formula 21. Likewise, the formula for  ${\it atc}_P$ calculates the accumulated (minimal) total costs for a given IT portfolio P over a given time frame. See formula 24.

c: this symbol is overloaded. It stands for an amount of money, that is, cost, but it is also a function name. It is the name we gave to the cost allocation function that generalizes all known cost allocation functions, and that we mainly use to base our cost-time analyses on. See formula 38 for elaborations.

ca: cost allocation. This formula calculates the cost allocation for a system for a given time. It consists of the operational and development cost allocations (see ${\it cad}$ and ${\it cao}$ below). See formula 17.

cad: cost allocation for development. This formula calculates for a given time, the cost allocation that you minimally need to develop a system. Typically you can calculate the monthly costs for development. See formula 19.

cao: cost allocation of operations. This formula calculates for a given time, the cost allocation that you minimally need to keep a system operational. You typically use this formula to calculate monthly operational costs. See formula 20.

cc: change in cost. The change in cost equation is the derivative of our proposed cost allocation equation (which is formula 38). See formula 40.

cco: corporate cost of ownership. This formula gives you for a given IT portfolio and a given time, the corporate cost that you need to spend to own the portfolio. See formula 45.

cf: chance on failing projects. There are several related formulas. We have  ${\it cf}_i$: the chance on late projects in the information systems industry. See formula 28 for this chance when a given amount of function points is known, and formula 29 if its development time in calendar months is known. We also have  ${\it cf}_o$, where the subscript now denotes the outsource industry. Analogously, we have a formula for a given amount of function points 30, and one if the development time is known 31.

cl: chance on late projects. There are several related formulas. We have  ${\it cl}_i$: the chance on late projects in the information systems industry. See formula 32 for this chance when a given amount of function points is known, and formula 33 if its development time in calendar months is known. We also have  ${\it cl}_o$, where the subscript now denotes the outsource industry. Analogously, we have a formula for a given amount of function points 34, and one if the development time is known 35.

d: duration. This symbol is overloaded. First it stands for the duration of a project, but it is also a function that returns the duration of development given a deployment time of a system in years. See formula 10. When subscripted it stands for the number of calendar months for an information systems project (di) or an outsource development project (do), given its size in function points. See formulas 47 and 50, respectively.

dd: development duration. This formula calculates the duration in calendar months of a development project given its cost. See formula 3. It also calculates the duration in calendar months of a development project given its staff, see formula 7. Finally, it returns the duration of a development project given a known amount of operational costs, see formula 12 for that version.

df: development fraction. This formula gives you the fraction of the total cost of ownership that is devoted to development. See formula 15.

ead: effort allocation for development. This is not a generic formula but a specific formula tailored to an example. We calculated the effort allocation of an IT project described in the literature. The formula ${\it ead}$ is depicted in Figure 10.

f: this symbol is overloaded. First it stands for a given amount of function points. But it is also a function. It is the probability density function f(t) that belongs to our proposed cost allocation function (defined in formula 38). See formula 52. It also stands for the probability density function known as the generalized F distribution, as stated in formula 56.

F: this is the cumulative distribution function belonging to function f. See formula 53 for details.

fe: failure exposure. This formula accumulates the failure chances of all the individual projects in a portfolio. All you need to know are the (estimated) durations of all the projects in the portfolio. See formula 36.

h: hazard rate. Formula 55 is the hazard rate that belongs to the probability density function that we defined in formula 38.

le: late exposure. This formula accumulates the chances of all the individual projects in a portfolio on being late. All you need to know are the (estimated) durations of all the projects in the portfolio. See formula 37.

mco: minimal cost of operation. This formula predicts the minimal cost to keep a system running given its development duration. See formula 11.

md: maintenance duration. This formula calculates the maintenance duration for a given cost. See formula 4. It also calculates the maintenance duration for a given staff size, see formula 8.

mrt: minimal ROI threshold. This is not a generic formula but a specific formula that calculates the minimal ROI threshold of an IT investment example. This example is summarized in Table 4. The ${\it mrt}$ formula is depicted in Figure 7.

mtco: minimal total cost of ownership. This formula calculates for a given development duration the minimal total cost of ownership over the entire life-cycle of the system. Minimal means here without functional changes. See formula 13.

nsd: number of staff for development. This formula calculates for a given duration of a development project, the number of staff needed. See formula 5.

nsm: number of staff for maintenance. This formula calculates for a given duration of a maintenance project, the number of staff needed. See formula 6.

of: operational fraction. This formula gives you the fraction of the total cost of ownership that is devoted to operations. See formula 16.

p: productivity. We have several formulas for different industries. We have  ${\it p}_i$, giving the productivity for developing information systems in function points per staff month for a given amount of function points. See formula 46. Analogously, formula 49 returns for a given amount of function points the productivity of outsourcers in function points per staff month.

pt: peak time. With formula 41 we can calculate the time when the peak cost needs to be allocated to a portfolio that satisfies a cost allocation function with the shape of formula 38.

peak cost. This formula calculates the peak cost for a given cost allocation function of the shape proposed by formula 38. See formula 42.
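Formulas 41 and 42 give the peak time and peak cost in closed form for the shape of formula 38. As a generic illustration, the peak of any cost allocation function can also be located numerically; the Rayleigh-style curve below is only an assumed stand-in for formula 38, not the paper's function.

  # Hedged sketch: numerically locate the peak time and peak cost of a curve.
  import math

  def cost_allocation(t: float, a: float = 100.0, b: float = 2.0) -> float:
      """Assumed Rayleigh-shaped spending curve (dollars per year at year t)."""
      return a * t * math.exp(-t * t / (2.0 * b * b))

  def numeric_peak(c, t_max: float = 20.0, steps: int = 20000):
      """Grid search for the time and value of the highest point of c."""
      best_t, best_c = 0.0, c(0.0)
      for i in range(1, steps + 1):
          t = t_max * i / steps
          v = c(t)
          if v > best_c:
              best_t, best_c = t, v
      return best_t, best_c

  print(numeric_peak(cost_allocation))  # peaks near t = b for this assumed shape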

r: the ratio of minimal cost of operation to development cost. See formula 14.

${\it r}_s$: retirement of system s. This formula is the absolute-time variant of formula 9: given an initiation time and a delivery time, it calculates the retirement time for a given system. See formula 18.

rf: repayment factor. This formula calculates, for a given cost allocation function of the shape proposed by formula 38 and a given point in time, the amount of money that still has to be invested to develop and operate the portfolio described by that cost allocation function. See formula 44, and the sketch following the entry for tco below.

S: survival function. Formula 54 is the survival function that belongs to the probability density function 52 that we derived from our cost allocation function 38.
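The quantities f, F, S and h (formulas 52-55) are connected by standard probability relations: F is the integral of f, S is its complement, and h is the ratio of f to S. The sketch below evaluates these relations numerically for a placeholder density; the exponential density used here is not the one derived from formula 38.

  # Hedged sketch: density f, cumulative F, survival S and hazard rate h.
  import math

  def f(t: float, lam: float = 0.5) -> float:
      """Placeholder density: exponential with rate lam (not formula 52)."""
      return lam * math.exp(-lam * t) if t >= 0 else 0.0

  def F(t: float, dt: float = 1e-3) -> float:
      """Cumulative distribution: numerical integral of f from 0 to t."""
      n = max(1, int(t / dt))
      return sum(f(i * dt) for i in range(n)) * dt

  def S(t: float) -> float:
      """Survival function: S(t) = 1 - F(t)."""
      return 1.0 - F(t)

  def h(t: float) -> float:
      """Hazard rate: h(t) = f(t) / S(t)."""
      return f(t) / S(t)

  print(F(2.0), S(2.0), h(2.0))  # for this placeholder, h(t) is roughly 0.5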

tcd: total cost of development. We have several of these. First, there is a formula that calculates the cost for a given development time, see formula 1. Then there are several formulas that are based on function points and differentiated by industry. We have  ${\it tcd}_i$, which gives for a given amount of function points the development costs of an information systems project, see formula 48. Analogously, we have  ${\it tcd}_o$, which returns for a given amount of function points the development costs of a project done by outsourcers, see formula 51 for details.
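The sketch below shows the general chain behind such size-based cost formulas: size determines a schedule and a staff level, their product is the effort, and effort times a fully loaded rate gives the cost. The exponent, the assignment scope of 150 function points per person, and the monthly rate are assumptions for illustration, not the calibrated constants behind formulas 48 and 51.

  # Hedged sketch: development cost from project size in function points.
  def development_cost(function_points: float,
                       exponent: float = 0.39,
                       assignment_scope: float = 150.0,
                       monthly_rate: float = 8000.0) -> float:
      """Estimate development cost in dollars; all constants are assumptions."""
      duration = function_points ** exponent          # calendar months
      staff = function_points / assignment_scope      # full-time people
      effort = duration * staff                       # staff months
      return effort * monthly_rate

  print(round(development_cost(1000)))  # a 1000 FP project under these assumptions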

tcm: total cost of maintenance. This formula calculates the total cost of a maintenance project for a given duration. See formula 2.

tco: total cost of ownership. This formula calculates the total cost of ownership for an IT portfolio that is described by an instantiation of formula 38. See formula 43.
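Read loosely, tco is the total area under a cost allocation function, and rf is the area that still lies to the right of a given point in time. The sketch below computes both numerically; the Rayleigh-style curve is the same assumed stand-in for formula 38 as in the peak-cost sketch above, not the paper's function.

  # Hedged sketch: total cost of ownership and repayment factor as areas.
  import math

  def cost_allocation(t: float, a: float = 100.0, b: float = 2.0) -> float:
      """Assumed spending curve (dollars per year at year t), not formula 38."""
      return a * t * math.exp(-t * t / (2.0 * b * b))

  def integrate(c, lo: float, hi: float, steps: int = 100000) -> float:
      """Midpoint rule for the area under c between lo and hi."""
      width = (hi - lo) / steps
      return sum(c(lo + (i + 0.5) * width) for i in range(steps)) * width

  def tco(c, horizon: float = 40.0) -> float:
      """Total cost of ownership: all money the curve ever allocates."""
      return integrate(c, 0.0, horizon)

  def rf(c, t: float, horizon: float = 40.0) -> float:
      """Repayment factor: money still to be spent after time t."""
      return integrate(c, t, horizon)

  print(tco(cost_allocation), rf(cost_allocation, 3.0))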

y(d): years that a system is in its deployment phase. With formula 9 we can calculate for a given development time, the number of calendar years that a system is in deployment.



Footnotes

1. Apart from the arguments that we give in this paragraph, there is also a fundamental argument indicating that the premises for applying MPT are not fulfilled. This argument may be less easy to comprehend upon first reading this paper. In [41, p. 293] it is stated that if the number of activities in a software project is large, and if these activities are uncorrelated, then according to the central limit theorem, the cost allocation distribution will approach the Gaussian distribution. Empirical evidence indicates that cost allocation functions for R&D projects, software projects, and IT portfolios are not normally distributed. This is shown in [87,88,89,101,104,102,103,98] and in this paper.
2. We will use the phrase according to benchmark in this paper to indicate that the quantitative data is in accord with public benchmarks, but not necessarily exactly accurate for a single instance.
3. www.mathematicallycorrect.com/glossary.htm
4. ING Group is a global player in the financial services and insurance industry, investing approximately 2.5 billion Euro per annum in information technology.
