Free University of Amsterdam, Department of Mathematics and Computer Science,
De Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands
Keywords and Phrases: IT-governance, IT-governance rules, IT-portfolio analysis, Quantitative IT-governance, overperfect data, heterogeneous data, overregulation, underregulation, managing on time, managing on budget, managing on functionality, time compression, time decompression, seasonality effects.
Many organizations are finding that information technology is not only a significant expense, but also their largest means of production and a cornerstone of the organization. For corporate management this implies that a simple cost-reduction strategy is no longer the obvious guide to decision making, since IT is their real means of gaining competitive edge. We see the emergence of enterprise architectures, software process and product improvement programs, software productivity benchmarks, business cases, and other indications that corporate executives want more insight into the IT-function. Signs of effort put into control are also available in abundance: the development of IT-dashboards and balanced scorecards, the establishment of IT-portfolio management departments, the initiation of IT-review boards, the development of the IT-investment maturity model, and so on. IT-governance amalgamates all of this into a more formal, holistic whole. You could say that IT-governance is a structure of relationships and processes to direct and control an organization's IT-function in order to achieve its goals by adding value while balancing risk versus return over IT and its processes. Part of IT-governance is to design, apply, and evaluate a set of rules for governing the IT-function--the rules by which we play the IT-game. To mention an example that everyone is probably familiar with: the enterprise architecture. Once it is established, we agree to use it all over the organization; there is no freedom whatsoever for ad hoc IT-investments or application of alternative technology.
But how effective are such controls in reality? Does such uniformity really deliver reduced costs and increased efficiency? And how do we sift through all of these methodologies, strategies, models and outputs to ensure real added value and to achieve the right balance of governance versus actual production? For instance, the effectiveness of enterprise architecture groups can seriously be called into question. Trying to get ``all the boats rowing hard but also pointing in the right direction'' is a worthwhile goal, but there are pockets of almost religious resistance, and there are truly-needed variations. Also, it cannot be one-size-fits-all, but should instead be a set of guidelines with different levels of commitment. Otherwise the enterprise architects are ineffective, or power struggles occur between developers supporting the business and the enterprise architects' perceived centralized ivory towers. Of course, you do not want to bog down the IT-organization with IT-governance. But what is the right level and combination? Such questions--let alone answers--are not part of most papers on IT-governance.
In this paper we quantify at least the effects of some important IT-governance rules and styles, so that insight is gained into such questions and answers are found in whole or in part. We will focus on issues regarding the five fundamental parameters of any IT-governance flavor: data, regulation, cost, time, and functionality. For, without a fact-based rationale, IT-governance is null and void. More specifically, seven patterns drew our attention: overperfect data, heterogeneous data, over- and underregulation, and a preoccupation with time, cost or functionality. These patterns reveal rules that deliberately or haphazardly style an IT-governance practice in a way that easily induces unwanted negative effects. In each of the seven cases it was possible to take corrective measures to reduce these effects and/or amplify the intended purpose of the chosen direction. To scope the expectations for this paper: there is no best single approach. Instead, by continuously monitoring an IT-portfolio, we can often quickly spot unwanted side-effects of IT-governance, and by conservative modifications we can balance the level of governance against efficiency, the cost of data collection against its value, regulations against their effectiveness, and so on. Finding and keeping such an equilibrium is not easy, but flying blind will lead to asymmetry and instability. In this paper we will not try to find this dynamic equilibrium, but to steer away from the extremes, to ensure better balance and to allow organizations to rightly pursue their own equilibrium.
Of course, IT-governance rules are meant to carry out the IT-function efficiently and effectively. Most rules are the result of common sense, standardization, experience, and best practices. But many organizations lack even the most basic rules. For instance, Meta Group surveyed whether organizations make a business case for information technology. It turned out that:
In addition, budgeting for one year in itself already seems daunting, and sometimes takes more than five months alone. Frankly, it cannot get much worse than this survey illustrates, so if only there were a few rules, like having a solid business case for IT-investments balancing risk and return, performance could improve drastically. One such improvement was attempted by the federal government of the United States with their Clinger Cohen Act, regulating federal IT-spending. The basics of this act for major information system investments were summarized by Raines, then director of the Office of Management and Budget. His eight IT-governance rules are used as decision criteria by the government. The first four relate to capital planning, addressing the sore lack of such rules as surveyed by Meta Group. The fifth establishes the link between planning and implementation: the information architecture should align technology with mission goals. The last three rules establish risk management principles to assure a high level of confidence that the proposed investment will succeed (risk reduction is rarely seen in practice). All major federal information systems should comply with what has become known as Raines' Rules. Such investments shall:
Although some of these rules, and our lessons learned, may seem like open-door governance, compliance with this Act and Raines' seemingly obvious Rules turned out not to be easy. This was pointed out by an evaluation of adherence to the Clinger Cohen Act, or rather the lack thereof. We quote:
The report released today reveals what we feared the most--that the Administration is not enforcing the laws that Congress passed over four years ago [..] The report shows that 16 agencies neither developed nor submitted IT management reports that included accomplishments, progress, and identification of areas requiring attention. One quarter of agencies listed projects that deviated significantly from cost or schedule goals. According to the report, agencies are not using sound business procedures before investing in information technology, so they are unable to improve program performance and meet their mission goals.
So once again, what is the right level and combination of specific methods for each point made in, say, Raines' Rules? What is the real payback? And how exactly should such rules be implemented? It all seems far from trivial. No wonder that the technology research company Forrester Research claims that [10, p. 9]: ``Almost any structure for IT governance will work as long as senior business execs are involved--appropriately.'' This observation is understandable, given the deplorable state of IT-governance in the industry. But the crux is in the word appropriately, as we will argue in this paper: some IT-governance rules turn out to have nasty side-effects. The word appropriately should cover roles, rules and processes. Structure as meant by Forrester and others [16,55,10,56] mainly denotes the roles. The summary ``almost any IT governance structure will do'' [10, p. 2] pertains to the decision structure that is needed for good IT-governance: to the competencies needed within an IT-government. However, a preoccupation with roles within an IT-government could easily distract us from the exposure of unwanted negative effects that may accompany the explicit or tacit way organizations tend to deal with data, control, time, cost and functionality, the five fundamental parameters for IT-governance.
For the US Government the situation is improving. In the fiscal year 2004, the Office of Management and Budget required agencies to submit business cases for over $34 billion in proposed spending (57% of the total IT-budget), up from less than $20 billion in 2003 (38%). For those projects that made an adequate case, the quality of justifications has gone up [9, p. 43]. Total federal IT-spending across the government in 2004 is approximately $60 billion ($52 billion in FY 2003 [8,13]).
In a sense, IT-governance as a theme, and specific rules of thumb like Raines' Rules, aim to fit information technology into the framework of good governance. The earlier mentioned balanced scorecard testifies to this: IT is part of the innovation quadrant of the balanced scorecard. Even at the time of writing this paper, IT is often seen only as an innovation, instead of gradually contributing to customer interaction, general work processes, worker productivity and customer convenience. Our approach can be seen as an attempt to apply established governance practice from other areas (human resource management, operations research, finance, accounting) to information technology. We illustrate that this viewpoint can contribute to IT by exploring a rather basic set of data. For that we obviously need data, and this paper deals with organizations that do collect data on IT-projects. We realize that this is not the majority of organizations, and for those organizations in dire need of insight into their IT-function we refer to work where--despite the lack of internal information--decision making can be supported by a quantitative dimension, using industry benchmarks [50,49,53,52]. Collecting your own data is always preferable, since appropriate benchmarks are often not known, and matching them to what data is present is an inexact science. Nevertheless, approximate benchmarks give you an idea of the order of magnitude, which is better than gut feel alone.
By exploratory data analyses of information residing in large IT-portfolio databases, we discovered characteristic patterns for certain IT-governance rules, whether they were stated explicitly or assumed implicitly, and irrespective of whether they were adhered to. Moreover, we were able to advise on modifications to the IT-governance system in place to reduce or even mitigate some unexpected negative side-effects. We will discuss this in the paper as follows. First we devote our attention to data and regulation, the building blocks of IT-governance. For both data and regulation, we discuss a number of extremes that we characterize as overperfect data, heterogeneous data, overregulation, and underregulation. These characterizations each take a separate section. Then we turn our attention to three important dimensions of IT-projects: time, budget and functionality. We will identify unique patterns for exclusively managing on time, and patterns revealing a style of IT-governance with a preoccupation with managing on budget. This takes two sections. Then a section discusses the much more diffuse patterns of (the rarer) uniform management on functionality. By exploring IT-portfolios from these seven viewpoints, we discovered sometimes large exposures in IT-portfolios, which could be reduced or mitigated. We finalize each of the above sections with a few important lessons learned. To protect the interests of the organizations involved, this study is made anonymous: the data patterns are simulations of the real-world situations that we encountered. The portfolios that we investigated stem from various industries: financial, insurance, government, defense, telecom, and others. They contained a mix of different technologies, ranging from legacy mainframe technology to modern approaches. The seven patterns were not tied to a single industry, albeit that uniform management on functionality was found more often in the systems industry than in other sectors.
The seven patterns are illustrated by simulated versions to reveal their most typical appearance. In practice, such patterns can be less pronounced, often because more than a single effect is present in an IT-portfolio.
We first turn our attention to data, which is one of the fundamental parameters of IT-governance. We will not focus at this stage on certain kinds of data, but address patterns in data that point to problems in the process of collecting them. In many organizations we encountered erroneous data points, but sometimes we see the miracle of overperfect data, in other words, data that is too good to be true in the sense that it suggests a perfect relationship between two or more variables, whereas there might be no relationship at all. True overperfect data is almost never found in applied statistics. On the contrary, we know from statistical practice that data collection is hardly ever fault-free. Routine data that is collected with little individual care, often under time pressure, can easily contain 10% stray values [26, p. 28]. So the chance that we will run into a truly overperfect data set is close to zero in an uncontrolled environment--the practical reality in IT-portfolio management. A typical example of overperfect data is when the correlation between actual values and their corresponding estimates is unusually high. If this pattern is found, it is more likely that the estimates were retrofitted.
Often overperfect data is an effect of an IT-governance structure that allows for tampering with the data, willfully or not. In Figure 1 we depict a typical view of an IT-portfolio sample, immediately revealing a case of overperfect data. This view describes the discrepancy between actual hours worked on 150 projects and their estimates. We summarized the data in Table 1, which is the result of a so-called summary statistic (a statistic is just some function of the observed values). Often-used summary statistics are the five-point summary statistics made popular by Tukey [48,38]. We use a six-point summary, since we also include the most well-known summary statistic: the mean. The lesser known are the so-called quartiles. Let us explain them briefly. A quantile is any of several ways of dividing your observations into equally sized groups. An example of a quantile is the percentile, which divides your data into 100 equally sized groups. Likewise, quintiles divide data into 5 equally sized groups, and quartiles into 4. You can obtain a fairly good idea of the distribution of your data by dividing it into quartiles [48,38]. We explain the abbreviations, which will recur in other tables as well:
Both from the numerical and visual aggregates it is clear that there is almost no discrepancy: the numerical summary shows large coherence between estimates and actuals, and the view shows this even more strikingly. A special aspect of the view is that there are no serious over-estimations and only a few under-estimations. The under-estimations can be quite large, and they reflect data input by inexperienced managers not yet initiated in the unwritten rule to retrofit the data in the system. So they report actuals without ``correcting'' their initial estimates. Moreover, if we carry out a linear regression (not shown in Figure 1), we find a correlation coefficient of 0.9742, where a coefficient of 1 would be a perfect fit.
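Such a six-point summary is straightforward to compute. A minimal sketch in Python, where the project-hour values are hypothetical stand-ins for a portfolio column:

```python
import numpy as np

def six_point_summary(data):
    """Tukey's five-number summary (min, quartiles, max) plus the mean."""
    a = np.asarray(data, dtype=float)
    return {
        "min":    float(a.min()),
        "1st-q":  float(np.percentile(a, 25)),  # first quartile
        "median": float(np.percentile(a, 50)),  # second quartile
        "mean":   float(a.mean()),
        "3rd-q":  float(np.percentile(a, 75)),  # third quartile
        "max":    float(a.max()),
    }

# hypothetical actual hours for a handful of projects
print(six_point_summary([120, 340, 95, 410, 230, 180, 510]))
```

The same function applied to the estimate column makes a table like Table 1 a one-liner per row.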
If the resemblance is less striking, there are other ways to visualize a case of overperfect data. We mention them here. If two data sets have the same distribution, there must be a linear relation between both their quantiles (see e.g., [27, p. 244]). You can visualize that with a so-called Q-Q plot, where Q-Q stands for quantile-quantile. In Figure 2 we give a Q-Q plot of the estimated versus actual data points. Indeed, this Q-Q plot shows a very nice linear relation, a strong indication that both data sets have the same distribution. Another visualization aid is to compare the cumulative distribution functions of both data sets. We depict that in Figure 3. This clearly shows that both cumulative distributions are almost the same.
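Numerically, a Q-Q plot amounts to pairing matching quantiles of the two samples; when those pairs lie on a straight line, the distributions are likely the same. A minimal sketch, where the simulated actuals and near-perfect estimates are illustrative assumptions, not portfolio data:

```python
import numpy as np

def qq_points(x, y, n=19):
    """Matching quantiles of two samples; near-linear points suggest
    that both samples come from the same distribution."""
    probs = np.linspace(0.05, 0.95, n)
    return np.quantile(x, probs), np.quantile(y, probs)

rng = np.random.default_rng(1)
actuals = rng.lognormal(mean=6, sigma=0.5, size=150)    # hypothetical actual hours
estimates = actuals * rng.normal(1.0, 0.02, size=150)   # suspiciously close "estimates"

qx, qy = qq_points(estimates, actuals)
r = np.corrcoef(qx, qy)[0, 1]
print(round(r, 4))   # near 1: the quantile pairs lie on a straight line
```

A correlation of the quantile pairs close to 1 is the numerical counterpart of the nice straight line in Figure 2.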
But how different should data points be before we conclude that they are not the same anymore? If the effect is less striking than with the data we used to illustrate the pattern of overperfect data, we can use formal statistical tests to decide whether there is overperfect data or not. Since in general we do not know anything about the distribution of the data, it is best to use a test where this does not matter, a so-called distribution-free test. One such test is the Kolmogorov-Smirnov goodness-of-fit test, or KS-test for short [35,44,12]. The KS-test measures the maximal vertical distance between the cumulative distributions of two data sets. So, in Figure 3, the test measures the maximal vertical distance between the solid and dotted lines. This distance is known as the KS-test statistic. For our example the KS-test statistic is 0.0333, meaning that the maximal vertical distance between the two distributions is very small. Furthermore, the so-called p-value (for probability value) is 0.9999. Under the null hypothesis that the two distributions are the same, such a high p-value gives no grounds whatsoever to reject that hypothesis. Of course, when data is somewhat blurred, the KS-test statistic is often larger, and the p-value less close to 1. You can then decide with a standard confidence level (like 95%) whether the distributions are different, or whether you cannot exclude that they are the same. If we only have retrofitted data, we will not learn anything, and we lose the opportunity of predictive power. Therefore, it is important to explore data for this type of pattern.
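The KS-test is readily available in scipy. A minimal sketch, again with simulated data standing in for the actuals and their retrofitted-looking estimates:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
actuals = rng.lognormal(mean=6, sigma=0.5, size=150)    # hypothetical actual hours
estimates = actuals * rng.normal(1.0, 0.02, size=150)   # near-identical "estimates"

# two-sample Kolmogorov-Smirnov test: the statistic is the maximal
# vertical distance between the two empirical cumulative distributions
stat, p = ks_2samp(estimates, actuals)
print(round(stat, 4), round(p, 4))
```

A tiny statistic combined with a p-value near 1, as here, means we cannot reject the hypothesis that both samples share one distribution, which for estimates versus actuals is exactly the overperfect pattern.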
In order to develop predictive models, you need historical data describing situations similar to the one you wish to have predictive power for. One of the ways to measure the quality of the historical data is to calculate the so-called estimating quality factor, or EQF, proposed by Tom DeMarco in the early 1980s. The idea is to divide some actual value (that we find in retrospect) by the difference with the estimates that were made before the actual value was known. So, suppose that a turns out to be the actual value, e(t) represents the successive estimates over time, and t_a is the time when the actual value is known. Then the following calculation gives us the EQF:
Equation 1 expresses that if the estimates are at all times exactly a, the estimating quality factor becomes infinite. So, in practice, the closer an estimate approximates the actual value, the higher the EQF. Indeed, the earlier mentioned overperfect data implies by definition a very high percentage of EQFs approaching infinity, whereas an EQF in the order of 10 (just 10% off) has never been observed by proponents of the EQF-methodology. So if EQFs are found that are much better than those of real-world examples, it is more likely that the data has somehow been adjusted.
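A discretized EQF can be sketched under stated assumptions: equally spaced estimate revisions, so the integral over the estimation period becomes a sum. The project numbers are hypothetical:

```python
def eqf(actual, estimates):
    """Estimating quality factor over equally spaced estimate revisions.

    Discretized form: the actual value integrated over the estimation
    period, divided by the integrated absolute deviation |e(t) - a|.
    Estimates that always equal the actual give an infinite EQF.
    """
    deviation = sum(abs(e - actual) for e in estimates)
    if deviation == 0:
        return float("inf")
    return actual * len(estimates) / deviation

# hypothetical project: actual effort 1000 hours, four successive estimates
print(eqf(1000, [700, 800, 900, 1000]))   # estimates converge on the actual
print(eqf(1000, [1000, 1000, 1000]))      # "overperfect": infinite EQF
```

Note that estimates which are consistently 10% off yield an EQF of 10, matching the rule of thumb quoted above; a portfolio full of near-infinite EQFs is therefore a red flag.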
One of the important data points for IT-portfolio management is the duration of a project. Time-to-market is often an important aspect of a project, and control of deadlines can turn out to be crucial for reaping business opportunities. Underestimation of the completion time of projects is endemic, and analysis of real data, as opposed to overperfect data, reveals that there is sometimes a fixed factor between actual completion time and the estimated time for a project. For instance, in the late 1970s Augustine found a fairly consistent factor of 1.33 between estimated completion times and the actuals for about 100 official schedule estimates within the US Department of Defense [6, p. 320-1].
To solve the overperfect data problem, a simple change was made to the IT-portfolio management system: all data fields were made nondestructive, with the possibility to correct errors within a certain timeframe, say a week. From the historical data and the actual values, an estimating quality factor was calculated, giving managers more or less credit and freedom depending on their quantified estimating accuracy. In this way, the data that needed to be collected was used, and feedback was given by way of rewarding good estimators. A limitation of this simple pattern is that it does not exclude people from booking to the estimate, so that aggregates look great but the data underlying the aggregates are fiction. To detect such patterns we need to analyze the micro-behavior of the cost data in correlation with duration and functionality. This sometimes requires sophisticated statistical analyses, like (vector) autoregression techniques, that are out of scope for this paper. For more information on the plausibility of the micro-behavior of important IT-indicators we refer the interested reader to the literature.
We have seen overperfect data, and why it exists. Now we address another pattern in data pointing to problems in the process of collecting them. You might say it is the opposite of overperfect data, whose correlations are unusually high. Namely, we also encounter data sets that show no correlations whatsoever, whereas they should. For instance, larger IT-systems often take a longer time to construct, so you would expect a relation between those two variables. But in some IT-portfolios, we cannot infer this from the data. We call such data sets heterogeneous. Often the data is pooled from a number of more homogeneous data sets, but put together they result in a (seemingly) arbitrary data view. Of course, such data can in addition be overperfect, but for explanatory reasons we will discuss heterogeneous data in isolation.
We show an example of seemingly random data in Figure 4. In this view, we depict for a sample of 150 IT-projects their productivity in function points [1,2,19,34,33,21] per staff month, against their size in function points. From the view we can immediately see that the linear regression line has no predictive power whatsoever. Indeed, the correlation coefficient is 0.0005629, which is close to no correlation at all (the coefficient would then be exactly zero). In reality, there is a decrease in productivity when the software size increases [6,37,40,28]. For instance, when the size of software increases, the number of people working on the project increases; therefore communication among the engineers increases, and overall productivity drops. There are also other reasons why productivity will inevitably be lower for larger systems than for smaller ones. This effect can be quantified: according to benchmark, it is a nonlinear relation. The sample viewed in Figure 4 comprises mainly in-house developed MIS systems. For this type of development we use the following formula taken from [50, p. 61, formula (46)]:
This formula takes an amount of function points, and returns the productivity in function points per staff month according to benchmark for that particular software size. This formula is inferred from public IT-productivity benchmarks for in-house MIS development. Since the data does not seem to show any structure, we use Formula 2 as a fall-back scenario.
From Table 2, we learn that the productivity ranges between 4.5 and 15.5 function points per staff month. The mean productivity is around 10 function points per staff month. Moreover, projects range from extremely small ones of only a few function points up to medium-sized projects of 800 function points. We note that the numbers in the two columns are not correlated; they just summarize minimal and maximal values, and a few crucial values in between.
This large range and the view in Figure 4 are strong signs of heterogeneous data. In order to assess the arbitrarily looking data, we conduct an experiment. We wish to know what the difference is between benchmarked productivity and the reported productivity. The reason for this is that in this example the function point sizes are reasonable. We note that this is more often the case: measuring the amount of function points is somehow easier than inferring IT-productivity. The data of the sample portfolio contains tuples of the form (p_i, f_i), where p_i is the reported productivity for a software system of size f_i measured in function points. To calculate the difference we use Formula 2: p(f_i) - p_i.
In Figure 5, we depict the data cloud that results from taking the differences. This plot is clearly an improvement over the random plot (viz. Figure 4). Next we carried out two curve-fitting exercises: a linear regression, represented by the dashed line, and a nonlinear fit, represented by the solid line. The linear fit has a much better correlation than the earlier fit on the raw data: 0.6889. Still we are not satisfied with the linear fit since, as we will see later on, it does not predict productivity in a natural manner: it predicts higher productivity for larger systems, which is not in accordance with reality. Therefore, we also conducted a nonlinear fit, assuming that the difference is shaped similarly to Formula 2. This assumption is based on our experience that productivity benchmarks behave somewhat like a logistic probability density function. We do not exclude that other families of curves may be appropriate, but we use the logistic family, which often gives satisfactory results. This fitting exercise leads to the following formula:
The c stands for correction, since Formula 3 describes a correction between the benchmarked productivity and the reported one. Using this correction, we can infer a formula that is specific for the sample IT-portfolio that gives a reasonable idea of the productivity: p'(f) = p(f) - c(f).
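Formulas 2 and 3 themselves are not reproduced in this excerpt, but the nonlinear fitting step can be sketched with a generic logistic-shaped family. Everything below (the family, the parameter values, and the synthetic data) is an illustrative assumption, not the portfolio's actual figures:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_family(f, a, b, s):
    """Hypothetical logistic-shaped family for a correction curve c(f);
    a, b, s are the shape parameters to be fitted."""
    return a / (1.0 + np.exp(-(f - b) / s))

rng = np.random.default_rng(3)
sizes = rng.uniform(10, 800, 150)                   # project sizes in function points
true = logistic_family(sizes, 8.0, 300.0, 90.0)     # synthetic "difference" data
diffs = true + rng.normal(0, 0.5, sizes.size)       # with some measurement noise

# nonlinear least squares; p0 is a rough initial guess for (a, b, s)
params, _ = curve_fit(logistic_family, sizes, diffs, p0=[5.0, 250.0, 100.0])
print(np.round(params, 1))   # fitted (a, b, s) should lie near (8, 300, 90)
```

With the fitted correction c(f) in hand, the portfolio-specific productivity is simply p'(f) = p(f) - c(f), as in the text.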
In Figure 6 we plotted our newly inferred productivity formula p'(f). What can be seen is that for very small sizes the productivity is high; then it drops very quickly, and after a slight recovery, it drops again. Investigation of the sample showed that there was a mix of small new development projects, small changes to very large systems, and minor and major enhancement projects. Since only the size of the change was reported, and not the context of the change, the data seemed arbitrary, but it is in fact just heterogeneous. Based on an additional qualitative analysis we understood the dip in p': these productivity figures represent the projects where relatively small changes were made to very large existing systems. For that category of changes it is known that productivity is very low. Productivity ranges between 5 and 15 function points per staff month for well-structured systems where typically enhancements are implemented without the need for internal modifications to the existing system(s) [28, p. 630], and between 0.5 and 3 function points per staff month for the classic form of maintaining poorly structured, aging legacy systems [28, p. 633]. Obviously this kind of low productivity is totally different from the average productivity of 27.80 function points per staff month for in-house developed MIS systems [29, p. 191]. Therefore, the data of Figure 4 showed such a wide variation that it looked random at first sight.
We note that you can also spot potentially heterogeneous data by estimating the empirical probability density function. If this leads to more than one local maximum, that too is a sign of heterogeneous data. In Figure 7 we provide such estimates. As can be seen, there are multiple peaks for the function point sizes, which seem to be the sum of maybe three density functions. In fact, there is nothing wrong with that, since function point sizes often display asymmetric, leptokurtic, possibly heterogeneous distributions with heavy tails. Also the productivity distribution does not reveal too much heterogeneity. Fortunately, the plot in Figure 4 leaves nothing to the imagination in that respect: the data is heterogeneous.
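Estimating the empirical density and counting its local maxima can be sketched as follows; the three subpopulations below are hypothetical stand-ins for the kind of mix described above:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(11)
# hypothetical pooled sizes: three homogeneous subpopulations mixed together
sizes = np.concatenate([
    rng.normal(40, 10, 60),    # small changes to large systems
    rng.normal(250, 40, 50),   # enhancement projects
    rng.normal(600, 80, 40),   # new development
])

# kernel density estimate evaluated on a grid
grid = np.linspace(sizes.min(), sizes.max(), 400)
density = gaussian_kde(sizes)(grid)

# count interior local maxima of the estimated density
peaks = int(np.sum((density[1:-1] > density[:-2]) &
                   (density[1:-1] > density[2:])))
print(peaks)   # more than one peak hints at heterogeneous data
```

Such a peak count is of course only a hint, not a test; as the text notes, multimodal size distributions can be perfectly legitimate.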
As we already announced, we also tried a linear fit to correct the productivity, but this turned out to be less natural. We compared both corrected productivity formulas in Figure 8, which clearly shows that for the larger projects the linearly corrected productivity increases monotonically. This is not in accordance with reality, since larger IT-projects have lower productivity rates than smaller ones. Therefore, we rejected the linear fit and took the nonlinear fit, which also behaved relatively well for the larger function point sizes. Again, this fit too is far from optimal, but at least it gives us insight into the status of the current IT-governance rules. That status is that too many different types of activities are pooled into one type of data, which leads to rather unclear views such as the one depicted in Figure 4.
We have seen two typical patterns when it comes to data. At this point we move to IT-governance rules, and dive into the subject of overregulation. Here too, we can see from the collected data what the effect of a rule is, and whether this is good or bad. For instance, in one case a benchmark among peers showed a too low productivity for IT-development within a large organization. How was this possible, when everything was perfectly under control? The answer lies in the word perfect, which is sometimes the enemy of the good. The productivity problems were due to overregulation.
By way of comparison, we give an example of overregulation outside the software world. The US Government has many rules for all their acquisitions. One of those rules is called TINA, short for Truth In Negotiations Act. TINA implies that each contract must be accurate, complete, current, and certifiable. This holds for all cost and pricing data associated with each contract. Of course, TINA is meant to save costs, to prevent unnecessarily high prices. But how much does TINA itself cost? The following case sheds some light on this question. After an initial acquisition of half a dozen F16s, six more turned out to be necessary, despite the fact that they had been declared superseded by other planes. The first acquisition was done according to TINA, but for the other six F16s a TINA-waiver was granted. This reduced the military specifications by 91% and the data deliverables by 61%, cut the contract span period from 800 to 195 days, reduced proposal preparation costs by $1.5 million, and lowered the unit price by $0.3 million. It is obvious that rules meant to save costs can themselves have a cost-increasing effect. When governance rules cost more than they deliver, we speak of overregulation. This happens for software projects as well, and there are ways to detect cases of overregulation or control mania from data in an IT-portfolio.
One of the key indicators to look for is the approval duration of IT-projects: the time (here measured in days) it takes from submission to approval of artifacts demanded by governance, e.g., all kinds of documents. In Figure 9 we illustrate this. From a sample of 150 IT-projects we depicted the total approval duration for the most prominent deliverables that are submitted, reviewed, and approved during the project. In this case: a feasibility study, a requirements document, change control documents for each major deviation from the original plans, and a document dealing with the closure of the project, whether successful or not: the post mortem. For all but the change control documents, a project could only commence after a sign-off: no significant investment without an approved feasibility study, no design and implementation without proper requirements, and no deployment (or retirement) before the post mortem. Change control does not have such a sequential impact, since while one part awaits approval for change, other parts can progress; still there is a small delay. We composed the total approval duration by adding the approval times of the three sequential documents plus one-tenth of the change control approval time, thereby conservatively taking the approval duration of intermediate changes into account. As summarized in Table 3, the average total approval duration is over 75 calendar days, or more than 2.5 months of administrative delays. Also, on average, the total project duration was about 380 calendar days, so a little over a year, of which 78.1 days is just over 20%. Note that the row in Table 3 does not contain the sums of the columns; it is the summary of the totals. So, the approval process laid out in the IT-governance rules is responsible for 20% of total IT-project duration. Note that the creation of the documents is not taken into account here.
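The composition of the total approval duration reduces to a small calculation; the per-document durations below are hypothetical, chosen only to match the reported order of magnitude:

```python
# hypothetical per-project approval durations in calendar days
approvals = {
    "feasibility":    21,
    "requirements":   34,
    "post_mortem":    12,
    "change_control": 80,   # cumulative over all change requests
}

# the three sequential documents block progress and count in full;
# change control only delays a small part of the work, so it is
# weighed at one-tenth, the conservative weighting from the text
total_approval = (approvals["feasibility"]
                  + approvals["requirements"]
                  + approvals["post_mortem"]
                  + 0.1 * approvals["change_control"])

project_duration = 380.0   # total calendar days for the project
share = 100.0 * total_approval / project_duration
print(total_approval, round(share, 1))   # 75.0 days, roughly 20% of the project
```

For this hypothetical project the approval process consumes about a fifth of the calendar time, in line with the portfolio-wide 20% reported above.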
The 20% is only the time between submission and approval (this leads to time decompression, a subject we will discuss in another section). Such long waiting times not only stunt IT-productivity, but there is also an opportunity cost incurred. There is a longer time to market, and staff typically does not shift during unplanned delays. In such a situation, you start wondering whether no IT-governance at all pays off more than this fast-tracked governance (we will come back to underregulation in the next section).
The opposite of overregulation is what we call underregulation: there are no IT-governance rules. This does not mean there are no governance rules at all, only that there are no specific rules to govern information technology. Lack of such rules gives rise to patterns in data collected for IT-portfolio analyses. We will illustrate a few effects testifying to the lack of rules. In fact, you could say that people assume certain rules, even if they are lacking, and what we see in IT-portfolio analyses is the reflection of such implicit rules.
In one organization we found the following effect when looking at three important indicators: budget, time to market, and delivered functionality. Due to the lack of rules, technical personnel optimized towards functionality. Both budget and time to market were sacrificed for delivering the full solution. Furthermore, budget took precedence over time to market: first, deadline extensions were used to optimize to full delivery, and only as a last resort more budget was asked for. You could say that the IT-department optimized towards quality--an attribute that characterized the mission and strategy of this organization.
We know that optimizing to functionality is only important in a few cases; for instance, when obligatory changes to existing systems have to be made, you cannot afford to deliver only 80% of the solution. But in other cases, the speed to market of a solution is more important than having all the functionality, for instance to secure market share. In yet other cases, budget is the leading indicator for creating a business case: the perfect solution is way too expensive, but a partial solution often creates enough value for money.
An interesting phenomenon that is often seen in organizations lacking IT-governance rules is what is called seasonality. Almost all organizations have financial governance, and this has traditionally been organized around years and quarters. Some governance rules are organized around fiscal years, which need not coincide with calendar years, but the effect is the same: information technology automatically becomes organized around the financial ``seasons''.
This implies for IT that project management, budget allocation, and review processes are also organized around the seasonal deadlines that apply for general governance. We can see such effects in IT very clearly. For instance, in Figure 10 we depicted two lines: the solid line is the number of ongoing IT-projects, and the dotted line is the number of accompanying reports about the projects. In Table 4 we summarized the data for both the projects and their reports. The long-term means of both time series show that there are on average 500+ IT-projects and about 150 reports at any given time. However, from the lines in the figure we can see that there are short-term peaks around magic dates, like January 1st and July 1st, and smaller peaks around the 1st and 3rd quarter. We can clearly spot the effects of seasonality here.
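A minimal sketch of spotting such seasonality: compare short-term peaks against the long-term mean of the series. The monthly report counts below are made up, merely shaped like the dotted line of Figure 10:

```python
# Hypothetical monthly report counts (keys are month numbers).
reports = {1: 290, 2: 150, 3: 140, 4: 190, 5: 150, 6: 145,
           7: 280, 8: 150, 9: 140, 10: 185, 11: 150, 12: 155}

long_term_mean = sum(reports.values()) / len(reports)

# Flag months whose count exceeds the long-term mean by, say, 50%:
seasonal_peaks = sorted(m for m, n in reports.items() if n > 1.5 * long_term_mean)
```

On these made-up data the flagged months are January and July, the "magic dates" visible in the figure.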
We do not recommend governance around financial seasons, since these often emphasize deadlines for the delivery of financial data, which transposes to IT-delivery. We do, however, recommend taking decisions on IT-portfolios on a regular basis, rather than on a few projects at a time. A formal submission process enables governors to give IT-projects different priorities, or to weigh projects against each other based on different criteria, in order to obtain an optimized IT-portfolio with a proper risk/return and a corresponding investment policy.
We already alluded to the notion of seasonality; more generally, there can be IT-governance rules that take the aspect of time into account. One of the things that keeps coming up is time-to-market. There is often strong pressure to keep the durations of IT-projects as short as possible. In fact, this is often viewed as a cost control. But as we will see, managing on time can also increase costs. We can spot those effects in data collected during IT-portfolio analyses, and we can also measure the effect this has on the IT-budgets.
In Figure 11 we depicted a cost-duration view of a sample IT-portfolio containing 495 finalized IT-projects, with a total investment of $1.8 billion. On the vertical axis we set out cost in thousands of dollars, ranging from IT-projects of insignificant cost to large IT-investments in the millions of dollars. Horizontally, we depicted the duration of these projects, but we refrained from giving actual dates: we start with 0, which represents January 1st of some recent year. In this sample we see vertical concentrations of data. Such data clouds are characteristic of managing on time.
To obtain a better view of such data clouds, it is a common technique to depict the same data on a log-log scale. We have done this in Figure 12: the difference with Figure 11 is that the logarithms of the cost and the time are plotted, so the vertical and horizontal scales are no longer linear. A logarithmic scale clusters data points having the same order of magnitude. We put three vertical dotted lines in Figure 12; we call them tear lines. As can be seen, a lot of projects have fixed deadlines: 6 months, 12 months, and 2 years. In this case, when these deadlines were not met, projects turned out to jump to the next tear line.
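The clustering around tear lines can be sketched by assigning each project duration to its nearest candidate deadline on a log scale; the durations and candidate tear lines below are illustrative, not the portfolio's data:

```python
import math
from collections import Counter

# Candidate deadlines (in months), matching the tear lines of Figure 12.
TEAR_LINES = [6, 12, 24]

def nearest_tear_line(duration):
    # Distance on a log scale, consistent with the log-log view.
    return min(TEAR_LINES, key=lambda t: abs(math.log(duration) - math.log(t)))

durations = [5.5, 6.2, 6.0, 11.8, 12.4, 12.0, 13.0, 23.0, 25.1]  # hypothetical
clusters = Counter(nearest_tear_line(d) for d in durations)
```

Strong concentrations at the candidate lines, with little mass in between, are the signature of managing on time.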
Another issue worth mentioning is that per tear line there seem to be almost arbitrary costs. To that end, we clustered the data around the three tear lines and summarized the result in so-called box-and-whisker plots [48,38], or boxplots for short. We depicted three of them in Figure 13. A boxplot is just a visual form of a five-point summary. The shaded box is limited by the first and third quartile, and the white line inside the shaded box is the median, so that skewness of the data can be spotted right away. The shaded box thus encloses the middle 50% of the observed values. Extreme values (minimum and maximum) are highlighted by so-called whiskers and, if present, outliers are shown as well. From these boxplots, we indeed see that for very small timeframes there is an enormous variation in the cost range. Moreover, this cost range is not wide due to outliers, or potential outliers: the middle half of the data is responsible for the large spread.
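Since a boxplot is just a visual five-point summary, the underlying numbers can be computed directly. This sketch uses made-up cost data and one simple quartile convention (linear interpolation between order statistics; other conventions exist):

```python
def five_point_summary(data):
    """Return (min, Q1, median, Q3, max) of the data."""
    xs = sorted(data)
    n = len(xs)
    def quantile(q):
        pos = q * (n - 1)
        lo, hi = int(pos), min(int(pos) + 1, n - 1)
        frac = pos - lo
        return xs[lo] * (1 - frac) + xs[hi] * frac
    return xs[0], quantile(0.25), quantile(0.5), quantile(0.75), xs[-1]

costs = [120, 250, 400, 800, 1500, 2300, 5000]  # $K, hypothetical
mn, q1, med, q3, mx = five_point_summary(costs)
iqr = q3 - q1  # the shaded box of the boxplot encloses this middle 50%
```

A large interquartile range, as here, corresponds to the wide shaded boxes of Figure 13: the middle half of the data, not outliers, carries the spread.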
After we found this data pattern, we discussed it with the organization involved to learn more about their IT-governance rules. It turned out that they managed heavily on time: everything they were doing was time-critical, and it was of the utmost importance to bring applications to market in short timeframes, so that customers could be served and profits made. A question that we subsequently worked on is the following: what is the trade-off between speed-to-market and IT-development costs? In other words, a shorter timeframe for an IT-development project brings additional costs, but it also enables earlier usage of the software, which can bring in profit (or market share for that matter). The idea was to quantify the extra costs, so that management could make a calculated decision on the benefits of time-to-market.
where c stands for cost, and d is the duration of an IT-project. We use this empirical relation between time and cost to estimate the cost effect of a shorter time-to-market than a normal, technology-driven deadline would require. Equation 4 indicates that when we try to compress time just a little bit, the pressure on the cost increases drastically. This is similar to fluids, where a minimal compression of the volume results in a significant increase of the pressure. Therefore, we sometimes refer to equation 4 as the hydraulic software law. A reasonable range for Formula 4 is that durations do not deviate more than 35% from nominal values; these can be found using your own data, or estimated via public benchmarks.
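A sketch of how such a hydraulic relation behaves. Since Formula 4's calibrated exponent is not repeated here, BETA is a hypothetical placeholder; it must be replaced by a value fitted to your own data or to public benchmarks before any real use:

```python
# Hydraulic software law, sketched as a power law: c * d**beta = constant.
BETA = 3.0  # ASSUMED exponent, for illustration only

def hydraulic_constant(cost, duration, beta=BETA):
    return cost * duration ** beta

def compressed_cost(constant, duration, beta=BETA):
    # A small time compression yields a steep cost increase,
    # like compressing a fluid raises its pressure sharply.
    return constant / duration ** beta

k = hydraulic_constant(1.0, 12.0)       # cost in $M, duration in months
cost_at_11 = compressed_cost(k, 11.0)   # one month shorter -> ~30% costlier
```

Even with this toy exponent, shaving one month off a 12-month schedule inflates the cost markedly, which is the qualitative message of the hydraulic law.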
In Figure 14, we depicted the effects of time compression. Let's explain this. Suppose there is an IT-development project that costs a little over a million dollars ($1,037,174 to be precise), and suppose that the time-to-market for this system should be 12 months according to the business. We depicted this IT-project with a single dot in a cost-duration view; see Figure 14(a). For this IT-project, we calculate the constant of Formula 4. This amounts to:
Now we can vary for this IT-project the cost and time along the line that is defined by the following function:
The abbreviation h is short for hydraulic, and quantifies the effects of taking more or less time for an IT-project on the costs. In Figure 14(b) we depicted h, together with the dot representing the 12 month IT-project. Next we recall a formula taken from [50, p. 18]:
where tcd is short for total cost of development, r is the daily burdened rate in dollars, w is the number of working days per annum, and d is the duration of an IT-project in calendar months. Formula 5 is based on public benchmarks, and thus gives an indication of what an IT-project will nominally cost. In this case, we had internal data, so we did not use this formula but another one. Basically, the formula above is constructed like this:
where a in equation 6 is short for assignment scope. An assignment scope for a certain activity is the amount of software (measured in function points) that you can assign to one person for doing that particular task. Note that the assignment scope is relatively size independent. In Formula 5 we took a=150, representing the activity: IT-development. If we set a=750, this is for the activity: maintenance. Both numbers were taken from [28, p. 202-203]. Since we had data, we could carry out a regression to estimate a, which turned out to be a value between 150 and 750. This reflected the organization's governance rule that no distinction was made between new development, maintenance and renovation projects. For now it is only important to realize that you can come up with some internal benchmark with which you can compare IT-projects. For the sake of the example we use an industry benchmark to explain the methods we are using. So, in Figure 14(c) we added a dashed line representing Formula 5, to the hydraulic line determined by the IT-project visualized by the single dot.
To answer the question what this IT-project would cost if the deadline was according to public benchmarks, we have to combine Formulas 4 and 5: we have to travel alongside the hydraulic line h, until we meet the benchmark line tcd. The intersection is marked with an open inverse triangle--see Figure 14(d). In this case we have analytic relations, so that we can easily find an algebraic solution for the duration d0 that is both on line h and according to benchmark. It is the following relation:
In Figure 15 we depicted an exploded view of Figure 14(d), in order to visualize the time-cost trade-off more prominently. As can be seen, the original deadline of 12 months should, for the price of about a million dollars, be a little longer: about half a month. To calculate the cost reduction, we subtract the benchmarked cost from the compressed cost, so that the reduction is $140,629.8, or approximately $141K. The horizontal line segment in Figure 15 represents the half-month time delay, and the vertical line segment in Figure 15 stands for the potential $141K cost-reduction. Of course, you should then not work with fixed staff, since then delay equals added cost. Apart from the financial calculations for time compression and decompression, you must deploy proper capacity management, so that the relaxed time frame, and the effort it frees, is moved to other projects.
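The arithmetic of this trade-off can be checked directly from the figures in the text:

```python
# Worked example from the text: the compressed 12-month project costs
# $1,037,174; relaxing the deadline by about half a month brings the cost
# down to the benchmarked level.
compressed_cost = 1_037_174.0
cost_reduction = 140_629.8

benchmark_cost = compressed_cost - cost_reduction   # cost at duration d0
reduction_pct = cost_reduction / compressed_cost    # roughly 13.6%
```

So the last half month of compression accounts for well over a tenth of the project's total cost, which is what makes the trade-off worth a calculated decision.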
With this result, the business can reflect once more on their business case. Namely, is an extra cost of well over a hundred thousand dollars justified for a time reduction of about half a month? If so, then the IT-project must make more than $141K in two weeks, or some other strategic goal must be met, e.g., preventing permanent loss of market share as a consequence of not being the first mover. And can we deploy the extra effort on another project? Summarizing, at least now we can make a calculated consideration when it comes to speed-to-market, and what this brings as benefits to the business.
Of course, it is interesting to look at individual projects and make such time-cost trade-offs. But what happens if we apply the above exercise to an entire IT-portfolio that is managed on time? In Figure 16 we depicted a sample of 50 IT-projects from a business unit within an organization. The dots above the dashed line are the actual project data. The solid line segments are hydraulic lines like the solid curve in Figure 14: they belong to the original data points of the individual projects, found by applying Formula 4 fifty times and deriving the h function that belongs to each particular project. The dashed line is again Formula 5, of which we only need one: it represents the relation between cost and duration according to public benchmarks. Now if we apply Formula 7 fifty times, we find all the intersections of the time-compressed IT-projects with their benchmarked estimates. This leads to the fifty intersection points on the dashed benchmark line.
Next we calculate all 50 time delays, as depicted in Figure 15, and we calculate the 50 cost reductions as well. In Table 5, we summarized the data points and their totals. We see that the time-compressed schedules average 17.58 months, and that the total effort in time is 879 calendar months. The benchmarked durations are a bit longer, since this IT-portfolio sample is clearly time compressed, and take 19.68 months on average; an increase of 2.1 months per project, for 50 projects. This amounts to an increase of 105 calendar months over the 50 projects. Relative to the original 879 calendar months, this is an increase in schedule of 11.95% at the IT-portfolio level. Likewise, the planned cost of these projects is on average about $825,000, whereas the benchmarked costs are much lower: about $530,000, a decrease on average of $0.296 million per IT-project. Of course, the results in the rows for time delay and cost saving are not calculated by subtracting the two rows above them; these numbers are the result of a standard summary statistic on the differences for all the data.
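The portfolio-level schedule figures from Table 5 can be reproduced as follows:

```python
# Aggregating the schedule totals reported for the 50-project sample:
# 879 compressed calendar months versus 984 benchmarked calendar months.
n_projects = 50
compressed_total = 879.0   # calendar months
benchmark_total = 984.0    # calendar months

extra_months = benchmark_total - compressed_total     # 105 months
avg_extra = extra_months / n_projects                 # 2.1 months per project
schedule_increase = extra_months / compressed_total   # ~11.95%
```

This shows how the per-project average (2.1 months) and the portfolio-level schedule increase (11.95%) follow from the same two totals.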
All in all, the IT-portfolio cost is a little over 41 million dollars; its benchmarked cost--at the expense of longer schedules--is 26.50 million. We call the difference between actual and benchmarked costs the IT-portfolio time compression risk. This time compression risk amounts to almost 15 million dollars, which is 35.86% of the total IT-budget. In this case, time compression forms a severe exposure for the organization. Once the indication for a serious time compression exposure is given, we have to assess individual time-compressed IT-projects to see whether the time-constraints can be relaxed such that they approximate nominal schedules.
The sample IT-portfolio depicted in Figure 16 also clearly indicates the potential to create havoc in the IT-budget: much more resources are necessary than anticipated, or, with just a little more time, we can deliver the same functionality at a much lower cost. In the case described, we advised changing the IT-governance rules slightly, so that time-critical IT-development by default was abandoned. Instead, all projects were assessed on whether they were time-critical or not. If they were not, other governance rules were used to monitor progress, and if there was a time-critical component, the costs of speed-to-market were weighed against the benefits of having earlier access to an operational system. From these analyses a few important lessons can be learned.
Another uniform way of managing IT is nowadays often seen: managing on cost. While in some cases this may sort the desired effect, cost-leadership can have negative consequences, as we will show in this section. Just like managing on time, we can detect managing on cost via cost-time views. In Figure 17, we depicted a case of managing on budget: we see clouds of data around various cost-bandwidths. To improve the view, we also plotted a log-log view, so that numbers of the same magnitude cluster together. This is clearly visible in Figure 18. As can be seen, the costs asymptotically approach certain magic lines, and then leap to the next level where another asymptote is found.
We collected the data clouds, and depicted them via boxplots in Figure 19. There we see a large spread of different time intervals that are not in accordance with nominal time intervals for IT-projects of such price tags. One of the possible effects of managing on cost is that some projects are systematically underfunded. When you do not put enough resources into an IT-project, in the end it will cost more than with enough resources. We call this effect time decompression. Again, a limitation of this simple pattern is that it does not prevent people from gaming the system, so that aggregates look great but the data underlying the aggregates are arbitrarily scattered. We recall that in order to reveal such problems we need to investigate the microscopic properties of the underlying variables with more complex statistical techniques, like vector autoregression. More information on such analyses in the realm of IT-audits can be found in .
where c stands for cost, and d is the stretched duration of the IT-project. Furthermore, c0 is the nominal cost, and d0 is the nominal duration. For the nominal relation between cost and duration, we take Formula 5 (but you can also take another, e.g., internal, benchmark for nominal cost-duration relations). This leads to the following relation:
So, suppose we carry out a million dollar project in d=14 months. Then, we can calculate the nominal duration, and nominal cost as follows. Using equation 8 we find:
By substituting c0 in equation 5 using the above we find:
which gives us a nominal duration of 12.44987 months. The nominal cost that belongs to this duration is $889,276.4, using equation 5. So, on this project we can save some time: 1.55013 months, and some cost: $110,723.6. Note that for the sake of ease we refrained from discounting the time value of money. More involved calculations for appraising IT-investments that do deploy discounted cash flows are presented elsewhere .
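A quick consistency check of this worked example, using only the figures from the text:

```python
# Stretched-project example: a $1M project done in 14 months, versus its
# nominal duration of 12.44987 months at a nominal cost of $889,276.4.
actual_cost, actual_duration = 1_000_000.0, 14.0
nominal_cost, nominal_duration = 889_276.4, 12.44987

time_saving = actual_duration - nominal_duration   # months saved
cost_saving = actual_cost - nominal_cost           # dollars saved
```

Doing the project in nominal time saves both schedule (about 1.55 months) and money (about $111K); discounting cash flows would refine, not reverse, this conclusion.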
We can make such calculations at the IT-portfolio level as well. In Figure 20, we give an example of an IT-portfolio that is managed on budget. The dots to the right of the dashed line are the actual data points: real cost and duration. The dashed line is our benchmark formula giving the nominal cost-duration view. The line segments show the linear relation between time-stretchout and cost, as found by equation 8. We note that although it looks good to have many IT-projects under the nominal schedules, this is deceptive: if the majority of the IT-projects are under benchmark, this is often an indication of time decompression, as a consequence of uniformly managing on budget. Of course, when time bookings are manipulated to fit the available budget, we can spot this with more involved analyses, for which we refer to .
In Table 6 we summarized the numbers. On average, the time-stretched IT-projects take 25.8 calendar months, and the total amount of calendar months for this IT-portfolio amounts to 1289 calendar months. The benchmarked schedule times are shorter: on average 19.1 calendar months, with the nominal times summing to 953 calendar months. Compared with the original IT-portfolio, this saves 26.1% of the calendar months in total. Since we are in the stretch-out zone, taking less time will also save costs. The actual costs are on average 7.1 million dollars per IT-project, in an IT-portfolio that costs 353 million in total. The nominal costs are on average $5.7 million, with a total cost of 283 million. Again, the results in the rows for time gain and cost saving are not calculated by subtracting the two rows above them; these numbers are the result of a standard summary statistic on the differences for all the data. Summarizing, if the projects were done in the nominal time, this implies a cost saving of 19.6% at the IT-portfolio level. We call this the IT-portfolio time decompression risk. As with time compression, to mitigate this risk we have to look for individual underfunded projects and assess their business-criticality. Sometimes you will be surprised to find fairly critical projects that are underfunded because the project owners were overshadowed by politically shrewd managers, whereas a business-criticality check would set priorities entirely differently. Time decompression is a way to spot such projects; likewise, supposedly super time-critical projects can turn out to be less critical than suggested by some ad rem managers.
Figure 21 can be understood as follows. The minimum of this curve represents the most appropriate manner of production: if you look at the IT-project from a pure technological production viewpoint, this is the way to do it, at the most appropriate cost and time. Hence our terms nominal cost and nominal duration. If we change time or cost beyond certain thresholds, indicated by the dotted lines, we are no longer capable of constructing the IT-system in the most appropriate manner, leading to higher costs. These higher costs can be analyzed via benchmarked formulas as given in this paper (see formulas 4 and 8), or via internally derived relations based on proprietary data. It is possible to derive such formulas internally (and externally, obviously), since during the construction of similarly typed systems we can assume serial-piece production: products that are identical technically and therefore considered identical economically. We can then use approximately the same relations for new investments as well, to estimate the time and cost representing the most appropriate manner of production, and, if that is impossible, the consequences in cost and time for that particular investment. In this way we can balance business-criticality, time-to-market, and the economics of IT-projects.
There is a danger that if you no longer uniformly manage on time or budget, managing on delivering full functionality is the consequence. Optimizing towards functionality implies higher costs and longer schedules, but not always the corresponding value creation. Management on functionality is easily spotted if the following combination is present: no IT-governance rules, and a high-quality positioning in the market. Both characteristics can be established without any data collection: the lack of IT-specific governance rules and a quality focus are readily detectable aspects of an organization.
If the above patterns are not present, it becomes more complex to spot this type of management by inspecting IT-portfolio data, since there are no ``pathognomonic'' IT-portfolio views revealing it; rather, many different minor indications together can diagnose exclusive management on functionality. When seasonality effects are present in the data, this can indicate that IT-governance guidelines are lacking or not being followed. Another sign is a relatively low percentage of low-cost projects, and a relatively low number of projects with short time frames. Usually, the top 5-10 IT-projects in a business unit consume 60-80% of the total annual IT-budget; the rest is spent on a large number of small and medium-sized projects. As a rule of thumb, if it is not the case that 20% of the IT-projects take 80% of the IT-budget (or 80% of the (smaller) IT-projects take 20% of the IT-budget), this is a sign of uniform solutions-delivery management. Another sign is that a significant number of projects are perpetual, that is, there is no delivery date planned, but continuous evolutionary cycles are used to improve the IT-function in production. Yet another indication is that such production systems comprise an abundance of options and features that are hardly ever executed in reality.
If there is detailed information available, it becomes possible to measure requirements creep as an indicator. Requirements creep (and churn) is the compound monthly growth rate of software after the requirements have been set. For instance, if you decide to build a 1000 function point information system, and in the end it turns out to be in the range of tens of thousands of function points, the requirements have grown enormously--much more than is common for a 1000 function point information system. Namely, the benchmarked monthly growth rate for in-house developed information systems is 1.2%, and the maximal rate reported in the literature is 5.1% [29, p. 186]. For 1.2% and a development schedule of, say, 8 months for a 1000 function point system after the requirements phase, we find about 1100 function points upon delivery. Although this is also high, it still deviates enormously from the example of tens of thousands of function points. So apparently, requirements are adapted continuously, often both by developers and business, inducing unrestricted and undirected growth of the software. And this in turn is an indicator for exclusively managing on functionality.
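The compound-growth calculation can be sketched as follows, using the 1.2% benchmark and, for contrast, the maximal reported 5.1% rate:

```python
# Compound monthly requirements growth after the requirements phase.
def grown_size(initial_fp, monthly_rate, months):
    return initial_fp * (1.0 + monthly_rate) ** months

delivered = grown_size(1000, 0.012, 8)    # benchmarked 1.2% per month
worst_case = grown_size(1000, 0.051, 8)   # maximal reported 5.1% per month
```

Even at the maximal reported rate, the system grows to only about 1500 function points in 8 months; reaching tens of thousands of function points therefore cannot be explained by ordinary creep.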
Of course, all indicators on their own provide only circumstantial evidence: there can be other reasons that explain the observed patterns satisfactorily. But if you spot the majority of these somewhat diffuse signals simultaneously, chances are high that uniform management on functionality is in place.
In this paper we accumulated our experience with a number of exploratory data analyses of large IT-portfolios. We were able to reveal several IT-governance styles and their effects via specific data views. On the basis of such views it was often possible to detect unwanted side-effects, and by changing the IT-governance rules (sometimes only slightly), we could establish significant improvements: cost savings, time savings, or risk reduction (sometimes even mitigation). We proposed 7 focal points in such an analysis: overperfect data, heterogeneous data, overregulation, underregulation, and a preoccupation with invariable management on time, on cost, or on functionality. One important conclusion is that the governance style should be flexible, in the sense that sometimes it is necessary to manage on time, or on budget, or on functionality; but if one uses such management styles without assessing the need for them, counterproductive effects ensue, some of which can be revealed using the methods discussed in this paper.
In real IT-portfolios we often find combinations of the above 7 patterns. A typical example is an IT-governance structure where small projects are relatively free of governance as long as they are under a certain threshold, say $100000. Above that threshold, more governance is mandatory, and apart from more formal approvals, budget is given for 12 months. So this is a governance style that mixes management on time and on cost. First, managers try to break projects up to sub-threshold size, and if this is not possible they cram the work into 12 months. As a consequence, in cost-time views of such a governed IT-portfolio we will see a combination pattern of horizontal and vertical tear lines: a preoccupation with management on budget around the horizontal $100000 cost-line, and persistent management on time indicated by a vertical 12-month line. Moreover, due to the large amount of formal approvals, we detect patterns indicating overregulation, which is not as clear as it could be, since there are also some underregulated smaller projects. But by removing the smaller ones, the effect becomes more evident, and by assessing the smaller projects, underregulation patterns become visible. These four effects together lead to a combination of time-compressed and time-decompressed projects, requiring a number of steps to be taken to solve the problems with the IT-governance rules.
For the sake of explanation, we separated such combinations in this paper into seven unique and recognizable patterns. An important lesson we learned is that exclusive management on time is endemic in some organizations, which does not lead to more benefits: the effects of uniform time-constraints in IT-governance rules more often cause time compression. This is easily solved by translating time constraints for non-time-critical projects into cost or functionality constraints. For instance, suppose a project needs to be finalized in 12 months, and it is not time-critical. Then it is better to provide resources for what a 12-month IT-project nominally costs in your organization: you approximately achieve the 12-month schedule, without the effects of time compression. If in the end the project's scope should have been larger, then the side-effects of time compression are mitigated. On the other hand, if management was exclusively on cost, this can lead to time decompression, which is costly as well, since knowledge loss is the consequence.
So, there is no single best approach. But continuously monitoring the IT-function reveals really unwanted side-effects of IT-governance, and by modest redirections we can balance the level of control to efficiency, the cost of data to its value, the amount of governance to its effectiveness, and more. Any set of straightforward guidelines for IT-governance will be an oversimplification, and it is highly unlikely that it will lead to the desired dynamic equilibrium. One way of reaching that equilibrium is to keep steering away from the seven patterns we found, to enable a better balance between the fundamental parameters of IT-governance (data, control, time, budget, and functionality). Mastering IT-governance is like mastering the art of juggling: if you are not keeping all the variables airborne, you are not juggling anymore. But by continuously monitoring and measuring them you can gradually improve and, in the end, juggle running chain saws--the reality of managing the IT-function.
Indeed, we realize that it is already a challenge to initiate IT-governance. If you manage to start such an initiative, it will not be optimal in the first year. The results in this paper can aid in assessing the current performance of the IT-governance rules, and will hint at improvements of the rules. Also, the lessons learned will give you an idea of the IT-governance rules that align best with your organization. Repeated assessments, or continuous monitoring of your IT-portfolios, will quickly spot some potentially dangerous undesired effects. With (sometimes) conservative modifications of the IT-governance rules you can evolve them into a suitable governance structure. This will result in an IT-function causing less trouble in the future than it is doing now, where in 2003 about 290 billion dollars were spent on failing IT-projects .
Measuring application development productivity.
In Proceedings of the Joint SHARE/GUIDE/IBM Application Development Symposium, pages 83-92, 1979.
Software function, source lines of code, and development effort prediction: a software science validation.
IEEE Transactions on Software Engineering, 9(9):639-648, 1983.
Creativity Under the Gun.
Harvard Business Review, 80(8):52-61, 147, August 2002.
Augustine's Laws and Major System Development Programs.
Defense Systems Management Review, pages 50-76, 1979.
Numerical Analysis and Computation Theory and Practice.
Software Engineering Economics.
Prentice Hall, 1981.
Competing on the Edge - Strategy as Structured Chaos.
Harvard Business School Press, 1998.
Transforming IT Governance.
Technical report, Forrester Research, Cambridge, MA, USA, November 2002.
Reengineering Management - The Mandate for New Leadership.
Harper Business, New York, NY, 1995.
Practical Nonparametric Statistics.
Probability and Mathematical Statistics. John Wiley & Sons, 3rd edition, 1980.
A Summary of First Practices and Lessons Learned In Information Technology Portfolio Management.
Technical report, Federal CIO Council, Best Practices Committee, 2002.
Flow - The Psychology of Optimal Experience.
Harper Perennial, 1991.
Introduction: Avoiding IS/IT Implementation Failure.
Technology Analysis and Strategic Management, 15(4):403-407, December 2003.
Management Agenda: A Vote For IT Federalism - The concepts that shaped our country should also apply to the IT world, June 26, 1995.
Controlling Software Projects - Management Measurement & Estimation.
Yourdon Press Computing Series, 1982.
Peopleware - Productive Projects and Teams.
Dorset House, 1987.
Function Point Analysis.
Prentice Hall, 1989.
Software Product Line Migration and Deployment.
Software: Practice & Experience, 33:933-955, 2003.
Function Point Analysis - Measurement Practices for Successful Software Projects.
Clinger Cohen Act of 1996 and Related Documents, 1996.
The Business of IT Portfolio Management: Balancing Risk, Innovation, and ROI.
Technical report, META Group, Stamford, CT, USA, January 2002.
Reengineering the Corporation - A Manifesto for Business Revolution.
Harper Business, New York, NY, 1993.
Numerical Methods for Scientists and Engineers.
McGraw-Hill, 2nd edition, 1973.
Robust Statistics - The Approach Based on Influence Functions.
Probability and Mathematical Statistics. John Wiley & Sons, 1986.
Introduction to Mathematical Statistics.
Prentice Hall, 6th edition, 2005.
Estimating Software Costs.
Software Assessments, Benchmarks, and Best Practices.
Information Technology Series. Addison-Wesley, 2000.
The Balanced Scorecard - Translating Strategy into Action.
Harvard Business School Press, 1996.
Quantifying the Costs and Benefits of Architectural Decisions.
In Proceedings of the 23rd International Conference on Software Engineering ICSE-23, pages 297-306, 2001.
Making architecture design decisions: An economic approach.
Technical Report CMU/SEI-2002-TR-035, Software Engineering Institute, 2002.
Reliability of Function Points Measurement - A Field Experiment.
Communications of the ACM, 36(2):85-97, 1993.
Improving the Reliability of Function Point Measurement: An Empirical Study.
IEEE Transactions on Software Engineering, SE-18(11):1011-1024, 1992.
Sulla Determinazione Empirica di una Legge di Distribuzione.
Giornale dell'Istituto Italiano degli Attuari, 4:83-91, 1933.
Becoming a Better Estimator - An Introduction to Using the EQF Metric, 2002.
Available via www.stickyminds.com.
Cost Estimation for Software Development.
Data Reduction and Regression.
Information Technology Investment Management - A Framework for Assessing and Improving Process Maturity, 2000.
Measures for Excellence - Reliable Software on Time, Within Budget.
Yourdon Press Computing Series, 1992.
A Data Verification of the Software Fourth Power Trade-Off Law.
In Proceedings of the International Society of Parametric Analysts - Sixth Annual Conference, volume III(I), pages 443-471, 1984.
Memorandum for Heads of Executive Departments and Agencies - Funding Information Systems Investments, 1996.
The Truth in Negotiations Act - What is Fair and Reasonable?
Program Manager, 26(6):50-53, 1997.
Sur les écarts de la courbe de distribution empirique.
Matematicheskij Sbornik. (Novaya Seriya)/Recueil Mathématique, 6(48):3-26, 1939.
In Russian with a French summary (pp. 25-6).
DSDM - Business Focused Development.
Addison-Wesley, 2nd edition, 2003.
DSDM - Dynamic System Development Method.
Investigative Report of Senator Fred Thompson on Federal Agency Compliance with the Clinger-Cohen Act, 2000.
Exploratory Data Analysis.
Getting on top of IT, 2002.
Quantitative IT Portfolio Management.
Science of Computer Programming, 45(1):1-96, 2002.
Quantifying the Value of IT-investments.
Science of Computer Programming, 56(3), 2005.
Quantitative Aspects of Outsourcing Deals.
Science of Computer Programming, 56(3), 2005.
Quantifying Software Process Improvement, 2007.
Leveraging the New Infrastructure - How Market Leaders Capitalize on Information Technology.
Harvard Business School Press, 1998.
IT Governance - How Top Performers Manage IT Decision Rights for Superior Results.
Harvard Business School Press, 2004.