Performance Analysis 1

Last Updated: November 12, 2018; First Released: September 12, 2013
Author: Kevin Boyle, President, DevTreks
Version: DevTreks 2.1.6

A. Introduction

This reference explains how to start to collect, measure, analyze, and explain technical and economic performance data (1*). DevTreks believes that all performance data, from the cost effectiveness of climate change technologies to the net returns from public infrastructure investments, have stories to tell and lessons to teach. Those lessons can only be learned when data about performance is collected, measured, aggregated, analyzed, explained, and saved in online knowledge banks. Full, uniform, and accurate analyses of the technical and economic performance of hospitals, schools, public infrastructure investments, conservation practices, and widget production should be one or two links away for everyone. If a business owner, lender, doctor, patient, teacher, government official, or citizen needs to make a decision involving technical and economic performance, they should have ready access to the best data available. This reference introduces another DevTreks way to build these knowledge banks.

Sections: Performance Measures; Net Returns and Net Savings; Savings to Investment Ratio; Adjusted Internal Rate of Return; Payback Period; Efficiency and Tradeoffs; Affordability; Productivity; Earned Value Management and Value Added; Incremental Change and Incremental Cost Effectiveness Analysis; Incremental Benefit Cost Ratio; Scheduling and Timeliness; Multi-Criteria Decision Scores; Risk and Uncertainty; Scenarios and Forecasts; Standards; Social Performance Measures; Summary and Conclusions; Performance Analysis Examples

B. Performance Measures

Performance Measures interpret technical and economic performance data. The most basic performance measure, Net Returns, tries to answer the question “do benefits exceed costs?” Other measures try to answer questions such as “which technology mitigates climate change most cost effectively?”, “how productive is labor in company 1 versus company 2?”, “which development projects were most effective in creating jobs with middle income wages?”, “what are the most effective food nutrition practices for preventing malnutrition in Region 1’s children?”, “which of the hospitals in city 1 replaces hips most efficiently?”, and “what is the marginal cost of reducing one more unit of air pollution?”. Performance Measures answer practical questions with tangible, that is, online, evidence. The true worth of Performance Measures derives from an analyst’s ability to explain the numbers: why is labor more productive, hospitals more efficient, development more impactful, or conservation abatement more effective? And the analyst’s conclusions are most useful when the evidence and analysis can be carried out and then retrieved from an online knowledge bank using a simple IRI, easily compared to the results of similar analyses, and easily interpreted because of the multimedia and stories that accompany analyses.

Large public sector agencies, such as the US NASA (2011), use formal frameworks for developing and using Performance Measures. They use the following graphic to illustrate how Performance Measures should be tied directly to performance objectives:

International organizations working to improve smallholder agriculture also use formal frameworks for measuring performance.
The following images (Sustainable Food Lab, 2016) illustrate using Indicator frameworks for conducting Performance Measurement. Version 2.0.8 included 4 new algorithms that use a Resource Conservation Value Accounting Framework to conduct Social Performance Measurement. Note that the second image corresponds to the Monitoring and Evaluation “results chain” introduced in the M&E Calculator reference. The Social Performance Analysis references also introduce algorithms that demonstrate using causal chains, social impact pathways, impact transition states, and disaster risk reduction pathways, to measure performance.

The Performance Measures that follow use the following terms:
* COST = Total Cost
* OC = Operating Cost
* AOH = Allocated Overhead Cost
* CAP = Capital Cost
* BENEFIT = Total Benefit
* Alternative = one of several base elements that are compared

These numbers are obtained from Net Present Value, Life Cycle, Option Value, or other basic benefit and cost models. The costs and benefits have to be defined in terms of targeted beneficiaries, or stakeholders. For example, transportation project costs and benefits can be defined based on local ratepayers, public utilities, and greater society. Other factors also affect performance, such as the timing, scheduling, and location of the costs and benefits. Appendix A, Performance Analysis Examples, demonstrates how to apply these measures. The US NASA 2011, US NIST 1996, and IPCC 2014 Working Group 2 and 3 references contain additional examples and guidance for using these types of measures. The WHO 2003 reference provides additional guidance about how to compare Performance Measures using techniques such as international dollars. The Social Performance Analysis tutorial demonstrates how to use the following Performance Measures and supplemental Social Performance Measures to improve the formal financial reporting requirements of public sector entities and private sector firms.

1. Net Returns

Net Returns subtract total costs (COST) from total benefits (BENEFIT) for a project alternative relative to a base alternative (a comparator). Capital Budgets and Operating Budgets use the following calculations:

COST Savings = (COST Total base) - (COST Total x)
BENEFIT Additions = (BENEFIT Total x) - (BENEFIT Total base)
Net Benefits = COST Savings + BENEFIT Additions

2. Net Savings

This performance measure subtracts additional investment costs from cost savings for a project alternative relative to a base alternative, or comparator (NIST, 1996) (2*). Capital Budgets use the following calculations:

OC Savings = (OC Total base) - (OC Total x)
CAP Additions = (CAP Total x) - (CAP Total base)
Net Savings = OC Savings - CAP Additions

If AOH costs have been included in the budget, use the following equations:

AOH Savings = (AOH Total base) - (AOH Total x)
Net Savings = OC Savings + AOH Savings - CAP Additions

Use the following calculation when monetary benefits are available in Capital Budgets:

BENEFIT Additions = (BENEFIT Total x) - (BENEFIT Total base)
Net Savings = BENEFIT Additions + OC Savings + AOH Savings - CAP Additions

Use the following calculation for Operating Budgets:

BENEFIT Additions = (BENEFIT Total x) - (BENEFIT Total base)
Net Savings = BENEFIT Additions + OC Savings + AOH Savings

3. Savings to Investment Ratio

This performance measure divides cost savings by additional investment costs for a project alternative relative to a base alternative, or comparator (NIST, 1996). Operating Budgets do not use this performance measure.
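The Net Savings equations above and the Savings to Investment Ratio equations listed below reuse the same savings and addition terms, so the arithmetic of both measures can be sketched together. The following Python is a minimal illustration only; the function and variable names are hypothetical, not DevTreks identifiers.

# Minimal sketch of the Net Savings and Savings to Investment Ratio arithmetic
# (sections 2 and 3). All names and numbers are illustrative, not DevTreks API.

def savings_terms(oc_base, oc_x, cap_base, cap_x, aoh_base=0.0, aoh_x=0.0,
                  benefit_base=0.0, benefit_x=0.0):
    """Return the savings and addition terms for alternative x versus the base."""
    return {
        "oc_savings": oc_base - oc_x,
        "aoh_savings": aoh_base - aoh_x,
        "cap_additions": cap_x - cap_base,
        "benefit_additions": benefit_x - benefit_base,
    }

def net_savings(t):
    return t["benefit_additions"] + t["oc_savings"] + t["aoh_savings"] - t["cap_additions"]

def savings_to_investment_ratio(t):
    return (t["benefit_additions"] + t["oc_savings"] + t["aoh_savings"]) / t["cap_additions"]

# Hypothetical present value totals: the alternative costs 10,000 more up front (CAP)
# but saves 20,000 in operating costs (OC) over the study period.
terms = savings_terms(oc_base=100000, oc_x=80000, cap_base=30000, cap_x=40000)
print(net_savings(terms))                    # 10,000
print(savings_to_investment_ratio(terms))    # 2.0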
Capital Budgets use the following calculations:

OC Savings = (OC Total base) - (OC Total x)
CAP Additions = (CAP Total x) - (CAP Total base)
Savings to Investment Ratio = OC Savings / CAP Additions

If AOH costs have been included in the budget, use the following equations:

AOH Savings = (AOH Total base) - (AOH Total x)
Savings to Investment Ratio = (OC Savings + AOH Savings) / CAP Additions

Use the following calculation when monetary benefits are available in budgets:

BENEFIT Additions = (BENEFIT Total x) - (BENEFIT Total base)
Savings to Investment Ratio = (BENEFIT Additions + OC Savings + AOH Savings) / CAP Additions

4. Adjusted Internal Rate of Return

This performance measure calculates the annual percentage yield from an investment over its life (NIST, 1996). Operating Budgets do not use this performance measure. The number is measured using the following calculation:

Adjusted Internal Rate of Return = (1 + r) * SIR^(1/N) - 1

Where
r = reinvestment rate (in DevTreks, the Real Interest Rate set using NPV or Life Cycle Input and Output calculators)
SIR = Savings to Investment Ratio
N = number of project years (in DevTreks, the Service Life Years set using LCA input and output calculators)

5. Payback Period

This performance measure calculates the minimal number of years needed for an investment’s savings to offset the incremental initial investment cost of a project (NIST, 1996). Operating Budgets do not use this performance measure. Uniform cost series can be computed as follows:

OC Savings = (OC Total base) - (OC Total x)
CAP Additions = (CAP Total x) - (CAP Total base)
Payback Period = CAP Additions / OC Savings

If AOH costs have been included in the budget, use the following equations:

AOH Savings = (AOH Total base) - (AOH Total x)
Payback Period = CAP Additions / (OC Savings + AOH Savings)

Use the following calculation when monetary benefits are available in budgets:

BENEFIT Additions = (BENEFIT Total x) - (BENEFIT Total base)
Payback Period = CAP Additions / (OC Savings + AOH Savings + BENEFIT Additions)

This measure is difficult to compute in DevTreks when costs are computed using non-uniform series (i.e. Life Cycle non-uniform escalating rates, Life Cycle biannual costs). Individual yearly changes are not displayed in LCA Analyzer results because too much data is involved.

6. Efficiency and Tradeoffs (3*)

Efficiency measures how well inputs and activities are used to produce outputs and outcomes. Tradeoffs measure what is gained and lost when resources are reallocated among competing uses. Economists use the following measures to assess this category:

Technical Efficiency: Measures the greatest amount of output that can be achieved using a given level of inputs (McGlynn, 2008, USAHRQ, 2008). The need to produce the output is taken as given. For example, a firm such as a hospital is technically inefficient if it can produce a greater amount of output, or patient outcomes, using the same levels of inputs. A country’s health care system is technically inefficient if it can use fewer hospital beds, doctors, and nurses, to achieve a given level of population health outcomes. It can be measured using techniques such as those explained in Section 15, Optimality and Economic Efficiency. Note that input and output prices don’t necessarily factor into the full definition.

Allocative Efficiency: Measures how different resource inputs are combined to produce different combinations of outputs.
Efficiency is achieved by using the optimal combination of inputs needed to produce a given level of output (McGlynn, 2008, USAHRQ, 2008). It’s a broader measure of efficiency than just Technical Efficiency, and questions the entire mix of inputs and outputs, including their prices. For example, a country’s health care system is inefficient if it can achieve higher returns to society using a different mixture of health care interventions than it currently employs, such as using more prevention-oriented interventions rather than treatment-oriented interventions. It is also inefficient if an additional dollar spent on health care benefits consumers more than an additional dollar spent on police protection or schools. It can be measured using techniques such as those explained in Section 11, ICER (WHO, 2003). For example, the results of an ICER for all a country’s health care interventions may reveal a more cost effective combination of interventions.

Social Efficiency or Equity: Measures whether someone can be made better off without making someone else worse off (McGlynn, 2008, USAHRQ, 2008). An important use of this measure is to analyze who gains and who loses from alternative allocations of resources. It can be measured using the benefit cost analysis techniques mentioned in Section 12, IBCR (USEPA, 2010).

Economies of Scale and Scope: Economies of scale measure the reduction in the average cost per unit of output associated with increasing the production of that output or service. Economies of scope measure the reduction in the average cost per unit of output associated with increasing the number of outputs or services produced (Goodie and Goddard, 2011). An important use of this measure is to analyze whether firms organize themselves in ways that prevent fair competition.

Economic Efficiency: Economic efficiency is achieved when the additional cost of producing one more unit of output equals the additional benefit gained from that additional unit of output. This is equivalent to solving an optimality problem (see the Optimality section below) by analyzing the point where marginal costs equal marginal benefits.

7. Affordability

The NASA (2008) reference explains many techniques that can assist in determining the affordability, or performance, of project alternatives. Examples include:

Trade Studies: These are similar to Tradeoff Analysis and examine Performance Parameters as functions of Cost Elements, or Cost Alternatives. These studies try to determine the most important drivers of cost and performance.

Learning Curves: Costs decrease as more time is spent learning how to produce outputs.

Real Option Value: When the value of an investment is uncertain, the overall investment valuation can increase by accounting for the uncertainty at some time in the future.

8. Productivity

Productivity is measured by dividing Input quantities and costs by Output quantities and benefits, and vice versa (McGlynn, 2008). The resultant measures can be compared to best practice targets and standards to assess how well resources are being used.
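As a preview of the typical calculations listed next, the following minimal Python sketch computes the common productivity ratios from hypothetical Input, Output, cost, and revenue totals; the names and numbers are illustrative only, not DevTreks identifiers.

# Minimal sketch of common productivity ratios (section 8); all values hypothetical.
input_qty = 120.0      # e.g., machinery and labor hours used
output_qty = 15000.0   # e.g., bushels of corn produced
cost = 48000.0         # COST: total cost
revenue = 60000.0      # REVENUE (BENEFIT when monetized): total revenue

ratios = {
    "Output per Unit Input": output_qty / input_qty,
    "Input per Unit Output": input_qty / output_qty,
    "Cost per Unit Output": cost / output_qty,
    "Revenue per Unit Input": revenue / input_qty,
    "Cost per Dollar Revenue": cost / revenue,
    "Revenue per Dollar Cost": revenue / cost,
}
for name, value in ratios.items():
    print(f"{name}: {value:,.4f}")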
Typical calculations include:

Output per Unit Input = Output Total Quantity / Input Total Quantity
Input per Unit Output = Input Total Quantity / Output Total Quantity
Cost per Unit Output = COST / Output Total Quantity
Revenue per Unit Input = REVENUE / Input Total Quantity
Cost per Dollar Revenue = COST / REVENUE
Revenue per Dollar Cost = REVENUE / COST

Although less common, productivity can also be measured by substituting Operation/Component activities and costs and Outcome results and benefits in these types of equations.

9. Earned Value and Value Added

Earned Value Management (EVM) integrates budget, scheduling, and risk analyses. A key requirement is to measure budget variances and scheduling variances. Budget variances measure the costs (and benefits) of work planned versus actual work completed. Scheduling variances measure the amount, quality, and timeliness of work planned versus actual work completed. EVM measures variances in the value of work planned versus actual work completed (USGAO, 2010). Typical measures include:

Actual Period Total (APTotal) is the actual total for the last period found in the actual budget or base elements. The last period is defined as the last date-ordered aggregated base element. For example, a crop budget may contain aggregated crop operations in the following date order: 1) tillage, 2) planting, 3) nutrient management, 4) cultivation, 5) pest management, and 6) harvest. The Actual Period Total is the last of these crop operations found in the current date-ordered actual element list. An actual crop budget completed through June may find that the Actual Period Total will be for 4) cultivation.

Actual Cumulative Total (ACTotal) is the cumulative total for all of the periods found in the actual budget or base elements. Using the crop budget example, the Actual Cumulative Total is the sum of 1) tillage, 2) planting, 3) nutrient management, and 4) cultivation in the actual budget.

Planned Full Total (PFTotal) is the planned total for the full planning budget or base elements. Using the crop budget example, the Planned Full Total is the sum of 1) tillage, 2) planting, 3) nutrient management, 4) cultivation, 5) pest management, and 6) harvest in the planned budget.

Planned Period Total (PPTotal) is the planned total for the period found in the planned budget or base elements that corresponds to the same period as the Actual Period Total. Using the crop budget example, if the Actual Period Total finds 4) cultivation as the last date-ordered element in the actual budget, the Planned Period Total will use the cultivation’s WBS Label to find the corresponding element total in the planned budget (i.e. both might use the same WBS Label, A1010).

Planned Cumulative Total (PCTotal) is the cumulative total for the planned budget, or planned base elements, through the same period as the actual budget. The Actual Cumulative Total determines which element will be the last element to include in the cumulative totals. Using the crop budget example, the Planned Cumulative Total is the sum of 1) tillage, 2) planting, 3) nutrient management, and 4) cultivation in the planned budget. The Actual Cumulative Total stopped at 4) cultivation, so the Planned Cumulative Total will do the same.
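A minimal Python sketch of how these five totals can be derived from date-ordered planned and actual elements, using the crop operations example above; the variance and percent measures listed next are simple combinations of these totals. The element data, labels, and names are illustrative, not DevTreks identifiers.

# Minimal sketch of the Earned Value totals (section 9) for the crop budget example.
# Each element is (WBS label, name, total); both lists are in date order.
planned = [("A1000", "tillage", 40.0), ("A1010", "planting", 55.0),
           ("A1020", "nutrient management", 30.0), ("A1030", "cultivation", 25.0),
           ("A1040", "pest management", 35.0), ("A1050", "harvest", 60.0)]
actual = [("A1000", "tillage", 42.0), ("A1010", "planting", 58.0),
          ("A1020", "nutrient management", 28.0), ("A1030", "cultivation", 27.0)]

ap_label, _, ap_total = actual[-1]              # Actual Period Total: last completed element
ac_total = sum(t for _, _, t in actual)         # Actual Cumulative Total
pf_total = sum(t for _, _, t in planned)        # Planned Full Total
pp_total = next(t for lbl, _, t in planned
                if lbl == ap_label)             # Planned Period Total: same WBS label
done = {lbl for lbl, _, _ in actual}
pc_total = sum(t for lbl, _, t in planned
               if lbl in done)                  # Planned Cumulative Total: stops where actuals stop

# The change and percent measures listed next follow directly, for example:
ap_change = ap_total - pp_total                 # Actual Period Change
pc_percent = ac_total / pc_total * 100          # Planned Cumulative Percent
print(ap_total, ac_total, pf_total, pp_total, pc_total, ap_change, round(pc_percent, 1))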
Actual Period Change (APChange): Actual Period Total - Planned Period Total
Actual Cumulative Change (ACChange): Actual Cumulative Total - Planned Cumulative Total
Planned Period Percent (PPPercent): (Actual Period Total / Planned Period Total) * 100
Planned Cumulative Percent (PCPercent): (Actual Cumulative Total / Planned Cumulative Total) * 100
Planned Full Percent (PFPercent): (Actual Cumulative Total / Planned Full Total) * 100

Additional information about Earned Value measurements can be found in the Earned Value Management 1 reference.

Value-added Performance Measures analyze the economic gains accruing from different alternatives. The simplest technique assesses the additional BENEFIT accruing from competing alternatives, such as alternative production processes and activities. For example, an apple grower may choose to process apples into apple cider rather than sell the raw apples. If NET RETURNS increase, the increase is the value added from changing the production process.

10. Incremental Change

Incremental Change measures incremental changes between aggregated base elements. Changes are analyzed as differences between aggregated base elements that have different properties, such as dates, ids, or alternatives. Common uses include comparisons of base elements and trends in base elements. Many of the optimality and numeric algorithmic techniques explained with other Performance Measures, such as constrained optimality, use advanced incremental change analysis. Typical measures include:

Base Change: (Total x) - (Total base) (with Total base equal to the first Total in the sequence being analyzed)
Base Percent Change: ((Base Change x) / (Total base)) * 100
Amount Change: (Total x) - (Total x-1) (with the first calculation subtracting zero and “x-1” being “x minus one”, the Total from the previous element)
Percent Change: ((Amount Change x) / (Total x-1)) * 100

Additional information about Incremental Change measurements can be found in the Change Analysis 1 reference.

11. Incremental Cost Effectiveness Ratio (ICER)

This performance measure divides incremental costs by incremental outputs, or outcomes, for one project alternative relative to another (WHO 2003). This measure is appropriate when monetary benefits can’t be assigned to Outputs. A specific Output (i.e. Disability Adjusted Life Year), or Outcome (i.e. Patient Health Status Index), must be chosen for the calculations. DevTreks’ M&E and Resource Stock tools can be used to track these Outputs and Outcomes. Capital Budgets and Operating Budgets use the following calculations:

Output Additions = (Output Total x) - (Output Total base)
COST Additions = (COST Total x) - (COST Total base)
ICER = COST Additions / Output Additions

A variation of this ratio uses Outcomes rather than Outputs. For example, an Outcome Index may be derived as a weighted average, or some other mathematical combination, of the Outputs.

Outcome Additions = (Outcome Total x) - (Outcome Total base)
COST Additions = (COST Total x) - (COST Total base)
ICER = COST Additions / Outcome Additions

The WHO 2003 reference explains how to use this measure in practice. The general process is to rank alternatives in ascending order by their ICER, delete alternatives that have higher costs and higher ICERs (dominated), and choose the top alternatives that fit within a budget constraint.

12. Incremental Benefit Cost Ratio (IBCR)

This performance measure divides incremental benefits by incremental costs for one project alternative relative to another.
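Before listing the IBCR calculations, here is a minimal Python sketch of the ICER selection procedure just described: rank alternatives in ascending ICER order, drop dominated alternatives, and choose those that fit within a budget constraint. The data and names are hypothetical, not taken from the WHO reference.

# Minimal sketch of the ICER ranking procedure (section 11); all values hypothetical.
def select_by_icer(alternatives, base_cost, base_output, budget):
    """alternatives: list of (name, total cost, total output); the base is the comparator."""
    scored = [(name, cost, out, (cost - base_cost) / (out - base_output))
              for name, cost, out in alternatives]
    # Drop dominated alternatives: higher cost and higher ICER than some other alternative.
    undominated = [a for a in scored
                   if not any(o[1] < a[1] and o[3] < a[3] for o in scored if o is not a)]
    undominated.sort(key=lambda a: a[3])            # rank in ascending ICER order
    selected, spent = [], 0.0
    for name, cost, out, icer in undominated:
        extra_cost = cost - base_cost
        if spent + extra_cost <= budget:            # choose top alternatives within the budget
            selected.append((name, round(icer, 1)))
            spent += extra_cost
    return selected

alternatives = [("Treatment A", 120000.0, 400.0),   # output could be, e.g., DALYs averted
                ("Treatment B", 150000.0, 700.0),
                ("Treatment C", 300000.0, 650.0)]   # dominated: costs more with a higher ICER
print(select_by_icer(alternatives, base_cost=100000.0, base_output=300.0, budget=60000.0))
# [('Treatment B', 125.0)]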
This measure is appropriate when monetary benefits can be assigned to Outputs. Capital Budgets and Operating Budgets use the following calculations:

BENEFIT Additions = (BENEFIT Total x) - (BENEFIT Total base)
COST Additions = (COST Total x) - (COST Total base)
IBCR = BENEFIT Additions / COST Additions

Multiple alternatives can be analyzed in the same manner as Example 9, ICER.

13. Scheduling and Timeliness

These performance measures relate to the timing of costs and benefits. Additional costs are incurred or revenues reduced when an activity isn’t completed within a scheduled period, or by a targeted date (USGAO, 2009). Some measures include:

Timeliness Penalty: This penalty is incurred when production activities that can’t be completed by targeted dates result in losses of revenues (Siemens, 1988). Common examples are when weather conditions, such as wet farm fields, prevent crop planting in a timely manner. Crop yields begin to decline when planting takes place later than recommended planting dates.

Schedule Risk Loss: This loss is incurred when production activities that can’t be completed by targeted dates result in increased costs (GAO, 2009). These cost overruns are common when projects use overly optimistic projections of the probability of activities being completed within scheduled periods.

14. Multi-Criteria Decision Scores

This performance measure uses scoring systems, rather than strictly monetary measures of benefits and costs, to help decision makers. The general process (Great Britain, 2009) is to define objectives, define alternatives to accomplish the objectives, define criteria that can be used to compare the alternatives, analyze the alternatives in terms of the criteria, and make decisions.

Performance Matrix: The simplest analysis uses a performance matrix where each row defines an alternative and each column defines a criterion. The row-column cells hold scores for each alternative-criteria pair. The scores for each alternative are summed and compared. An example of a simple equation is:

Alternative A score = Weighted average criteria scores

Base elements in DevTreks use the Indicators found in the M&E and Resource Stock tools to calculate and analyze this measure. The Technology Assessment 2 tutorial includes an example of current tools available in DevTreks for carrying out Multi-Criteria Analysis. The following example uses M&E tools where each base element is an alternative and each Indicator is a criterion holding scores.

M&E Calculators: Indicator (criterion) Total Score = Weight * Q1 * Q2
M&E Total Analyses: Alternative Total Score = Sum of Indicator Scores

Constructed Scale: When Performance Measures don’t have quantifiable scales (i.e. a scale such as Total Cost), a constructed scale can be used to measure performance that is subjective, or qualitative, in nature (NASA, 2011). For example, a project objective such as Maximize Stakeholder Support might use a scale containing scores such as “Low Support” and “High Support”.

15. Optimality and Economic Efficiency

Optimization techniques often use the term objective function to express budgets in mathematical form. These techniques seek to optimize that function by finding least cost, highest profit, shortest distance, and similar optimal combinations of resources (Inputs and Outputs), activities (Operations and Components), or outcomes (Outcomes) to use in the budgets.
The resources and activities in the budget are not limitless and face constraints, such as the amount of land available to farm or the quantities of ingredients that can be used to produce a product. In the case of a least cost solution, the shadow price of the constraint can reveal the marginal cost of the constraint. Varying the level of the constraint can allow marginal costs to be compared to marginal benefits. The point where marginal costs equal marginal benefits is a measure of economic efficiency. Several algorithms can be used to solve these optimization problems. The most common example in economics may be:

Constrained Optimization: The objective function is constrained by one or more variables and mathematical techniques such as linear programming are used to find an optimum solution. For example, a farmer might seek to find the best combination of farm operations and crops given the constraints on the amount of land, labor, water, and machinery that are available and on the allowable combination of crops.

16. Risk and Uncertainty

The results of calculations and analyses are never 100% correct. All results have some degree of uncertainty. That is, the fully correct results are not known and can’t be fully quantified. The likelihood of a result is also never 100%, but it can be quantified in terms of probability or bounds. US NASA (2011) defines risk as the potential for shortfalls with respect to performance requirements. Risk can be measured and expressed using the following techniques (US GAO 2009, US NASA 2011):

Statistical Mean and Standard Deviation: Regular Statistical Analyzers are run and the mean and standard deviation results are expressed as displayed in the following table. This measure assumes a normal distribution for the number being analyzed.

High Estimate (with 1 sd or 68% probability)    120,000
Mean Estimate                                   100,000
Low Estimate (with 1 sd or 68% probability)      80,000

Sensitivity: For Inputs and Outputs that have large effects on total costs and benefits, a parameter such as quantity or price is varied and the resulting numbers are compared to the actual results. For example, the following table shows that a 10% increase in Input Quantity leads to a 15% increase in overall costs for Input 1, and 5% for Input 2. Total Costs are more sensitive to changes in Input 1 than Input 2. Because of the importance of Input 1, the final results of an analysis may list how total costs and net returns vary as Input 1’s quantity is varied.

                           Input 1    Input 2
Initial Input Amount       100,000    100,000
% Change in Amount         +10%       +10%
% Change in Total Costs    +15%       +5%

Probabilistic: Monte Carlo, or bootstrapping, techniques are used to draw random samples of cost and revenue parameters that have a defined distribution. The whole set of random samples is used to develop confidence intervals for the final results. The resultant numbers are expressed as:

High Estimate (at 95% confidence interval)    120,000
Most Likely Estimate                          100,000
Low Estimate (at 95% confidence interval)      90,000

Statistical Models: The majority of researchers use statistical models to analyze data. Most of the model results are presented in terms of likelihoods and probabilities. For example, some health care analysts use Stated Preference Discrete Choice Experiment models that survey patient opinion about alternative medical treatment options. The models produce confidence intervals for Cost Effectiveness Ratios and Cost Benefit Analyses.
In DevTreks, these types of models are called custom, or “domain-specific”, and are developed by technologists to meet the specific requirements of a network.

Numerical Algorithmic Models: These models are listed separately from statistical models because they employ numerical techniques that are used more often by practitioners, such as engineers, than by researchers. Many of these models employ well known optimality or sampling algorithms that produce the final results as ranges or confidence intervals. Examples of these types of models can be found in the Technology Assessment 1 tutorial.

17. Scenarios and Forecasts

These measures predict how revenues and costs will change based on future conditions. The conditions can include weather, global prices, government policy, and general supply and demand factors. Common techniques used to analyze these measures include:

Scenarios and Expected Values: A scenario is defined in the IPCC 2013 reference as “A plausible description of how the future may develop based on a coherent and internally consistent set of assumptions about the key driving forces (e.g. rate of technological change, prices) and relationships. Note that scenarios are neither predictions nor forecasts, but are useful to provide a view of the implications of developments and actions.” The IPCC (2013) uses scenarios of likely future weather conditions to assess the trajectory of global warming. Expected values are similar to scenarios, but can be less comprehensive in scope, such as using three expected levels of output amounts to explain anticipated performance.

Forecasts and Predictions: Forecasts and predictions often use the historical results of previous events to assess what will happen in the future. For example, the IPCC 2013 reference contains examples of climate change models that use historical weather data to help forecast the impacts of future climate change. They can be measured using mathematical techniques such as Bayesian, Artificial Intelligence, Machine Learning, and prediction algorithms.

18. Standards

Technology standards are mandated production processes that regulators require the regulated to use. Performance standards define a desired outcome that the regulated must comply with, such as a quantity of carbon emissions, but allow the regulated to choose how to comply (USEPA 2010). Voluntary Sustainability Standards (VSS) support industry-specific efforts to certify the social soundness of their products (United Nations Forum on Sustainability Standards, 2016). For example, the coffee industry uses several VSS systems developed by organizations such as the Rainforest Alliance, Fair Trade, and the Global Coffee Platform. In the case of smallholder agriculture, Performance Standards have been developed that use common sets of Indicators and metrics “for measuring farm-level sustainability in smallholder agricultural supply chains” (Sustainable Food Lab 2014, 2016).

The following image (ISEAL, 2014) introduces how standard-certifying organizations relate performance assessment and Monitoring and Evaluation (M&E). In effect, they consider performance assessment to be a requirement within M&E systems.

19. Social Performance Measures

The Social Performance Analysis references introduce algorithms that demonstrate using causal chains, social impact pathways, impact transition states, disaster risk reduction pathways, and life cycle techniques, to measure performance.
They demonstrate how to use these measures to provide evidence of sustainable production, particularly in support of sustainable accounting systems and certification schemas (i.e. organic farming standards). The references introduce several instruments that help companies and communities report on their sustainability accomplishments, including Business RCA Reports, M&E SDG Reports, Stakeholder Reference Case CEAs, Scenario Analyses, and Hot Spots Analyses. These measures also help companies eliminate from their supply chains suppliers who do not produce products in sustainable ways. In the future, they will help consumers eliminate from their shopping baskets products, goods, and services produced by companies that don’t produce sustainable products, or that allow company executives and investors who undermine their value systems to profit at their expense (4*).

Summary and Conclusions

Clubs using DevTreks can start to carry out the basic analysis of technical and economic performance data. Clubs can solicit help understanding performance better and share structured evidence explaining performance. Networks can build knowledge banks that explain performance and pass that knowledge down to future generations.

Measuring performance is a precursor to improving performance. Work gravitates to countries whose workers work productively. Companies succeed when they can produce higher quality goods and services for lower cost than their competitors. Health care costs stay within budget for countries that can provide efficient incentives for medical treatments. Pollution control becomes affordable when communities invest in cost effective abatement technologies. Children become more productive citizens when their schools educate them more effectively. Potholes get fixed when city managers figure out the most productive way to manage transportation. Doing a better job of collecting, measuring, aggregating, analyzing, and sharing technical and economic performance data can help people to improve their lives and livelihoods in sustainable ways.

Footnotes

1. Economists have been measuring economic performance, such as labor productivity, for years. Management analysts have been measuring technical performance, such as teacher effectiveness, for as long. These analysts have developed a large number of respected techniques. Several of these techniques will appear in future upgrades of this reference.

2. The NIST 1996 reference uses Performance Measures to support decisions related to energy conservation. In fact, the Performance Measures support decisions that involve any cost and benefit context, including health care, human capital, environmental improvement, food security, safety efficacy, and disaster prevention/recovery.

3. Because of the importance of these Performance Measures in most economic sectors, additional explanation is in order. The following graphic (US AHRQ, 2008) further explains efficiency. Tradeoffs measure what happens by moving to and from points in this type of conceptual framework (i.e. c to a) (IPCC, 2014). This example of crop production shows the tradeoffs between an environmental impact and crop yield. Moving from point a to point c is a win-win tradeoff because yield goes up and impact goes down. Moving from point a to point b trades off increased yield for increased environmental impacts. This tradeoff measures technical efficiency because prices for impacts and yield are not considered, nor are alternative environmental interventions.

4.
One straightforward way for consumers to get involved with accountable sustainability is to demand that the government administrators in their states, cities, and communities only purchase products that can produce evidence of sustainable production.

5. Although the author is a professional economist, he has primarily worked on software development in recent years. Economists and business analysts who are not software developers can probably come up with much better examples of how to develop and use Performance Measures for social budgeting data. We encourage clubs and networks to produce their own references relevant to their own members.

References

Boyle, Kevin P. Landtrt. Economic tools for watershed planning. USDA, Natural Resources Conservation Service. California, USA. 1994 (internal agency publication).

Great Britain Department for Communities and Local Government. Multi-criteria analysis: a manual. 2009.

Goodie and Goddard. Review of Evidence on what Drives Economies of Scale and Scope in the Provision of NHS Services, Focusing on A&E and Associated Hospital Services. A report for the OHE Commission on Competition in the NHS. 2011.

International Social and Environmental Accreditation and Labelling (ISEAL) Alliance. Assessing the Impacts of Social and Environmental Standards Systems: ISEAL Code of Good Practice. Version 2.0, December 2014.

IPCC. Climate Change 2013, The Physical Science Basis. Working Group 1 Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. [Stocker, Qin, Plattner, Tignor, Allen, Boschung, Nauels, Xia, Bex, and Midgley (EDS)]. Cambridge University Press, Cambridge, UK and USA.

IPCC. Climate Change 2014, Impacts, Adaptation, and Vulnerability, Part A: Global and Sectoral Aspects. Working Group 2 Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. [Field, Barnes, Barros, Dockken, Mach, Mastrandrea, Bilir, Chatterjee, Ebi, Estrada, Genova, Girma, Kissel, Levy, MacCracken, Mastrandrea, and White (EDS)]. SUBJECT TO FINAL EDIT.

IPCC. Climate Change 2014, Mitigation of Climate Change. Working Group 3 Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. [Field, Barnes, Barros, Dockken, Mach, Mastrandrea, Bilir, Chatterjee, Ebi, Estrada, Genova, Girma, Kissel, Levy, MacCracken, Mastrandrea, and White (EDS)]. SUBJECT TO FINAL EDIT.

McGlynn, EA. Identifying, Categorizing, and Evaluating Health Care Efficiency Measures. Final Report (prepared by the Southern California Evidence-based Practice Center, RAND Corporation, under Contract No. 282-00-0005-21). AHRQ Publication No. 08-0030. Rockville, MD: Agency for Healthcare Research and Quality. April 2008.

Siemens. University of Illinois User Guide, Farm Machinery and Selection Program, Version S10. 1988.

Sustainable Food Lab. Performance Measurement in Smallholder Supply Chains: A practitioners guide to developing a performance measurement approach. 2014.

Sustainable Food Lab. Towards a Shared Approach for Smallholder Performance Measurement: Common indicators and metrics. 2016.

T. Tan-Torres Edejer, R. Baltussen, T. Adam, R. Hutubessy, A. Acharya, D.B. Evans, C.J.L. Murray. WHO Guide to Cost Effectiveness Analysis. 2003.

United Nations Forum on Sustainability Standards (UNFSS). Meeting Sustainability Goals. Voluntary Sustainability Standards and the Role of Government. 2nd Flagship Report of the United Nations Forum on Sustainability Standards (UNFSS). 2016.
U.S. Department of Commerce, National Institute of Standards and Technology. Handbook 135, Life-Cycle Costing Manual. 1996 Edition.

US Environmental Protection Agency. Guidelines for Preparing Economic Analyses. 2010.

U.S. Government Accountability Office. Applied Research and Methods. GAO Cost Estimating and Assessment Guide. Best Practices for Developing and Managing Capital Program Costs. March 2009.

U.S. Agency for Healthcare Research and Quality. Appendix A: Technical Typology: Health Care Efficiency Measures: Identification, Categorization, and Evaluation. April 2008, Rockville, MD. http://www.ahrq.gov/research/findings/final-reports/efficiency/hcemappa.html

U.S. National Aeronautics and Space Administration. NASA Risk Management Handbook, NASA/SP-2011-3422, Version 1.0. November 2011.

U.S. National Aeronautics and Space Administration. 2008 NASA Cost Estimating Handbook. November 2008.

World Health Organization. Guide to Cost-Effectiveness Analysis. 2003.

References Note
We try to use references that are open access or that do not charge fees.

Improvements, Errors, and New Features
Please notify DevTreks (devtrekkers@gmail.com) if you find errors in these references. Also please let us know about suggested improvements or recommended new features. Video tutorials explaining this reference can be found at: https://www.devtreks.org/commontreks/preview/commons/resourcepack/Performance Analysis 1/509/none

Appendix A. Performance Analysis Examples

C. Examples (5*)

This section uses online datasets to explain how to manipulate social budgeting data to support decisions involving performance. The examples also demonstrate how to use communication aids, such as graphs and tables, to explain the results of analyses. The examples and their datasets are intended only to illustrate performance analysis and should not be interpreted in any other way.

1. Enterprise Net Returns

This example demonstrates how to summarize and communicate the results of basic budget Performance Measures, such as Total Costs and Net Returns, to decision makers. The following two images summarize Costs, Revenues, and Net Returns for the referenced dataset.

Datasets: Net Present Value Calculation at https://www.devtreks.org/agtreks/preview/crops/budget/2- Corn Soybean Rotation/273071632/none

These images and tables demonstrate the following data management techniques:
* Because this is summary data, rather than data that requires additional analysis, any format (XML, XHTML, or CSV) can be used in the package being downloaded, or the data can even be copied directly from the HTML views displayed on the Views panel. In this example, a TEXT dataset was downloaded. The specific data file chosen from the package was a summary NPV calculation. The dataset contained two rows of data, one row for each Time Period. The dataset was opened in a spreadsheet program.
* A summary table, similar to the one shown above, was built. The data needed to complete this table was copied from the two rows of raw data. The table was further refined based on the data available in the dataset.
* The summary table was copied to another worksheet and edited so that the graphical chart shown above could be produced. The chart was saved as an image for display using the Media view.

2. Machinery Productivity

This example uses the same IRI as Example 1 to demonstrate how to summarize and communicate the results of capital input Performance Measures, such as machinery labor amount per bushel of corn, to decision makers.
The following two images summarize machinery productivity measures for the referenced dataset.

Datasets: Machinery Analysis at https://www.devtreks.org/agtreks/preview/crops/budget/2- Corn Soybean Rotation/273071632/none

These images and tables demonstrate the following data management techniques:
* Two datasets were used to produce these numbers. The summary tables shown in Example 1 were used to obtain the Revenue data, while a Machinery Analysis dataset produced the machinery costs data.
* A spreadsheet was used to simply divide appropriate Input and Output data elements to produce the productivity measures.

3. Investment Net Savings, Savings Investment Ratio, Adjusted Internal Rate of Return, Payback Period, and Breakeven Cost

This example demonstrates how to carry out several Performance Measures for alternative building construction investments. Additional examples of these measures are documented in the NIST (1996) reference. The following images summarize a Life Cycle Compare by Alternative analysis.

Datasets: Change by Alternative at https://www.devtreks.org/buildtreks/preview/commercial/componentgroup/Life Cycle Component Examples/552/none

Life Cycle Component Examples
Component                  Alt. 0             Alt. 1
Name                       NIST Table 5-4     NIST Table 5-5
Date                       12/31/2012         12/31/2012
Label                      NIST135            NIST135
Observations               1                  1
Alternative                A                  B
OC Total                   412,689.43         345,593.30
OC BaseChange              0                  -67,096.13
OC BasePercentChange       0                  -16.26
AOH Total                  0                  0
AOH BaseChange             0                  0
AOH BasePercentChange      0                  0
CAP Total                  132,400.73         111,767.98
CAP BaseChange             0                  -20,632.75
CAP BasePercentChange      0                  -15.58
LCC Total                  545,090.16         457,361.28
LCC BaseChange             0                  -87,728.88
LCC BasePercentChange      0                  -16.09
EAA Total                  36,638.62          30,741.86
EAA BaseChange             0                  -5,896.76
EAA BasePercentChange      0                  -16.09
Unit Total                 545.09             457.36
Unit BaseChange            0                  -87.73
Unit BasePercentChange     0                  -16.09
SubCost Totals, SubCost 1 Name: Replacement Fan / Replacement fan

Net Savings = Alternative A LCC 545,090.16 - Alternative B LCC 457,361.28 = 87,729
Savings Investment Ratio (SIR) = (67,096.13 Change in OC COST) / (20,632.75 Change in CAP COST) = 3.25
Adjusted Internal Rate of Return = ((1 + .03 Real Rate) * (3.25 SIR ^ (1/20 Life Span)) - 1) * 100 = 9.3%

The following table (NIST, 1996) illustrates three related Performance Measures.

NIST 136 Table 6-2                    Alternative A     Alternative B
Initial Investment                    $103,000.00       $110,000.00
Replacement Cost, Fan, 12 years       $12,000.00        $12,500.00
Residual Value, Fan, 12 years         $3,500.00         $3,700.00
Annual Electricity Costs              $20,000.00        $13,000.00
Annual Maintenance and Repairs        $7,000.00         $8,000.00

Payback Period = Changes in Initial Investment / Changes in Annual Operating Costs
Payback Period = ($110,000 - $103,000) / (($20,000 - $13,000) + ($7,000 - $8,000)) = 1.17
Breakeven Cost in Change of Energy Savings = -(-$1,000 OMR Change) + ($7,000 Initial Investment Change + $500 Fan Change - $200 Residual Change) = $8,300
Energy savings have to be at least $8,300 for Alternative B to be considered (i.e. Year 1.17).

These images and tables demonstrate the following data management techniques:
* Most alternative investments have savings in either operational costs or capital costs, rather than both. The two alternatives being compared happened to come from two adjoining cost estimates in the NIST (1996, Table 5-4 and Table 5-5) reference.
* The components being analyzed had to have correct Alternative Type properties (A and B). These properties are set for Components, and all other base elements except Inputs and Outputs, using NPV calculators.
This required opening the NPV calculator for each Component and changing the Alternative Type property (the Target Type property was also changed to demonstrate Progress analysis). The parent Component Group then had to have a new base document built, and new NPV calculations run and saved (while not overwriting the newly set Alternative Type property), before the Life Cycle Comparative analysis could be run. The Net Savings of over $87,000 justified the time taken to complete this analysis. A professional interested in providing decision support for this investment decision will take the time to do it right, while others may not.
* No data was downloaded. The HTML Desktop view was just copied after running the analysis and inserted into a spreadsheet program. Several rows were deleted and some formatting changed. Producing this graph required little additional work.
* The table displayed used the raw data copied from the dataset. Naming conventions, such as BaseChange, should be displayed in a more professional way. Tell the technologist in charge of this analyzer how to improve the naming conventions. Names shown in these types of graphics and tables should be changed to something your clients will find meaningful.

4. Malnutrition Comparative Incremental Change

The following graph is taken from the USDA, Dietary Guidelines reference that can be found in the Malnutrition Analysis tutorial. The graph summarizes Actual Dietary Consumption Amounts versus Targeted Consumption Amounts. This example demonstrates how to use Malnutrition datasets to produce similar types of Performance Measures.

Datasets: Change by Year at https://www.devtreks.org/hometreks/preview/smallholders/budget/Food Nutrition Subsistence Stocks SR01/273083905/none

The following images illustrate nutrition comparative Performance Measures for the referenced dataset. The dataset uses real food nutrition Input and Output data (see the Malnutrition Calculation reference), but the amounts consumed and targeted are fictitious. The Meal data came from a subset of the Change by Year Analysis, and another meal (i.e. 2011) was used as a Base Meal. This type of pro forma data might identify meal patterns associated with obesity that are worsening over time.

Statistical Analysis at https://www.devtreks.org/hometreks/preview/farmworkers/outcomegroup/Food Subsistence Meals/38/none

This crop output data came directly from the dataset, but the fictitious title makes a better point than the actual data. That is, crop nutrient composition can vary based on environmental and management factors. Those factors are often studied to produce more elaborate analyses.

These images and tables demonstrate the following data management techniques:
* This food nutrition dataset includes the nutrient composition of over 7200 food items. The dataset is maintained by a government agency. These types of datasets should not be entered manually. In this case, the data was extracted from the government database, copied into a spreadsheet program, and then manipulated to produce the same data found in a food nutrition calculator. The 7200 calculators were then bulk uploaded into a DevTreks database.
* The 7200 food items can be combined in any combination to produce a wide assortment of meals, such as all of the meals consumed in a particular country. This type of dataset, with its potential importance, needs a full-time commitment by club members working with a professional food nutrition network.
* An HTML dataset was downloaded for a Change by Year analysis of three years of a luncheon meal. The Desktop view of the HTML data displayed the data in three columns. The HTML page was opened in a web browser and the three columns were copied and pasted to a spreadsheet program. The rows and columns aligned correctly but needed to be edited to produce the table and graphic.
* The crop summary table was simply copied from the HTML statistical analysis on the Views panel and rearranged a bit in a spreadsheet.

5. Global Warming Environmental Incremental Changes

This example demonstrates how to summarize and communicate the results of natural resource stock Performance Measures, such as Global Warming Equivalents, to decision makers. The following two images summarize natural resource stock trends for the referenced dataset. As with crop nutrients, emissions can vary based on environmental and management factors.

Datasets: Change by Year at https://www.devtreks.org/greentreks/preview/carbon/operationgroup/LCA Conventional Orange Crop Operations/760/none/

These images and tables demonstrate the following data management techniques:
* An HTML dataset was downloaded for a Change by Year analysis of three years of an orange crop. The HTML page was opened in a web browser and the three columns were copied and pasted to a spreadsheet program. The rows and columns aligned correctly but needed slight editing to produce the table and graphic.

6. Timeliness Penalty

This example demonstrates how to add penalties to analyses when threshold crop planting completion dates are not met.

Datasets: Feasible Timeliness Penalty (for Plant, Corn Grain, medium tractor, Example 1 operation) at https://www.devtreks.org/agtreks/preview/crops/operationgroup/Seeding and Planting, corn/44/none

This image demonstrates the following data management techniques:
* No data was downloaded, but the HTML displayed with the Desktop view for a Feasible Timeliness Penalty analysis of several corn crop planting operations was copied to a spreadsheet. This type of analysis is explained in the Capital Input tutorial. The rows and columns provided the basic data needed to complete the analysis, but information contained in the underlying calculator, including corn yield and corn price for one of the planting operations, had to be manually added to the spreadsheet and then manipulated to produce the two columns of data displayed. The fact that additional data had to be added to produce the graph means that the analyzer should display more data. One of the responsibilities of the club that owns this data is to tell the technologist who built the analyzer that more data must be displayed with the analysis.
* The threshold planting dates causing yield loss will change due to global warming. As with a lot of scientific data, publications that contain the threshold dates tend to be updated infrequently. Modern information technology should be used to automatically keep these types of datasets up to date.

7. Investment Sensitivity Analysis

This example demonstrates how to use Performance Measures to reach decisions involving the performance of investments that can change dramatically based on changes in certain budget parameters, such as energy costs. The following image uses the best alternative from Example 3’s building construction investment. Energy prices have been changed for this alternative in 10% increments. Note that this is similar to the graph shown on page 8-4 of the NIST (1996) reference.
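A minimal Python sketch of this kind of sensitivity sweep: vary one cost driver (here, the present value energy cost of the preferred alternative) in 10% steps and recompute Net Savings against the base alternative. The life cycle cost totals come from Example 3, but the split of Alternative B’s cost into energy and non-energy shares is hypothetical.

# Minimal sketch of a sensitivity sweep (Example 7). The LCC totals are from
# Example 3; the energy/non-energy split of Alternative B is hypothetical.
base_lcc = 545090.16            # life cycle cost of the base alternative (Alt. A)
alt_other_costs = 300000.00     # Alt. B costs assumed insensitive to energy prices
alt_energy_cost = 157361.28     # Alt. B present value energy cost in the 0% case

for pct in range(-30, 31, 10):
    energy = alt_energy_cost * (1 + pct / 100)
    net_savings = base_lcc - (alt_other_costs + energy)
    print(f"energy price change {pct:+d}%: net savings = {net_savings:,.2f}")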
Datasets: Change by Alternative at https://www.devtreks.org/buildtreks/preview/commercial/componentgroup/Life Cycle Component Examples/552/none These images and tables demonstrate the following data management techniques: * This dataset used Life Cycle SubCosts in the cost analysis. The data came directly out of the NIST 1996 reference which showed which SubCosts had the greatest impact on Net Savings. That SubCost is the 0% Net Savings shown in the graph. Normally, each SubCost has to be analyzed separately to determine the drivers of Net Savings. Since this has not been automated in this analyzer, spreadsheet programs have to be used. 8. Risk-adjusted Multi-criteria Analysis This example demonstrates how to use risk-adjusted multi-criteria decision analysis to reach decisions involving the performance of alternative investments. This example is strictly for illustrative purposes. Experts in MCDA should be consulted before carrying out this type of analysis. The Technology Assessment 2 tutorial includes an example of algorithms that are currently available in DevTreks for producing these analyses. The image shows the scores for five criteria and two alternatives. The first table summarizes fictitious Monitoring and Evaluation (M&E) data copied from a Statistical Analysis of alternative malnutrition projects. The second table summarizes those results in a performance matrix that is used to carry out the multi-criteria analysis. The following equations were used to score the five criteria: Risk factor = log standard deviation / log mean Risk adjusted criteria score = (log mean * weight) – (risk factor * (log mean * weight)) Total alternative score = sum of criteria scores / number of criteria Datasets (this analysis used localhost because it had more data than the cloud): Statistical Analysis at: http://localhost/hometreks/linkedviews/farmworkers/investmentgroup/ME2 Malnutrition Projects/275505679/none https://www.devtreks.org/hometreks/preview/farmworkers/investment/M and E Malnutrition 2 Project A/426/none Adjusted raw data: Time Period 1 Alt. 0 Alt. 1 Name 2013 Malnutrition Project 1 2013 Malnutrition Project 2 Label TP122 TP122 Indicator 1 Q1 Food Security Q1 Food Security Observations 1 4.000 4.000 Label 1 TP122 TP122 Total Unit percent hh secure percent hh secure Total 118.000 240.909 Total Mean 29.500 60.227 Total Median 25.000 56.818 Total Variance 494.333 266.873 Total SD 22.234 16.336 Log Mean 1.470 1.780 Log SD 1.347 1.213 Weight 0.250 0.250 Risk Factor 0.916 0.682 Risk Adjusted Score 0.031 0.142 Outcome Alt. 0 Alt. 1 Name 2013 BM Food Consumed 2013 Act Food Consumed Label OC122 OC122 Indicator 1 Q1 Food Consumed Q1 Food Consumed Observations 1 4.000 4.000 Label 1 OC122 OC122 Total Unit dollar cost dollar cost Total 6772000.000 8001000.000 Total Mean 1693000.000 2000250.000 Total Median 1674000.000 1912500.000 Total Variance 69988000000.000 378366916666.667 Total SD 264552.452 615115.369 Log Mean 6.229 6.301 Log SD 5.423 5.789 Weight 0.125 0.125 Risk Factor 0.871 0.919 Risk Adjusted Score 0.101 0.064 Component Alt. 0 Alt. 
1 Name 2013 BM Food Delivery 2013 Act Food Delivery Label C122 C122 Indicator 1 Q1 Food Delivery Q1 Food Delivery Observations 1 4.000 4.000 Label 1 C122 C122 Total Unit dollar cost dollar cost Total 921900.000 965000.000 Total Mean 230475.000 241250.000 Total Median 224500.000 242500.000 Total Variance 3220009166.667 14139583333.333 Total SD 56745.125 118909.980 Log Mean 5.363 5.382 Log SD 4.754 5.075 Weight 0.125 0.125 Risk Factor 0.886 0.943 Risk Adjusted Score 0.076 0.038 Time Period 2 Alt. 0 Alt. 1 Name 2014 BM Progress 2014 Actual Food Security Label TP122A TP122A Indicator 1 Q1 Food Security Q1 Food Security Observations 1 4.000 4.000 Label 1 TP122 TP122 Total Unit percent hh secure percent hh secure Total 100.000 237.500 Total Mean 25.000 59.375 Total Median 20.000 58.333 Total Variance 316.667 85.359 Total SD 17.795 9.239 Log Mean 1.398 1.774 Log SD 1.250 0.966 Weight 0.250 0.250 Risk Factor 0.894 0.544 Risk Adjusted Score 0.037 0.202 Outcome Alt. 0 Alt. 1 Name 2014 BM Food Consumed 2014 Act Food Consumed Label OC122 OC122 Indicator 1 Q1 Food Consumed Q1 Food Consumed Observations 1 4.000 4.000 Label 1 OC122 OC122 Total Unit dollar cost dollar cost Total 7211000.000 7116000.000 Total Mean 1802750.000 1779000.000 Total Median 1752500.000 1758000.000 Total Variance 37431583333.333 61764000000.000 Total SD 193472.436 248523.641 Log Mean 6.256 6.250 Log SD 5.287 5.395 Weight 0.125 0.125 Risk Factor 0.845 0.863 Risk Adjusted Score 0.121 0.107 Component Alt. 0 Alt. 1 Name 2014 BM Food Delivery 2014 Act Food Delivery Label C122 C122 Indicator 1 Q1 Food Delivery Q1 Food Delivery Observations 1 4.000 4.000 Label 1 C122 C122 Total Unit dollar cost dollar cost Total 985000.000 852500.000 Total Mean 246250.000 213125.000 Total Median 250000.000 213750.000 Total Variance 789583333.333 1697395833.333 Total SD 28099.526 41199.464 Log Mean 5.391 5.329 Log SD 4.449 4.615 Weight 0.125 0.125 Risk Factor 0.825 0.866 Risk Adjusted Score 0.118 0.089 Wtd. Average Score 0.081 0.107 Performance matrix: Multi-criteria Risk-adjusted Scores for Alternative Projects Time Period 1 Food Security Outcome 1 Food Consumed Component 1 Food Delivered Alternative 1 0.031 0.101 0.076 Alternative 2 0.142 0.064 0.038 Time Period 2 Food Security Outcome 2 Food Consumed Component 2 Food Delivered Weighted Average Score Alternative 1 0.037 0.121 0.118 0.081 Alternative 2 0.202 0.107 0.089 0.107 These tables and images demonstrate the following data management techniques: * The raw data used to do the analysis was copied from the Desktop view of the statistical analysis into a spreadsheet. The raw data had to be manipulated (logs of means and logs of standard deviations, scores) before it could be viewed using a graph. * The dataset on localhost was more complete than the dataset on the cloud. That is generally unacceptable data management. The localhost should contain development and test datasets, while the cloud should contain the final, polished data. If needed, the database technology allows synchronization between two hosts. For example, field work may collect data using localhost, edit the data on localhost, and synchronize with the cloud site when an Internet connection is available. * The dataset uses an underlying Monitoring and Evaluation framework. That framework fundamentally seeks to increase the results, or impacts, of these types of projects. Those impacts are measured using the Time Period element of this dataset. 
Although that element has twice the weight of the other criteria, an expert in M&E may believe it should be weighted considerably higher. It is not enough to know how to manipulate numbers; the underlying frameworks used with the datasets have to be understood as well.

9. Incremental Cost Effectiveness Ratio (ICER)

This example demonstrates how to use ICER analysis to reach decisions involving the performance of alternative investments. DevTreks does not currently have good datasets demonstrating this technique, but the following basic health care dataset was stylized to produce a table similar to Table 1.1 in the WHO 2003 reference. The Social Performance Analysis 2 reference contains 3 more recent examples of conducting cost effectiveness analysis. A small worked sketch of the ICER calculation follows this example's notes.

Datasets:
https://www.devtreks.org/healthtreks/preview/urbandelivery/investmenttimeperiod/TP120- 2007 Hip Replacement Treatment/2108448209/none
https://www.devtreks.org/healthtreks/preview/urbandelivery/investmenttimeperiod/0SR9019TP- 2007 Exam 2 Hip Replacement Treatment/2108448210/none

Alternative 3 is not included on the Cost-Benefit frontier because it is dominated by more cost effective treatments. The WHO 2003 reference contains more complete examples.

These tables and images demonstrate the following data management techniques:
* No data had to be copied or downloaded to produce the table. Instead, the total benefit and cost numbers for each budget were simply recorded in the table.
* Alternative 1 used NPV to determine costs, while Alternative 2 used LCA. Both alternatives used the exact same Outcome Monitoring and Evaluation dataset to determine benefits. Real analyses use the same costing technique for every alternative and monitor and evaluate each technology separately.
* Benefits were measured using M&E indicators. That framework makes use of multiple indicators and multiple base elements. For simplicity, only one of the Outcome indicators was analyzed, but analysts should find ways to use all of the indicators in these analyses to help decision makers understand efficiency (similar to Example 8, Multi-criteria Analysis).
* Even though this is not a good dataset, the cost and benefit tools are on hand to do professional ICERs that can be easily understood and shared.
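As a rough illustration of the calculation (the cost and effect totals below are hypothetical placeholders, not the numbers recorded from the hip replacement budgets above), the following Python sketch computes ICERs between successively more effective alternatives and excludes strongly dominated ones, which is why an option like Alternative 3 drops off the cost effectiveness frontier:

# Minimal ICER sketch with hypothetical data (not the referenced budget totals).
# ICER = (cost_x - cost_base) / (effect_x - effect_base), where the comparator is
# the next most effective alternative that is not dominated.
alternatives = [
    # (name, total cost, total effect such as health outcome units gained)
    ("Alternative 1", 40000.0, 4.0),
    ("Alternative 2", 55000.0, 6.0),
    ("Alternative 3", 70000.0, 5.0),   # costs more and achieves less than Alternative 2
    ("Alternative 4", 90000.0, 8.0),
]

def strongly_dominated(alt, others):
    """True when some other alternative costs no more and is at least as effective."""
    _, cost, effect = alt
    return any(o is not alt and o[1] <= cost and o[2] >= effect for o in others)

frontier = [a for a in alternatives if not strongly_dominated(a, alternatives)]
frontier.sort(key=lambda a: a[2])  # order the frontier by effectiveness

for (base_name, base_cost, base_eff), (name, cost, eff) in zip(frontier, frontier[1:]):
    icer = (cost - base_cost) / (eff - base_eff)
    print(f"{name} vs {base_name}: ICER = {icer:,.0f} per unit of effect")

for name, cost, effect in alternatives:
    if (name, cost, effect) not in frontier:
        print(f"{name} is strongly dominated and is excluded from the frontier")

A complete analysis would also check extended dominance (ICERs should not fall as effectiveness rises) and would compare the ratios to a willingness to pay threshold before recommending an alternative.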
10. Conservation Technology Assessment (CTA) and Tradeoffs

The DevPacks Analysis tutorial introduces techniques being developed to carry out advanced analysis of calculator and analyzer data. The term CTA is defined in the Resource Stock Analysis tutorial and extended further in the Technology Assessment 2 tutorial. The word Conservation in this term is used in a general microeconomic sense: households, firms, and governments should spend money in a manner that conserves scarce resources, including physical capital, natural resource capital, human capital, social capital, and institutional capital. The following image summarizes natural resource stock Tradeoffs for the referenced dataset.

Datasets (this dataset was analyzed in a precursor to DevTreks and is out of date):
https://www.devtreks.org/agtreks/select/cropsconservation/devpack/Iowa, ARS-NRCS 2, Treatments 1 through 35, Full Set/80/none/

These images and tables demonstrate the following data management techniques:
* The DevPacks Analysis tutorial explains the steps taken to produce this graph. This type of analysis may require a large amount of time to complete initially, but the process of producing the analysis gives software developers feedback that can make these analyses easier in the future. In addition, this analysis was completed using the R project, and the scripts are available for use by other analysts.
* Most meaningful Tradeoff Analyses involve more than just two variables (here, Net Profits and NO3 emissions).

11. Optimality

This example uses a watershed soil sedimentation dataset (Boyle 1994) that has not yet been put online. The author used an automated Excel worksheet that included linear programming to solve for the least cost combinations of soil conservation practices that would control sediment flow in a California, USA watershed. This example demonstrates how to summarize and communicate the results of natural resource stock Performance Measures, such as Efficient Soil Erosion Control, to decision makers. The following two images summarize natural resource stock efficiency measures taken from the referenced dataset. In this example, efficiency is defined as the point where Marginal Benefits equal Marginal Costs (~40,000 tons of sediment controlled).

Datasets: The dataset has not been put online yet.

These images and tables demonstrate the following data management techniques:
* The Excel spreadsheet program used the Solver linear programming feature to conduct the analysis. That program derived from a geologist's Lotus spreadsheet program that used "brute force", rather than optimization algorithms, to determine least cost sediment control. Marginal costs were obtained from the shadow price of each constrained sediment control level. A simplistic, stylized definition of benefits was used (aquifer pumping costs avoided). Twenty years have passed since that spreadsheet program was developed (it is only available on a floppy disk). Modern software can carry out the same analysis using online datasets and modern algorithms (see the sketch after this example's notes). An investigation by the author found that information technologists do not appear to be producing, or updating, open source modules that can be readily used on modern (cloud) servers for this purpose.
* Decision makers should expect, and demand, this type of decision support, and they should expect to pay information technologists to keep decision support tools updated. An obstacle may be that the people who control sources of funding are older and simply do not understand modern information technology.
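To show that this type of least cost analysis can be reproduced with current open source tools, the following minimal sketch uses SciPy's linprog solver. The practices, per-acre costs, sediment coefficients, and acreage limits are hypothetical, not the Boyle 1994 data. The sketch approximates marginal cost by re-solving at successively higher sediment control targets and differencing the minimum costs; the shadow price on the sediment constraint, available from the solver's dual values, carries the same information.

# Hypothetical least cost sediment control sketch (not the Boyle 1994 worksheet).
# Choose acres of each practice to meet a sediment control target at minimum cost.
from scipy.optimize import linprog

practices = ["terraces", "cover crops", "buffer strips"]
cost_per_acre = [120.0, 40.0, 66.0]            # dollars per acre (hypothetical)
tons_controlled_per_acre = [3.0, 0.8, 1.5]     # tons of sediment per acre (hypothetical)
max_acres = [4000.0, 20000.0, 8000.0]          # acreage available for each practice

def min_cost(target_tons):
    """Minimum cost of controlling at least target_tons of sediment, or None if infeasible."""
    result = linprog(
        c=cost_per_acre,
        A_ub=[[-t for t in tons_controlled_per_acre]],  # -(tons controlled) <= -target
        b_ub=[-target_tons],
        bounds=list(zip([0.0] * len(practices), max_acres)),
        method="highs",
    )
    return result.fun if result.success else None

previous_cost = None
for target in range(10000, 45000, 5000):
    cost = min_cost(float(target))
    if cost is None:
        print(f"{target:>6} tons: not attainable with these practices")
        continue
    line = f"{target:>6} tons controlled: minimum cost {cost:>12,.0f}"
    if previous_cost is not None:
        line += f"  marginal cost per ton {(cost - previous_cost) / 5000.0:,.2f}"
    print(line)
    previous_cost = cost

In this stylized run, marginal cost rises as the cheaper practices reach their acreage limits; plotting that curve against a marginal benefit curve (for example, aquifer pumping costs avoided per ton) locates the efficient control level described above.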
12. Sustainable Supply Chain Analysis

This example demonstrates how to carry out Social Performance Measures for either supply chain analysis or disaster risk reduction analysis. The first of the following images, from the Social Performance Analysis 2 reference (SPA2), summarizes the results of an Organization Life Cycle Analysis of a company supply chain. The next 2 images come from the SPA2 and SPA3 references and demonstrate how to collect and analyze this data for either private sector companies or communities.

URLs
DevPacks example:
https://www.devtreks.org/greentreks/select/carbon/devpackgroup/RCT Emissions and Env Performance/48/none
http://localhost:5000/greentreks/preview/carbon/devpackgroup/Carbon Budgeting DevPack Group/43/none
or Example 6A in SPA3:
https://www.devtreks.org/greentreks/preview/carbon/output/Disaster Risk Management, Example 6A/2141223486/none
http://localhost:5000/greentreks/preview/carbon/output/Disaster Risk Management, Example 6A/2141223501/none

These images and tables demonstrate the following data management techniques:
* These examples can be found in the DevPacks tutorial and the Social Performance Analysis 2 and 3 references.
* The Social Performance Analysis references explain further data management techniques for social performance data. In particular, those references recommend the following software pattern (a generic sketch of the pattern follows): Indicator metadata – TEXT datasets – custom algorithm – mathematical/statistical library.
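To make that four-layer pattern concrete, here is a generic, hypothetical Python sketch of the flow from indicator metadata through a TEXT dataset and a custom algorithm to a statistical library. It is not DevTreks source code, and the indicator, file layout, and numbers are invented for illustration.

# Generic illustration of the recommended pattern (not DevTreks source code):
# indicator metadata -> TEXT dataset -> custom algorithm -> statistical library.
import csv
import io
import statistics

# 1. Indicator metadata describes what the TEXT dataset contains and how to score it.
indicator_metadata = {
    "label": "Q1 Food Security",
    "unit": "percent hh secure",
    "weight": 0.25,
}

# 2. TEXT dataset: a small comma separated file (inline here to keep the sketch self contained).
text_dataset = io.StringIO(
    "observation,value\n"
    "1,55\n"
    "2,61\n"
    "3,58\n"
    "4,66\n"
)

# 3. Custom algorithm: reads the TEXT data and scores the indicator.
def score_indicator(metadata, text_file):
    values = [float(row["value"]) for row in csv.DictReader(text_file)]
    # 4. The mathematical/statistical library does the numeric work.
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return {
        "indicator": metadata["label"],
        "unit": metadata["unit"],
        "mean": round(mean, 3),
        "sd": round(sd, 3),
        "weighted mean": round(mean * metadata["weight"], 3),
    }

print(score_indicator(indicator_metadata, text_dataset))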