CTA-Prevention (CTAP)
Last Updated: September 18, 2018; First Released: July 30, 2015
Author: Kevin Boyle, President, DevTreks (1*)
Version: 2.1.6

This tutorial uses online datasets to illustrate how to carry out Disaster Risk Management Analysis. Version 2.1.4 upgraded this reference to make it part of the Social Performance Analysis tutorial. That tutorial includes 3 references that introduce tools supporting the monitoring and evaluation of the SDG and Sendai DRR Targets and Indicators. This reference also supports those Monitoring and Evaluation business and community accounting systems by using introductory disaster risk management algorithms to further achieve the Sendai DRR targets at local scale.

Sections and Examples
* CTAP Examples Introduction
* Appendix A. Disaster Risk Reduction (DRR) Algorithms
* Algorithm 1. Subalgorithm 9. Disaster Risk Reduction
* Example 1. Hurricanes, St. Lucia, Caribbean
* Example 2. Floods, Semarang, Indonesia
* Appendix B. Risk Management Indicator (RM) Algorithms
* Algorithm 1. Subalgorithm 10. Disaster Risk Index
* Example 3. Earthquakes, Bogota, Colombia
* Algorithm 1. Subalgorithm 11. Risk Management Index (RMI)
* Example 4. Risk Management Index
* Algorithm 1. Subalgorithm 12. Resiliency Index (RI)
* Example 5. Generic RI Indicators
* Appendix C. Decision Support System Algorithms
* Example 8. Drought, Uttar Pradesh, India and Sana’a Water Basin, Yemen
* Example 9. Drought Vulnerability Index
* Appendix D. Integrated Local, National, and International CTAP Systems
* Appendix E. Resource Stock and M&E Analyzers (Example 6. Stock Analyzers and Example 7. M&E Analyzers)
* Appendix F. Testing on localhost

All of the algorithms in this reference were tested using the upgraded Version 2.1.6 calculator patterns.

A. Introduction

People want to know how to prevent climate change from wrecking their one and only world. Many U.S.
residents have seen their houses destroyed by hurricanes, their ranges dry up from drought, their cities’ finances stretched from snow storms, their watersheds mangled by forest fires, their reservoir levels drop from less snowmelt, and their farm jobs disappear from irrigation water shortfalls. At an international scale, people have seen their villages wiped out from typhoons, their children become stunted from disruptions in food supplies, their elderly relatives die from heat waves, and their livelihoods disrupted from agricultural losses due to drought. Walsh et al. (2014) find that the following natural resource trends in the United States will reinforce people’s perception of impending peril to their planet:
* Temperatures are expected to continue to rise.
* The growing season [for agriculture] is projected to continue to lengthen.
* Average precipitation has increased since 1900.
* Heavy downpours are increasing.
* The intensity, frequency, and duration of hurricanes are projected to continue increasing.
* Changes in extreme weather events are occurring.
* Winter storms have increased in frequency and intensity since the 1950s, and their tracks have shifted northward.
* Oceans will continue becoming more acidic, causing disruptions to marine ecosystems.
* Global sea levels are projected to continue rising.
The following image (Walsh et al., 2014) demonstrates that, although these types of damages have always occurred, climate change is making them worse. The following image (World Bank and UN, 2010) demonstrates that, factoring in population increases, these worsening damages are global in scale. Putting numeric estimates on ways to prevent the damages may help citizens, organizations, firms, and governments understand the corresponding investments they must make to mitigate and adapt to the changes. A primary conclusion by experts in this area (World Bank and UN 2010, Moench et al 2008, V.
Meyer et al 2013, European Commission 2014) is the importance of making disaster prevention information transparent. This reference uses concrete examples showing how to use CTA-Preventions (CTAPs) to quantify the probable costs and benefits of interventions that prevent or correct damages caused by climate change. The overall framework and information technology used to conduct the prevention assessments are introduced in the associated Conservation Technology Assessment (CTA) reference. That reference should be read prior to this one.

B. Damage Assessment and Disaster Risk Reduction

The World Bank and UN (2010) reference provides recent science, with contributions from dozens of analysts, explaining the economics of natural resource disaster prevention. The reference shows that several Nobel Prize laureates endorse that science (especially the important nuances they discovered). Mechler (2005) uses the following image to summarize a standard approach, also endorsed in the World Bank and UN reference, for assessing damages from disasters. The following image (UN, 2015, Annex 3) demonstrates that climate change is causing the bottom curve, Step 4, to shift upwards – damages are getting worse (the image is the inverse of Step 4’s curve). The following table defines key terms (UN, 2015) used to carry out these types of assessments. Additional damage and disaster risk reduction assessment models and frameworks are detailed in the Surminski et al (2015) reference and include Integrated Assessment Models, the IIASA CATSIM model, the Disaster Loss Assessment Guide, and about a dozen more. The UN ECLAC (2014) reference provides a comprehensive manual for assessing a broad range of losses from natural resource disasters. The European Commission references (2013, 2014) assess additional models and frameworks (i.e. DesInventar, EMDAT) and use the following image to provide practical context for the importance of disaster loss data.
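The risk-based approach that Mechler (2005) summarizes is commonly operationalized by integrating the loss-probability curve (Step 4) to obtain expected annual damages; the benefit of a prevention intervention is then the reduction in that expectation. The sketch below illustrates the idea; the event probabilities, damage figures, and the levee intervention are hypothetical illustrations, not data from this reference.

```python
# Expected annual damage (EAD) from a loss-exceedance curve,
# approximated with the trapezoidal rule. All figures below are
# hypothetical illustrations, not data from this reference.

def expected_annual_damage(points):
    """points: list of (exceedance_probability, damage) pairs,
    ordered from most frequent (high probability) to rarest."""
    ead = 0.0
    for (p1, d1), (p2, d2) in zip(points, points[1:]):
        # Area of one trapezoid under the probability-damage curve.
        ead += (p1 - p2) * (d1 + d2) / 2.0
    return ead

# Damages with no intervention vs. with a hypothetical levee.
baseline = [(0.5, 0.0), (0.1, 2_000_000.0), (0.01, 12_000_000.0)]
with_levee = [(0.5, 0.0), (0.1, 200_000.0), (0.01, 8_000_000.0)]

annual_benefit = (expected_annual_damage(baseline)
                  - expected_annual_damage(with_levee))
```

Climate change shifting the Step 4 curve upwards corresponds to raising the damage values in the baseline list, which raises the expected annual damage and, typically, the payoff to prevention.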
Some of the recommendations found in these approaches will make their way into this reference in the future. In practical terms, these assessments are used to determine which dam size (if any) controls floods best, which coastal barrier reduces storm surges most effectively, and which building standard reduces earthquake damages most efficiently. The general approach is not only used to quantify damages from disasters; it can also be used to evaluate which health technology does the most good and least harm, which collaborative conservation processes best improve habitat preservation, which wind technology best eases energy grid deficiencies, and what steps need to be taken to increase the resiliency of cities to withstand catastrophes.

C. CTA-Prevention Portfolios

The following tables (IPCC WG2, 2014; Bierbaum et al., 2014) define key terms needed to understand how to prevent climate change from damaging the planet. These terms, as further illustrated in the following images (IPCC WG2 2014, Surminski et al 2015), show that prevention will ultimately involve taking mitigation action, adopting adaptation measures, increasing resilience, enhancing adaptive capacity, and reducing vulnerabilities. The goal of this reference is to help decision makers determine the probability that prevention interventions with these characteristics are cost effective. Experts cited throughout this reference consistently point out that no single policy, technology, or intervention will work to prevent damages from climate change. Most agree that a portfolio of prevention, or mitigation and adaptation, interventions is needed. The following images show some of the ingredients needed in natural resource disaster prevention portfolios (Surminski et al 2015, IPCC WG2 2014).
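The goal stated above, helping decision makers determine the probability that a prevention portfolio is cost effective, can be sketched with a simple Monte Carlo simulation over uncertain benefits and costs. The distributions and dollar figures below are hypothetical illustrations and are not taken from the CTAP algorithms described later in this reference.

```python
# Monte Carlo estimate of the probability that a prevention
# intervention is cost effective (benefit-cost ratio > 1).
# Distributions and parameters are hypothetical illustrations.
import random

def prob_cost_effective(n=100_000, seed=1):
    random.seed(seed)
    hits = 0
    for _ in range(n):
        # Uncertain annual damage reduction (benefit), normally
        # distributed around an assumed $1.2M point estimate.
        benefit = random.gauss(mu=1_200_000, sigma=400_000)
        # Uncertain annualized intervention cost, triangular
        # between $0.7M and $1.5M with a $1.0M most-likely value.
        cost = random.triangular(700_000, 1_500_000, 1_000_000)
        if benefit / cost > 1.0:
            hits += 1
    return hits / n
```

Reporting the full probability, rather than a single benefit-cost ratio, is what distinguishes this style of assessment from a deterministic point estimate.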
The Resource Stock Analysis and Monitoring and Evaluation (M&E) tutorials demonstrate important techniques for ensuring that prevention interventions have a better chance of achieving their planned outcomes and impacts. Appendix B contains examples demonstrating how to use the joint M&E, CTAP, and Resource Stock (i.e. Earned Value Management, or EVM) tools.

D. CTA-Prevention

This reference updates the traditional terms, Damage Assessment (2*), or Disaster Risk Reduction, with the new term, CTA-Prevention. The new term is defined as follows: CTA-Prevention (CTAP) is the numeric assessment of the costs and benefits of a portfolio of mitigation and adaptation interventions that prevent or correct resource stock damages. Assessments use relevant Conservation Technology Assessment (CTA) algorithms to quantify the risk and uncertainty associated with resource stock measurement and valuation. The new term and definition have several advantages:
1. They reinforce natural resource experts’ emphasis on preventing disasters from occurring in the first place. They make it easier to use the newer concepts and terms devised by the experts for preventing undue losses from climate change. For example, if some level of damages from disasters is inevitable, selected populations and ecosystems can be protected better by reducing their vulnerability, increasing their resilience, and teaching people how to adapt (see the tables in the previous section that define these terms).
2. They are consistent with the health care sector’s emerging emphasis on prevention rather than treatment. They also help analysts who complete the assessments communicate prevention strategies, rather than post-mortem actions taken after the “patient” has already suffered excessive damages. The “patients” include natural resource stocks.
3.
They clearly emphasize the need to use the modern information technology techniques, embodied in CTAs, for managing data, carrying out the assessments, making the evidence transparent, and helping decision makers understand the results. They reinforce the CTA emphasis on using modern algorithms to determine likelihoods and probabilities, make forecasts, provide recommendations, and analyze data in new ways.
4. They expand the number of assessment techniques that can be used to evaluate disaster prevention risk and savings. New algorithms, such as the Disaster Risk Index algorithm that measures indirect damages, use new ways to support decisions involving uncertain resource stock metrics.
5. They recognize that prevention won’t be effective unless accompanied by changes to institutions, social norms, and personal behavior. Using the term resource stocks in the definition, rather than natural resource stocks, ensures that human capital stocks, social capital stocks, institutional capital stocks, and cultural capital stocks can be fully addressed in assessments. In fact, the definition can be used as-is for assessing how well health care interventions improve human capital stocks.
6. Using the term costs and benefits in the definition emphasizes the very real and very large amounts of money that disasters impose on communities. The UNISDR GAR (2015) describes these costs as “[Probable disasters impose] contingent liabilities [that are] another category of toxic assets. If a country ignores disaster risk, … it is in effect undermining its own future potential for social and economic development”. They further state “the costs and benefits of disaster risk management need to become fully encoded into public and private investment at all levels, into the financial system and into the design of risk-sharing and social protection mechanisms”.
7. They help the public make the needed cultural shift from complacency to urgency.
In fact, harbingers of cultural change (i.e. celebrities such as Neil Young during DesertTrip) have been holding concerts with songs that highlight this issue. Recently (September 2016), the author spotted people in Portland, Oregon, USA, wearing tee shirts with only the term “Adapt” printed on them. Movie industry professionals (i.e. Leonardo DiCaprio and Jenny Beavan) have used Oscar acceptance speeches to raise general public awareness of the urgency of this issue. Cultural shifts without a good understanding of the concomitant costs and benefits can lead to wasted resources and opportunities, so this reference is a technical manual, rather than a public relations document (3*). Estimating numeric savings from prevention is fraught with frailties – one farmer’s lost harvest is another farmer’s improved crop price, some people value lost lives much higher than analysts do, many citizens want to prevent environmental damages regardless of discount rates, most city residents may experience minor discomforts while their city’s finances take a major beating, and the cost of one million minor actions taken to prevent a disaster can be significantly higher than the full recovery costs of allowing the disaster to occur. Although damages from disasters are often related to high profile, single, catastrophic weather events, many of the damages from climate change, such as rising sea levels, will be chronic and persistent. Their full magnitude may not be realized until an unrecognized threshold is crossed and it is too late to prevent the worst outcomes (i.e. shifts in ocean currents that moderate weather).

E. CTA-Prevention Communication

A primary conclusion by climate change and natural resource disaster experts is the importance of making disaster information transparent. The examples in this reference demonstrate how to make CTAPs transparent at scale (i.e. the Internet) and scope (i.e.
multiple resource stocks assessed in multiple economic sectors by multiple clubs in multiple networks). The UNISDR (2009) and European Commission (2014) references provide guidance about developing and using general disaster risk reduction networks. Several references (Appendix 2 UN GAR 2015, IPCC 2012) provide guidance for “[taking the data from] bottom-up [disaster loss systems], derived from microeconomics [i.e. DevTreks], [and scaling up] data from sectors at the regional or local level to aggregate an assessment of disaster costs and impacts [at national and international scales]”. The following image (IPCC 2012) demonstrates the importance of linking this local knowledge and action with global actors and their public goods responsibilities (i.e. to prevent climate change from wrecking our world). Most of the climate change references demonstrate the value of GIS techniques in analyzing and communicating information about climate change to lay persons. The following image (Khantisidhi, 2014) demonstrates how numeric damage assessments are incorporated into commonly used risk maps. The previous images imply that CTAP data must be easily accessible to machines because the machines are getting smarter and more communicative. In information technology terms, CTAPs are best completed and communicated using an overall software library (4*). The following image (UN 2015) demonstrates how numeric Disaster Risk Reduction (DRR) assessments fit into more comprehensive decision support frameworks. These communication and decision support pathways will continue to be addressed in future releases.

F. Knowledge Banks

Most DevTreks references discuss the importance of data standards, such as using “best practice” recommendations for calculating costs and benefits, employing Work Breakdown Schedules to classify “high quality” data, and using automated software object models to generate uniform calculations.
At the most basic level, this allows cost estimate A and Benefit Cost Ratio Y to be meaningfully compared to cost estimate B and Benefit Cost Ratio Z. At the most relevant level, it overcomes the problem of not having data to support extremely expensive decisions, such as what to do about preventing disasters (see V. Meyer et al, 2013). At the most practical level, the following image (European Commission, 2014) demonstrates how the use of best practice cost and benefit estimates, and high quality data standards, supports the sharing and aggregation of data across countries. Appendix 2 in the UN GAR (2015) reference discusses national and international knowledge banks of disaster loss data. These systems (i.e. DesInventar, EMDAT) are similar to the tabular accounting systems demonstrated in the previous image. Their purpose is to summarize critical dimensions of disaster losses. Evidence of their importance is clear because most countries have adopted, or are adopting, these disaster loss systems. These data systems shouldn’t be confused with CTAPs. CTAPs assess concrete mitigation and adaptation technologies, similar to the techniques explained in the USACOE and USFEMA references. In effect, these existing national disaster loss systems can be considered metadata that describes, or summarizes, the more comprehensive results generated from actual disasters or from “bottom-up” CTAP damage assessments. The UN GAR reference clearly demonstrates the difference, stating “economic data collected and reported in disaster loss databases is very scarce and inconsistent”. They have to extrapolate the economic losses from the disaster loss systems. In contrast, the primary characteristic of CTAPs is their emphasis on carrying out and reporting technology assessments that include uniform cost, benefit, and cost-effectiveness damage reduction estimates.
The two systems complement one another, but point to the need to develop an automated way for machines to transfer and aggregate CTAP data to summary regional, national, and global disaster loss systems (see Appendix D). The following image (European Commission 2014) demonstrates how formal disaster loss collection systems, or knowledge banks, are used to answer practical, important questions about disasters. Best practice estimation techniques, coupled with the use of high quality data standards, allow knowledge banks of high quality disaster-related data to be passed down to future generations. Those future generations are facing imminent demise unless 195 nations carry out their climate change pledges affordably and transparently. Countries have a moral responsibility to transfer the lessons they have learned about mitigating and adapting to climate change (i.e. passing down knowledge banks) to future decision makers. Reporting disaster losses, or GHG reductions, or Sendai Framework accomplishments, alone won’t suffice – the quantified “how and why”, or, in CTAP parlance, the digital Conservation Technology Assessments, behind that metadata must be passed along too.

G. CTA-Prevention Unresolved Concerns

Besides the difficulties already mentioned about measuring and valuing changes in resource stocks, major concerns remain unresolved in many aspects of the stock changes (World Bank and UN, 2010; AAAS and RFF, 2014; IPCC WG2 2014; UN GAR 2015; Moench et al 2008):
Intangibles: Planning officials have a tendency to be engineer-oriented and pay more attention to physical infrastructure damages (i.e. bridges, highways, buildings) than to savings associated with managerial adjustments (i.e. fertilizer that replaces nutrients washed away), institutional disruptions (i.e. children that leave school), and ecosystem service losses (i.e. marshlands that reduce floods).
Indirects: Savings from many resource stock changes don’t directly involve preventing bridges from washing away, people from being killed, or crops from being destroyed. They are associated with the indirect consequences of these changes: the time residents spend commuting because of transportation disruptions, the stunted children who develop impaired intellects, and the yield losses from soils that lose topsoil and nutrients.
Fat Tails and Extreme Events: The references cited, and some of the images displayed, show that the historical mean and standard deviation are no longer fully reliable predictors of future resource stock conditions. More extreme events will occur than the historical record suggests, and many will occur with greater intensity (i.e. hurricanes in the US).
Catastrophic and Irreversible Events: Climate change experts use terms such as thresholds, tipping points, and surprises to explain the great deal of uncertainty about how future levels of GHG emissions could result in irreversible damages to specific ecosystems and populations (i.e. if the Greenland ice sheet melts, sea levels rise 7 meters).
Value of a Life: Putting a dollar value on a life will always be controversial. Using some of the techniques employed by health care analysts, such as Quality Adjusted Life Years, may assist in putting a dollar value on saving human lives.
Equity and Distribution of Gains: Equity has become a general area of concern for the poor and middle class residents of many countries. Their wages have stagnated while the rich have prospered. Per capita carbon emissions are many times higher in developed than in developing countries. Similarly, institutional imperfections exist in how prevention and recovery funds are spent, how recovery contracts are awarded, how disaster legislation is captured by special interests, and how fairly future generations are treated.
Without institutional improvements that address equity, many of the resource-poor will remain damaged, while the resource-rich will quickly recover.
Unique and Threatened Systems: Several unique and threatened ecosystems and populations are already being impacted by climate change, and this trend is expected to get worse. Governments throughout the world have established precedents for prioritizing the protection of these assets through actions such as land transfers to native populations, threatened and endangered species laws, and wildlife reserves.
Tradeoffs: Money spent on natural resource damage prevention can be spent elsewhere – on education, physical infrastructure, imprisonment, war, and foreign aid. Choices have to be made. The World Bank (2014) points to the need to understand the tradeoff between “preparing for risk with that of coping with its consequences”. The UN (2015) emphasizes that “benefit cost analysis needs to be expanded to highlight the trade-offs implicit in each [investment] decision, including the downstream benefits and avoided costs in terms of reduced poverty and inequality, environmental sustainability, economic development and social progress as well as a clear identification of who retains the risks, who bears the costs and who reaps the benefits”. Better numbers might help to make the tradeoffs, and the winners and losers, clearer.
Discount Rate: The social discount rate used to discount future benefits and costs remains controversial. High rates lessen the value of the longer term gains from prevention actions.
Risk Management Portfolios: Managing risk is complex – all of the major references point to the need for a portfolio of actions. However, fewer of the references document the need to factor social systems, financial systems, macroeconomic conditions, and public sector leadership into the portfolios (see the WDR 2014).
Even well-endowed governments have trouble understanding all of the ingredients needed in effective risk management portfolios. Neither this reference, nor any other reference, can alleviate all of these concerns with one grand publication or software app – it’s not possible. This reference will evolve as the information technology, that is, the apps, used by the reference improves. Future releases will continue to address these concerns.

H. CTA-Prevention Examples (5*)

Appendix A has examples of CTAPs completed using Disaster Risk Reduction (DRR) algorithms. Appendix B has examples of CTAPs completed using Risk Management Indicator (RM) algorithms. Appendix C has examples of CTAPs completed using Decision Support System (DSS) algorithms. Appendix D has examples of CTAPs completed using national disaster loss systems. Many algorithms complement one another and can be used jointly. For example, the Carreño (2012) and Marulanda (2013) references demonstrate carrying out “holistic” damage assessments by using algorithms from Appendix A to incorporate physical risk indicators that estimate direct damages, and algorithms from Appendix B to incorporate social fragility and lack-of-resiliency indicators that estimate indirect damages.

Summary

This reference demonstrates how to complete CTAPs. CTAPs may help US West Coast residents take effective preventive measures for their probable earthquakes, Bangladeshi delta residents mitigate their likely floods, Central Mexican school officials keep their students enrolled during more frequent droughts, North African health care administrators reduce child stunting from disrupted food systems, Pacific Islanders reduce damages from rising typhoon intensity, and people everywhere improve their lives and livelihoods.

Footnotes
1.
While working as an agricultural economist for the USDA, Natural Resources Conservation Service, the author provided technical assistance and coordinated training in flood damage assessments – usually for water resource management projects, such as dams, in smaller watersheds in the US. Read Footnote 6 for caveats about the author’s current expertise (i.e. software development). He initiated this project while working in that agency’s Social Sciences Institute as an agricultural economist/scientist/software developer.
2. Practitioners who deal with the aftermath of natural resource disasters commonly complete Damage Assessments – but using simpler, field-oriented, single-page summary forms. This reference does not recommend any changes to the Damage Assessments already being used for field work. The CTAPs endorsed in this reference are appropriate when formal economic evaluations are needed (i.e. when resources are scarce and money needs to be saved) and knowledge banks must be used to provide easy access to this evidence (i.e. when Internet technology is needed to carry out the assessments and make the evidence transparent to people and machines).
3. Nevertheless, public relations may be a good idea for a non-profit that writes code, rather than raises funds, so the author’s non-profit may start selling baseball caps and tee-shirts with CTAP terms printed on them.
4. DevTreks has not been explicitly built as a software dictionary, or API, that other software developers can use to develop technology assessment or cost and benefit applications. Nevertheless, the underlying database uses a fairly straightforward hierarchical table structure. Experiments have been conducted in the past (i.e. using OData and Web APIs) to see how much difficulty machines would have in accessing all of that data.
The conclusion at that time was that machines, and the people who program the machines, would not have that much trouble accessing most of the data – hierarchical database data is not that hard to make machine-friendly (provided that high quality data standards are followed). The Source Code tutorial shows that Version 2.0.2 included a WebApi app that starts to address the need identified in this footnote. The Containers reference in that tutorial will be updated in a future release to further address this issue.
5. Although the examples in the Appendices focus on natural resource-related disasters, the author believes that several of the algorithms, with careful construction, can also be used for other stocks. For example, in Appendix A, Example 1, substitute health care damages for natural resource damages, probable health care events (i.e. DRG severity levels such as mild stroke, moderate stroke, severe stroke, and Markov transition states such as extremely ill, 50% recovery, 90% recovery) for probable natural resource events, targeted health care populations for physical asset types, QALYs for dollar damages, and health care interventions for project alternatives, and see what you come up with (i.e. basic ICERs of alternative health care interventions? see WHO, 2003).
6. The author is not a current “expert” in many of the assessment techniques demonstrated in these examples, such as flood damage assessments. One of the primary purposes of this reference is to demonstrate how to use the generic properties of Indicators and Scores to conduct basic versions of these assessments. The generic properties can handle many, if not all, of the mathematical and statistical techniques used in more advanced, or “expert”, assessments. As many of the examples will demonstrate, if the existing algorithms can’t handle the advanced calculations, it’s very likely that a new algorithm needs to be developed and added to the libraries.
7.
During a technical assistance visit paid years ago to a fire-prone area in the Northern California foothills, the author concluded that the majority of homeowners would not undertake fire-safe actions around their houses until homeowner insurance companies credited their fire insurance premiums for the improvements. The homeowners themselves indirectly proposed this “solution”. This is similar to the World Bank and UN (2010) point that individuals respond to price signals. In this case, no transparent price signals rewarded prevention actions (i.e. carbon prices are another good example). It’s also similar to behavioral economists’ conclusion that individuals overestimate low-probability events while underestimating high-probability events. In this case, experts believed that a serious fire was inevitable in a short time horizon and informed local residents about their conclusions.
8. In some cases, traditional approaches may be exactly what are needed. The author recently read an article in a literary US magazine (The New Yorker) that made a convincing case that an earthquake will soon damage his Northwest US city neighborhood. He immediately wanted traditional, detailed damage assessment information, complete with confidence intervals, for his neighborhood and building (as might most other Northwest US residents reading the article). While literary magazines may not be proper forums for this detailed information, this reference demonstrates that CTAPs completed and stored on the cloud have the potential to be a very good forum.
9. The fact that these types of algorithms are difficult to find may reflect past, misplaced science (i.e. science that didn’t, and still doesn’t, understand modern IT).
Research peers appear to have reinforced this misguided science – publishing a paper that doesn’t advance modern IT in some way is another squandered opportunity; publishing social science equations and numeric results that can’t be easily translated into computer algorithms is, at best, short-sighted and, at worst, academic preening (often excused by sticking the word “policy” in there somewhere); contracting for economic assessments that can’t be completed and stored using modern online technology reinforces outdated, inefficient, and wasteful applications of science. As concrete reinforcement to this footnote, DevTreks has started adding its source code to each release.
10. Mention should be made of bugs discovered during more comprehensive testing of these Stock Progress and M&E tools. At least 5 bugs or flaws surfaced (in the 88 tools). How do these bugs slip through? Because DevTreks is a small shop and some features simply can’t be tested by the author as much as others. Serious national, international, or virtual shops must plan on having larger IT staffs. As a precursor, such shops first need to find administrators capable of caring, thinking, and acting at appropriate scales and scopes.
11. The time commitment needed to complete these types of analyses correctly should be weighed against the usefulness of the results. These tools, like most DevTreks tools, have a level of complexity, or more accurately, detail, which may make them more useful to serious specialists than to casual analysts. These types of specialists should be the customers obtaining funding to build additional features and tools (i.e. whether through this shop or their own shop).
12. The fact that smaller countries, such as Ireland, are missing from this European Union publication suggests the types of countries, including developing countries, that may particularly benefit from this type of IT product.
The author has direct knowledge of persons who are studying IT in these countries (i.e. cousins) who may gladly help with the technical requirements. 13. If disaster loss data is simply not available in most parts of the world, then how can algorithms such as these, which require data distributions, help? Use your imagination. Instead of claiming that a probability density function is known, use techniques similar to those shown in this reference (rules of thumb for distributions: normal or triangular distribution, mean = approximate average point estimate, standard deviation = approximately 30% of the mean, confidence intervals = approximate high and low bounds on an approximate average estimate). State that the goal is to use the “introductory data” to develop “real data” in future releases. The criterion is whether or not imperfect data can support imperfect, but better, decision making. You may have a window of time to prevent the worst outcomes. 14. A review of the UNEP web site for their MCA initiative, MCA4climate, does not show much advancement since 2011. More recent UN references, such as UN CAPNET 2015, introduce alternative DSS systems, such as DPSIR and SWOT, rather than MCA. It’s important to keep any DSS system, or for that matter any algorithm, in perspective. These are all tools that have the potential to assist decision making. Assess which ones make sense for local contexts, experiment with them, adapt, but ultimately figure out how to make affordable and transparent decisions for dealing with climate change. 15. Some experts, or at least critics, in this field like to qualify this statement by using the term “credible organizations”. Until those critics can provide URLs and examples as demonstrated here, along with the source code proving their web and cloud data services, their credibility is hard to judge.
Oh, and in this type of digital world, credibility doesn’t necessarily derive from attending lots of meetings, producing more paperwork, being good at rent seeking, or having a big budget to spend. References American Association for the Advancement of Science (AAAS) and Resources for the Future (RFF). The Economic and Financial Risks of a Changing Planet. Insights from Leading Experts. Workshop Report. November 2014. Bierbaum, Lee, Smith, Blair, Carter, Chapin, Fleming, Ruffo, McNeeley, Stults, Verduzco, and Seylier, 2014: Ch. 28: Adaptation. Climate Change Impacts in the United States: The Third National Climate Assessment, Melillo, Richmond, and Yohe, Eds., U.S. Global Climate Change Research Program, 670-706. Carreño, Martha, Cardona, Omar D., Ordaz, Mario G., Barbat, Alex H. Seismic risk evaluation for an urban centre. 250th Anniversary of the 1755 Lisbon Earthquake. 2005. Carreño, M.L., Cardona, O.D., and Barbat, A.H. Urban Seismic Risk Evaluation: A Holistic Approach. Natural Hazards, 40(1):137-172. 2007. Carreño, Martha, Cardona, Omar D., Barbat, Alex H. Holistic Evaluation of the Seismic Risk in Barcelona by Using Indicators. 15 WCEE, Lisbon, Portugal. 2012. European Commission, JRC Scientific and Policy Reports, Recording Disaster Losses. 2013. European Commission, JRC Scientific and Policy Reports, Current Status and Best Practices for Disaster Loss Data Recording in EU Member States. 2014. (12*) Hochrainer, Stefan, Howard Kunreuther, Joanne Linnerooth-Bayer, Reinhard Mechler, Erwann Michel-Kerjan, Robert Muir-Wood, Nicola Ranger, Pantea Vaziri, Michael Young. The Costs and Benefits of Reducing Risk from Natural Hazards to Residential Structures in Developing Countries. Working Paper # 2011-01, Risk Management and Decision Processes Center, The Wharton School, University of Pennsylvania, 2011. Hochrainer, Stefan; Patnaik, Unmesh; Kull, Daniel; Singh, Praveen; Wajih, Shiraz.
Disaster Financing and Poverty Traps for Poor Households: Realities in Northern India. International Journal of Mass Emergencies and Disasters, March 2011, Vol. 29, No. 1, pp. 57–82. Inter-American Development Bank. From disaster response to prevention: companion paper to the Disaster Risk Management Policy. Sustainable Development Department sector strategy and policy paper series. 2007. IPCC. Climate Change 2014, Impacts, Adaptation, and Vulnerability, Part A: Global and Sectoral Aspects. Working Group 2 Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. [Field, Barnes, Barros, Dokken, Mach, Mastrandrea, Bilir, Chatterjee, Ebi, Estrada, Genova, Girma, Kissel, Levy, MacCracken, Mastrandrea, and White (eds)]. SUBJECT TO FINAL EDIT. IPCC, 2012. Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation. A Special Report of Working Groups I and II of the Intergovernmental Panel on Climate Change. [Field, C.B., V. Barros, T.F. Stocker, D. Qin, D.J. Dokken, K.L. Ebi, M.D. Mastrandrea, K.J. Mach, G.K. Plattner, S.K. Allen, M. Tignor, and P.M. Midgley (eds)]. Cambridge University Press, Cambridge U.K., 582 pp. Khantisidhi, Banthitha. Assessment of Climate Change Impacts on Flood Behavior. 2nd Mekong Climate Change Forum. 2014. Khazai, Bijan; Bendimerad, Fouad; Cardona, Omar Dario; Carreno, Martha-Lilliana; Barbat, Alex H.; Burton, Christopher G. A Guide to Measuring Urban Risk Resilience. Principles, Tools and Practice of Urban Indicators (Prerelease Draft). Earthquakes and Megacities Initiative - EMI. 2015. Marulanda, Mabel C., Carreño, Martha, Cardona, Omar D., Ordaz, Mario G., Barbat, Alex H. Probabilistic earthquake risk assessment using CAPRA: application to the city of Barcelona, Spain. Nat Hazards, 69:59-84. 2013. Mechler, Reinhard. Cost-benefit Analysis of Natural Disaster Risk Management in Developing Countries. Manual. 2005. Mechler, R.
and The Risk to Resilience Study Team, (2008): The Cost-Benefit Analysis Methodology, From Risk to Resilience Working Paper No. 1, eds. Moench, M., Caspari, E. & A. Pokhrel, ISET, ISET-Nepal and ProVention, Kathmandu, Nepal, 32 pp Mechler, R., Hochrainer, S., Kull, D., Chopde, S., Singh, P., S. Wajih and The Risk to Resilience Study Team, (2008): Uttar Pradesh Drought Cost-Benefit Analysis, From Risk to Resilience Working Paper No. 5., eds. Moench, M., Caspari, E. & A. Pokhrel ISET, ISET-Nepal and ProVention, Kathmandu, Nepal, 32 pp. Miller, Kathleen A. and Valerie Belton, 2014. Water resource management and climate change adaptation: a holistic and multiple criteria perspective, Mitigation and Adaptation Strategies for Global Change, 19(3): 289-308. DOI 10.1007/s11027-013-9537-0 Mishra, A.K., Singh, V.P. Drought modeling – A review. J. Hydrol. (2011), doi:10.1016/j.jhydrol.2011.03.049. The full article was retrieved directly from Singh’s website in January 2016, where it is described as an Article in Press. Moench, M. and The Risk to Resilience Study Team, (2008): Understanding the Costs and Benefits of Disaster Risk Reduction under Changing Climatic Conditions, From Risk to Resilience Working Paper No. 9, eds. Moench, M., Caspari, E. & A. Pokhrel, ISET, ISET-Nepal and ProVention, Kathmandu, Nepal, 38 pp. Mustafa, D., Ahmed, S., E. Saroch and The Risk to Resilience Study Team, (2008): Pinning Down Vulnerability: From Narratives to Numbers, From Risk to Resilience Working Paper No. 2, eds. Moench, M., Caspari, E. & A. Pokhrel, ISET, ISET-Nepal and ProVention, Kathmandu, Nepal, 28 pp. (This is the second of a 9 part series on disaster risk reduction.) Samuel Rufat, Eric Tate, Christopher G. Burton, Abu Sayeed Maroof. Social vulnerability to floods: Review of case studies and implications for measurement. International Journal of Disaster Risk Reduction 14 (2015) 470–486 Shreve, C.M., Kelman, I. Does mitigation save? 
Reviewing cost-benefit analyses of disaster risk reduction. International Journal of Disaster Risk Reduction. 10 (2014) 213-235 Swenja Surminski, Ana Lopez, Joern Birkmann, Torsten Welle. Current knowledge on relevant methodologies and data requirements as well as lessons learned and gaps identified at different levels, in assessing the risk of loss and damage associated with the adverse effects of climate change (Mandate of the Conference of the Parties) last accessed on the web on July 29, 2015 https://unfccc.int/files/adaptation/cancun_adaptation_framework/loss_and_damage/application/pdf/background_paper_full.pdf UK DCLG (Department of Communities and Local Government) (2009), Multi-Criteria Analysis: A Manual, UK DCLG, London. United Nations. CAPNET. Drought risk reduction in integrated water resources management training manual. 2015 United Nations. ECLAC. Handbook for Disaster Assessment. 2014 United Nations Environment Programme (UNEP). A Practical Framework for Planning Pro-Development Climate Policy. 2011 UNISDR, 2009. Drought Risk Reduction Framework and Practices: Contributing to the Implementation of the Hyogo Framework for Action. United Nations secretariat of the International Strategy for Disaster Reduction (UNISDR), Geneva, Switzerland, 213 pp United Nations. Integrated Drought Management Programme (IDMP). National Drought Management Policy Guidelines, a Template for Action. 2014 United Nations. UNISDR. GAR. Global Assessment Report on Disaster Risk Reduction. 2015 United Nations. UNISDR. DRAFT Post-2015 Framework for Disaster Risk Reduction: a proposal for monitoring progress. 2014 United Nations. UNISDR and partners. Proposal on Disaster-Related Indicators to Sustainable Development Goals. 2015 UNDP. Mainstreaming Drought Risk Management. A Primer. 2011 U.S. Army Corps of Engineers, Hydrologic Engineering Center. HEC-FDA Flood Damage Reduction Analysis. User's Manual. Version 1.2.4, 2008 U.S. Federal Emergency Management Agency. HAZUS-MH. 
Technical Manuals (more manuals are available than listed here, including an earthquake manual): HAZUS-MH Flood Technical Manual: https://www.fema.gov/media-library-data/20130726-1820-25045-8292/hzmh2_1_fl_tm.pdf HAZUS-MH Hurricane Technical Manual: https://www.fema.gov/media-library-data/20130726-1820-25045-9850/hzmh2_1_hr_tm.pdf V. Meyer, N. Becker, V. Markantonis, R. Schwarze, J. C. J. M. van den Bergh, L. M. Bouwer, P. Bubeck, P. Ciavola, E. Genovese, C. Green, S. Hallegatte, H. Kreibich, Q. Lequeux, I. Logar, E. Papyrakis, C. Pfurtscheller, J. Poussin, V. Przyluski, A. H. Thieken, and C. Viavattene. Review article: Assessing the costs of natural hazards – state of the art and knowledge gaps. Nat. Hazards Earth Syst. Sci., 13, 1351–1373, 2013. Walsh, Wuebbles, Hayhoe, Kossin, Kunkel, Stephens, Thorne, Vose, Wehner, Willis, Anderson, Doney, Feely, Hennon, Kharin, Knutson, Landerer, Lenton, Kennedy, and Somerville, 2014: Ch. 2: Our Changing Climate. Climate Change Impacts in the United States: The Third National Climate Assessment, Melillo, Richmond, and Yohe, Eds., U.S. Global Climate Change Research Program, 19-67. Westen, Alkema, Damen, Kerle, and Kingma. Multi-hazard risk assessment, Distance Education Course, Guide Book. United Nations University. ITC School on Disaster Geoinformation Management. 2011. World Bank. World Development Report 2014: Risk and Opportunity - Managing Risk for Development. Washington, DC. 2014. World Bank. Risk Analysis Course Manual. Global Facility for Disaster Reduction and Recovery. 2011. World Bank and United Nations. Natural hazards, unnatural disasters: the economics of effective prevention. 2010. World Health Organization. WHO Guide to Cost-Effectiveness Analysis. T. Tan-Torres Edejer, R. Baltussen, T. Adam, R. Hutubessy, A. Acharya, D.B. Evans and C.J.L. Murray (eds). 2003. References Note: We try to use references that are open access or that do not charge fees.
Many recent publications have been found that demonstrate analytic techniques that would be useful in this reference, but many are not open access (i.e. Zekai Sen, 2015) and are therefore of limited usefulness in this context. It’s not entirely clear why authors who work in publicly funded institutions support inaccessible publications (peer pressure?). Improvements, Errors, and New Features Please notify DevTreks (devtrekkers@gmail.com) if you find errors in these references. Also please let us know about suggested improvements or recommended new features. This is another reference that has the potential for hundreds of pages of algorithm examples. Future releases will have to deal with the matter. A video tutorial explaining this reference can be found at: https://www.devtreks.org/commontreks/preview/commons/resourcepack/Technology Assessment 2/1544/none CTAP Examples Introduction (6*) These appendices contain examples of algorithms that are used to carry out CTAPs. The following guiding principles are used with all examples: 1. CTAs: All of these examples use the generic mathematical and statistical software techniques, involving numeric algorithms and structured software object models, introduced in the CTA reference. These techniques offer the flexibility needed to consistently quantify the uncertainty of a broad “portfolio” of prevention interventions and to make the evidence transparent to people and machines. 2. CTA-Prevention Portfolios (7*): A portfolio of prevention, or mitigation and adaptation, interventions must be assessed for decision makers. As mentioned in the associated Resource Stock Analysis tutorial, the most effective interventions, such as putting a price on carbon, must be in the portfolio. Successful interventions must include changes to human, institutional, social, and cultural capital resource stocks. 3.
Plausible Scenarios and Alternatives: The probabilistic risk and statistical algorithms used in CTAs can’t accommodate all of the uncertainties associated with climate change. Many changes simply can’t be known yet. The World Bank and UN (2010) reference recommends dealing with this uncertainty by presenting ranges of plausible interventions where benefits clearly exceed costs. The IPCC references demonstrate the use of plausible assumptions, or scenarios, that show how climate change will probably change natural resource stocks. 4. Experiment and Gain Experience: In some cases, the new CTAP approaches may be exactly what are not needed to undertake effective preventive actions (8*). People need to be experimenting and increasing their expertise in this area if prevention is to be globally affordable. That’s especially true for software developers building “apps”. Unless financed by investors, developers and data analysts should be paid for their work (i.e. including this work, so don’t be surprised if developers and analysts have to charge fees, or in the case of nonprofits, donations, for these data services). Appendix A. Disaster Risk Reduction (DRR) Algorithms A. Disaster Risk Reduction Introduction Mechler (2005) demonstrates using the following 4 steps to conduct cost-benefit analyses of natural resource disasters in developing countries. Definitions for these terms can be found in the main reference. Hochrainer et al (2011) explain that Step 3, Exceedance Probability (EP), is derived from data in the first 2 steps. CTAPs must be completed using the generic Indicators and Scores in the Resource Stock or Monitoring and Evaluation calculators. In order to do so, the typical steps, or modules, used in traditional damage assessments have to be translated into Indicators and Scores as demonstrated in the list that follows. All of the images derive from the Mechler (2005) and Hochrainer et al (2011) case studies that are used in Examples 1 and 2.
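The way these damage assessment steps combine can be sketched in a few lines of code. The sketch below is purely illustrative: the event periods, exposure values, and vulnerability fractions are hypothetical assumptions, not figures from the Mechler (2005) or Hochrainer et al (2011) case studies, and the QTM/QTL/QTU names simply mirror the calculator properties used throughout this reference. It samples uncertain exposure and vulnerability distributions (normal, with standard deviations set to 10% of means, the same rule of thumb the examples use), multiplies them into a loss exceedance probability (EP) distribution, and approximates an Average Annual Loss (AAL) as the area under the EP curve.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000  # Monte Carlo iterations (subalgorithm1-style simulation)

# Hypothetical exceedance events and inputs (NOT case study data)
periods = np.array([10, 25, 50, 100])                 # 10year, 25year, ... events
ep = 1.0 / periods                                     # annual exceedance probabilities
exposure_qt = np.array([200.0, 400.0, 600.0, 800.0])   # exposed asset value per event
vulnerability_qt = np.array([0.05, 0.15, 0.30, 0.50])  # fraction of assets damaged

# Steps 1-2: sample uncertain Exposure and Vulnerability distributions
# (normal, standard deviation = 10% of the mean)
exposure = rng.normal(exposure_qt, 0.10 * exposure_qt, (N, periods.size))
vulnerability = rng.normal(vulnerability_qt, 0.10 * vulnerability_qt, (N, periods.size))

# Step 3: loss EP distribution = exposure * vulnerability, per event
losses = exposure * vulnerability

# QTM (mean) and QTL/QTU (95% confidence interval) for each event
qtm = losses.mean(axis=0)
qtl, qtu = np.percentile(losses, [2.5, 97.5], axis=0)

# AAL: area under the loss EP curve, approximated with the trapezoidal
# rule over ascending exceedance probabilities (rarest event first)
loss_by_prob = qtm[::-1]
prob = ep[::-1]
aal = np.sum((loss_by_prob[:-1] + loss_by_prob[1:]) / 2 * np.diff(prob))

print("QTM per event:", qtm.round(1))   # roughly [10, 60, 180, 400]
print("95% CI:", qtl.round(1), qtu.round(1))
print("Approximate AAL:", round(aal, 1))
```

This sketch integrates only the segment of the curve between the most frequent and the rarest event; a full assessment would also handle the tail beyond the rarest event, and Steps 4 to 6 would discount the resulting damage reductions into Benefit Cost Ratios.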
Note that most of the “curves” shown in these images are similar to the cumulative probability distributions more thoroughly explained in the CTA reference. * Step 1. Indicator 1. Hazard Exceedance Probability Distribution. The following image shows that the uncertainty of flood depths and exceedance probabilities must be translated into Indicator 1. * Step 2. Indicator 2. Exposure Distribution. The following image shows that the uncertainty of the number of assets being damaged and their value must be translated into Indicator 2. * Step 2. Indicator 3. Vulnerability Distribution. The following image shows that the uncertainty of the percent of assets damaged at varying flood depths, wind speeds, earthquake intensity, or other damaging events, must be translated into Indicator 3. These simple percentages reflect damages to the actual physical stocks. The calculations can also use losses to the physical stock flow, such as rental income. Care needs to be taken not to double count losses: measure either the loss to the physical stock or the loss to the physical stock flow, but not both. * Step 3. Indicator 4. Loss Exceedance Probability Distribution. The following image shows that the uncertainty of the monetary damages and exceedance probabilities must be translated into Indicator 4. The algorithm that generates this Exceedance Probability (EP) distribution is explained in the next section. EP distributions are used to measure Average Annual Losses (AAL), Return Period Losses (RPL), and Tail Value at Risk (TVAR) losses. AAL losses reflect the full area under the EP curve, RPL losses reflect the average annual losses for a specific occurrence event (i.e. one point on the curve) in one year, while TVAR losses reflect the average severity of losses at or beyond a specific event (i.e. the tail of the curve). The following example uses the 25 year event in the previous image to demonstrate how to interpret the differences between the 3 losses: AAL: In 2005, average annual losses are 72 billion rupiahs (i.e.
the entire area under the curve). RPL or PML: The 500 billion loss represents the 96th percentile of the annual loss distribution. The probability of exceeding 500 billion in one year is 4%. The UNISDR GAR (2015) and Marulanda (2013) references use the term Probable Maximum Loss (PML) for this loss. TVAR: Given that at least a 500 billion loss occurs, the average severity will be 800 billion (using only the 10 year and 25 year events in the cumulative calculation). Note the significant difference between average annual losses and total losses. * Step 4. Benefits Analysis. Indicator 5. Costs of Alternatives. The following image shows that the uncertainty of the costs for each alternative must be translated into Indicator 5. The following image (V. Meyer et al, 2013) demonstrates the importance of defining and categorizing damages and using economic estimation techniques that are appropriate for specific cost and damage categories. The authors provide a comprehensive review of techniques that are appropriate for estimating disaster-related costs and damage reduction benefits. The national and international data standards proposed by the European Commission references (2013, 2014) for disaster loss accounting include similar categories. * Indicators 2 to 5. Alternatives. The existing Indicators are used to hold data for each project alternative (i.e. mitigation and adaptation intervention). The Indicator.URL TEXT files include additional rows of data specifying how each project alternative changes the data. Each alternative uses the same number of rows as the baseline data, but a simple label convention must be used so that each alternative can be properly identified. An additional letter suffix, such as A, B, or C, is added to the existing label column of data (i.e. Indicator 4, Label 4A, Alternative B = Label 4A_B). * Step 4. Indicator 6. Benefit Cost Analysis.
The following image shows that the uncertainty of the benefits of each project alternative, along with Indicator 5’s Costs, must be translated into Indicator 6. This Indicator displays the final analysis of benefit cost ratios for each project alternative. * Indicator 7. Cost Effectiveness Analysis (CEA). This Indicator is calculated in a similar manner to Indicator 6, but instead of calculating reductions in direct monetary damages as benefits, it calculates reductions in indirect non-monetary damages as benefits. Typical examples of non-monetary damages are the Indirect and Intangible concerns discussed in the main reference, such as transportation disruptions, job losses, and QALYs. Changes in alternative project costs are divided by changes in the alternative benefits to develop Cost Effectiveness Ratios. The following image (WHO, 2003) demonstrates a CEA for human capital stock interventions. Note that the WHO reference stresses the importance of defining the baseline using the null practice, because current practice may be less effective than all other alternatives. [This latter statement was corrected in Version 2.1.0 because the newly released algorithms included upgraded CEA techniques.] When Appendix B, subalgorithm 10, is used to generate the Indicator7.URL TEXT file, Indicator 7 measures the cost effectiveness of alternative interventions in reducing the indirect damages from disasters, a nice complement to Indicator 6’s measurement of direct damages. * Optional Trend, Scenario, and/or Sensitivity Analysis. Optional second datasets. The following images show that the uncertainty of time-related trends such as changes in population, alternative scenarios such as changes in natural resource stocks due to climate change, and/or additional assumptions about changes in key indicators such as discount rates or asset life, can be translated into additional tables and datasets. * Scores. Decision Support Systems.
Appendix C has examples of wider frameworks that can also be employed for decision support. For example, V. Meyer et al (2013) state “[Benefit Cost Analysis] could be usefully embedded in a wider Multi-Criteria Analysis framework. This allows stakeholders and decision makers to decide on the relative importance of the different decision criteria and their related uncertainties”. Appendix A in Mechler’s (2005) reference, Moench et al’s (2008) reference, as well as Annex 3 in the UN 2015 GAR reference, demonstrate how these steps fit into a more comprehensive Cost Benefit Analysis (CBA) framework. CBA frameworks are important when resources are scarce (i.e. money needs to be saved) and formal evaluations are needed that provide formal evidence of resource savings (i.e. Internet technology must be used to complete the evaluations and display the evidence). B. Disaster Risk Reduction (DRR) Algorithms The USACOE (2008) reference is a User Manual for desktop software that carries out comprehensive flood damage assessments. Appendix H in that reference explains the algorithms and Monte Carlo simulation techniques employed by that software to carry out the assessments. The following image from that reference demonstrates that comprehensive damage assessments use more advanced techniques than the simple 2 step process depicted in the previous section. In this image, relationships between several “damage assessment steps” are aggregated by an algorithm into the final Exceedance Probability distribution. The following image introduces how the U.S. Federal Emergency Management Agency uses actual observed damage losses to develop loss calculations in their HAZUS-MH software. Most of the references cited in this Appendix do not present actual numerical examples of DRR algorithms. Without numerical examples, algorithms can’t be proofed for accuracy. The HAZUS-MH manuals are exceptions.
These manuals include comprehensive numerical formulas, but some of their software development techniques appear to be very “domain-specific”, rather than “generic-indicator-nonspecific”. For example, many of the algorithms introduced in this reference use probability density functions (pdf) to characterize damages. These functions can be defined using four properties (Distribution, QT, QTD1, and QTD2) that include standard pdf shape and scale parameters, such as mean and standard deviation. In contrast, “domain-specific” algorithms require custom distributions that often need to be expressed using mathematical formulas, or, to use HAZUS terms, “multivariate damage state functions”. Although Chapter 1 in the HAZUS Hurricane technical reference mentions that their software may translate multivariate damage state functions into simpler probability density functions (i.e. by fitting “curves” to the data), the Appendixes in their technical manual show hundreds of examples of the following damage state and content loss ratio multivariate functions. If these types of multivariate functions can’t be reasonably defined using standard pdf properties, the case will be made for using more advanced algorithms (i.e. that inherit from regression, Bayesian, and machine learning subalgorithms, rather than Monte Carlo subalgorithms). For example, when modeling drought losses, Mishra et al (2011) summarize the use of multivariate damage algorithms that use techniques such as regression (ARIMA), probabilistic risk (Markov chains), copula (Normal), artificial intelligence (Neural Network), and data mining. The references in the Technology Assessment, Performance Analysis, and Social Performance Analysis tutorials have examples of introductory subalgorithms that demonstrate many of these techniques. The mathematical and statistical packages used to carry out CTAs also contain numerous examples of multivariate data analysis. C.
Additional Tools Additional natural resource damage assessment tools are available (i.e. Central America CAPRA, USFEMA HAZUS, UNISDR’s Global Risk Assessment, commercial catastrophe software, several tools reviewed by the European Commission references) and should be closely evaluated prior to using this example’s techniques. In addition, most serious natural disaster damage assessments are done in the context of overall watershed planning: the assessment is one ingredient in an overall natural resources conservation, or ecosystem, planning approach (see the Resource Stock Analysis reference for an example of such an approach). In effect, this example demonstrates simple algorithms for quickly implementing the World Bank and UN (2010) recommendation to present ranges of plausible interventions where benefits clearly exceed costs. The results from the more advanced algorithms can also be summarized using these simpler tools. Future releases will include examples and algorithms that demonstrate some of the more advanced assessment techniques, such as the use of multivariate loss functions (i.e. see the Social Performance references). D. DRR Algorithm Examples The following algorithms demonstrate how to use Disaster Risk Reduction (DRR) Distributions to calculate reductions in natural resource disaster damages. Most algorithms use probability distributions, defined by QT, QTD1, and QTD2 properties stored in Indicator.URL TEXT files, as the initial data used in each damage assessment step. The distributions are used to generate final mean and confidence interval calculations expressed using an Indicator’s QTM, QTL, and QTU properties. The premise of this reference is that modern information technology will make these distributions more transparent and available for replication in related assessments. That is, they’ll be easier to find and use. * Algorithm 1.
Disaster Risk Reduction: algorithm1, subalgorithm9: This is a custom DevTreks algorithm that adds basic uncertainty analysis to traditional natural resource damage assessments. The algorithm uses Monte Carlo simulation to estimate the uncertainty of damages of disaster risk reduction interventions (subalgorithm1). In effect, this algorithm generates upper and lower confidence intervals for each point in each step of traditional Disaster Risk Reduction distributions. The algorithm is designed to be used jointly with some of Appendix B’s Risk Management Indicators algorithms (subalgorithm10). At this stage of development, the following algorithms are still a wish list, but to the author’s knowledge, climate change isn’t going away anytime soon. Although this list suggests that these will be custom algorithms (i.e. algorithm1), the statistical packages used by other algorithms will also be used (i.e. because they already contain packages that carry out these multivariate data analysis techniques). The sibling Social Performance Analysis references confirm that Version 2.1.4 began supporting these types of algorithms. * Algorithm 1. Disaster Risk Reduction: algorithmx, subalgorithmx: The algorithm uses Copula simulations to estimate the uncertainty of damages of disaster risk reduction interventions. For example, when correlated damage variables have different distributions (i.e. drought severity and duration), Copula-based algorithms can account for the correlations correctly. * Algorithm 1. Disaster Risk Reduction: algorithmx, subalgorithmx: The algorithm uses Markov Chain Monte Carlo (MCMC) simulation to estimate the uncertainty of damages of disaster risk reduction interventions. [Cross reference with HTAs –DRG severity levels and health transition states.] * Algorithm 1. Disaster Risk Reduction: algorithmx, subalgorithmx: The algorithm uses Machine Learning (AI) simulation to estimate the uncertainty of damages of disaster risk reduction interventions. 
* Algorithm 1. Disaster Risk Reduction: algorithmx, subalgorithmx: The algorithm uses regression analysis (logistic, ARIMA) to estimate the uncertainty of damages of disaster risk reduction interventions. * Algorithm 1. Disaster Risk Reduction: algorithmx, subalgorithmx: The algorithm uses randomized controlled trial analysis (ANOVA) to estimate the uncertainty of damages of disaster risk reduction interventions. [Cross reference with HTAs.] * Additional Algorithms. Disaster Risk Reduction (under planning): Customers with an immediate need for additional algorithms can contact DevTreks directly. The goal of the following examples is not to carry out exact replications of the case studies’ damage assessments. Instead, the examples emphasize developing and using CTA algorithms to assess the probability of the costs and benefits associated with resource stock loss prevention interventions. As such, although care has been taken to match the studies’ overall methodology, less importance was placed on matching the exact numbers used in the studies. For example, all of the probability distributions used in the examples are fictitious (i.e. standard deviations are all 10% of means). The author acknowledges that this could lead to flawed analyses (i.e. that will need to be improved in future releases). Algorithm1. Subalgorithm9. Disaster Risk Reduction This algorithm will be explained using two case studies of traditional natural resource damage assessments that have been completed in developing countries. Example 1 will cover Hochrainer et al’s (2011) damage assessments for hurricanes in St. Lucia, Caribbean. Example 2 will be Mechler’s (2005) case study of a flood damage assessment carried out in Semarang, Indonesia. The algorithm uses the simple 2 step process introduced in the previous section, along with a subset of the ACOE and HAZUS techniques, to generate uncertain DRR distributions. The algorithm carries out the CTAP as follows: 1.
Hydrologists, or physical science experts, prepare Indicator 1. Hazard Probability Distribution. The distributions are used to calculate means and confidence intervals showing the probability that a given exceedance probability will produce an uncertain quantity of damage-causing wind, water, fire, seismic activity, or other hazard. The locations specified in this Indicator are also used with most other algorithms. These integer locations can be linked to separate GIS data structures. In this example, a 2nd location is calculated as a 10% increase in the 1st location exceedance probabilities (and subsequent damages). All Monte Carlo simulations are based on the QT, QTD1, and QTD2 parameters with the distribution specified in the “distribtype” column and use subalgorithm1 to generate means, or QTMs, and confidence intervals, or QTLs and QTUs. 2. Up to 7 exceedance probability events can be defined for any Indicator that specifies events. The exceedance period can be any integer value. A required data convention is to concatenate the integer value with the suffix “year” and use the concatenated string as a column header in the Indicator datasets. The word “year” is actually parsed from the column header to obtain the integer exceedance period. The examples below show the correct conventions to follow (i.e. 10year, 100year, 1572year). Note that this convention relies on the English word “year”; the current release does not focus on internationalization of the algorithms in this reference. Example 2 demonstrates some of the results from using 7 events. 3. Analysts use Indicator 1 to prepare Indicator 2. Exposure, and Indicator 3. Vulnerability. Data TEXT files holding probability distributions for the 2 indicators are referenced in the Indicator.URL property. Calculations generate confidence intervals for each Indicator’s distribution, which are stored in the Indicator.MathResult. Subalgorithms 9, 10, and 11 share 95% of their source code and employ consistent data formats for all Indicators.
The use of 3 aggregation levels increases the size of the data that must be stored, displayed, and analyzed. That tradeoff was made at the expense of simplicity because the technique is recommended by the experts cited in Appendix B's references, because the data format stays consistent among all algorithms, and because these analyses have grave consequences. 4. Indicator 2's asset values can be expressed on either a unit value (i.e. m2 of area) or a unit asset (i.e. single residential property) basis. Multiplying these asset prices by the number of assets, or quantity, results in the total value of exposed assets. Example 1 demonstrates unit asset values (i.e. single residential property). Example 2 demonstrates unit values (m2 of area). Example 1 further demonstrates that Indicator 5's Costs have also been calculated on a unit cost basis (i.e. repair cost per residential structure) by setting the "isprojectcost" column to "no". That tells Indicators 6 and 7 to calculate costs by multiplying Indicator 5's unit costs by Indicator 4's asset quantities. In contrast, Example 2 demonstrates that the costs associated with unit value assets are calculated as project costs by setting the "isprojectcost" column to "yes". Indicators 6 and 7 will calculate costs using Indicator 5's project costs alone. Those costs are not multiplied by Indicator 4's asset quantities. 5. The confidence intervals for Indicators 2 and 3 are multiplied by one another (damage for each stage = asset market value * percent damage) to generate confidence intervals for average annual damages (the EP curve). The EP confidence intervals are stored in Indicator 4's Math Result. 6. Example 1 in Appendix C demonstrates incorporating projected trends in exposed asset value by adding an optional TEXT file to Indicator 4. When this file is found, the value of exposed assets used to calculate Indicator 4's damage losses is multiplied by the trend variables.
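Step 5's damage calculation, and the way a loss-EP curve is commonly condensed into average annual damages, can be sketched as follows (a minimal illustration; the trapezoid-rule integration is a standard approach and an assumption here, not a quote of subalgorithm 9's source, and the numbers are hypothetical):

```python
def event_damages(asset_value, percent_damage_by_event):
    """Step 5: damage for each exceedance event = asset market value * percent damage."""
    return {period: asset_value * pct for period, pct in percent_damage_by_event.items()}

def average_annual_damage(damages_by_period):
    """Integrate the loss-EP curve (damage vs. annual exceedance probability)
    with the trapezoid rule, a standard way to estimate average annual damage."""
    # Sort by annual exceedance probability p = 1/period, ascending.
    points = sorted((1.0 / period, dmg) for period, dmg in damages_by_period.items())
    aad = 0.0
    for (p0, d0), (p1, d1) in zip(points, points[1:]):
        aad += (p1 - p0) * (d0 + d1) / 2.0
    return aad

# Hypothetical numbers: a 100,000 asset with damage percentages rising by event.
damages = event_damages(100_000, {5: 0.02, 10: 0.05, 25: 0.10, 50: 0.20, 100: 0.35})
print(average_annual_damage(damages))
```

The sketch only integrates between the smallest and largest event probabilities supplied; a production implementation would need a convention for the tails of the curve.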
This technique allows changes in population, output production, natural resource conditions, or other asset trend characteristics to be included in the projected asset losses (i.e. by simple multiplication). 7. Indicator 5 is used to define uncertain cost distributions for current practice and each project alternative. Costs must be specified in the same manner as Indicator 2's asset values. That is, they must be listed as either project costs or unit costs. The annual operating cost amount is calculated as a uniform present value cost over the project life span. The installation cost is calculated as a discounted installation cost at the end of 1 year. When installation costs occur over more than 1 year, make appropriate adjustments in the initial installation cost listed in the TEXT file. The generated confidence intervals for the resultant present value costs are stored in Indicator5.MathResult. Indicator 6 must use comparable present value benefits in the Benefit Cost Ratios. In order to do so, the average annual damage reduction calculated in Indicator 6 uses a uniform present value discounting formula with the discount rates and life spans in the sensitivity analysis. Indicator 7 assumes benefits are nonmonetary and does not discount benefits. The CTA reference has examples that demonstrate these types of discounting formulas. 8. Indicator 4's Math Results are used to calculate changes in EP discounted monetary damages, or Benefits, for each project alternative. Indicator 5's Math Results are used to calculate changes in discounted costs for each project alternative. The change in Benefits is divided by the change in Costs to obtain an uncertain Benefit Cost Ratio (BCR). The uncertain BCR is defined by a mean with upper and lower confidence intervals and added to the Indicator6.MathResult. The discount rates and project life spans stored in Indicator 5 are used to conduct a sensitivity analysis of the ratios. 9.
Indicator 7 is calculated in a similar manner to Indicator 6, but instead of calculating changes in monetary damages as benefits, it calculates changes in non-monetary indicators, or effects. The changes in Costs are divided by the changes in non-monetary Benefits to calculate Cost Effectiveness Ratios, such as the incremental change in cost per incremental change in Greenhouse Gas Emissions, or QALYs. Unlike Indicator 6, the current release recommends that the confidence intervals for the non-monetary damages be calculated elsewhere, such as using the Indicator4.MathResult produced using Appendix B subalgorithm 10, and then referenced using the Indicator.URL property. If no TEXT file is found in the Indicator.URL, it will use the same data as Indicator 6, i.e. the MathResults from Indicator 4. 10. Indicators 6 and 7's meta properties (i.e. QTM, QTL, and QTU) will display a summation of the total costs and benefits for either the project alternative with the highest Benefit Cost Ratio or the lowest Cost Effectiveness Ratio. These values are derived by aggregating the Total Risk rows across all locations. This reference recommends using the various Math Results to develop multimedia that decision makers will quickly understand when they load a typical URL to preview the calculations. The case studies show examples of appropriate media that include Loss EP Distributions, BCR tables, color-coded maps, and damage reduction graphs. Besides generating uncertain EP distributions, BCR ratios, and CERs, this type of algorithm can be used to carry out supplemental analyses that complement the basic benefit cost analysis result. Examples of supplemental analyses include: Scenario analysis: Appendix C addresses scenario analysis. The data can always be imported into other programs for more advanced scenario analysis. Trend analysis: Appendix C includes examples of simple trend analysis.
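The discounting conventions in steps 7 and 8 (installation cost discounted at the end of 1 year, O&M costs and average annual damage reductions as uniform present values, BCR = change in benefits / change in costs) can be sketched as follows (function names and numbers are hypothetical):

```python
def uniform_pv_factor(rate, life):
    """Uniform-series present value factor, used for both annual O&M costs
    and the average annual damage-reduction benefits (step 7)."""
    return (1 - (1 + rate) ** -life) / rate

def pv_cost(install_cost, annual_om_cost, rate, life):
    """Installation discounted at the end of 1 year; O&M as a uniform
    present value over the project life span."""
    return install_cost / (1 + rate) + annual_om_cost * uniform_pv_factor(rate, life)

def benefit_cost_ratio(aad_reduction, delta_cost, rate, life):
    """Step 8: change in discounted Benefits divided by change in discounted Costs."""
    return aad_reduction * uniform_pv_factor(rate, life) / delta_cost

# Hypothetical alternative: 10,000 install, 200/year O&M, 875 annual damage reduction.
rate, life = 0.05, 50
delta_cost = pv_cost(10_000, 200, rate, life) - pv_cost(0, 0, rate, life)
print(round(benefit_cost_ratio(875, delta_cost, rate, life), 2))
```

Repeating the same calculation for each discount rate and life span in Indicator 5's second TEXT file produces the sensitivity analysis of the ratios.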
The health care sector’s Health Technology Assessments often use separate population algorithms to simulate how demographic trends affect benefits and costs. The WHO 2003 reference includes an example. Examples 5 and 6 in the sibling Social Performance Analysis 3 reference provide examples of using population algorithms to determine social impacts associated with disaster risk reduction activities. Sensitivity analysis: Indicator 5 must include a 2nd URL TEXT file holding discount rates and project life spans that are used to conduct sensitivity analysis of Benefit Cost Ratios and Cost Effectiveness Ratios. The algorithm relies heavily on parsing TEXT datasets and then using specific indexing techniques to produce Math Results that are formatted correctly. These techniques require that Indicator.Labels and Indicator.URL TEXT files closely follow the conventions shown in the Examples, or the parsing and indexing will fail. If an example shows a Label with 3 characters, then that Label has to have exactly 3 characters. Certain Labels must have specific characters, such as “TR” for Total Risk, or “RF” for Physical Risk. If a column of data has an underscore (“_”) then that dataset must use the exact same convention. The algorithm supports additional rows and columns in the TEXT datasets, but only if they follow the logic used by the Indicator. For example, up to 7 exceedance events can be defined. The number of locations, assets, and project alternatives, can be lower or higher than the numbers used in the examples. The algorithm requires that Indicators have the exact indexes demonstrated in the examples. The next section demonstrates the Indicator and Score properties that need to be filled out to complete the uncertain EP distribution, BCA, and CEA analyses. All Indicators must use only MathType = algorithm1 and MathSubType = subalgorithm9. Example 1. Hurricanes, St. 
Lucia, Caribbean URLs https://www.devtreks.org/greentreks/preview/carbon/output/CTAP Ex- 1 - Hurricane DRR/2141223461/none https://www.devtreks.org/greentreks/preview/carbon/resourcepack/DRRs, DRIs, and RMIs/1539/none https://www.devtreks.org/greentreks/preview/carbon/resourcepack/SubAlgo 09 DRR 1A/1537/none http://localhost:5000/greentreks/preview/carbon/output/CTAP Example 1 - Hurricane DRR/2141223467/none Hochrainer et al. (2011) use four case studies demonstrating how to use Disaster Risk Reduction distributions to communicate the costs and benefits of reasonable disaster prevention interventions to decision makers. The case studies reinforce the World Bank and UN (2010) recommendation to present a range of alternatives to decision makers that clearly demonstrate where benefits exceed costs. This example uses the St. Lucia, Caribbean case study, which demonstrates how to use this technique with home improvements that reduce damages from hurricanes. The authors provide caveats about the techniques they employed. They note that only one case study, Istanbul in Turkey, accounted for damages from lives lost, and that indirect losses due to disruptions from lost jobs, damaged roads, and personal lives were left out of the analyses. Even so, many of their benefit-cost ratios are greater than one. The following Score and Indicators show the properties for the completed analysis of hurricane damage prevention alternatives. Although all of these Indicators use normal distributions, any distribution supported by the calculator can be used. The references demonstrate that most Indicators are not normally distributed. For simplicity, data for the second location, 2, has been calculated as 10% greater than the first location, 1. The following Scores and Indicators can be placed in either base Input or Output elements, but can't be split up into both.
As with all Resource Stock Calculators, Operating and Capital Budgets can be used to combine Input and Output calculations (see the Resource Stock Analysis tutorial). The logic for using Output base elements is that Outputs are aggregated into Outcomes and those elements are usually associated with "disaster loss and damage metrics", such as Indicators 4, 6, and 7 (see UNISDR, 2014). The logic for using Input base elements applies when the assessments are primarily being used to measure "contingent liabilities", or costs. Very large Math Results may be too large to store in both the stylesheet and the database table field. Large datasets should use the Math Result to reference a Resource base element URL that will be used to store a TEXT csv file holding the Math Results. Score. Starting Properties The following initial Score properties are used in this example: Confidence Interval: 90 Random Seed: 7 Iterations: 10,000 Description: This example demonstrates …. Indicator 1. Hazard Distribution The current version generates confidence intervals for this Indicator, but does not use the results in subsequent calculations. More advanced DRR algorithms will use these results in subsequent calculations. Selected properties include: Distribution Type: none (the URL TEXT holds the distribution) Math Type and Math Sub Type: algorithm1, subalgorithm9 QTMUnit, QTUnit, QTD1Unit, QTD2Unit: manually enter (most units that are entered manually will not be overwritten) The following Math Expression is only used to identify the columns of TEXT data to include in the calculation. Math Expression = I1.Q1.distribtype + I1.Q2.100year + I1.Q3.50year + I1.Q4.25year + I1.Q5.10year + I1.Q6.5year Indicator.URL TEXT: Although the case study incorporates location directly in the type of asset being damaged, the following convention allows more general use of locational data, which will prove useful when this data is used by GIS applications. Location data stored in all TEXT datasets must be integers.
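The Score's Monte Carlo settings (90% confidence interval, random seed 7, 10,000 iterations) can be illustrated with a minimal stand-in for subalgorithm 1's simulation step (the function and the wind-speed numbers are hypothetical, and percentile-based intervals are an assumption about how QTL and QTU are derived):

```python
import random

def simulate(mean, sd, iterations=10_000, ci=90, seed=7):
    """Draw from a normal(mean, sd) distribution and report QTM (the sample
    mean) with QTL/QTU (the lower/upper ci% interval) from the sampled
    percentiles. A sketch, not the DevTreks implementation."""
    rng = random.Random(seed)
    draws = sorted(rng.gauss(mean, sd) for _ in range(iterations))
    tail = (100 - ci) / 200          # e.g. 0.05 for a 90% interval
    qtm = sum(draws) / iterations
    qtl = draws[int(tail * iterations)]
    qtu = draws[int((1 - tail) * iterations) - 1]
    return qtm, qtl, qtu

# Hypothetical wind-speed distribution for one location (sd = 10% of mean,
# matching the fictitious convention used throughout these examples).
qtm, qtl, qtu = simulate(mean=120.0, sd=12.0)
print(round(qtm, 1), round(qtl, 1), round(qtu, 1))
```

Fixing the random seed is what makes repeated runs of a calculator reproduce the same confidence intervals.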
Although, for appearances' sake, the distribtype columns throughout this algorithm include the distribution (i.e. normal) in the QTD1 and QTD2 properties, only the QT distribution is actually used. The Monte Carlo algorithm is run using these properties and the distribution data stored in the TEXT file. The results of the simulations are then used to fill in the rest of the following properties. Q1 to Q5 = filled in automatically with the mean and unit for each location QT = the average of each location's mean wind speed QTD1 and QTD2 = average of distributions in the TEXT files QTM = average of the mean wind speeds calculated for each location QTL and QTU = lower and upper x% confidence intervals The following images show the resultant calculations. Indicator 2. Exposure Distribution This Indicator uses asset value distributions to calculate confidence intervals for asset values that might be damaged from the hazards documented in Indicator 1. The asset distributions are organized by Categorical Indexes, such as ResidentialType1, CommercialType2, and Public, which are then added to a parent Locational Index, such as All Residential, which in turn is added to a final Total Risk Index. Although these aggregators are not real "Indexes", that term is used in order to stay consistent with the subalgorithms introduced in Appendix B. Selected properties include: Distribution Type: none (the Indicator.URL dataset holds the distribution types). Math Type and Math Sub Type: algorithm1, subalgorithm9 Units: automatically filled in (but most units that are entered manually will not be overwritten) The following Math Expression is only used to identify the columns of TEXT data to include in the calculation.
Math Expression = I2.Q1.distribtype + I2.Q2.QT + I2.Q3.QTUnit + I2.Q4.QTD1 + I2.Q5.QTD1Unit + I2.Q6.QTD2 + I2.Q7.QTD2Unit + I2.Q8.normalization + I2.Q9.weight + I2.Q10.quantity Indicator.URL TEXT: The "RF" labeling convention will be used to distinguish physical assets from other types of assets (i.e. SF for social fragility and SR for social resiliency). The "RF" row is a location aggregation, or Locational Index. The "RF1" row is an asset category, or Categorical Index. The "TR" row is the final Total Risk for all aggregated Indicators for each separate location. The SubIndicator asset rows have 1 additional letter suffix, such as "RF1A". The "TR" and "RF" Labels are required. The remaining Labels can be changed, but the number of characters used in these Labels cannot be changed. The algorithm uses the number of characters in the Label to determine which row of data is a Locational Index (2), a Categorical Index (3), and a SubIndicator, or asset type (4). Assets commonly have values that vary by location. The location column must contain integers. The distribution column specifies the probability density distribution to use in the Monte Carlo simulation for this Indicator. The normalization and weight columns are used for consistency with Appendix B, subalgorithm 10's use of non-monetary Indicators to calculate unit-less values. We recommend using this algorithm to calculate monetary damages, and subalgorithm10 to calculate indirect, non-monetary, indicators. For this algorithm, the normalization value has been set to "none" and the weight to 1. The quantity column is used to identify the number of assets, or SubIndicators. The total value of exposed assets will be calculated by multiplying the price of the asset (QTM, QTL, and QTU) by the quantity of the asset. To demonstrate the resultant calculations, the WoodPMin and MasonPMax asset quantities have been set to 10.
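The Label character-count and quantity conventions above can be sketched as follows (a minimal illustration; the functions and rows are hypothetical):

```python
def label_level(label):
    """Aggregation level from the Label's character count, per the convention
    above: 2 = Locational Index, 3 = Categorical Index, 4 = SubIndicator."""
    if label == "TR":
        return "Total Risk Index"
    return {2: "Locational Index", 3: "Categorical Index", 4: "SubIndicator"}[len(label)]

def aggregate_exposure(rows):
    """Roll SubIndicator total values (price * quantity) up into their parent
    Categorical and Locational Indexes and the Total Risk row, per location.
    Each row: (label, location, price, quantity)."""
    totals = {}
    for label, loc, price, qty in rows:
        value = price * qty              # total exposed value of the asset
        for parent in (label[:3], label[:2], "TR"):
            totals[(parent, loc)] = totals.get((parent, loc), 0) + value
    return totals

# Hypothetical rows echoing Example 1's 100,000 house values with quantity 10.
rows = [("RF1A", 1, 100_000, 10), ("RF1B", 1, 80_000, 1)]
totals = aggregate_exposure(rows)
print(label_level("RF1A"), totals[("RF1", 1)], totals[("TR", 1)])
```

This is also why a Label with the wrong number of characters breaks the parsing: the character count is the only signal of a row's aggregation level.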
In the case study, both benefits and costs are defined based on a single house. This algorithm will aggregate the children SubIndicator asset values into each parent category, location, and total risk row. That means that subsequent benefit calculations will already be based on correctly aggregated damages. However, Indicator 5 shows that initial costs have been defined on a per unit asset, or house, basis by setting the "isprojectcost" column to "no". Indicators 6 and 7's costs are calculated by multiplying Indicator 5's unit costs by Indicator 4's asset quantity. Example 2 demonstrates that an alternative way to aggregate costs is to define costs on a per project basis. In the latter case, Indicators 6 and 7's costs are calculated by using Indicator 5's costs directly; no multiplication occurs. The Monte Carlo algorithm, subalgorithm1, is run using these properties and the distribution data stored in the TEXT file. The results of the simulations are then used to fill in the rest of the following properties. The Math Expression is only used to identify the columns of TEXT data to include in the calculation. Q1 to Q5 = filled in automatically with the total asset value for the first five locations QT = the average of the first five locations QTD1 and QTD2 = none, the TEXT file defines the distributions QTM = total value of all indicators QTL and QTU = lower and upper x% confidence intervals The following images show the resultant calculations. The reason for the million+ results is that the $100,000 house values are being multiplied by the quantity column, 10. Indicator 3. Vulnerability Distribution This Indicator uses a normal distribution of asset percent damage for physical assets to calculate uncertain damage loss percentages for each asset in each location and in each project alternative. The asset being damaged can be any resource stock, including human capital stocks. As mentioned in Section C.
DRR Algorithms, custom distributions, such as those used to model seismic damages to buildings, are not supported by this algorithm (but some of the existing distributions, such as gamma, are fairly flexible). Selected properties include: Distribution Type: none (URL dataset is used) Math Type and Math Sub Type: algorithm1, subalgorithm9 Units: automatically filled in (but most units that are entered manually will not be overwritten) The following Math Expression is only used to identify the columns of TEXT data to include in the calculation. Math Expression: I3.Q1.distribtype + I3.Q2.5year + I3.Q3.10year + I3.Q4.25year + I3.Q5.50year + I3.Q6.100year Indicator.URL TEXT: The Labels in the following dataset use the same convention as Indicator 2, but the Categorical Indexes (RF1, RF2), Locational Indexes (RF), and Total Risk Index (TR) are placeholders that are used to display the final calculated results. The Categorical Indexes are calculated as summations of their children SubIndicators. The Locational Indexes are calculated as summations of the Categorical Indexes. The Total Risk Index is a summation of the Locational Indexes. In addition, letter suffixes are used to identify the project alternatives listed in Indicator 5. The distributions have integer prefixes identifying the location. … … The Monte Carlo algorithm is run using these properties and the distribution data stored in the TEXT file. The results of the simulations are then used to fill in the rest of the following properties. Q1 to Q5 = filled in automatically with total percent damages and units for each event summed for all baseline TR rows across all locations QT, QTD1, and QTD2 = none QTM = total percent damages summed for all baseline TR rows across all locations QTL and QTU = lower and upper x% confidence intervals The following images show the resultant calculations. The Math Result table displays confidence intervals for damage loss percentages. Indicator 4.
Loss Exceedance Probability (EP) Distribution This Indicator uses the Math Results from Indicators 2 and 3 to generate confidence intervals for average annual damages and total damages by exceedance probability. The calculation uses the same Categorical Indexes and Locational Indexes as Indicator 3 for aggregating damages. Appendix C demonstrates how to use this Indicator to carry out simple trend analysis. Selected properties include: Distribution Type: none (URL datasets are used) Math Type and Math Sub Type: algorithm1, subalgorithm9 Units: automatically filled in (but most units that are entered manually will not be overwritten) The following Math Expression is only used to identify the columns of TEXT data to include in the calculation. Math Expression = I4.Q1.distribtype + I4.Q2.5year + I4.Q3.10year + I4.Q4.25year + I4.Q5.50year + I4.Q6.100year Optional Indicator.URL TEXT: Appendix C, Example 1, demonstrates how to use this property to conduct trend analysis. Q1 to Q5 = filled in automatically with total damages and units for each event summed for all baseline TR rows across all locations QT, QTD1, and QTD2 = the distribution's mean and sd QTM = average annual damages summed for all baseline TR rows across all locations QTL and QTU = lower and upper x% confidence intervals The following images show the resultant calculations. The Math Result totals column displays confidence intervals for average annual damages. The Math Result exceedance probability columns display confidence intervals for total damages. Example 2 demonstrates using up to 7 exceedance probabilities. The last column of data, quantity, can be used to verify that the naming conventions used in Indicators 2 and 3 are identical. A zero quantity indicates differences in either labels, locations, or asset names. Indicator 5. Costs of Alternatives Distribution This Indicator uses baseline and project alternative cost distributions to calculate confidence intervals for discounted costs.
The full cost estimates for each project alternative should be completed and stored using standard tools in DevTreks (see the CTA, LCA or NPV tutorials). Selected properties include: Distribution Type: none, the TEXT file specifies the distributions Math Type and Math Sub Type: algorithm1, subalgorithm9 Units: automatically filled in (but most units that are entered manually will not be overwritten) The following Math Expression is only used to identify the columns of TEXT data to include in the calculation. Math Expression = I5.Q1.installcost + I5.Q2.installdistrib + I5.Q3.omcost + I5.Q4.omdistrib + I5.Q5.isprojectcost Indicator.URL TEXT: The following TEXT dataset displays the data conventions used by this algorithm. The baseline, or Current Practice, does not include suffixes, such as "_A", identifying the project alternative. Labels contain between 4 and 6 characters. Units concatenate the location with the distribution property. Both installation and operating/maintenance costs must be defined using a probability distribution. The "isprojectcost" column must specify whether the costs are project costs (yes) or unit costs (no). The latter costs will be calculated in Indicators 6 and 7 by multiplying them by Indicator 4's asset quantity column. Sensitivity Analysis Data TEXT (stored as csv text data): The following dataset has been referenced as the second URL in the Indicator.URL property. The discount rates must be decimals while the life spans must be doubles. The Math Result property displays the sensitivity analysis completed from this data. At least 1 rate and 1 life must be included in this TEXT file. The Monte Carlo algorithm is run using these properties and the distribution data stored in the TEXT file. The results of the simulations are then used to fill in the rest of the following properties. Qs: the Indicator properties are relatively meaningless; the MathResult has the important data. The following images show the resultant calculations.
The MathResult table displays QTM as the average annual cost and QTL and QTU as the lower and upper confidence intervals. The results include a sensitivity analysis conducted using the 2nd URL's discount rates and life spans. Indicator 6. Benefit Cost Analysis This Indicator uses the Math Results from Indicator 4 to define discounted benefits and the Math Results from Indicator 5 to define discounted costs. Benefits are defined as the changes in damages for each project alternative in comparison to the baseline. Costs are defined as the changes in costs for each project alternative in comparison to the baseline. Calculations generate confidence intervals for benefit cost ratios for each project alternative in each location. Each baseline finds related alternatives using the label, assettype, and loc_confid columns of data in the URL TEXT or Indicator4.MathResults. The only difference allowed between both sets of strings is the alternatives' use of suffixes, such as "_A", to distinguish each alternative. This is the first error to check when calculations have missing rows of data. The "assettype" should use simple strings, without spaces or unusual characters, because data errors are easier to spot. Selected properties include: Distribution Type: none Math Type and Math Sub Type: algorithm1, subalgorithm9 Units: automatically filled in (but most units that are entered manually will not be overwritten) Math Expression = I6.Q1.distribtype + I6.Q2.100year + I6.Q3.50year + I6.Q4.25year + I6.Q5.10year + I6.Q6.5year Indicator.URL TEXT: This optional property can be used to reference a TEXT file containing a custom distribution. This TEXT file must have the exact same format as the Indicator4.MathResult. This release has only tested small datasets: 2 locations with a small number of Indicators, Categorical Indexes, and Locational Indexes. Qs = All of the Qs automatically document the project alternative with the highest benefit cost ratio.
The Unit properties can be used to find the corresponding rows of full calculations in the Math Results. The following results list the benefit cost analysis for each project alternative in the columns and the confidence intervals for each asset/rate/life sensitivity analysis in the rows. The csv file can be imported into other software for further analysis or to develop multimedia support. The reason that the QTM, QTL, and QTU ratios are equal is explained using the second image. The following image from localhost shows equal QTM, QTL, and QTU ratios. Although the cloud shows unequal ratios, the explanation offered in the next paragraph still holds. The following localhost image, derived using Version 2.0.2, demonstrates that the calculations for the highest BCR derive from the summation of the damages and costs for each location (not from a simple summation of each location's BCR). The information conveyed to decision makers must include the actual costs and benefits used in the ratios. The following image derives from an earlier release. It demonstrates that although the ratios for QTM, QTL, and QTU may be equal or close, the actual costs and benefits used in the ratios differ appropriately. Indicator 7. Cost Effectiveness Analysis This Indicator is calculated in a similar manner to Indicator 6, but instead of calculating changes in direct monetary damages as benefits, it calculates changes in indirect non-monetary indicators as benefits. The changes in costs are divided by the changes in non-monetary benefits to develop Cost Effectiveness Ratios. Unlike Indicator 6, this reference recommends obtaining the confidence intervals for these damage reductions elsewhere and then referencing them using the Indicator.URL property. If no TEXT file is found in the Indicator.URL, it will use the same data as Indicator 6, i.e. the MathResults from Indicator 4.
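The cost effectiveness comparison can be sketched as follows (a minimal illustration with hypothetical functions and numbers; since a lower ratio is better, the alternative with the lowest CER is the one the Qs document):

```python
def cost_effectiveness_ratio(cost_base, cost_alt, damage_base, damage_alt):
    """CER = change in costs / change in non-monetary damages (the damage
    reduction, or 'effect'); unlike a BCR, a lower ratio is better."""
    return (cost_alt - cost_base) / (damage_base - damage_alt)

# Hypothetical alternatives: (discounted cost, remaining non-monetary damage).
alternatives = {"A": (13_175, 0.40), "B": (18_000, 0.30)}
baseline_cost, baseline_damage = 0.0, 0.75
best = min(alternatives, key=lambda k: cost_effectiveness_ratio(
    baseline_cost, alternatives[k][0], baseline_damage, alternatives[k][1]))
print(best)
```

Because the effects are undiscounted and non-monetary, the ranking can differ from Indicator 6's BCR ranking, as the Example 1 images show.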
When the Math Results from Indicator 4, Appendix B, subalgorithm 10, are used as the TEXT file, Indicator 7 measures the cost effectiveness of alternative interventions in reducing the indirect damages from disasters, a complement to Indicator 6's measurement of direct damages. The case studies used in Appendix B document that, for some types of disasters, the magnitude of the indirect damages can be as high as the direct damages. Avoid using damages with numbers that are too small to evaluate meaningfully. For example, human mortality amounts, expressed as a proportion of population, can result in values such as 0.002, an amount too small to evaluate given the 3 digit precision shown in this example. Instead use the actual number of deaths (0.002 of 1,000,000 = 2,000 deaths per 1 million). The normalization techniques used in Appendix B also reinforce the need for careful data management. Avoid using SubIndicator values that use significantly different scales, or the 4 digit precision of the normalized values will not pick up the full differences in confidence intervals. Selected properties include: Distribution Type: none Math Type and Math Sub Type: algorithm1, subalgorithm9 Units: automatically filled in (but most units that are entered manually will not be overwritten) Math Expression = I7.Q1.distribtype + I7.Q2.100year + I7.Q3.50year + I7.Q4.25year + I7.Q5.10year + I7.Q6.5year Indicator.URL TEXT: This optional property can be used to reference a TEXT file that measures indirect and intangible indicators. This TEXT file must have the exact same format as the Indicator4.MathResult, except instead of comparing direct damages for multiple "assettypes", it compares indirect damages for multiple nonmonetary damage indicators. The Math Results for Indicator 4, Appendix B, subalgorithm 10, can be used for this file or it can be manually built.
If this file is not found, Indicator4.MathResults will be used to conduct the analysis. Qs = All of the Qs automatically document the project alternative with the lowest cost effectiveness ratio. The Unit properties can be used to find the corresponding rows of full calculations in the Math Results. The following images show that, because this Indicator assumes benefits are nonmonetary and does not discount them, a different project alternative is selected than in Indicator 6. These results list the cost effectiveness analysis for each project alternative in the columns and the confidence intervals for each indicator/rate/life sensitivity analysis in the rows. Unlike a BCA, where a higher benefit cost ratio is better, in a CEA, a lower cost effectiveness ratio is better. The use of normalized damage data with regular monetary costs in a CEA results in unusual, but accurate, ratios. Analysts may want to scale the ratios (min = 0, max = 100) when communicating the results to decision makers. The csv file can be imported into other software for further analysis or to develop multimedia support. As explained for Indicator 6, although the QTM, QTL, and QTU ratios can be equal or have very small differences, the underlying cost and benefit information has correct confidence intervals that should be communicated to decision makers. Scores and Indicators 8 to 15. Decision Support Systems Appendix C introduces subalgorithms that demonstrate using Scores and Indicators 8 to 15 to produce wider decision support. The following image from Version 2.0.6 demonstrates that M&E calculators treat the Score as the 0 index in collections of M&E Indicators. M&E Scores include a Name, Label, and Date while Stock Scores do not have these properties. Example 2.
Floods, Semarang, Indonesia URLs https://www.devtreks.org/greentreks/preview/carbon/resourcepack/SubAlgo 09 DRR 2/1542/none https://www.devtreks.org/greentreks/preview/carbon/output/CTAP Example 5 - Floods DRR/2141223469/none http://localhost:5000/greentreks/preview/carbon/output/CTAP Example 6 - Floods DRR/2141223476/none This example uses Mechler's (2005) case study of a benefit cost analysis completed for floods in the subject central Java city. The damages arise from flooding caused by the neighboring Garang River combined with tidal inundation caused by the proximity of the ocean. This example only analyzes the flood damages. This Example mainly focuses on differences from Example 1. As with Example 1, a lot of the data used in this analysis had to be extrapolated from summary tables presented in the case study. That makes the absolute numbers in the analysis less important than the techniques employed to generate the numbers. All of the initial data uses monetary units of million Rupiah. Indicator 1. Hazard Distribution The following image shows the initial data used with this Indicator. The values are flood depths for the respective event periods. Other than using flood events, this Indicator has not been changed from Example 1 and requires no additional documentation. For testing purposes, the following image displays the MathResults using 7 exceedance probability events. Indicator 2. Exposure Distribution The following image shows the initial data used with this Indicator. The asset unit values, in million Rupiah, come from Table 35 in Mechler's case study. The quantity is extrapolated from Table 40. For example, the asset, Business-Structure, must have a 95 billion Rupiah total asset value when the asset prices and quantities for both locations are multiplied and summed (i.e. 47.5 billion per location). As Indicator 6 will demonstrate, this example had to stray a good deal from the case study.
Asset values are defined for the same assets as the case study, including Structures (-S) and Indoor Movables (-M). In addition, Table 40 shows that the case study uses Indicator 4 to quantify additional losses for Public Infrastructure and Business Suspension. In contrast, this algorithm treats these losses just like any other loss and defines them using Indicators 2 and 3. Unlike Example 1, where costs have been defined on a per unit asset basis, this Example defines costs on a per unit project basis. Although Indicator 2’s total exposed asset value will be calculated by multiplying asset value by asset quantity, the final costs used in Indicators 6 and 7 will not be multiplied by the asset quantity. The following image displays the Math Results for this Indicator. Notice that each Indicator’s QTM, QTL, and QTU have been calculated as total asset values by multiplying their initial PRA-calculated prices by their quantities. Note that the RF1A row’s 47.5 billion asset value for location 1 corresponds to 50% of the corresponding 95 billion asset value displayed in Table 40 in the case study. Indicator 3. Vulnerability Distribution The following image shows the initial data used with this Indicator. The damage loss percent is extrapolated from Table 40 in the case study. For example, the economic loss for the Residential asset with the 10 year event is 6 billion Rupiah. That can be derived by multiplying the 95 billion Rupiah total asset value by the 6.32% damage loss percent in this table. The latter value is calculated by dividing the final 6 billion loss by the 95 billion total asset value. The Public and Business asset row is included to demonstrate the difference between how this subalgorithm treats “indirect damages” and the case study’s approach. This example also differs from the case study by treating Mechler’s 3 “options” for reducing flood damages as mutually exclusive project alternatives. Only the project alternative with the highest BCR will be chosen.
Mechler only gives damage reduction estimates, at 100%, for the selected project. For illustrative purposes, Alternative A will reduce damages by 50%, Alternative B by 75%, and Alternative C by 100%. The following image displays the first few Math Result rows for this Indicator. For testing purposes, the following image displays some of the MathResults using 7 exceedance probability events. Indicator 4. Loss EP Distribution The following image displays some of the Math Results for this Indicator. For testing purposes, the following image displays some of the MathResults using 7 exceedance probability events. Indicator 5. Cost Distribution Alternative C’s project costs were estimated at 320 billion Rupiah installation and 2 billion Rupiah O&M. They were divided by 2 to account for the 2 locations being studied. Alternatives A and B were given arbitrary 75% and 50% reductions in respective costs. The following image shows the initial costs. The final column of data confirms that all costs are project, rather than unit, costs. That means that the costs calculated in Indicators 6 and 7 will not be multiplied by the Indicator4.MathResult’s quantity of asset column. The following image shows the initial data used with this Indicator’s sensitivity analysis. As Example 1 shows, project life and discount rate are often used to conduct sensitivity analysis. The case study’s use of subsidence-related scenarios in their sensitivity analysis can still be done by custom manipulation of Indicator 6’s results. The following image displays the Math Results, confirming that the IWM alternative’s discounted initial installation and O&M costs for a .05 rate and 50 year life are in the 170,000 million Rupiah range for 1 location. This subalgorithm discounts Installation Costs by 1 year. O&M Costs are calculated as a discounted uniform present value cost. The final 170,000 million is the summation of both discounted costs.
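The two discounting formulas just described can be checked with a short calculation. The Python sketch below uses Alternative C’s per-location values from the text (160,000 million Rupiah installation, 1,000 million Rupiah annual O&M, a .05 rate, and a 50 year life) and lands in the stated 170,000 million range:

```python
def discounted_project_cost(install, om, rate, life):
    """Installation discounted by 1 year, plus O&M as a uniform present value."""
    pv_install = install / (1.0 + rate)                  # 1-year discount
    upv_factor = (1.0 - (1.0 + rate) ** -life) / rate    # uniform series PV factor
    return pv_install + om * upv_factor

# Alternative C, one location, in million Rupiah
total = discounted_project_cost(install=160_000, om=1_000, rate=0.05, life=50)
print(round(total))  # ≈ 170,637: the "170,000 million range" noted in the text
```

The final figure is the sum of the two discounted components, matching the summation described above.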
Note that Indicator 6’s Benefit calculations are calculated as a discounted uniform present value of the average annual loss for each SubIndicator. Indicator 6. Benefit Cost Analysis The following images show that the results for the project alternative with the highest BCR, aggregated across all locations, have been used to fill in the Indicator’s properties. Decision makers should be given the underlying cost and benefit data used in the ratios. The following image displays some of the initial rows from Math Results for the first location in this Indicator. The last several rows demonstrate positive BCRs even for some individual asset categories. For testing purposes, the following image displays some of the MathResults using 7 exceedance probability events. The following image displays some of the final rows from Math Results for the second location in this Indicator. The following image demonstrates that the calculation for the highest QTM BCR, 9.5829 on the 3rd line, is not a summation of each location’s BCR, but rather derives from the summation of the damages and costs for each location. Although the BCRs don’t use the case study’s Net Present Value techniques shown in Table 45, the use of discounted present values and uniform present values in the calculations results in equivalent ratios. The case study’s final BCR value of 2.5 reflects different assumptions about project alternatives, costs, and the additional inundation data included in their study. Indicator 7. Cost Effectiveness Analysis The following images show that, because this Indicator assumes benefits are nonmonetary and does not discount them, a different project alternative is selected than with Indicator 6. The case study does not present any non-monetary data that can be used to calculate indirect damages, so these results don’t supply additional decision support. Scores. Decision Support Systems Wider decision support is completed at the discretion of the analyst.
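Indicator 6’s aggregation rule (sum each location’s damages avoided and costs first, then take the ratio) can be sketched with hypothetical benefit and cost totals; note how it differs from summing per-location BCRs:

```python
def aggregate_bcr(locations):
    """BCR from summed discounted benefits (avoided damages) and summed costs."""
    total_benefits = sum(b for b, _ in locations)
    total_costs = sum(c for _, c in locations)
    return total_benefits / total_costs

# Hypothetical (benefit, cost) pairs for two locations
locs = [(900.0, 100.0), (300.0, 50.0)]
print(aggregate_bcr(locs))          # 1200 / 150 = 8.0
print(sum(b / c for b, c in locs))  # 9.0 + 6.0 = 15.0, not a meaningful BCR
```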
The following image from Version 2.0.6 shows how M&E calculators treat the Scores. Appendix B. Risk Management Indicator (RM) Algorithms A. Introduction This appendix demonstrates how to complete CTAPs using comprehensive sets of disaster-related indicators that are designed to reduce the risk of excess damages occurring from disasters. B. Risk Management Indicator Systems Introduction Khazai et al (2015) demonstrate the use of indicator systems and performance target systems to assess urban risk from natural resource disasters. The authors demonstrate using indicator systems to generate the 3 specific Indexes displayed in the following image. Additional examples of disaster risk reduction indicator systems can be found in the IDB (2007), World Bank (2011), European Commission (2014), United Nations UNISDR (2014, 2015), and United Nations CAPNET (2015) references. Although the latter indicator systems are designed for use at national and international scale, the references discuss the importance of developing similar indicator systems for use at local scales. Section E, CTA-Prevention Communication, discusses the importance of linking local evidence gathered using these indicator systems to global actors tasked with reducing the planet’s risks from climate change. Mustafa et al (2008) use the term “shared learning dialogues” (SLDs) for these types of indicator systems and point out that the collaborative process needed to complete these systems is often more important than final economic estimates of losses. The European Commission (2014) points out that indicator systems are particularly important when disaster loss data is simply not available for making reliable economic estimates of losses (13*). Some social scientists frown upon the use of indicator systems for scientific applications because the indicators selected can appear to be somewhat arbitrary, rather than derived from a scientific theory or historical scientific evidence.
For example, Rufat et al (2015) find that the temporal context for the indicators may not be well known (i.e. at what damage stage are people most vulnerable?), the most influential drivers of final indexes may not be well understood (i.e. what proof exists that a particular set of indicators measure vulnerability best?), and the interactions among indicators may never have been adequately tested (i.e. what is the basis for weighting indicators in a particular way?). Most of the references used in this Appendix demonstrate that, if selected properly and used carefully, indicators can aid decision-making for important problems, such as climate change-induced disasters. UN CAPNET (2015) provides the following guidance about the use of indicator systems: “Due to their limitations, it is important that decision-making is not limited to following the direct outcome of indicator assessments but that a sensible holistic view is taken. Selecting indicators should be done in such a way that they represent the root causes of a particular issue. Otherwise there is a danger of ‘treating the symptoms rather than the sickness’”. Appendixes C and D introduce algorithms and examples that offer potential ways to take this “sensible holistic view”. C. Risk Management Indicator (RM) Algorithms A representative RM, the Urban Disaster Risk Index, is derived using the following general steps. The case studies presented in the Khazai et al (2015) reference, along with examples presented in the Carreno (2005, 2007, and 2012) and Marulanda (2013) references, demonstrate these steps. 1. Stakeholders choose indicators from several categories, such as Physical Risk, Social Fragility, and Lack of Resiliency. Values for the Physical Risk indicators can be taken directly from the Disaster Risk Reduction algorithms demonstrated in Appendix A and are equivalent to the direct damages from disasters.
The Social Fragility and Lack of Resilience Indicators are selected so that they can be used to calculate “aggravating coefficient” factors, or the indirect damages from disasters. The case studies demonstrate that additional custom indicators and categories, such as Coping Capacity, can also be used. The studies also demonstrate subdividing each indicator into Sub Indicators. Although the references focus on urban areas, indicators can be developed and used for any area, including rural areas. 2. Indicator values are collected for each location within the project boundaries. Although not always explicit, the more recent references demonstrate that the DRR framework introduced in Appendix A is also used with the DRI. Some cities consider these assessments to be important enough to warrant supplementing existing population surveys, or designing new surveys, to collect data for the Indicator values. The case studies also demonstrate using experts, along with historical data, to develop the minimum and maximum values, and the normalization functions, used to derive the Indicator values. 3. The indicator values are first normalized. Data can be normalized using data transformation functions that include the formulas in Appendix A, Example 1 in the Resource Stock Calculation reference (i.e. z-scores, min-max, logistic, logit, pnorm, and tanh). The case studies demonstrate using more advanced transformation functions (i.e. fuzzy logic using membership functions with bell and sigmoidal distributions). 4. The normalized values are then weighted using numeric multipliers. The case studies derive the weights using a combination of expert opinion and an Analytic Hierarchy Process. 5. The normalized, weighted indicators are aggregated into Categorical Indexes (CIs). The CIs are weighted and aggregated into a final Total Risk Index for each location. The CIs help to define the most important “drivers” of damages. The DRI uses specific mathematical formulas in the aggregation.
The case studies use the Moncho equation, Rt = Rf * (1 + F), in the aggregation. The Rt term is the total risk; the Rf term is calculated from the Physical Indicators; and the (1 + F) term is calculated from the Social Fragility and Lack of Resilience Indicators. The logic is that the indirect effects of disasters can be highly significant. For example, the case studies document that indirect damages from earthquakes can be 75% or more of the direct damages (1.75 = 1 + 0.75). At this stage, each location has Total Risk Indexes (Rt), Physical Factor Indexes (Rf), and Aggravating Coefficients (F). These three indexes for each location, along with their driving Categorical Indexes, are unit-less and can be meaningfully compared among all locations. D. Additional Tools Additional RM assessment tools are available (i.e. EMI’s Integrated Risk Toolkit, Central America CAPRA, UNISDR’s Global Risk Assessment) and should be closely evaluated prior to using this example’s techniques. In addition, most serious disaster risk assessments are done in the context of overall civil disaster planning or integrated natural resource management planning –the assessment is one ingredient in an overall disaster preparation and prevention approach. For example, UNCAPNET points out that drought disaster planning is best carried out in the context of an Integrated Water Resources Management approach. They mention that these types of integrated natural resources management approaches focus on achieving “economic viability, social equity, and environmental sustainability”. In addition, many of the references demonstrate that GIS makes an appropriate platform for communicating the assessment results to decision makers. E. RM Algorithm Examples The following algorithms demonstrate how to use indicator distributions to calculate risk management indexes.
Most algorithms use probability distributions, defined by QT, QTD1, and QTD2 properties stored in Indicator.URL TEXT files, as the initial data for each category of Sub Indicators used in the final indexes. The distributions are used with the Monte Carlo algorithm, subalgorithm1, to generate final mean and confidence interval calculations expressed using a Sub Indicator’s QTM, QTL, and QTU properties. Keep in mind that the source code shows that any other subalgorithm (i.e. copula, neural network, regression) can just as easily be used. * Existing Algorithms. The Indexes demonstrated in the case studies use indicators that are aggregated by using typical weighting and normalization functions. Appendix A, Example 1, in the Resource Stock Calculation reference demonstrates how Life Cycle Assessments use similar mathematical techniques. Those techniques can also be used to produce simplified versions of the Indexes. Use regular Indicators as the Indicator categories and an Indicator’s Q1 to Q10 properties as the Sub Indicators in the algorithms. Use the Score to produce the final Indexes. * Algorithm 1. Disaster Risk Index: algorithm1, subalgorithm10: The algorithm produces Benefit Cost Ratios and Cost Effectiveness Ratios for Disaster Risk Indexes (DRI). This algorithm quantifies both the direct and indirect effects of disasters for specific disaster prevention interventions. The algorithm is designed to be used jointly with Appendix A’s DRR algorithms (subalgorithm9). * Algorithm 2. Risk Management Index: algorithm1, subalgorithm11: The algorithm uses Risk Management Indexes (RMI) to calculate confidence intervals for Cost Effectiveness Ratios. These Indexes measure a community’s ability to manage disasters. Subalgorithm11 calculates Total Risk Indexes (TR rows) using weighted averages; subalgorithm12 does not. * Algorithm 3.
Resiliency Index: algorithm1, subalgorithm12: The algorithm uses Resiliency Indexes (RI) to calculate confidence intervals for Cost Effectiveness Ratios. These Indexes are used to monitor and evaluate a community’s disaster prevention goals. Appendix E, Resource Stock and Monitoring and Evaluation Analyzers, demonstrates the use of additional tools that can be used to provide further support for all Stock and M&E algorithms. These examples further demonstrate that any system of Indicators that can be analyzed using these techniques (i.e. normalization, weighting, aggregation by Indexes, EVM, and M&E) can employ this algorithm. * Additional algorithms: Algorithms based on any other algorithm (i.e. the regression and machine learning subalgorithms introduced in the CTA reference), the fuzzy logic techniques used by the main references, or Bayesian statistics and Machine Learning techniques (see Examples 7 and 8 in the sibling Social Performance Analysis reference), make logical additional algorithms. The references suggest that source code for some of these techniques might be available, potentially making these algorithms straightforward to build. The author’s preference is to start with Bayesian and Machine Learning techniques because statistical libraries may be more readily available (i.e. R, Python, and AML). Customers with an immediate need for additional algorithms can contact DevTreks directly. In effect, these algorithms reinforce the UN GAR (2015) recommendation to expand cost benefit analysis to highlight the tradeoffs implicit in investment decisions. An important aspect of those tradeoffs involves comparing the indirect effects of disasters and the institutions charged with managing disasters. The winners and losers from those decisions can be partially addressed by careful attention to the selection of locations and appropriate Indicators for the locations (i.e. see Example 1’s Slums and Squatters Sub Indicator).
Equity, externalities, and tradeoffs can be addressed further by using Scores with the wider assessment techniques demonstrated in Appendix C. The following image (UNCAPNET, 2015) demonstrates that the general approach used by this algorithm (i.e. indicator, normalization, weighting, aggregation) can be used to build Indexes for many indicator systems. That’s why the algorithms in this Appendix support additional types of Indexes, such as the Drought Vulnerability Index and Multi-Criteria Analysis, demonstrated in Appendix C. Although the case studies use the final Indexes in urban areas, these algorithms can be used for any area, including rural areas. The case studies demonstrate developing custom indicators, such as Coping Capacity Indicators, that are appropriate for specific locations. Mustafa et al (2008) demonstrate the use of a similar Community level Vulnerabilities and Capacities Index that has been developed for rural areas. Make sure to use Internet technology that allows the Indicators to be shared throughout the world with anyone who has access to a web browser and an Internet connection. As with the examples in Appendix A, the goal of these examples is not to carry out exact replications of the referenced case studies. Instead, the examples emphasize developing and using CTA algorithms to support decisions associated with uncertain natural resource damage prevention interventions. The author acknowledges that this could lead to flawed analyses (i.e. that will need to be improved in future releases). Algorithm 1. Subalgorithm 10. Disaster Risk Index This algorithm will be explained using case studies that demonstrate developing Disaster Risk Indexes (DRI). Example 3 uses data from the Carreno 2005 reference for Bogota, Colombia with the methodology used in the Marulanda 2013 reference for Barcelona, Spain.
The Carreno case study does not explicitly address the Hazard, Exposure, Vulnerability, and Loss Exceedance steps of assessments, while the Marulanda case study does. Although this case study analyzes earthquake damages, the same techniques apply to damages associated with climate change. The algorithm is designed to be fully compatible with the algorithms in Appendix A so that they can be used jointly. In fact, the two algorithms share more than 95% of their source code. The case studies clearly demonstrate that measurement of the indirect effects of disasters is as important as the direct effects. All CTAP algorithms assess technologies, or specific disaster interventions, rather than the whole cities demonstrated in the underlying references. Assessments for cities, or any area, can still be carried out by organizing data appropriately (i.e. alternative A = Time Period 2, alternative B = Time Period 3…). The algorithm uses the exact same steps as Appendix A, subalgorithm 9, but adds non-monetary Social Fragility and Lack of Resilience Indicators to the physical asset monetary damages and normalizes, weights, and aggregates Indicators into unit-less metrics, or Total Risk Indexes. The algorithm uses the Index generation process introduced in the previous section to generate uncertain DRI distributions. The algorithm carries out the CTAPs as follows: 1. Physical Risk, Social Fragility, and Lack of Resilience Categorical Indexes and Sub-Indicators are derived the same way as the case studies. That is, they can be chosen from studies carried out using Appendix A’s algorithms, expert opinions, surveys, historical data, or related studies. The Text files organize each Sub Indicator by a parent Categorical Index. The TEXT files contain data for multiple locations (i.e. zip codes, neighborhoods) and each location is processed using the following steps to produce Total Risk Index properties for the location.
Because the Sub Indicator data for each location will be normalized, usually in a range of 0 to 1, the original scale of each Indicator should be comparable; otherwise the normalized data, with 4 digits of precision, does not pick up the full differences in confidence intervals. 2. The Score.Iterations, Score.RandomSeed, and Score.ConfidenceLevel properties will be used with each row of data’s distribution parameters and a Monte Carlo algorithm (subalgorithm1) to generate randomized variables. Descriptive statistics are generated from the vector and the mean and standard deviation from the statistics are used to generate confidence intervals. The 3 parameters used in the confidence interval, QTM, QTL, and QTU, are normalized and weighted. The next 3 steps explain the normalization and weighting. Subalgorithms 9, 10, and 11 share 95% of their source code and employ consistent data formats for all Indicators. 3. The fuzzy logic used by the references is considered too advanced for introductory algorithms. Instead, the Sub Indicator data is described by probability density functions, easily discoverable using this Internet technology, and the following, simpler, normalization functions are used to calculate the desired unit-less, aggregated, non-linear, index values: zscore, minmax, logistic, logit, or tanh. The normalization function is applied to the final vector of combined QTMs, QTLs, and QTUs generated from running the Monte Carlo simulation algorithm (subalgorithm1) for each separate Sub Indicator for each location. The references refer to these numbers as “gross values”. The row/column format of the TEXT data means that each row has a separate normalization value. In practice, only the first legitimate normalization value is used to normalize all vector values. For consistency and appearance’s sake, all of the TEXT normalization values should be filled in as demonstrated in the examples. 4. The normalized Sub Indicator vector is weighted using numeric multipliers.
Those weights can be derived using the case studies’ expert opinions and an Analytic Hierarchy Process, but this algorithm does not automatically calculate the weights. The weights must be specified in Indicator 2’s TEXT files. The resultant QTM, QTL, and QTU properties are added to a corresponding Sub Indicator csv row which is stored in the Indicator.MathResult. 5. The SubIndicator confidence intervals are multiplied by a parent Categorical Index weight. Each SubIndicator’s QTM properties are added to their parent Categorical Index. A csv row for each Categorical Index confidence interval is stored in the Indicator.MathResult. The Categorical Indexes are further weighted by their parent Locational Index and aggregated into the parent Locational Index. The references don’t weight Locational Indexes, so the examples set the weights to 1. 6. This step is the only difference from Appendix A, subalgorithm9. When all of the Locational Indexes for a location have been completed, the Moncho equation, Rt = Rf * (1 + F), is applied to the Locational Indexes. All of the Rf categories are summed into 1 set of QTM, QTL, and QTUs, as are the remaining social categories. The Rf, or physical, categories are distinguished from the F, or social, categories through the use of a required “RF” label. The F term is a simple summation of the social Locational Indexes. Although not tested with actual data, the source code supports more than the 2 social Locational Indexes per location demonstrated in the references. The equation is then applied to the aggregated sums and the resultant confidence interval is added to a parent Total Risk Index. The Total Risk Indexes for each location are added to csv rows in the Indicator.MathResult. 7. The Indicator metadata properties (i.e. Indicator6.QTM = .001, QTL = .001, QTU = .001) suggest flaws with this algorithm’s current normalization techniques. As usual, DevTreks recommends that developers get their hands dirty and develop better algorithms.
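Steps 3 through 6 can be sketched end to end in Python. Everything below is illustrative: the labels, values, and weights are hypothetical, and min-max stands in for whichever normalization function the TEXT file names. The "RF" prefix separates the physical categories from the social categories before the Moncho equation is applied:

```python
def minmax(values):
    """Step 3: normalize a vector of gross values to the 0 to 1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def total_risk(categories, weights):
    """Steps 4 to 6: weight Categorical Indexes, split physical ("RF") from
    social, and apply the Moncho equation Rt = Rf * (1 + F)."""
    rf = sum(v * weights[k] for k, v in categories.items() if k.startswith("RF"))
    f = sum(v * weights[k] for k, v in categories.items() if not k.startswith("RF"))
    return rf, f, rf * (1.0 + f)

# Hypothetical gross Categorical Index values for one location
gross = {"RF1": 3.0, "RF2": 1.0, "FS1": 2.0, "SR1": 2.5}
norm = dict(zip(gross, minmax(list(gross.values()))))
weights = {"RF1": 0.5, "RF2": 0.5, "FS1": 0.5, "SR1": 0.5}
rf, f, rt = total_risk(norm, weights)
print(rf, f, rt)  # 0.5 0.625 0.8125: physical index, aggravating coefficient, total risk
```

With an aggravating coefficient of 0.625, the total risk is 62.5% higher than the physical risk alone, which is the indirect-damage logic the Moncho equation encodes.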
Scores and Decision Support Systems are completed at the discretion of the analyst. Fuller incremental economic analysis can be carried out using this data with the techniques explained in the WHO, 2003 reference. The algorithm requires that Indicators have the exact indexes demonstrated in the examples, although the actual number of Indicators and Indexes can vary. Example 3. Earthquakes, Bogota, Colombia URLs: https://www.devtreks.org/greentreks/preview/carbon/output/CTAP Ex- 2 - Earthquake DRI/2141223462/none https://www.devtreks.org/greentreks/preview/carbon/resourcepack/DRRs, DRIs, and RMIs/1539/none https://www.devtreks.org/greentreks/preview/carbon/resourcepack/SubAlgo 10 DRI 1A/1538/none http://localhost:5000/greentreks/preview/carbon/output/CTAP Example 2 - Earthquake DRI/2141223468/none This example uses data from the Carreno 2005 reference to complete the DRI. The case study presents summary results for full Indicators, but the Khazai et al (2015) reference demonstrates that current best practice is to use Sub Indicators as well. The following example employs a simple technique for using the required Sub Indicators –it simply takes the full Indicator, splits it into 2 Sub Indicators using A and B suffixes, and divides the Indicator value by 2 to obtain the Sub Indicator values. Their weight value is simply 1 / 2 = 0.5. The exceedance probabilities are used the same way as algorithm 9 and are fictitious. The full Indicator is still included in Indicator.URL TEXT datasets, but it becomes the Categorical Index, or “driving factor”, used to aggregate its children Sub Indicators (the Sub Indicators are actually referred to as Indicators in other examples in this reference). As such, it does not have Distribution values (currently), but it does have the same weight value as shown in the reference. For illustrative purposes, only 3 Categorical Indexes are used for each type of Indicator (Physical, Social, and Resilience).
The case study Indicator values are all point estimates calculated using fuzzy logic mathematics. The following datasets turn the point estimates into probability distributions defined by shape, or QTD1, and scale, or QTD2, parameters. In this example, the shape parameter is a mean and the scale parameter is a standard deviation calculated as 10% of the mean. Although only normal distributions are used in these datasets, any distribution supported by the calculator can be used. As mentioned, beneficiaries take these assessments seriously and often will collect this data, thereby allowing realistic distributions to be used. Because the primary difference between this algorithm and Appendix A, subalgorithm9 is step 6 in the previous section, the following images highlight the main differences. Although this algorithm can incorporate monetary damage physical Indicators, its primary advantage is the assessment of the social Indicators. Indicator 2. Exposure Distribution URL: The following image shows the initial TEXT data. The “TR” label is a requirement and is a placeholder for a Total Risk Index for each location. The exact number of characters in the remaining labels is required. The “RF”, “FS”, and “SR” rows are placeholders for Locational Indexes and are required. The characters “RF” in the label are a requirement. The “RF1, RF2, RF3”, “FS1, FS2, FS3”, and “SR1, SR2, SR3” rows are placeholders for Categorical Indexes. The remaining rows are Sub Indicators. Locations must be integers. The “indicator” column is best formatted without spaces or unusual characters (and terms such as “dead people”, although used in the case study, can be worded more tactfully). One Total Risk Index row must be included for each separate location. The normalization column demonstrates that only the SubIndicator final QTM, QTL, and QTU properties will be normalized. Those results will be weighted and aggregated into parent Categorical, Locational, and Total Risk Indexes.
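The dataset construction just described (split each full Indicator into A and B Sub Indicators, halve the point estimate, set QTD1 to the mean and QTD2 to 10% of the mean, and weight each half at 0.5) can be sketched as follows; the indicator name and point estimate are hypothetical:

```python
def split_indicator(name, point_estimate, n_subs=2):
    """Turn a case study point estimate into normal-distribution Sub Indicator rows."""
    value = point_estimate / n_subs
    rows = []
    for suffix in "ABCDEFGH"[:n_subs]:
        rows.append({
            "indicator": name + suffix,
            "distribution": "normal",
            "QTD1": value,         # shape parameter: the mean
            "QTD2": 0.1 * value,   # scale parameter: 10% of the mean
            "weight": 1.0 / n_subs,
        })
    return rows

for row in split_indicator("RF1", 0.8):
    print(row)
```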
Default values for the quantity column are set to 1; this subalgorithm includes the quantity column only to stay consistent with subalgorithm 9. The following partial Math Results demonstrate that the normalization of the Sub Indicator confidence intervals results in significantly condensed distributions. Data must be managed carefully with this algorithm, possibly by transforming it prior to adding the data to the URL. Additional normalization, or data transformation, techniques beyond the existing standard normalization functions will be explored in future releases. Indicator 3. Vulnerability Distribution URL: Indicator 3’s TEXT rows must use the exact same naming conventions as Indicator 2 –labels, indicator names, and locations must be exact matches. The URL demonstrates that the 3 aggregating Indexes result in large datasets which, in turn, generate large Math Results. The aggregating Indexes were deemed a requirement so that all subalgorithms that use these techniques can share the exact same source code. That allows the source code to be debugged and optimized better. When Math Results are too large for storage, display, or performance, store them as csv TEXT files by referencing a URL in the Math Result, as demonstrated for Indicator 7. The following image displays some of the initial data. … … MathResults: The following image shows typical Math Results being stored directly in that property. The following image shows the final rows of the Indicator3.MathResults. Indicator 4. Loss Exceedance Probability (EP) Distribution The following images display representative rows of the Indicator4.MathResults. As with the results for Indicator 2, the use of normalized confidence intervals increases the need for careful data management. … The reason that the final 3 rows have quantities of 2 is that the Total Risk Indexes are calculated from two Indicator aggregations –the Physical and Social categories.
The quantity column is not used by subalgorithms 10 and 11. Indicator 5. Costs Distribution No cost data was available, so the following fictitious data is used in this study. The “isprojectcost” column confirms that this subalgorithm uses project costs only. In this example, Indicators 6 and 7 will analyze one “project alternative” (_A), or time period progress, in reference to the “Current Practice”. Subalgorithms 10 and 11 must use only project costs. Indicator 7. Cost Effectiveness Analysis Cost Effectiveness Ratios are generated in the same manner as subalgorithm9 in Appendix A. The Indicator itself displays the lowest CER for the Total Risk rows, summed across Locations. The following images demonstrate that the use of normalized damage data in the CER divisor results in unusual, but accurate, ratios. MathResults: http://localhost/resources/network_carbon/resourcepack_529/resource_1818/Ind7-Math-Result.csv or https://devtreks1.blob.core.windows.net/resources/network_carbon/resourcepack_1538/resource_8017/Ind7-Math-Result.csv The image also demonstrates the use of URLs to store Math Results. In this Indicator, a csv file was built using 1 single string (i.e. math), uploaded to the Resource element, and the URL was copied from the Download URI button displayed on the Preview panel into this property. Once the calculations are run, the file content is replaced with the full Math Results. Algorithms 9 and 10 subtract alternative damages from base damages, while Algorithm 11 subtracts base performance indicators from alternative performance indicators. The cost effectiveness ratios in the following images result from using regular monetary costs in the numerator with normalized indicator values in the denominator. The reason that QTU is smaller than QTL is that smaller CERs are better than larger CERs. As with Example 1, decision makers should be given the full cost and benefit data used to calculate the ratios.
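The two subtraction conventions, and the mixed-unit ratios they produce (monetary numerator, normalized denominator), can be sketched with hypothetical values:

```python
def cer_damage_reduction(cost, base_damage, alt_damage):
    """Subalgorithms 9 and 10: effectiveness = base damages minus alternative damages."""
    return cost / (base_damage - alt_damage)

def cer_performance_gain(cost, base_perf, alt_perf):
    """Subalgorithm 11: effectiveness = alternative performance minus base performance."""
    return cost / (alt_perf - base_perf)

# Hypothetical: a 5,000 (monetary) cost against normalized values in [0, 1];
# lower ratios are better in both conventions
print(cer_damage_reduction(5_000, base_damage=0.80, alt_damage=0.55))
print(cer_performance_gain(5_000, base_perf=0.40, alt_perf=0.65))
```

Because the denominator is a small normalized number, the ratios look unusually large, which is exactly the behavior the text describes and a reason to rescale them before presenting them.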
The following image shows partial results taken from the Math Result TEXT csv file. Calculations that return large Math Results should always use this technique, which also improves overall Indicator performance. Make sure that children Input or Output Series use unique URLs. The csv file can always be imported into other software and manipulated in a manner that produces multimedia that decision makers will understand. Algorithm 1. SubAlgorithm 11. Risk Management Index (RMI) The RMI uses a simplified version of the Indicator measurement and aggregation techniques used in subalgorithms 9 and 10. Khazai et al (2015) explain that “the RMI is defined as the average of the following four composite indicators [for a specific location]”. The authors also refer to these Indexes as being “public policies”. 1. Risk Identification Index: measures the individual perceptions of risk 2. Risk Reduction Index: measures the existence of prevention and mitigation measures 3. Disaster Management Index: measures response, recovery and governance 4. Financial Protection Index: measures the degree of institutionalization and risk transfer Each of the composite indexes is composed of the 6 Indicators shown in the following image (Khazai et al, 2015). The authors describe these Indicators using the phrase “topics [within the public policy Indexes] to be evaluated”. The authors do not recommend using additional Indicators or subdividing these Indicators into Sub Indicators. The use of standard Indicators allows performance and effectiveness to be compared over time and location. The 4 composite indexes are equivalent to the Categorical Indexes used by subalgorithms 9 and 10. The weighted average of the CIs is added to 1 Locational Index and that Index is then added to the final Total Risk Index, or RMI. Multiple locations can be included in the analysis.
Indicator values are based on 5 performance levels: 1 (low), 2 (incipient), 3 (significant), 4 (outstanding), and 5 (optimal). Each Indicator’s performance levels are qualitatively defined in separate reference tables. The following image (Khazai et al, 2015) displays an example of performance levels for 2 Indicators. This algorithm uses the same probabilistic risk and normalization techniques introduced in subalgorithm9 and subalgorithm10: each Sub Indicator is defined by a probability distribution (QT, QTD1, and QTD2). The probability distributions can be defined using the same methods as the other subalgorithms: survey data, observational data, historical data, or expert opinion. This Internet technology makes the distributions transparent and reusable. The 0 to 5 performance scale is retained, except the values are doubles, with non-integers being acceptable. The current version does not scale the values to the 0 to 100 Index bounds demonstrated by the references. Monte Carlo simulation is used with those distributions to generate Sub Indicator descriptive statistics and cumulative density data. The statistics are used to generate confidence intervals (QTM, QTL, and QTU) for each Sub Indicator. The confidence intervals for a vector of Sub Indicator QTM, QTL, and QTU properties for each location are normalized and multiplied by a weight and then aggregated into the ancestor Categorical, Locational, and Risk Management Indexes. The baseline, or benchmark, RMIs for each location can be compared to each “project alternative” in the same manner as subalgorithm9 and subalgorithm10. The alternatives can be for different time periods, as demonstrated in the case studies for assessing progress, or for actual alternative policy interventions for assessing effectiveness and efficiency. Although the references don’t use economic analysis with the Indexes, the nature of CTAPs means that Cost Effectiveness Analysis (CEA) is carried out for the alternatives.
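The simulation step just described can be sketched in a few lines (a simplified illustration, not the calculator's source; the normal distribution, the 90 percent interval, and the numbers are assumptions for this demo):

```python
# Minimal sketch of the Monte Carlo step: a Sub Indicator's probability
# distribution (here a normal defined by QTD1 = mean and QTD2 = standard
# deviation) is sampled, and the samples yield the QTM, QTL, and QTU
# confidence limits. Distribution choice and numbers are assumptions.
import random
import statistics

def simulate_subindicator(mean, sd, iterations=10000, ci=90, seed=7):
    rng = random.Random(seed)                  # fixed Random Seed, as in the Scores
    samples = sorted(rng.normalvariate(mean, sd) for _ in range(iterations))
    tail = (100 - ci) / 200                    # e.g. 5% in each tail for a 90% CI
    qtl = samples[int(tail * iterations)]
    qtu = samples[int((1 - tail) * iterations) - 1]
    qtm = statistics.mean(samples)
    return qtm, qtl, qtu

qtm, qtl, qtu = simulate_subindicator(mean=3.0, sd=0.45)   # sd = 15% of mean
print(round(qtm, 2), round(qtl, 2), round(qtu, 2))
```

The resulting QTM, QTL, and QTU vectors are what the normalization and weighting steps then aggregate into the ancestor Indexes.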
Benefit Cost Analysis is not carried out because the Indicators are not monetary. The CEA’s logic is that each alternative improvement in “public policies” requires investments (i.e. costs) and is subject to tradeoffs. Cost effectiveness ratios allow uniform comparison among the alternative investments. The following example uses the Indicators displayed in the previous image with fictitious values and weights to generate the final RMI CEA analysis. The algorithm requires that Indicators have the exact indexes demonstrated in the examples, although the actual number of Indicators and Indexes can vary. Example 4. Risk Management Index URLs: https://www.devtreks.org/greentreks/preview/carbon/output/CTAP Ex 3 - Generic RMI/2141223463/none https://www.devtreks.org/greentreks/preview/carbon/resourcepack/SubAlgo 11 RMI 1A/1540/none https://www.devtreks.org/greentreks/preview/carbon/resourcepack/DRRs, DRIs, and RMIs/1539/none http://localhost:5000/greentreks/preview/carbon/output/CTAP Example 3 - Generic RMI/2141223469/none Score. Starting Properties The following initial Score properties are used in this example: Confidence Interval: 90 Random Seed: 7 Iterations: 10,000 Description: This example demonstrates …. Indicator 1. Risk Management Indicator Distribution This Indicator defines probability distributions, normalization functions, and weights, for each of the RMI Indicators. The values are fictitious and selected to facilitate debugging the calculations. Selected properties include: Distribution Type: none (URL datasets are used) Math Type and Math Sub Type: algorithm1, subalgorithm11 Units: automatically filled in (but most units that are entered manually will not be overwritten) The following Math Expression is only used to identify the columns of TEXT data to include in the calculation. Qs = Q1 to Q5 automatically document the RMIs for the first five “Locational Indexes”. The QTs document a summation of the Locational Indexes.
The QTMs document the weighted average of the Location Indexes. Math Expression = I1.Q1.distribtype + I1.Q2.QT + I1.Q3.QTUnit + I1.Q4.QTD1 + I1.Q5.QTD1Unit + I1.Q6.QTD2 + I1.Q7.QTD2Unit + I1.Q8.normalization + I1.Q9.weight + I1.Q10.quantity The following images demonstrate that the IndicatorQs properties are filled in with the RMIs for different locations. The QTMs are the sum of all RMIs for all locations. Indicator.URL TEXT: The following labeling conventions are slight variations of the conventions used by the references. The algorithm uses the number of characters in the Label to determine which row of data is a Risk Management Index (2), which row is a Categorical Index (3), and which row is an Indicator (4). With the exception of the “TR” Risk Management Index, the characters themselves can be changed, but the strings used in the baseline must be able to find their corresponding alternatives (which use the same “_A”, “_B” suffix extensions as the other algorithms). The RMI and CI rows are placeholders and do not have probability distributions: they are used to hold the aggregated Indicator confidence intervals. The column named location must contain integers. The column named distribution specifies the probability density distribution to use in the Monte Carlo simulation for this Indicator. Version 1.9.8 started requiring uniform 4 level hierarchies in these datasets. Previous datasets must be modified as shown in this example. The normalization column must use the same exact options used in other algorithms (i.e. zscore, minmax, logistic, logit, or tanh). The weight column is a double that is used as a multiplier. The quantity column is only used to stay consistent with the data formats employed in other subalgorithms and uses default values set to 1. For simplicity, Alternative A increases the baseline values by 10% and Alternative B adds another 10% to Alternative A’s values. The following image shows some of the resultant calculations.
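The Label length convention described above can be sketched as follows (the labels are hypothetical, and stripping the “_A”/“_B” alternative suffix first is an assumption about how the baseline-to-alternative matching works):

```python
# Sketch of the labeling convention: the number of characters in a row's
# Label determines its level in the uniform 4 level hierarchy. Dropping
# the "_A"/"_B" project alternative suffix first is an assumption for
# this demo; the labels themselves are hypothetical.

def row_level(label):
    base = label.split("_")[0]       # drop a project alternative suffix
    length = len(base)
    if length == 2:
        return "Risk Management Index"
    if length == 3:
        return "Categorical Index"
    if length == 4:
        return "Indicator"
    return "unknown"

print(row_level("TR"))       # Risk Management Index
print(row_level("RI1"))      # Categorical Index
print(row_level("RI11_A"))   # Indicator (alternative A)
```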
Confidence intervals become much more condensed when normalized. Some analysts may prefer not to normalize the vector of Sub Indicator values at all, to rescale the initial Indicator values, to rescale the initial cost values, or to carry out other types of data transformations with the initial data. Indicator 2. Costs of Alternatives Distribution The following image verifies that this Indicator uses the exact same techniques as subalgorithm10, but instead of putting costs on disaster interventions alone, costs are also placed on the changes being made to increase the performance of civil disaster management. These improvements are subject to the same tradeoffs as any other government budget expenditure, and decisions must be based upon the cost effectiveness of any proposed improvement (i.e. at least in rational societies). Because the properties are the same as the other subalgorithms, including the use of sensitivity analysis, no additional documentation is needed. Indicator 3. Cost Effectiveness Analysis. This Indicator is calculated in a similar manner as the CEA Indicator in subalgorithm10. It divides the changes in Indicator 2’s alternative costs by the changes in Indicator 1’s alternative Indicators to calculate cost effectiveness ratios. Selected properties include: Distribution Type: none (defined in the URL TEXT file) Math Type and Math Sub Type: algorithm1, subalgorithm11 Units: automatically filled in (but most units that are entered manually will not be overwritten) The following Math Expression is only used to identify the columns of TEXT data to include in the calculation. Math Expression: I3.Q1.distribtype + I3.Q2.QTM + I3.Q3.QTMUnit + I3.Q4.QTL + I3.Q5.QTLUnit + I3.Q6.QTU + I3.Q7.QTUUnit + I3.Q8.QTUUnit + I3.Q9.quantity Qs = All of the Qs automatically document the project alternative with the lowest cost effectiveness ratio. The QTMUnit property can be used to find the corresponding 3 rows of full calculations in the Math Results.
Math Results = http://localhost/resources/network_carbon/resourcepack_530/resource_1819/Ind3-Math-Result.csv or https://devtreks1.blob.core.windows.net/resources/network_carbon/resourcepack_1540/resource_8020/Ind3-Math-Result.csv Due to the size of the calculated Math Results, a URL is used to store the results. Large Math Results should always be stored using URLs. Make sure that children Input or Output Series use unique URLs. The following images display some of the results. These results list the cost effectiveness analysis for each project alternative in the columns and the confidence intervals for each indicator/rate/life sensitivity analysis in the rows. The csv file can be imported into other software for further analysis or to develop multimedia support. The ratios can be scaled in a manner that decision makers can understand. Scores and Decision Support Systems Wider decision support is completed at the discretion of the analyst. Algorithm 1. SubAlgorithm 12. Resiliency Index (RI) The following image (Khazai et al, 2015) displays the initial Indicators developed for a Disaster Resilience Index completed for Mumbai, India. These indexes are used to monitor and evaluate (M&E) progress in achieving disaster prevention goals. Unlike Indexes such as the Risk Management Index, the RI does not need to contain specific Indicators. Experts assist local stakeholders to select appropriate Indicators for the 5 thematic areas displayed in the image. The authors discuss the importance of allowing local stakeholders to develop, and take ownership of, local systems of indicators. The authors use the following image to demonstrate how each indicator is tracked over time using 5 benchmark and target levels of progress: 1) little or no awareness, 2) awareness of needs, 3) engagement and commitment, 4) policy engagement and solution development, and 5) full integration.
The authors also use Annex 3 to demonstrate that the five levels can be defined for local contexts, using terms such as 1) low, 2) very low, 3) neutral, 4) high, and 5) very high. The only difference between subalgorithm12 and subalgorithm11 is the use of weighted averages in the Total Risk Index calculations. Subalgorithm12 does not use weighted averages. The authors complete the full RI using the following 5 step process: 1) Stakeholder Participation, 2) Stakeholder Consultations, 3) Initial Indicator Development, 4) Validation of the RI in Workshops, and 5) Participatory Evaluation of the RI. They communicate the results of the RI using the concepts they explain with the following image. The UN CAPNET (2015) reference demonstrates the use of similar Drought Vulnerability Index radar graphs to communicate the results of disaster-specific indicator systems. Three sets of tools, demonstrated in the 3 examples that follow, can be used in DevTreks to complete these types of M&E Indexes. Example 1 demonstrates using subalgorithm12 to complete the RI. Example 2 demonstrates using Resource Stock Progress Analyzers to assess progress over time for Example 1’s aggregate Indexes. Example 3 demonstrates using M&E calculators and analyzers for completing the RI. All of the Indicator values are fictitious and selected to facilitate debugging the calculations. Normal distributions are defined for all Indicators with the shape parameter, or QTD1, being the mean and the scale parameter, or QTD2, being a standard deviation set at 15% of the mean. For simplicity, Alternative A increases the baseline values by 10% and Alternative B adds another 10% to Alternative A’s values. The previous Risk Management Index example demonstrated the use of Categorical Indexes (CIs) and Locational Indexes (LIs) to aggregate Indicators. In this example, the CIs correspond to the 5 thematic areas used to organize the Resiliency Indicators (i.e. Critical Services and Infrastructure Resiliency).
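The weighted average distinction between the two subalgorithms can be sketched numerically (hypothetical Categorical Index values and weights; treating subalgorithm12's aggregation as a plain sum with weights of 1 is an assumption for this sketch):

```python
# Sketch contrasting the Total Risk aggregation of the two subalgorithms:
# subalgorithm 11 (RMI) takes a weighted average of the Categorical
# Indexes, while subalgorithm 12 (RI) aggregates them without weighted
# averaging. Index values and weights are hypothetical.

def total_risk_weighted(values, weights):
    """Subalgorithm 11 style: weighted average of Categorical Indexes."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def total_risk_unweighted(values):
    """Subalgorithm 12 style: simple aggregation, weights left at 1."""
    return sum(values)

cis = [2.5, 3.0, 4.0, 3.5]
weights = [1.0, 2.0, 1.0, 1.0]
print(total_risk_weighted(cis, weights))   # (2.5 + 6.0 + 4.0 + 3.5) / 5 = 3.2
print(total_risk_unweighted(cis))          # 13.0
```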
The final Locational Indexes are aggregated into a final Total Risk Index that, in this example, is equivalent to the Resiliency Index. The authors do not demonstrate normalizing, weighting, or aggregating the Indicators as done in the RMI. So this example simply sets the normalization value to “none” and the weights to 1, but still aggregates the Indicators. If desired, the Indicators can be normalized and weighted and the final Resiliency Index can be used in a similar manner to the RMI example. Although this example uses 2 Indicators in each Categorical Index, the authors stress the importance of using as many Indicators as deemed important by local stakeholders. The M&E Analysis 2, Earned Value Management (EVM), and Resource Stock Analysis tutorials demonstrate the conventions, or properties, used in DevTreks for assessing performance and progress. Each calculator must set a “Target Type” property, with options such as “benchmark” or “actual”, for its associated base element. Progress is then measured using the relationship between the benchmark and actual base elements. Example 1 shows a partial exception to those rules. The benchmark, or baseline, Indicators do not have labels with “project alternative” suffixes, such as “_A” or “_B”. The “actual” Indicators do. That is, the project alternatives demonstrated in previous examples are not actual alternatives in this example; instead they are used to measure progress in achieving goals, and are often related to time periods. The algorithm requires that Indicators have the exact indexes demonstrated in the examples, although the actual number of Indicators and Indexes can vary. Example 2 demonstrates that multiple base Inputs and/or Outputs can be used to analyze progress in achieving disaster prevention goals. In this example, some base elements are used to hold “benchmark” Indicators, while others are used to hold “actual” Indicators.
Resource Stock Outcome and Investment Progress Analyzers then measure the progress being made between the benchmark goals and the actual accomplishments. The Resource Stock Analysis and Earned Value Management tutorials explain more about the use of Progress Analyzers. Example 3 demonstrates that formal Monitoring and Evaluation tools can also be used to assess progress between targeted goals and actual accomplishments. Several references (UNDP 2011, UNDP CAPNET 2015) discuss how to use M&E in the context of disaster risk reduction programs. The 49 M&E tools currently available for this purpose are documented in the Monitoring and Evaluation tutorials. In this example, M&E analysis has been conducted for Inputs, Outputs, Outcomes, Components, and Investments. Version 2.0.4 upgraded the M&E tools so that the CTA algorithms can also be used to measure the risk and uncertainty associated with M&E Indicator measurement and valuation. Example 5. Generic RI Indicators URLs: https://www.devtreks.org/greentreks/preview/carbon/output/CTAP Ex 4 - Generic RI/2141223464/none https://www.devtreks.org/greentreks/preview/carbon/resourcepack/SubAlgo 12 RI 1A/1541/none https://www.devtreks.org/greentreks/preview/carbon/resourcepack/DRRs, DRIs, RMIs, and RIs/1539/none http://localhost:5000/greentreks/preview/carbon/output/CTAP Example 4 - Generic RI/2141223470/none Score. Starting Properties The following initial Score properties are used in this example: Confidence Interval: 90 Random Seed: 7 Iterations: 10,000 Description: This example demonstrates …. Indicator 1. Resiliency Indicator Distribution This Indicator defines probability distributions, normalization functions, and weights, for each of the RI Indicators. This Indicator is equivalent to Indicator 2, Exposure Distribution, in subalgorithms 9 and 10. The values are fictitious and selected to facilitate debugging the calculations. 
Selected properties include: Distribution Type: none (URL datasets are used) Math Type and Math Sub Type: algorithm1, subalgorithm12 Units: automatically filled in (but most units that are entered manually will not be overwritten) The following Math Expression is only used to identify the columns of TEXT data to include in the calculation. Qs = Q1 to Q5 automatically document the RIs for the first five “Locational Indexes”. The QTs document a summation of the Locational Indexes. The QTMs document a summation of the TR rows for all locations for the project alternative with the highest TRs. Math Expression = I1.Q1.distribtype + I1.Q2.QT + I1.Q3.QTUnit + I1.Q4.QTD1 + I1.Q5.QTD1Unit + I1.Q6.QTD2 + I1.Q7.QTD2Unit + I1.Q8.normalization + I1.Q9.weight + I1.Q10.quantity The following images show the initial data used in Indicator 1. The quantity column is only used to stay consistent with the data formats employed in other subalgorithms and uses default values set to 1. Version 1.9.8 started requiring uniform 4 level hierarchies in these datasets. Previous datasets must be modified as shown in this example or as shown in the previous RMI example. The previous example used multiple Locational Indexes (i.e. RI, RR, DM, and FP). This example uses one Locational Index, RI. Subalgorithms 9, 10, 11, and 12 support arbitrary numbers of Indicators, Categorical Indexes, Locational Indexes, and locations, but their indexing must be uniform. … The following are partial results for this Indicator. These results confirm that the only difference between subalgorithms 11 and 12 is the use of weighted averages to calculate TR. Indicator 2. Costs of Alternatives Distribution No cost data was available for this Indicator, so it uses the same data as subalgorithm 10. Actual cost data should reflect the amount of funds being spent to achieve the performance improvements being monitored and evaluated. Indicator 3. Cost Effectiveness Analysis.
This Indicator is calculated in the same manner as the RMI Example, but uses subalgorithm 12. The following image displays part of the Math Results. The same caveats about careful data management and data transformation apply here. Appendix C. Decision Support System Algorithms This Appendix supplements, or expands the use of, the algorithms introduced in the previous Appendixes with additional Decision Support Systems (DSS). Appendix A introduced economic analysis techniques, such as Cost Benefit Analysis, that had limited capacity to study the distribution of gains among winners and losers, externalities, and tradeoffs. Appendix B introduced indicator systems that were described as having potential limitations, such as picking indicators that address the symptoms, rather than the root causes, of disasters. These examples and algorithms demonstrate potential ways to take a “sensible holistic view” to deal with these shortfalls by using Scores to conduct analyses based on DSS systems such as (UNEP 2011, UNDP CAPNET 2015): a) Scenario and Trend Analysis (see Appendix A for examples) b) Multi-Criteria Analysis (MCA) c) Strengths, Weaknesses, Opportunities, and Threats (SWOT) strategic planning processes d) Driver, Pressure, State, Impact, Response (DPSIR) performance systems The Loss EP Distributions, Risk Management Indexes, Benefit Cost Ratios, and Cost Effectiveness Ratios, introduced in the previous Appendixes become part of the overall decision criteria in DSS systems, rather than the sole decision criteria. Even with these examples of expanded decision support, algorithms alone are not sufficient to tackle all of the dimensions needed to manage climate-change induced disasters, such as droughts. For example, the UNDP (2011) identified 5 basic steps needed to “mainstream” drought risk management that start with stakeholder identification and end with program monitoring and evaluation.
The UN IDMP (2014) recommends a 10 step process that can be used to implement national drought mitigation policies, starting with the establishment of a drought management team and ending with the evaluation and revision of national drought mitigation policies. Strong disaster management institutions are needed that can provide the overall context, and manpower, for managing disasters. It’s not clear yet how many of these institutions will end up being primarily cloud-based, rather than political jurisdiction-based. Efficiency dictates the former in the long term; convention dictates the latter in the short term. The following examples will be explained using case studies of natural resource damage assessments that have been completed in developing countries. Example 1 uses the Mechler et al (2008) drought CBA conducted in Uttar Pradesh, India. Example 1 also uses the UNEP (2011) integrated water management MCA conducted for the Sana’a Water Basin, Yemen. Example 2 uses the UN CAPNET (2015) Drought Vulnerability Index as supplemental decision support for Example 1. Limited documentation is presented for algorithms that have already been explained in Appendixes A and B. The Social Performance Analysis tutorials verify that Versions 2.1.2 and 2.1.4 introduced new algorithms that more fully address the principal concerns raised in this Appendix: equity, externalities, and tradeoffs. Example 8. Drought, Uttar Pradesh, India and Sana’a Water Basin, Yemen URLs https://www.devtreks.org/greentreks/preview/carbon/output/CTAP Example 6 - Drought DRR and DSS/2141223470/none https://www.devtreks.org/greentreks/preview/carbon/resourcepack/Drought DRR and DSS/1543/none http://localhost:5000/greentreks/preview/carbon/output/CTAP Example 7 - Drought DRR and DSS/2141223477/none The Rohini water basin in Uttar Pradesh, India, is the main focus of this case study. The Sana’a water basin in Yemen supplements the case study with Multi-Criteria Analysis.
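Before turning to the case study details, the Multi-Criteria Analysis used in this example can be sketched numerically (an illustration only, not the DevTreks Score implementation; the criteria mirror this example's equity, externality, and tradeoff concerns, and all weights and scores are hypothetical):

```python
# Generic Multi-Criteria Analysis sketch (not the DevTreks Score source):
# alternatives are scored against weighted criteria and ranked by weighted
# total. The criteria names echo this example's concerns; every weight and
# score below is hypothetical.

criteria_weights = {"equity": 0.4, "externalities": 0.3, "tradeoffs": 0.3}

scores = {
    "Groundwater Irrigation": {"equity": 4, "externalities": 2, "tradeoffs": 3},
    "Crop Insurance":         {"equity": 2, "externalities": 4, "tradeoffs": 3},
}

def mca_rank(scores, weights):
    """Return alternatives sorted by descending weighted total."""
    totals = {
        alt: sum(weights[c] * s for c, s in crits.items())
        for alt, crits in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for alternative, total in mca_rank(scores, criteria_weights):
    print(alternative, round(total, 2))
```

Ranked totals of this kind become one decision criterion among several, consistent with the role the Appendix assigns to DSS systems.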
Climate change is particularly important because most of the households are engaged in agriculture and their predominant crops are dependent upon monsoon rains. The unit of study is a representative farm household with 7 family members farming 0.8 hectares of land. This example has 3 initial objectives (i.e. the term “must attempt” has been more fully addressed in the Social Performance Analysis tutorial): 1. CTAPs must attempt to fully address equity. Equity and the distribution of gains from climate change mitigation technologies are especially important to understand because as Mechler et al put it “Small and marginal farmers and landless labourers, the most vulnerable groups, suffer the effects of drought the most. They not only lose the investments they made in the sowing and other operations but also lose the food grain they rely on for subsistence. The landless are also heavily affected as there is no agricultural work available locally and they lose employment opportunities.”. In addition, the second project alternative, Crop Insurance, does not directly improve the livelihoods of farmworkers. Portfolios of climate change mitigation technologies must address all vulnerable groups. 2. CTAPs must attempt to fully address natural resource externalities. Groundwater pumping for irrigation is one of the two project interventions being analyzed. Many good examples can be found where this irrigation technique is causing groundwater supplies to become contaminated (e.g. Pakistan) and aquifers to become permanently damaged (e.g. the Central Valley of California). Although the case study mentions that groundwater depletion is not an issue in this area, the author’s experience in natural resources conservation planning suggests that externalities from agriculture are the rule rather than the exception. The possibility of externalities must, at least, be addressed. 3. CTAPs must attempt to fully address tradeoffs.
Ground water pumping is a proven technology for alleviating drought, thereby potentially increasing the incomes of the most vulnerable groups being impacted by drought. But the potential external social costs of this technology have to be weighed against the internal private benefits accruing to needy socioeconomic groups. In addition, the Crop Insurance alternative has government subsidies which are subject to the same tradeoffs as any other government expenditure. This example does not focus on conducting a robust, scientific drought loss assessment. The overall drought assessment is limited to summary, “eyeball” data taken from graphs and does not use any of the typical analytic techniques that most serious drought analysts employ (i.e. Appendix A’s list of planned algorithms). With the exception of limited tabular data presented in obscure scientific articles, full drought data that could be used in a complete CTAP case study could not be found (i.e. literally anywhere, pointing to the need for this technology). This example addresses the following shortfalls found in previous examples: 1. All Indicators: The introduction to each Indicator points out that reality is messier than depicted in previous examples. Each Indicator tries to expand the approaches used in its measurement, so that the overall CTAP processes work in reality, rather than strictly through simplified case studies. Although the author’s resources don’t permit field tests, most of the CTAP processes explained in this example have already undergone field testing with ample proof presented in the referenced publications. One of their main limitations is that they haven’t undergone adequate automation. 2. Indicator 1 Hazard Distribution: Scenario Analysis: This Indicator addresses the need to expand the definition of hazards to include scenario analyses of climate change impacts on drought. 3.
Indicator 2 Exposure Distribution: This Indicator expands the definition of exposure by introducing supplemental tools that can be used to identify the assets being impacted by drought. 4. Indicator 3 Vulnerability Distribution: Drought Vulnerability Index: This Indicator expands the definition of vulnerability by using a second base element, explained in Example 2, to complete a Drought Vulnerability Index. The Index includes indicators that help to quantify the root causes of vulnerability, the winners and losers from drought mitigation policies, and the external costs of mitigation technologies. In addition, this Indicator stores an optional Action Identification Table that further defines project alternatives in terms of measurable performance criteria. 5. Indicator 4 Loss EP Distribution: Trend Analysis: This Indicator expands the definition of exposure and vulnerability by including a trend analysis demonstrating how demographic trends, technological changes, and/or natural resource stock scenarios affect changes in damage losses over time. 6. Score Decision Support System: Multi-Criteria Analysis: Scores expand decision support by including a Multi-Criteria Analysis (MCA). The most important criteria addressed in this example are equity, externalities, and tradeoffs. The MCA helps analysts to develop recommendations to decision makers. The MCA can be used as the final step in a general iterative approach that identifies changes needed in Indicators 1 through 7. Once the changes are completed, a new MCA is completed until reasonable consensus emerges about final risk mitigation strategies. In many cases, the amount of losses from disasters justifies such careful deliberation. 7. Processing Time and Progress: This example takes more time to run the calculations than any other example in this reference. A lot of calculations must be processed. Running calculations for an Input or Output and its children Series should be done cautiously.
Feedback showing the progress of the calculations is not currently supported. The Technology Assessment 1 references are beginning to address the need to process very large datasets. 8. Appendix B Decision Support System. Multi-Criteria Analysis: The Score MCA section in this example points out that MCA can be completed using the techniques introduced in Appendix B, with no modifications needed. The indicators used in Appendix B’s algorithms can be the same indicators used to conduct MCA. When used in this manner, it’s “advanced” MCA. Indicator 1. Hazard Distribution The scientific literature documents that droughts have multiple uncertain natural resource characteristics, chiefly associated with identifying the drought onset and quantifying the drought’s severity, duration, and frequency (Mishra et al 2011, UN CAPNET 2015). Multivariate natural resource models and indices (i.e. Standardized Precipitation Index or SPI) are commonly used to capture these multiple hazard distributions. Examples of these distributions will be made when algorithms listed in Appendix A are developed. The following properties identify the simpler techniques employed in the case study and used in this example. Math Type and Math Sub Type: algorithm1, subalgorithm9 Math Expression: I1.Q1.distribtype + I1.Q2.200year + I1.Q3.100year + I1.Q4.50year + I1.Q5.25year + I1.Q6.10year + I1.Q7.5year Indicator.URL TEXT: This example exemplifies the complexity of modeling rainfall exceedance probabilities because the case study does not present a table or graphic that could be used to summarize rainfall exceedance probabilities for a growing season. Instead, the following table displays a simplistic and fictitious hazard distribution extrapolated loosely from Figure 9 in the case study. Math Result: Scenario Analysis The authors cite IPCC scenarios that project future monsoon conditions to be 15% to 20% higher in South Asia by 2099.
The authors downscale the IPCC projections for the Rohini basin and add actual observed data to their climate change scenarios. Figure 4 in the case study displays some of the resultant scenarios. Although the Trend Analysis introduced for Indicator 4 can be used to model simple natural resource scenarios, serious scenario analysis has to be conducted using all Indicators. The most practical way to conduct scenario analysis is to complete a full Conservation Technology Assessment for an Input or Output base element. Copy the initial assessment into children Input or Output Series. Adjust the datasets in each Series member to the scenario being modeled. Use the Resource Stock Progress Analyzers to assess the joint Series data. Algorithms that mathematically tie Indicators 1 to 4 together may also be sound approaches for handling Scenario Analysis in future releases. For example, Indicator 3’s loss ratio functions could look up Indicator 1’s scenarios and adjust losses appropriately (i.e. the existing data distributions, or functions, support this technique but the current algorithms do not). Indicator 2. Exposure Distribution The following image (UNDP 2011) shows that the assets exposed to drought and used to model the benefits of drought risk reduction actions (i.e. the impacts of drought mitigation and adaptation actions) are more complex than evaluated in previous examples. The UN IDMP (2014) recommends establishing drought task forces to coordinate drought management. They recommend that these groups use checklists, similar to the following, to define and prioritize the full assets being impacted by drought. No automated tool is offered for this purpose, but, since future generations need to know the full details behind CTAs, analysts are encouraged to add a TEXT csv file, holding the checklist, as the last Indicator.URL TEXT file.
In addition, feedback from the Drought Vulnerability Index (DVI) completed by local stakeholders and used with Indicator 3 can help to further refine this Indicator. The following properties identify the simpler techniques used in the case study and employed with this Indicator. Math Type and Math Sub Type: algorithm1, subalgorithm9 Math Expression: I2.Q1.distribtype + I2.Q2.QT + I2.Q3.QTUnit + I2.Q4.QTD1 + I2.Q5.QTD1Unit + I2.Q6.QTD2 + I2.Q7.QTD2Unit + I2.Q8.normalization + I2.Q9.weight + I2.Q10.quantity Indicator.URL TEXT: Exposure is extrapolated from tables and graphs in the case study and summarized in the following table. The assumptions for farmers include: gross crop income: 31,000 IndR/year; area farmed: 0.8 hectare; 70% crop income from paddy rice and 30% from wheat. The assumptions for farmworkers dependent upon agricultural employment include: gross agricultural income: 28,500 IndR/year (the poverty line identified in the case study). Both assets use a normal distribution with standard deviation = 30% of mean. Location 2 decreases Location 1’s numbers by 15%. Math Result: Indicator.URL 2nd Text: The following partial list shows the Checklist of Impacts TEXT file that has been added to this Indicator. The Checklist is used to help identify assets exposed to drought and potential mitigation and adaptation actions. Indicator 3. Vulnerability Distribution The following image (UNDP CAPNET, 2015) suggests, once again, that the simplified world portrayed in some examples has the potential to miss important analytic details, in this case, important aspects of Indicator 3’s Sensitivity Analysis. The subsequent images demonstrate how the use of disaster-specific Indicator Indexes, such as a Composite Drought Vulnerability Index, can assist with those details. Appendix B points out that the general technique of developing disaster-specific Indicator Indexes is applicable to any disaster. Example 2 demonstrates how to complete the DVI.
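Stepping back to Indicator 2 for a moment, the exposure assumptions listed above reduce to simple (mean, standard deviation) rows for normally distributed assets. A hedged sketch, with illustrative helper names and the values taken from the stated assumptions:

```python
# Exposure assumptions from the table above, reduced to (mean, standard
# deviation) pairs for normally distributed assets. Helper names are
# illustrative, not DevTreks properties.
CROP_INCOME = 31_000.0        # IndR/year, gross crop income per farm household
FARMWORKER_INCOME = 28_500.0  # IndR/year, the case study's poverty line
SD_FRACTION = 0.30            # standard deviation = 30% of the mean
LOCATION2_FACTOR = 0.85       # Location 2 is 15% lower than Location 1

def exposure_row(mean):
    """Return (mean, sd) for one exposed asset."""
    return mean, mean * SD_FRACTION

location1 = {
    "paddy rice": exposure_row(CROP_INCOME * 0.70),  # 70% of crop income
    "wheat": exposure_row(CROP_INCOME * 0.30),       # 30% of crop income
    "farm wages": exposure_row(FARMWORKER_INCOME),
}
location2 = {name: exposure_row(mean * LOCATION2_FACTOR)
             for name, (mean, _sd) in location1.items()}
```

The two-location dictionary mirrors the Locational Index rows used throughout this example.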
Feedback from the DVI collaborative process is used to further refine this Indicator. The UN IDMP (2014) and UN CAPNET (2015) references demonstrate how additional tools that focus on causal explanations, such as “impact tree diagrams”, can help to further identify the root causes of vulnerability. At this stage, the data generated using Indicators 1, 2, and 3 can be used to define the mitigation and adaptation interventions (i.e. the project alternatives) that must be assessed with this Indicator. UN IDMP (2014) identifies this step as the “Action Identification” task. The Score’s MCA collaborative process can provide iterative feedback for further refining this Indicator. The Score section also demonstrates the importance of comparing policy alternatives against measurable performance criteria. This Indicator uses an optional 2nd TEXT dataset, holding an “Action Identification Table”, that further defines alternatives, or potential mitigation and adaptation actions, in terms of measurable goals or performance criteria. The following properties identify the simpler techniques employed with the case study and used in Indicator 3. Math Type and Math Sub Type: algorithm1, subalgorithm9 Math Expression: I3.Q1.distribtype + I3.Q2.5year + I3.Q3.10year + I3.Q4.25year + I3.Q5.50year + I3.Q6.100year + I3.Q7.200year Indicator.URL 1st TEXT: This example extrapolates crop loss ratios, in percent, from Figure 6 in the case study. Both crops use the same loss ratios. Agricultural wage loss is calculated as 50% of crop losses (i.e. assuming job substitution or income support are available). Standard deviations are 30% of means and Location 2 suffers the same percentage losses as Location 1. Partial Math Result: Indicator.URL Alternatives: The case study documents that for farmers, Alternative A, Groundwater Irrigation, reduces 100% of losses for events up to 10 years. Alternative B, Crop Insurance, reduces 100% of losses for the 25 year and 50 year events.
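The coverage rules for the two project alternatives just described can be encoded as the set of return periods whose losses each alternative fully eliminates; since each covered loss is reduced 100%, the combined alternative described next amounts to the union of the two coverages. A hedged sketch with fictitious baseline losses (the encoding is illustrative, not the DevTreks data format):

```python
# Coverage rules for the project alternatives described above, encoded
# as the return periods (in years) whose losses each alternative fully
# eliminates. The encoding and baseline values are illustrative only.
ALT_A_IRRIGATION = {5, 10}   # Groundwater Irrigation: events up to 10 years
ALT_B_INSURANCE = {25, 50}   # Crop Insurance: 25 year and 50 year events
ALT_C_BOTH = ALT_A_IRRIGATION | ALT_B_INSURANCE  # summed 100% reductions

def residual_losses(base_losses, covered_periods):
    """Zero out losses for return periods the alternative fully covers."""
    return {t: (0.0 if t in covered_periods else loss)
            for t, loss in base_losses.items()}

# fictitious baseline losses by return period, for illustration only
base = {5: 1000.0, 10: 2500.0, 25: 6000.0, 50: 9000.0,
        100: 12000.0, 200: 15000.0}
after_c = residual_losses(base, ALT_C_BOTH)  # 100/200 year losses remain
```

For farmworkers, only ALT_A_IRRIGATION would be applied, matching the text's note that insurance payments do not reimburse workers.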
The case study mentions that more is involved in quantifying both loss reductions, but these are the primary impacts. Alternative C, both Groundwater Irrigation and Crop Insurance, is calculated as a summation of the individual loss reductions. For farmworkers, only Alternative A has loss reductions. Crop Insurance payments are not used to reimburse the workers. As mentioned in the introduction, the distributional impacts of mitigation technologies are important to understand and document. Partial Math Result: Indicator.URL 2nd TEXT: Optional Action Identification Table: The objective of this optional dataset is to define alternatives more transparently, especially in terms of measurable goals or criteria (i.e. the Score MCA criteria). No automated tool is offered for this table, but, since future generations need to know the full details behind CTAs, analysts are encouraged to add a TEXT csv file, similar to the following, as the last Indicator.URL TEXT file. The Action Definitions and Metrics can be considerably more concrete than this example. Indicator 4. Loss EP Distribution This Indicator displays the results of calculations run using Indicators 2 and 3. Selected properties include: Math Type and Math Sub Type: algorithm1, subalgorithm9 Math Expression = I4.Q1.distribtype + I4.Q2.5year + I4.Q3.10year + I4.Q4.25year + I4.Q5.50year + I4.Q6.100year + I4.Q7.200year Partial Math Result, no Trends: Indicator.URL TEXT: Optional Trend Analysis: The Semarang case study introduced in Appendix A, Example 2, clearly demonstrates the importance of factoring in demographic trends for this Indicator. In that case study, the city’s population is expected to increase significantly, causing the number of exposed assets to increase. The projected trend rates in the following dataset are multiplied by Indicator 2’s asset valuations to calculate projected damage losses. This partial table shows the data format convention that must be followed.
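The multiplication convention just described (trend multipliers scale Indicator 2's asset valuations, with unchanged assets given a multiplier of 1 rather than 0, as the table description below notes) can be sketched as follows. The exponential form mirrors a simple compound population projection; the function names are illustrative, not DevTreks properties:

```python
# Illustrative trend adjustment: multipliers scale Indicator 2's asset
# valuations; assets that do not change get a multiplier of 1, not 0.
def trend_multiplier(annual_rate, years):
    """Compound (exponential) growth multiplier after `years` years."""
    return (1.0 + annual_rate) ** years

def projected_value(base_value, multiplier):
    """Apply a trend multiplier to a baseline asset valuation."""
    return base_value * multiplier

ten_year = trend_multiplier(0.02, 10)       # 2%/year growth over 10 years
adjusted = projected_value(50_000.0, ten_year)
unchanged = projected_value(50_000.0, 1.0)  # unchanged asset keeps its value
```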
The rows match Indicator 2’s rows and the columns correspond to Indicator 3’s columns. Indicator 3’s project alternatives must be included in the dataset. In many instances, the project alternative trends will have the exact same trends as the baseline. The Categorical and Locational rows are only included for consistency – the trend multipliers in these rows will not be used. The matrix itself contains the multipliers used to adjust asset values in the final damage loss estimates. Because these are simple multipliers, asset valuations that don’t change are given a trend multiplier of 1, not 0. This file has been referenced as the first URL in the Indicator.URL property. The multipliers must be doubles. This file is optional. The trend multipliers in this table can be derived by using techniques similar to the simple population calculation (UN ECLAC, 2014) shown in the following image. This technique is appropriate for the residential assets used in the Semarang case study. Crops and other types of industrial output should use calculations based on rates of crop yield and price changes, not this technique. Farmworker income should also be tied to projected changes in output income. Simple natural resource scenario analysis can base the trend multipliers on summary data taken from IPCC scenarios. The “trendtype” column in the dataset can be used as a reminder of the way the trend multipliers were calculated (i.e. exponential, straightline). A future release may use the latter column to calculate the trend loss automatically (again, to leave a better audit trail). Examples 5 and 6 in the Social Performance Analysis tutorial began to introduce fuller population algorithms. Partial Math Result, including Trends: These results can be compared with the results for the Math Results, No Trends to confirm that the trend variables act as simple multipliers. Indicator 5.
Costs Distribution As with the previous Indicators, the following image shows that Cost Indicators can have more dimensions than covered in previous examples. Note that many of the “opportunity costs” mentioned in this image are not quantified using Indicator 5. Many are measured as reductions in damage losses and should be identified using Indicators 2 and 3. In addition, the simplified cost calculations demonstrated in previous examples can have far more nuance. The following image from the Capital Input tutorial shows 1 of 3 examples of irrigation pumping costs. These costs are more accurate than the simpler approaches demonstrated here, and can be used with this Indicator for more advanced cost analysis. That tutorial also demonstrates how many advanced calculations that can’t be carried out directly with this algorithm can be carried out using complementary DevTreks calculators (i.e. most of which, such as crop insurance calculators, haven’t even been built yet – IT really is still in its infancy), and then summarized in these types of algorithms. The following properties identify the simpler techniques employed with the case study and used in Indicator 5. Indicator.URL TEXT: The irrigation alternative (A) includes shared costs for installing a bore well plus annual operating and maintenance costs. The crop insurance alternative (B) costs approximately 6.5% of total crop income. In terms of household costs, 50% of the premium is subsidized, resulting in net costs of 3.5% of crop income. The government’s share of the subsidy is excluded from the estimation of household costs and benefits, but it must be included in the final decision criteria (i.e. the Score MCA analysis). Indicator 2’s damages are entered on a per hectare basis and then multiplied by the quantity column, 0.8 hectares, to calculate the resultant per household damages.
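The per hectare to per household conversion just described, together with a trapezoidal expected annual damage (EAD) integration over a loss exceedance curve, can be sketched as follows. The trapezoidal EAD rule is a standard disaster risk reduction technique assumed here for illustration; it is not quoted from the case study, and the loss values are fictitious:

```python
# Two illustrative calculations: the per hectare to per household damage
# conversion described above, and a trapezoidal expected annual damage
# (EAD) integration over a loss exceedance curve (a standard DRR
# technique, assumed here rather than taken from the case study).
AREA_HA = 0.8  # hectares farmed per household (from the example)

def per_household_damage(damage_per_hectare, area=AREA_HA):
    return damage_per_hectare * area

def expected_annual_damage(losses_by_return_period):
    """Integrate losses over annual exceedance probability (p = 1/T)."""
    points = sorted((1.0 / t, loss)
                    for t, loss in losses_by_return_period.items())
    ead = 0.0
    for (p0, l0), (p1, l1) in zip(points, points[1:]):
        ead += (p1 - p0) * (l0 + l1) / 2.0  # trapezoid between events
    return ead

# fictitious per household losses by return period
losses = {5: 2000.0, 10: 5000.0, 25: 9000.0, 50: 12000.0,
          100: 15000.0, 200: 18000.0}
ead = expected_annual_damage(losses)
```

An EAD of this kind is the usual bridge between Indicator 4's loss exceedance results and the annualized benefit streams used in benefit cost analysis.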
Costs are already calculated on a per household basis, so no further adjustments are needed and isprojectcost can be set to yes. Sensitivity Analysis Data TEXT (stored as csv text data): The following dataset has been referenced as the second URL in the Indicator.URL property. At least 1 rate and 1 life must be included in this TEXT file (i.e. at least the first 4 columns of data). Partial Math Results: Indicator 6. Benefit Cost Analysis (BCA) Unsurprisingly, the simplified benefit cost analyses introduced in previous examples also fail to capture important dimensions of BCA. Besides the aforementioned shortfalls dealing with equity and tradeoffs, V. Meyer et al (2013) review several economic analysis techniques, such as contingent valuation, that are also appropriate to use in quantifying disaster prevention benefits and costs. IT really is still in its infancy. The following properties identify the simpler techniques employed with the case study and used in Indicator 6. Math Type and Math Sub Type: algorithm1, subalgorithm9 Math Expression = I6.Q1.distribtype + I6.Q2.200year + I6.Q3.100year + I6.Q4.50year + I6.Q5.25year + I6.Q6.10year + I6.Q7.5year Partial Math Result: Indicator 7. Cost Effectiveness Analysis (CEA) In keeping with this example’s “reality is messy” theme, the health care sector uses CEA in a more comprehensive manner than demonstrated so far. For example, the following cost effectiveness acceptability curves (self-referenced) are often generated from Health Technology Assessment (HTA) results. A useful rule of thumb with algorithm development is to strive to build algorithms in a way that supports general analysis across multiple sectors, rather than narrowly focusing on a single sector. The economic concepts, economies of scale and scope, explain why (see the Performance Analysis tutorial for definitions of these terms). The following properties identify the simpler techniques employed with the case study and used in Indicator 7.
Math Type and Math Sub Type: algorithm1, subalgorithm9 Math Expression: I7.Q1.distribtype + I7.Q2.200year + I7.Q3.100year + I7.Q4.50year + I7.Q5.25year + I7.Q6.10year + I7.Q7.5year Optional Indicator.URL TEXT: none (Example 2 conducts an independent CEA that will be used for decision support in the Score). Partial Math Result: Scores. Multi-Criteria Analysis Multi-Criteria Analysis uses the following general steps to assist reaching decisions about Indicators 3 to 7’s “alternatives”, that is, projects, policies, or programs (UK DCLG 2009, UNEP 2011): 1) Identify objectives, 2) Identify options for achieving the objectives (i.e. alternatives), 3) Identify the criteria to be used to compare the options, 4) Analyze the options, 5) Conduct sensitivity analysis, 6) Make choices, and 7) Use feedback to refine the MCA. The following partial image (UNEP 2011) demonstrates that the references cited for this example explain their variants of these steps in more detail. The UNEP (2011) identifies the advantages of using MCA to assist decisions involving mitigation and adaptation policies, as follows (14*): “MCA techniques can be used to identify a single most preferred option or a mix of options, to rank options, to short-list a limited number of options for subsequent detailed appraisal, or simply to distinguish acceptable from unacceptable possibilities. … This approach is particularly well suited to the analysis of climate policies and climate-policy planning for the following reasons …”. The UNEP (2011) uses the following image to identify the advantages of using MCA to assist decisions involving portfolios of mitigation and adaptation policies. These results are important in the context of drought because the MCA is conducted for integrated water management in the Sana’a Water Basin in Yemen.
They describe the MCA graph as follows: “The portfolios which are of greatest interest are again those which lie on the efficient frontier, which, in this case is towards the top left-hand corner of the plot”. Miller and Belton (2014) also use the Sana’a Water Basin MCA to reach similar conclusions to those in some of Appendix B’s references – the collaborative process used to complete the MCAs helps to identify “socially acceptable paths forward”. UNEP (2011) uses the following hierarchical system to classify criteria used in Multi-Criteria Assessments that evaluate specific climate change mitigation and adaptation policies. They describe the system as follows: “At the heart of this framework is a hierarchical criteria tree containing a set of generic criteria, against which climate-policy planners can evaluate proposed climate-policy actions and their potential contribution to a broad range of climate, environmental and socio-economic development objectives”. A 4th level can be added to these criteria that allows more detailed analysis of policy options. UNEP describes this 4th level as follows: “these indicators … provide practical measures of performance of policy options against the 3rd level criteria [and can be] expressed in monetary or non-monetary terms”. Importantly, they mention that the 4th level indicators can be customized for local contexts. They use the following image to demonstrate one set of potential indicators for the 4th level. Miller and Belton (2014) discuss the 4th level in greater detail and provide the following example of what they consider to be an important, typical performance indicator: “For example, if the goal is to increase the reliable yield of a water resource system, an appropriate metric [for the criteria, Minimize Spending on Technology,] would be capital cost per unit of increased yield. Another possible indicator might be capital cost per unit area protected from floods of various specified magnitudes”.
In terms of CTAPs, those last metrics are extremely important – they deal with decisions involved with the allocation of money. In addition, they are measurable. Close examination of the algorithms introduced in Appendix B reveals that Risk Management Indicator algorithms employ similar techniques: 4 level indicator classification systems, rating and weighting, and aggregation into Categorical Indexes, Locational Indexes, and Total Risk Indexes (i.e. MCA scores). Subalgorithms 10, 11 and 12 demonstrate these techniques. The following list uses the term SAS to explain how these algorithms can be used to conduct MCA. 1. Overall Interactive Decision Support: SAS require that all raw input data be added to TEXT csv files. SAS cannot accommodate fully interactive MCAs. Other software or spreadsheet programs must be used to complete the initial MCA. In this example, the initial MCA TEXT file is then calculated by the SAS to produce the final MCA results. 2. Inputs and Outputs: DevTreks treats these as separate base elements. This example adds the Input and Output criteria and indicators to either base element, but does not split them up into both elements. The final presentation of the raw calculated results has to manually account for the difference, such as producing efficiency frontier graphs. 3. Indicator System Assessment: SAS use several steps (i.e. 3 or 7 steps) to complete an indicator system assessment. The background references mention that some MCAs may need to be evaluated in terms of projected disaster events (i.e. Miller and Belton’s 2014 statement: ‘capital cost per unit area protected from floods of various specified magnitudes’). In other words, the 4 level indicator MCA systems can be used directly with the techniques and algorithms demonstrated in Appendix B. For this example, the SAS have been modified so that the Score alone can be used to complete the MCA. 4.
Ratings: SAS rate Indicators using probability density functions rather than point estimates. The references discuss the importance of accounting for climate change risk and uncertainty in MCA. In addition, although not cited, references were found (i.e. USACOE) that demonstrated the use of pdfs to rate MCA criteria. This example makes no changes. If concerned, use triangular distributions with low, median, and high values for the ratings. In effect, the distributions are a type of MCA sensitivity analysis. 5. Normalization: SAS allow ratings to be normalized; MCAs typically do not. This example sets the normalization values to “none”. 6. Weights: SAS allow weights to be assigned to Indicators, Categorical Indexes, and Locational Indexes. MCA typically weights Categorical Indexes, or MCA criteria, alone. This example sets the Indicator and Locational Index weights to 1. 7. Final Scores: SAS aggregate Indicators, or criteria, exactly the same as MCA. The final Total Risk Index is equivalent to a final MCA score. This example makes no changes. 8. Alternatives: The alternatives presented so far in this example are project alternatives. The UNEP alternatives are policy alternatives. The difference is semantic, and this example makes no changes. In addition, SAS list alternatives vertically, rather than horizontally, in the scoring matrix. This is not important because decision makers are not expected to use the raw data contained in Indicator.Math Results. Analysts are responsible for using that raw data to build multimedia that decision makers will understand. In this example, they supply summary MCA tables and graphs, derived from Score MCAs, to decision makers. 9. Sensitivity Analysis: Several references discuss the importance of using sensitivity analysis, particularly to test the weights. The simplest way to conduct sensitivity analysis is to copy CTAs into children Input or Output Series and change each Series member to carry out whatever tests are desired.
10. Use Feedback: This example already reinforces the importance of using feedback in an iterative way to refine the overall analysis until consensus emerges about sound mitigation paths. This example makes no changes. All of the references cited for MCA demonstrate best practices that include: rating criteria using either ordinal or cardinal values, weighting criteria using either explicit or ranking values, iterating through the whole process more than once, and developing media, such as efficiency curves, to communicate the final results to decision makers. To add an element of realism to this example, data from the Yemen case study (UNEP 2011), displayed in a previous image, has been changed for use in this MCA. The issues of equity, externalities, and tradeoffs, are still addressed in the context of drought and integrated water management, but the initial data is based on the Sana’a Basin. Unfortunately, the Yemen case study doesn’t include a 4th level, so this example has to invent a fictitious set of indicators. Well-funded organizations are in a better position to complete real MCA for local areas such as the Rohini Basin or your local watershed (15*). Score Math Type and Math Sub Type: algorithm1, subalgorithm12 Score Math Expression: I0.Q1.distribtype + I0.Q2.QT + I0.Q3.QTUnit + I0.Q4.QTD1 + I0.Q5.QTD1Unit + I0.Q6.QTD2 + I0.Q7.QTD2Unit + I0.Q8.normalization + I0.Q9.weight + I0.Q10.quantity Score.QT, QTD1, and QTD2: These properties document the MCA score for the baseline MCA, before any project alternatives are considered. This score is a summation of the TR rows for all locations. Score.QTM, QTL, and QTU: These properties document the MCA score for the project alternative with the highest score. This score is a summation of the TR rows for all locations. [Version 2.1.4 and 2.1.6 changed from Score.DataURL to Score.JointDataURL in Stock Calculators and Score.URL in M&E Calculators to support machine learning algorithms better.]
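In the simplest deterministic case, the rating, weighting, and aggregation conventions itemized in the SAS list above reduce to a weighted-sum score per alternative. A hedged sketch: only the three Yemen policy alternative names come from the case study, while the criterion names, weights, and ratings below are invented for illustration:

```python
# Minimal deterministic weighted-sum MCA score, mirroring the SAS
# conventions above: criterion ratings per alternative, criterion
# weights that sum to 1, and summation into a final score.
def mca_score(ratings, weights):
    """Weighted sum of criterion ratings for one alternative."""
    return sum(ratings[c] * weights[c] for c in weights)

WEIGHTS = {"ReduceGHG": 0.4, "Equity": 0.3, "CostEffectiveness": 0.3}
alternatives = {
    "A_BWPLAN":   {"ReduceGHG": 3.0, "Equity": 4.0, "CostEffectiveness": 2.0},
    "B_IWM":      {"ReduceGHG": 4.0, "Equity": 3.0, "CostEffectiveness": 3.0},
    "C_AGINCENT": {"ReduceGHG": 2.0, "Equity": 2.0, "CostEffectiveness": 5.0},
}
scores = {name: mca_score(r, WEIGHTS) for name, r in alternatives.items()}
best = max(scores, key=scores.get)
```

Replacing the point ratings with distributions, as the SAS do, turns each score into a distribution as well, which is the uncertainty dimension discussed in item 4.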
Score.JointDataURL 1st TEXT: For illustrative purposes, this dataset only includes a subset of the full UNEP 18 criteria. This subset addresses this example’s priorities: equity, externalities, and tradeoffs. Each indicator is given a fictitious name, such as GHGActionA. Both the UNEP 2011 and Miller and Belton 2014 references include examples of appropriate indicators, though most are not applicable to this specific example. To demonstrate some of the UNEP techniques, 3 of the Yemen case study policy alternatives listed in Table 18 will be analyzed: A = BWPLAN, strengthen basin-wide water planning and governance, B = IWM, carry out integrated land and water management, and C = AGINCENT, create incentives to promote efficient agricultural water use. The three project alternatives presented in the rest of this example are subsumed in these policy options. For example, the Groundwater Irrigation alternative fits under the “carry out integrated water management” policy alternative, and the Crop Insurance alternative fits under the “create agricultural incentives” policy alternative. … … Partial Score.Math Result: The following table displays the partial MCA results. As mentioned, the major advantage of using a 4 level MCA based on indicators comes when the indicators are measurable. The previous table does not show any evidence of metrics such as “reduce basin-wide GHG emissions by 25% in 20 years” or “keep capital cost per unit area protected from floods of various specified magnitudes less than 75% of reduced damages”. That’s what the M&E tools demonstrated in Appendix B are designed to do. This example demonstrates a simpler technique. Table 14, shown above, has been modified to include additional columns that further define how project alternatives are being measured. Indicator 3’s 2nd TEXT file holds a partial “Action Identification Table” that demonstrates this documentation.
For example, for the ReduceGHG criteria, GHGActionA is defined as “reduce basin-wide GHG emissions by 20% in 20 years by taking the following actions: Pump groundwater using energy efficient technologies”. M&E tools, such as those introduced in Appendix B, can be used to measure these goals more comprehensively. The following image displays the Score properties used in this example. The reason that alternative A has the same MCA as the baseline is that, for convenience, the raw data for each project alternative is exactly the same as the baseline. Fictitious actions and metrics don’t lend themselves to realistic analyses. Obviously, the alternative with the highest MCA should be closely compared to the baseline. Indicators that measure cost effectiveness must be included in the MCAs. As with any decision support tool, the highest MCA (or BCR or CER) may not be the best portfolio, policy, or project alternative. Discussion The references address additional “lessons learned” by practitioners of this approach, including: assessing single interventions vs. portfolios of interventions, analyzing tradeoffs, and testing the final weights. This algorithm requires that these techniques be completed manually, and no examples have been offered. These are obvious shortfalls compared to the UNEP 2011 and Miller and Belton 2014 analyses. In the CTAP context, the question that must be answered by this example is: “Does this approach really achieve cost effective climate change goals, while still addressing important, related, societal goals?” The initial answer appears to be a cautious “maybe”. This approach has the potential to make sound recommendations that can be used by decision makers to mitigate and adapt to climate change while also achieving important societal goals, such as equity and natural resources conservation. 
The greatest potential shortfall, based on the author’s experience, is that field staff need concrete guidance about what actions to take, backed up with funding. Quantified targets must be set for concrete mitigation and adaptation actions. The targets must then be monitored and evaluated, using formal tools as demonstrated in Appendix B. The process has to provide answers to questions such as “have GHG emissions been reduced by 25% in 10 years”, “has the capital cost per unit area protected from drought of various specified magnitudes been kept to less than 75% of reduced damages”, “have farmers’ incomes been stabilized 100% during droughts”, “has malnutrition been 100% eliminated from farmworker families during all periods”, “have natural resource assets been protected 50% better for long term use”, and “has the quality of disaster management institutions improved to the point that they can handle 75% of any potential national disaster”. Ultimately, the “how and why” behind the questions and answers, that is, the overall Conservation Technology Assessment, must be transparent, and “learnable” by future generations. MCA can provide important input about the actions, but it doesn’t necessarily supply the full context for disaster risk management processes or the M&E component. In other words, MCA appears to be one useful ingredient in more comprehensive planning approaches. It doesn’t replace watershed planning, ecosystem planning, integrated water management, or civil disaster planning. The greatest potential benefit from these types of approaches is that they deal squarely with society, messy though it may be. Farmers, farmworkers, fertilizer dealers, and local communities need not be afterthoughts to planning processes dominated by engineers, physical science professionals, safety experts, administrative staffs, and special interest groups. The initial guiding principle, “Experiment and Gain Experience”, holds firm.
Make sure that the “experiences” are transparent, and learnable by future generations, by using sound IT. Example 9. Drought Vulnerability Index URLs: https://www.devtreks.org/greentreks/preview/carbon/output/CTAP Example 6 - Drought Vulnerability Index/2141223471/none https://www.devtreks.org/greentreks/preview/carbon/resourcepack/Drought DRR and DSS/1543/none http://localhost:5000/greentreks/preview/carbon/output/CTAP Example 7 - Drought Vulnerability Index/2141223478/none Indicator 1. Drought Vulnerability Index Distribution This example uses the data from the following image (UN CAPNET 2015) to conduct a fictitious DVI for Example 1’s Uttar Pradesh case study. The previous image shows that this data is not being used with exceedance probabilities, therefore neither subalgorithm 9 nor subalgorithm 10 is appropriate to use. Instead, it can be completed by using either the Risk Management Index or the Resiliency Index algorithms (subalgorithm 11 or 12) with a separate Output base element. Selected properties include: Math Type and Math Sub Type: algorithm1, subalgorithm12 Note that subalgorithm11 can also be used if weighted average calculations are desired for TR rows. Math Expression: I1.Q1.distribtype + I1.Q2.QT + I1.Q3.QTUnit + I1.Q4.QTD1 + I1.Q5.QTD1Unit + I1.Q6.QTD2 + I1.Q7.QTD2Unit + I1.Q8.normalization + I1.Q9.weight + I1.Q10.quantity Indicator.URL TEXT: Location 1 uses Region A’s values and Location 2 uses Region C’s values. Categories, such as Physical Factors or Social Factors, have been added that are appropriate for these indicators. For convenience, but not realism, the values and weights are set in a manner that returns the values shown in the radar graph. Appendix B shows that weights are usually associated with categories and locations (i.e. the weights for Indicators in the Social Factor category sum to 1). The results of this calculation will be used in Example 1 to supply additional decision support.
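The category-and-weight convention just described (indicator weights summing to 1 within each category, with category scores combined into a total index) can be sketched as follows. All names and values are fictitious, invented for illustration:

```python
# Hypothetical composite index sketch: indicator values are weighted
# within each category (weights sum to 1 per category, as noted above),
# and category scores are averaged into a total index.
def composite_index(categories):
    """categories: {name: [(indicator_value, weight), ...]}."""
    category_scores = {name: sum(v * w for v, w in rows)
                       for name, rows in categories.items()}
    total = sum(category_scores.values()) / len(category_scores)
    return category_scores, total

dvi = {
    "Physical Factors": [(0.6, 0.5), (0.8, 0.5)],
    "Social Factors":   [(0.4, 0.3), (0.7, 0.7)],
}
category_scores, total_dvi = composite_index(dvi)
```

Subalgorithm 11 would substitute weighted averages for the simple average used in the total here; locally owned indicator values and weights are what give the index its meaning.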
The goal of any Vulnerability Indicator Index system is to help understand the “root causes” of vulnerability. Once root causes are better understood, potential preventative actions can be better identified. The Hochrainer et al reference (Disaster Financing …, 2011) shows that social scientists have conducted social science surveys in this area, specifically for disaster-related purposes. These surveys included socioeconomic variables, such as perceived disaster risks and moneylender financing after disasters, which are more relevant than the general socioeconomic DVI indicators used in this example. Local survey data, whether formal or informal, can do a better job of identifying “root cause” vulnerability indicators than the initial indicators found in generic indicator systems. Even more so when the surveys have been designed for that particular purpose. Appendix B pointed out that the indicators used with this algorithm should be developed and owned by local stakeholders. These types of Indexes must be customized by locals who receive support from these types of social science experts. The general point is that disaster risk management is too often dominated by engineering, physical science, and safety experts when it should be dominated by social scientists and community facilitators. Furthermore, this reference demonstrates that social science professionals (1*) and facilitators must understand IT and must be able to actively assist these important efforts via modern IT technologies (i.e. as contrasted to passive, obsolescent academic papers). Partial Math Result: Indicator 2. Costs of Alternatives Distribution This Indicator uses the same costs and project alternatives as Example 1 and no further documentation is offered. Indicator 3. Cost Effectiveness Analysis.
Technically, this analysis should use social science-oriented project and policy alternatives, but for convenience, the same project alternatives as Example 1 return the following results. Example 1’s Score MCA addresses social science-oriented policy options further. Partial Math Result: Appendix D. Integrated Local, National, and International CTAP Systems This Appendix begins to demonstrate how to aggregate data collected using CTAPs into local, national, and international data loss systems. More generally, the goal is to aggregate any CTA data (i.e. HTAs) into higher level decision support systems. Attention will focus on the measurable performance indicators introduced in Appendix C, such as “have GHG emissions been reduced by 25% in 15 years?” and “are the mitigation and adaptation interventions equitable?”. Example 6 in the Social Performance Analysis 3 (SPA3) reference introduced the EU-sponsored INFORM project as an example of a multi-index disaster risk management system that is being used at global scale. The advantage to this MCDA approach is that the authors have been carefully assessing the best available global datasets that contain Indicators that can be combined together in novel ways to support disaster and crisis risk management. The disadvantage is that not all of SPA3’s SDG and Sendai DRR targets can be fully addressed using INFORM alone. In the context of this reference, the INFORM documentation (2018) presents no examples of how well that approach supports national, local (i.e. watershed), and industry sustainability accounting systems, including their concerns about equity, externalities, and tradeoffs. [Hold for future possible development] Example 1. Simple National Data Loss System This example uses the word “simple” in the title because it does not directly use any of the established data loss collection systems mentioned in the Introduction.
Instead, the example demonstrates how to program machines to aggregate CTAP data into any other data loss system, including the existing systems. Application Programming Interfaces (APIs, or microservice Web APIs), document-centric databases (i.e. holding CTAP results), and better tools to communicate results, such as GIS maps, take center stage. A simple example or two may suffice to demonstrate these techniques.

Appendix E. Resource Stock and M&E Analyzers

Version 2.1.4 moved the documentation for the Stock and M&E Analyzers to this appendix because they provide ancillary decision support, rather than primary algorithm decision support.

Example 6. Resource Stock Progress Analyzers (10*)

URLs:
https://www.devtreks.org/greentreks/preview/carbon/output/CTAP- BM RI/2141223465/none
https://www.devtreks.org/greentreks/preview/carbon/outcomegroup/Stock RI Progress/42/none
https://www.devtreks.org/greentreks/preview/carbon/componentgroup/CTAP RI Stocks/662/none
https://www.devtreks.org/greentreks/preview/carbon/investmentgroup/CTAP Stock RI/275505682/none

The calculators and analyzers explained in the Resource Stock Calculation and Resource Stock Analysis tutorials will be used in this Example. The cost and benefit information contained in the underlying base elements is ignored – the objective of the example is to analyze the Indicator data.

RI Output Calculation

The purpose of this calculation is to filter out the data from Indicator 3 in Example 1 so that only the selected project alternative, along with its Categorical and Locational Index rows, is further analyzed in subsequent Resource Stock Progress Analysis. The Monitoring and Evaluation tools can also be used to carry out these analyses. In this example, the Indicator rows are not essential to analyze. The filtered data represents the “benchmark” goals from which subsequent “actual” progress is analyzed.
In this example, and in the context of CTAPs, progress is measured in terms of the final column of data, the Cost Effectiveness Ratios. Those ratios will also be computed for actual Indicators to see if goals and targets are actually being met. Alternatively, the underlying net benefit calculation also makes a logical variable to use in these analyses (i.e. net benefits = (baseline damages – alternative damages) – (alternative costs – base costs)). Since this data is being used in Resource Stock Progress Analyses, and those analyses only measure progress using four properties (Indicator.QTM, Score.QTM, Score.QTL, Score.QTU), Scores must be set in some manner. For simplicity, this example assumes that stakeholders place greater emphasis on some Indicators than others. They calculate Scores by using weighted additions of Indicators in Score.MathExpressions. Even though Indicator QTL and QTU properties are not analyzed, their relative relationship to QTM (i.e. 15% lower and higher than QTM) can still be used to add an additional dimension of uncertainty into the final analysis. More than 1 Output or Input base element can be used, and often must be used, to measure Stock Progress. For example, if Example 1’s analysis is for a 1 year project, progress might be measured for 4 individual quarters. In that case, the full year’s Indicator data has to be broken into 4 separate Output Series. In turn, these “benchmark” Output Series will be compared to four “actual” Output Series as the project proceeds over the course of the year. In this example, 2 Outputs, each with 4 Output Series, will be used to measure progress for the first year of a 2 year project. Progress will be analyzed every 6 months. The first Output and its children establish the benchmark goals. The second Output and its children document actual accomplishments.
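The two progress variables named above, the Cost Effectiveness Ratio and the underlying net benefit calculation, can be sketched in a few lines of Python. This is a minimal illustration with fictitious numbers; the function names are hypothetical, and the incremental CER definition (extra cost per extra unit of performance) is an assumption, not the DevTreks calculators' code.

```python
# Sketch of the two progress metrics discussed above. All numbers are
# fictitious; the function names are hypothetical, not DevTreks code.

def net_benefits(base_damages, alt_damages, base_costs, alt_costs):
    # net benefits = (baseline damages - alternative damages)
    #                - (alternative costs - base costs)
    return (base_damages - alt_damages) - (alt_costs - base_costs)

def cost_effectiveness_ratio(base_performance, base_cost, alt_performance, alt_cost):
    # Assumed incremental definition: extra cost per extra unit of performance.
    return (alt_cost - base_cost) / (alt_performance - base_performance)

print(net_benefits(1000.0, 600.0, 100.0, 250.0))           # 250.0
print(cost_effectiveness_ratio(50.0, 100.0, 80.0, 250.0))  # 5.0
```

A lower CER means the alternative buys each extra unit of performance more cheaply, which is why the final CER column is a natural progress variable for these analyses.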
The parent Output calculator is copied into each of the children Series, and each Series member is changed to reflect that semiannual period’s benchmark goals and actual accomplishments. The benchmark Series are changed by using an appropriate divisor in the Indicator.MathExpressions. In this example, 4 semiannual periods with equal progress require division by 4. The actual Series usually require the use of separate TEXT datasets containing observed actual accomplishments for each 6 month period. For simplicity, and in order to generate “eyeball” verifications, this example uses the same technique as the benchmark Series. The last 2 actual Output Series haven’t occurred yet, so their parent Output.Amount is set to 0, and their indicators will all total zero. In general, analyses requiring comparisons should have equal numbers of comparative base elements. The example’s objective is to display a project halfway through its project cycle that is being completed on time and on budget, with simple 25%, 50%, and 100% metrics. In practice, a similar real project requires that stakeholders hold at least 6 workshops. The initial workshop generates the selected “project alternative” and establishes the planned goals, 4 semiannual workshops generate actual progress reports, and a final workshop assesses how well planned goals were actually accomplished. In the context of CTAPs, cost effectiveness is always a primary objective, and the analysis of CERs, or net benefits, supports that objective. Selected properties include:

Score.ConfidenceInterval: 90. Used with each Indicator’s observed descriptive statistics and the Score’s Monte Carlo simulation to set QTL and QTU. [The online example explains that Version 2.1.6 upgraded the calculator patterns to place greater emphasis on the Indicator.URL over the following Score.URL property.]

Score.DataURL: The following image displays the TEXT csv data referenced in this property.
The objective is to calculate mean QTMs for the six unique Indicators in the dataset. Those Indicators include 5 Categorical Indexes and the final Locational Index, or RMI. The mean is calculated using 1 row, identified by Label, from each of the 3 locations. The numbers are all fictitious.

Score.MathExpression: The following expression shows a simple weighted addition of Indicators.

(I1.QTM * .20) + (I2.QTM * .20) + (I3.QTM * .20) + (I4.QTM * .30) + (I5.QTM * .10)

Score.Math Type and Sub Math Type: algorithm1 and subalgorithm1. Use Monte Carlo simulation to generate the final confidence interval for the Score.

The following images display the calculated properties for the first of the six Indicators. Selected properties displayed include:

Indicator.Math Type and Sub Math Type: algorithm1 and subalgorithm1. Use the Score.DataURL TEXT dataset to generate observed descriptive statistics for each Indicator and fill in Q1 to QTU with the results. The units of measurement for most of these properties must be set by hand. When subalgorithm1 is run using a set of observed data, a Monte Carlo simulation is not run (explaining why, in this example, most of the Score properties are not needed to run Indicator calculations).

Indicator.Math Expression: The image shows that Q1 to Q5 document the means of the first five columns of data in the previous image. QT measures the result of the Math Expression, which in this case only uses the mean of the final column of data, the CERs. The remaining columns still have to be identified in the Expression, because a data convention followed by many algorithms is to analyze only the data included in Math Expressions. These columns of data are added together and multiplied by zero.
Output base element:

I1.Q5.alt_B_cer + ((I1.Q4.alt_B_cost + I1.Q2.base_cost + I1.Q3.alt_B_perform + I1.Q1.base_perform) * 0)

Output Series base element (the goal is simple 25% and 50% progress metrics):

(I1.Q5.alt_B_cer / 4) + ((I1.Q4.alt_B_cost + I1.Q2.base_cost + I1.Q3.alt_B_perform + I1.Q1.base_perform) * 0)

The QTLs and QTUs are calculated from Example 2’s descriptive statistics. The statistics are calculated from the QTMs’ observed data. The following image confirms that the Version 2.1.6 upgraded calculator pattern emphasizes the use of the Indicator.URL property. The following images show the calculated results for a typical Score and Indicator.

RI Output Progress Analysis

The following image displays the results of a Stock Progress Analysis that demonstrates using Output base elements with children benchmark and actual Series. With the exception of the Target Type property, exactly the same data and properties were set for the benchmark and actual Output Series so that the results can be verified by “eyeball” (i.e. 100% project accomplishments). This type of analysis may be appropriate for simple, quick analyses. The following image displays the structure of the Output data that will be analyzed in this example. 2 Outputs have been added to an Output Group. The benchmark Output has 4 children Output Series. The actual Output has 4 children Output Series. An Output Group Progress Analysis is inappropriate because the first Output has only benchmark Series while the second has only actual Series. In general, Input and Output Group Stock Analysis is discouraged because Series data can simply be too large to display properly. Nevertheless, under some circumstances, Aggregators (i.e. Labels), Dates, Target Types, and Alternative Types can be set in a manner that will carry out legitimate I/O analyses.
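The expression conventions shown above (unmeasured columns multiplied by zero, and benchmark Series goals divided by the number of periods, plus the weighted Score addition) can be sketched in Python. The numbers are fictitious and the variable names are illustrative assumptions, not the DevTreks calculators' internals.

```python
# Illustrative sketch of the MathExpression conventions above. All values
# are fictitious; the names below are not part of the DevTreks calculators.
means = {  # column means (Q1..Q5) drawn from the Score.DataURL dataset
    "base_perform": 50.0, "base_cost": 100.0,
    "alt_B_perform": 80.0, "alt_B_cost": 250.0, "alt_B_cer": 5.0,
}

# Output base element: only the CER mean is measured; the other columns are
# referenced (so the algorithm analyzes them) but multiplied by zero.
zeroed = (means["alt_B_cost"] + means["base_cost"]
          + means["alt_B_perform"] + means["base_perform"]) * 0
output_qtm = means["alt_B_cer"] + zeroed

# Output Series base element: one quarter of the annual benchmark goal.
series_qtm = (means["alt_B_cer"] / 4) + zeroed

# Score.MathExpression: the weighted addition of the five Indicator QTMs.
indicator_qtms = [2.0, 3.0, 1.5, 2.5, 4.0]   # fictitious I1..I5 QTMs
weights = [0.20, 0.20, 0.20, 0.30, 0.10]
score_qtm = sum(w * q for w, q in zip(weights, indicator_qtms))

print(output_qtm, series_qtm, round(score_qtm, 2))  # 5.0 1.25 2.45
```

The multiply-by-zero trick simply satisfies the convention that algorithms only analyze data referenced in a Math Expression, without letting the extra columns change the measured QTM.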
RI Outcome and Component Progress Analysis

The following images display the Outcomes, and some of the Scores and Indicators, for the completed Outcome Progress Analysis. A total of 8 semiannual Outcomes are added to an Outcome Group. 4 of the Outcomes reflect the benchmark and 4 Outcomes reflect the actual progress through the first 2 periods. The last 2 actual Output Series have not been completed yet, so their Output.Amount is set to 0. The objective of this data is strictly to obtain the 25%, 50%, and 100% “eyeball” metrics. For this example, ignore the 4 digit rounding precision displayed by Analyzers and the resultant values such as 0.5720 and 1.440. The Earned Value Management tutorial explains the displayed properties. Capital Budgets require Components with Inputs and Outcomes with Outputs. Inputs are not the primary focus of this example, so for convenience, only 1 sample Input Indicator, with fictitious values, will be used. This Indicator, Risk Knowledge, derives from some of the UN Indicator systems. Both Components and Inputs are structured exactly the same as the Outcomes and Outputs. Real Investment Analyses should categorize and use real Input (resource expenditure, or effort required) and Output (resource contribution, or impact) Indicators. That aligns with the best practices of the Economics, Engineering, and M&E fields – that is, Inputs should always be used efficiently in the production of Outputs (i.e. when resources are scarce and their conservation is important). The following Component Stock Analysis shows that similar data management techniques are used for the Components and Inputs.

RI Investment Progress Analysis

The following image shows that 2 self-explanatory Investments have been added to an Investment Group. Each Investment holds 2 Time Periods. Each Time Period holds 2 semiannual Outcomes and Components.
Besides the Progress Analysis completed for the Investment Group base element, additional Stock Total, Statistical, and Change by Alternative Investment Analyses are included with the example. The Change analysis required setting all of the benchmark Alternative Type properties to “A” and the actual to “B”. The following images display the results of the aggregated first and second Time Period elements of the benchmark and actual Investments displayed in the previous image. Unlike the next example, the 2017 actual Time Period.Amount does not need to be set to 0, because its descendent Output and Input Amounts have been set to 0. The time commitment needed to complete these types of analyses is a factor in their utility. The author hadn’t completed a Progress Analysis in several months and forgot how they work. He spent about 2 hours trying to figure out why the elements in the Investment Analysis wouldn’t line up, even after closely following the instructions. New features that were added in Version 1.8.8 broke the “old way” several analyses were run, and those analyses had to be modified. Additional bugs also had to be fixed to produce a fully accurate Investment Progress analysis. Additional testing with more datasets, which requires additional resources, can overcome most of these types of issues (10, 11*).

Example 7. M and E 2 Calculators and Analyzers (10*)

URLs:
https://www.devtreks.org/greentreks/preview/carbon/output/M and E BM RI/2141223467/none
https://www.devtreks.org/greentreks/preview/carbon/outcomegroup/M and E RI Progress/43/none
https://www.devtreks.org/greentreks/preview/carbon/componentgroup/CTAP RI M and E/663/none
https://www.devtreks.org/greentreks/preview/carbon/investmentgroup/CTAP M and E RI/275505683/none
http://localhost:5000/greentreks/preview/carbon/output/A- CTAP- Benchmark RI/2141223474/none

The calculators and analyzers explained in the M&E Calculation and M&E Analysis 2 tutorials will be used in this Example.
The cost and benefit information contained in the underlying base elements is ignored – the objective of the example is to analyze the Indicator data. Inputs and Outputs are not displayed in some M&E analyses, because the quantity of data becomes difficult to interpret. In this example, that includes a comparative, but not regular, progress analysis of Components and Outcomes, and the Progress Investment analysis. In this example, that means the Output data holding the RI Indexes is not included in the final results of the Investment Analysis. Use a Totals Investment analysis when Inputs and Outputs must be included in the analysis. Those results can also be imported into other software and manipulated to produce the desired analyses.

RI Output M&E Calculation

Outputs are structured exactly the same way as demonstrated in Example 2. A benchmark Output holds 4 semiannual Output Series that establish benchmark goals. An actual Output holds 4 semiannual Output Series that measure actual accomplishments. The calculations use the same 6 Indicators holding Categorical and Locational Index aggregations. Prior to Version 2.0.4, these calculators did not support data stored in URLs; therefore, Example 1’s selected project alternative had to be manually entered into the calculators as M&E Indicators. Post-Version 2.0.4 M&E calculators offer considerably more flexibility and power than demonstrated in this example. Given Example 2’s results, logical properties to calculate can be either Example 2’s QTM, QTL, and QTU properties for each Index, or the cost, performance, and final CER columns of data found in Example 2’s raw dataset. This example uses the former properties. M&E Indicator.Q1 and Indicator.Q2 properties are set equal to Example 2’s Resource Stock Index QTL, QTU, and QTM properties for the selected project alternative, or benchmark goals. M&E Indicator.QT, QTM, QTL, and QTU properties are also set using these properties.
The following image displays a typical Output calculation for 1 Indicator. Please refer to the previous Stock calculation example to see the upgraded patterns recommended as of Version 2.1.6 (i.e. use the Indicator.URL property to store Indicator data). The following image displays the calculated results for all of the Indicators in 1 Output Series. This image and the remaining M&E images derive from M&E calculations and analyses run prior to Versions 2.0.4 and 2.0.6. The M&E tutorials demonstrate that current M&E calculators work very similarly to the Resource Stock calculators.

RI Output M&E Analysis

M&E Analyzers aggregate Indicator.QTM, Indicator.QTL, and Indicator.QTU properties. The following image displays the results of an M&E Change by Year Analysis that demonstrates running the analysis at an Output base element with children Series. The base element Series were first aggregated into 2 separate years. No changes occur between the two years because all 4 Series have Indicators with identical properties (i.e. progress is expected to be equal for each semiannual period).

RI Outcome M&E Calculation

M&E calculators can be run for all base elements, but base element Indicators are never aggregated into ancestor elements. That is, Output Indicators will not be aggregated into Outcome Indicators, and Outcome Indicators will not be aggregated into Time Period elements. The M&E Introduction reference explains that each base element’s Indicators serve different purposes – an Outcome Indicator is not the same as an Output Indicator. Therefore, new Outcome and Time Period Indicators must be devised. Outcomes are structured exactly the same way as demonstrated in Example 2. For illustrative purposes, 1 Outcome Indicator, Percent of Firms Increasing Resilience, is added to each Outcome (i.e. by copying it from the parent Outcome Group).
This Indicator measures concrete practices being implemented by firms within the targeted 3 locations to increase their resilience to natural resource disasters. The numbers are fictitious and, for the purpose of “eyeball metrics”, identical for each Outcome. The following image shows that an Outcome M&E Indicator has been copied from a parent Outcome Group and then edited for each separate Outcome.

RI Outcome M&E Analysis

The following image shows an M&E Outcome Analysis run at the Outcome Group base element. The analysis shows that both the Outcome and Output base elements for the last semiannual periods are at 50% of the target goals because those periods haven’t happened yet. This was done by setting the Output.Amount and Outcome.Amount equal to 0. If the same analysis is run using the “Compare Only” option, the children Outputs are not displayed. The author hadn’t run M&E 2 analyses for several months when this tutorial was first released and discovered at least 3 bugs or flaws in the analytic results. That’s the specific reason that the 49 additional M&E 1 tools were deprecated in Version 1.9.4 – higher priority was placed on the 49 M&E 2 tools that were upgraded in Version 2.0.4. The following Component M&E 2 Progress Analysis shows the same relative results as the Outcome Analysis. Because the author hadn’t run these analyses in a while, it took some time to understand that the Total Actual Period Progress and Total Actual Cumulative Progress were measuring numeric differences between planned and actual numbers.

RI Investment M&E Calculation

For illustrative purposes, 1 Time Period Indicator, Percent Resilient Firms, is added to each Time Period. This Indicator measures the impacts that this resiliency improvement project is having on firms within the targeted 3 locations. The references cited in the M&E Calculation tutorial emphasize the importance of measuring impacts.
Those references point out that money tends to be wasted, sometimes prodigiously, when the impact of the spent money is not measured. Examples abound. Experts explain how to measure this Indicator.

RI Investment M&E Analysis

Investments, including Components and Inputs, are structured exactly the same way as demonstrated in Example 2. The following M&E 2 Progress Analysis demonstrates that M&E Indicators have only been added to the 4 benchmark and actual Time Period elements of this 2 year Investment. The Investment base element has been left out of the analysis because it does not hold separate M&E Indicators. The full results of the analysis also demonstrate that the Benchmark base elements of Budget analyses won’t have correct full “planned” metrics until the last Time Period, because only at that point have all Time Period calculations been run. All of the “actual” metrics are accurate because the benchmark Investment is ordered, and calculations run, before the actual Investment. In general, Inputs and Outputs are not included in Budget analyses, because the quantity of data becomes difficult to interpret. In addition, references cited in the M&E Calculation tutorial imply that Input and Output Indicators may not be as significant as the remaining base element Indicators. In this example, that means the Output data holding the RI Indexes is not included in the final results. The simplest way to deal with this issue is to put the Output Indicators into Outcomes instead and devise different Output Indicators. Alternatively, an M&E Totals analysis displays all elements, including Inputs and Outputs. For information purposes, additional M&E Totals, Statistics, and Change by Alternative Investment Analyses are included with the example. The Totals analysis includes all base elements, including Inputs and Outputs. The Change analysis required setting all of the benchmark Alternative Type properties to “A” and the actual to “B”.
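The period and cumulative progress arithmetic reported by these analyzers can be approximated with a short Python sketch. The helper function and semiannual amounts below are illustrative assumptions, not the analyzers' actual code.

```python
# Simplified sketch of planned-versus-actual progress arithmetic. The helper
# is hypothetical and the semiannual amounts are fictitious.

def progress_metrics(planned, actual):
    """Per-period differences and cumulative percent of the full planned total."""
    total_planned = sum(planned)
    diffs, cum_pct, running = [], [], 0.0
    for p, a in zip(planned, actual):
        diffs.append(a - p)          # numeric difference, planned vs. actual
        running += a
        cum_pct.append(100.0 * running / total_planned)
    return diffs, cum_pct

# A 2 year project with 4 semiannual periods; at the midterm evaluation the
# last 2 periods have not occurred, so their actual amounts are set to 0.
planned = [1.25, 1.25, 1.25, 1.25]
actual = [1.25, 1.25, 0.0, 0.0]
diffs, cum_pct = progress_metrics(planned, actual)
print(cum_pct)  # [25.0, 50.0, 50.0, 50.0] -> 50% complete at midterm
```

With equal planned amounts each period and the last two actual amounts zeroed, cumulative progress stalls at 50%, which is exactly the "on time and on budget at midterm" pattern the example is built to display.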
The following shows that, for this midterm evaluation, the actual TimePeriod.Amount property has been set to zero for the 2017 period. The Planned Full Percent and Planned Cumulative Percent show that 50% of the planned goals have been accomplished. As a result, this project is 100% on time and on budget. That’s important information to track for most serious projects, programs, and technology assessments (11*). The following image of the last 2 semiannual periods, quarter 6 and quarter 8, shows the same metrics as Example 2 for comparable base elements. Although these analyses display the names of the children Inputs and Outputs, they don’t analyze their Indicators for the reasons mentioned in the introduction to this example.

Appendix F. Testing on localhost

Pull the data and media files out of the database into the file system by logging in (kpboyle1, public) and viewing the following URLs on the Preview panel. The default club must be Carbon Emissions Reducers. Testing used the Kestrel server documented in the Source Code tutorial. That tutorial confirms the Version 2.1.6 upgrade to https://localhost:5001 URLs.

http://localhost/greentreks/preview/carbon/resourcepack/CTAP Disaster Risk Management/528/none

NOTE: Version 1.9.8 required that all indicator datasets have uniform 4 hierarchical levels (3 levels are no longer supported). The Subalgo 11 and 12 datasets have been modified for that purpose.

http://localhost/greentreks/preview/carbon/resourcepack/SubAlgo 09 DRR 1A/527/none
http://localhost/greentreks/preview/carbon/resourcepack/SubAlgo 09 DRR 2/532/none
http://localhost/greentreks/preview/carbon/resourcepack/SubAlgo 10 DRI 1A/529/none
http://localhost/greentreks/preview/carbon/resourcepack/SubAlgo 11 RMI 1A/530/none
http://localhost/greentreks/preview/carbon/resourcepack/SubAlgo 12 RI 1A/531/none
http://localhost/greentreks/preview/carbon/resourcepack/Drought DRR and DSS/533/none

Make the base document for each of the following URLs before running calculations.
Additional Version URLs are documented in related tutorials, such as the Social Performance Analysis tutorials.

1.9.2 URLs:
http://localhost/greentreks/preview/carbon/output/CTAP Example 1 - Hurricane DRR/2141223467/none
http://localhost/greentreks/preview/carbon/output/CTAP Example 2 - Earthquake DRI/2141223468/none
http://localhost/greentreks/preview/carbon/output/CTAP Example 3 - Generic RMI/2141223469/none
http://localhost/greentreks/preview/carbon/output/CTAP Example 4 - Generic RI/2141223470/none

1.9.4 URLs:
http://localhost/greentreks/preview/carbon/outputgroup/CTAP Output Group Example 5/1936433777/none
http://localhost/greentreks/preview/carbon/output/CTAP Example 5 - Progress RI/2141223471/none
http://localhost/greentreks/preview/carbon/outputgroup/M and E RI Output Group/1936433778/none
http://localhost/greentreks/preview/carbon/output/A- CTAP- Benchmark RI/2141223474/none

These URLs require running the NPV calculator prior to the Progress Analyzer, even though those results are not used with these examples. Make the base document first.

http://localhost/greentreks/preview/carbon/outcomegroup/CTAP RI Progress/47/none
http://localhost/greentreks/preview/carbon/componentgroup/CTAP RI Progress/663/none
http://localhost/greentreks/preview/carbon/investmentgroup/CTAP Progress RI/275505683/none
http://localhost/greentreks/preview/carbon/outcomegroup/CTAP RI M and E Progress/48/none
http://localhost/greentreks/preview/carbon/componentgroup/CTAP RI M and E/664/none
http://localhost/greentreks/preview/carbon/investmentgroup/CTAP RI M and E/275505684/none

1.9.6 URLs:
http://localhost/greentreks/preview/carbon/output/CTAP Example 6 - Floods DRR/2141223476/none

1.9.8 URLs:
http://localhost/greentreks/preview/carbon/output/CTAP Example 7 - Drought DRR and DSS/2141223477/none
http://localhost/greentreks/preview/carbon/output/CTAP Example 7 - Drought Vulnerability Index/2141223478/none

2.0.0: Version 2.0.0 upgraded to Microsoft’s new .NET Core 1.0 open source initiative.
The software was refactored to support the Linux, Mac, or Windows servers supported by that initiative. The Social Budgeting and Source Code tutorials explain more about the refactor. The new technologies included in the refactor may take CTAs a step closer to being an accepted best practice technology for tackling climate change and other serious societal issues.

2.0.2 URLs: Version 2.0.2 upgraded the technologies used to conduct Conservation Technology Assessments (CTAs). The first WebApi app, DevTreksStatsApi, supporting cross platform CTAs, was released. The Technology Assessment 1 and Source Code tutorials document these upgrades. This reference was further proofed.

2.0.4: Version 2.0.4 upgraded the 49 Monitoring and Evaluation (M&E) calculators and analyzers so that they can also use all of the CTA algorithms.

2.0.6 URLs: All of the CTA algorithms were tested and improved to work with the upgraded Monitoring and Evaluation (M&E) tools. Each of the following URLs contains both Resource Stock and M&E calculations. The existing TEXT datasets used with the original Stock calculator examples did not need to be changed because the algorithms, rather than the M&E calculators, manipulate the data. The M&E and CTA 01 references demonstrate dataset conventions needed for other types of M&E calculations (i.e. datasets replace an Indicator’s index position with the Indicator.Label).
http://localhost:5000/greentreks/preview/carbon/output/CTAP Example 1 - Hurricane DRR/2141223467/none
http://localhost:5000/greentreks/preview/carbon/output/CTAP Example 6 - Floods DRR/2141223476/none
http://localhost:5000/greentreks/preview/carbon/output/CTAP Example 2 - Earthquake DRI/2141223468/none
http://localhost:5000/greentreks/preview/carbon/output/CTAP Example 3 - Generic RMI/2141223469/none
http://localhost:5000/greentreks/preview/carbon/output/CTAP Example 4 - Generic RI/2141223470/none
http://localhost:5000/greentreks/preview/carbon/output/CTAP Example 7 - Drought DRR and DSS/2141223477/none

2.0.8 URLs: The Social Performance Analysis tutorial documents 4 new algorithms that begin to measure social performance, including the social impacts of climate change mitigation and adaptation actions.

2.1.0 URLs: The Social Performance Analysis tutorial has the latest URLs. The tutorials that were updated in the past week contain the results of Version 2.1.0 tests. The tests suggest that the “netframework” github branch is now obsolete. We recommend the new .NetCore2.0 github source.

2.1.4+ URLs: The Social Performance Analysis 3 reference has the latest URLs. Example 6 in the Social Performance Analysis 3 reference uses the results of the CTAP algorithms to further support disaster risk management in the context of the Sendai Disaster Risk Reduction goals.

DevTreks – social budgeting that improves lives and livelihoods