Social Performance Analysis 1

Last Updated: November 12, 2018; First Released: May 12, 2017
Author: Kevin Boyle, President, DevTreks (1*)
Version: DevTreks 2.1.6

[Appendix C. Examples, can be found in the sibling reference, Social Performance Analysis 2. Appendix D. Definitions, includes informal definitions for the terms used in this reference.]

A. Introduction to Social Performance Analysis (2*)

Many corporations are concerned about their reputations as good environmental stewards and socially sound community members. They do not want reputations for polluting rivers, gobbling up energy supplies, exploiting factory workers, generating needless waste, or producing products that fatten children. Traditionally, this concern derives from hardheaded worries about being fined for violating environmental and safety regulations, being sued for harming consumers, and losing market share by being cast in a bad light. Increasingly, though, it derives from their investors' preference for these corporate characteristics, their employees' desire to work for firms that share these concerns, their customers' demand for goods and services produced using socially sound methods, and a growing awareness of the power of modern IT to effectively identify and deal with both "good and bad actors".

Contents
Social Performance Measures in Resource Conservation Accounting
Standards for Corporate and Public Sector Financial Accounting
Standards, or Work Breakdown Structures, for Resource Conservation Accounting
Conservation Technology Assessments (CTA) for Corporate and Public Sector Financial Reporting
Summary and Conclusions
Appendix A. Resource Conservation Value Accounting (RCA) Framework
Social Performance Measures
Examples and Definitions

Government's role in enforcing corporate social responsibility is being replaced by investor, employee, customer, and informed citizen demand for private sector social soundness (3*). This reference's goal is to help corporations prove the social soundness of their corporate behavior, satisfying investor, customer, and community demand for transparent proof of corporate social accountability, and to keep their executives, products, goods, and services on good actor lists and off of bad actor lists.

This reference also recognizes that citizens have the same concerns about the social soundness of the public entities responsible for managing public resources. Many have firsthand experience with public entities that are unable or unwilling to effectively manage public goods. Without effective public sector leadership and management, private sector social soundness may not matter.

This reference introduces Social Performance Measures that firms and public entities can use to collect, measure, analyze, and explain the evidence, or materiality, that proves their public claims about conserving scarce resources. The following statement defines how firms, public entities, investors, customers, and informed citizens use these measures.

Firms and public entities use Social Performance Measures that follow a Resource Conservation Value Accounting (RCA) Framework as background evidence, or materiality, to support financial reporting claims about conserving scarce resources. Their goal is to actively contribute to balanced lives and a balanced planet. Investors, customers, and informed citizens use Social Performance Measures to take concrete action to support good actors and to chasten bad actors.
B. Social Performance Measures in Resource Conservation Accounting

The Performance Analysis 1 reference introduced Performance Measures, such as Net Savings or Net Benefits, that firms and public entities have traditionally used to measure their standard financial and economic productivity and performance. This reference extends those traditional measures with Social Performance Measures. These new measures provide quantitative and qualitative metrics that firms and public entities can use to measure their impacts on their community's public goods. These public goods comprise a community's stocks of human capital, physical capital, economic capital, natural resources capital, social capital, cultural capital, and institutional capital. These metrics provide evidence of social productivity and performance, or social soundness.

A recent newspaper report (NYT, Smith, 2016) confirms growing investor use of Social Performance Measures, with statements such as "Investing based on E.S.G. [Environmental, Social, and Corporate Governance] factors has mushroomed in recent years, driven, in part, by big pension funds and European money managers that are trying new ways to evaluate potential investments. The idea has changed over the last three decades from managers' simple exclusion from their portfolios of 'sin stocks' such as tobacco, alcohol and firearm makers, to incorporation of E.S.G. analysis into their stock and bond picks". The article cites E.S.G. investment funds, offered by companies such as Vanguard and Parnassus, that have outperformed the market.

Just as the discipline of Economics is broadly concerned with allocating scarce resources well, Social Performance Measures are broadly used to explain the evidence that proves a firm or public entity's claims about the conservation of scarce resources. For that reason, this reference avoids using terms usually associated with financial accounting such as "socially responsible reporting", "sustainability reporting", or "integrated reporting". Those terms narrowly focus on the corporate Financial Accounting surface, rather than the underlying Resource Conservation Accounting depth. And, whether justified or not, the common acceptance of the phrase "cooking the books" casts doubt on those terms' credibility.

Unlike Financial Accounting, where teams of accountants sift through extremely private data in order to publicly report upon corporate financial performance, Resource Conservation Accounting uses social networks and clubs to gather and report on forensic data made available by firms, public entities, and others (e.g. financial reports, informed citizens, whistle-blowers, drones, big data) to confirm claims made about how they impact the services generated by public goods. Rather than financial accountants, resource conservationists collect and maintain this evidence. Although the conservationists can be employed directly by firms and public entities, scientific integrity is enhanced when the conservationists work for third-party, objective, scientific, non-governmental organizations (4*). These conservationists are also supported in their independent action to deal with bad actors who sully the reputations of their neighbors (5*).

Appendix A. Resource Conservation Value Accounting Framework introduces a resource conservation accounting framework for measuring social performance. Resource conservationists use Conservation Technology Assessments (CTAs) to apply this framework and report on their social performance.
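As a purely illustrative aid, and not DevTreks' actual data model, the following Python sketch shows one minimal way a club could record a quantitative or qualitative Social Performance Measure against one of the seven community capital stocks named above, with a link to the materiality backing the claim. All field names, values, and the example URL are hypothetical assumptions.

from dataclasses import dataclass

# The seven community capital stocks named in this reference.
CAPITAL_STOCKS = [
    "human", "physical", "economic", "natural resources",
    "social", "cultural", "institutional",
]

@dataclass
class SocialPerformanceMeasure:
    """Hypothetical record for one piece of social performance evidence."""
    capital_stock: str       # one of CAPITAL_STOCKS
    indicator: str           # e.g. "GHG emissions reduced"
    quantity: float          # quantitative measurement
    unit: str                # e.g. "tCO2e per year"
    qualitative_rating: str  # e.g. "improving", "stable", "deteriorating"
    evidence_url: str        # link to the materiality backing the claim

# Example: evidence a firm might attach to a financial reporting claim.
ghg_measure = SocialPerformanceMeasure(
    capital_stock="natural resources",
    indicator="GHG emissions reduced",
    quantity=1250.0,
    unit="tCO2e per year",
    qualitative_rating="improving",
    evidence_url="https://example.org/evidence/ghg-2018",
)
print(ghg_measure)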
A critical need addressed by CTAs is to measure the risk and uncertainty associated with balancing a company's internal need for profits, return on investment, and shareholder value with their community's need for clean water, fresh air, stable climates, civic rights, non-rigged courts, healthy lifestyles, and spiritual meaning. CTAs do so by helping firms identify win-win tradeoffs, benefitting both themselves and their community, which they may not have recognized from doing business as usual. At a broader scale, CTAs help investors understand socially sound capital-allocation decisions. From a governance perspective, CTAs assist public entities in increasing public returns from public goods.

C. Standards for Corporate and Public Sector Financial Accounting

The following image (KPMG, 2016) confirms that mandatory and voluntary "sustainability reporting instruments" are increasingly being adopted by companies throughout the world. These standards assist companies in reporting their accomplishments in areas such as environmental stewardship, respect for human rights, and support for their community's well-being. They align with the global "Sustainable Development Goals" adopted by international aid agencies (KPMG, 2016; UNSD, 2016). The following groups have developed these types of formal financial reporting standards to assist company and public agency efforts to report on their "sustainability" accomplishments. Note carefully the trend toward mandatory, rather than voluntary, reporting instruments shown in this image. Many standards are being developed beyond the systems introduced below (refer to Appendices A and B). Although most of these systems are designed for private sector companies, many of their ingredients, such as quantitative proof of GHG reductions, can be adapted to assist public sector reporting requirements as well.

International Organizations. Corporate guidelines for reporting standards for human capital development, including human rights protection, can be found in publications put out by international organizations, such as the UN Guiding Principles (UN, 2015) and the OECD Guidelines for MNEs (OECDa, 2008). Government guidelines for the statistical reporting of economic and environmental standards can be found in publications such as the UN System of Environmental-Economic Accounting (UN, 2014) and the System of Environmental-Economic Accounting 2012 Experimental Ecosystem Accounting (UN, 2014). The United Nations Forum on Sustainability Standards (2016) has developed guidelines and applied knowledge platforms for using "Voluntary Sustainability Standards" (VSS) to support industry-specific efforts to certify the social soundness of their products. Their current efforts (UNFSS, 2016) attempt to integrate the role of the public sector, or more specifically to answer the question: "What are the optimal dynamics between public policy processes and voluntary sustainability standards to ensure sustainability objectives are most effectively met?". Their motivation derives from credible evidence that both public and private sector acceptance and cooperation are needed to achieve fully successful voluntary standard systems. The following image (Committee on Sustainability Assessment (COSA), 2014) confirms that international organizations are actively publishing guidelines, developing standards, and building tools to "measure and understand sustainability".
That organization's methods are based on multidimensional systems of Indicators grounded in Economic, Environmental, and Social "global themes". Appendix 1 in the KPMG (2016) reference and Appendix 3 in the TFCRFD (2016) reference identify additional international efforts in this area. The following images (UN, 2015 and TFCRFD, 2016) are representative of some of these efforts.

International Accounting Standards Board (IASB) and US Financial Accounting Standards Board (FASB). International and national "keepers" of corporate financial reporting standards and recommendations. The following images derive from some of their publications and presentations.

International Integrated Reporting Council (IIRC). "A global coalition of regulators, investors, companies, standard setters, the accounting profession and NGOs. The coalition is promoting communication about value creation as the next step in the evolution of corporate reporting". The following image derives from some of their publications and presentations.

Global Sustainability Standards Board (GSSB) and Global Reporting Initiative (GRI). Their stated objective is "to make sustainability reporting standard practice by providing guidance and support to organizations". "GRI Standards are the first global standards for sustainability reporting". The following image derives from some of their publications and presentations.

Sustainability Accounting Standards Board (SASB). "The mission of SASB is to develop and disseminate sustainability accounting standards that help public corporations disclose material, decision-useful information to investors. That mission is accomplished through a rigorous process that includes evidence-based research and broad, balanced stakeholder participation." The following image derives from some of their publications and presentations.

Domain-Specific Reporting. The bottom segment of the triangle depicted in the following image (Allison-Hope, 2016) demonstrates the types of domain-specific reports that corporations use to provide detailed proof of their impacts on specific "capitals" (simplistically, data privacy = social capital, human rights = human capital, climate change = natural resources capital). The following images confirm that international organizations, such as the Task Force on Climate Related Financial Disclosures (2016), are currently developing strong recommendations, guidelines, and principles in specific domain fields, such as climate change, that help firms understand and disclose financial risks better. The principles outlined for reporting climate change-related financial risks in the previous image are the same principles used for any science-based report. The following image (USEPA, 2000) confirms that science-oriented organizations concur with using these principles in science-based reports that must characterize risk and uncertainty.

Investment Rating and Investment Companies. The following image (MSCI, 2016) demonstrates tools currently being used by investors that employ Environmental, Social, and Corporate Governance (E.S.G.) factors, or Social Performance Measures, in their investment advice and for their social impact investment funds. That company cites research carried out by Barclays "that found investment-grade bonds with higher ESG scores outperformed those with low ESG scores between 2007 and 2015". Smith (NYT, 2016) cites the growing use of similar investment tools offered by TIAA-CREF, Vanguard, Goldman Sachs, FTSE Russell, S&P Dow Jones, and Sustainalytics.
In a similar way, The Economist (2017) highlights how social impact investments in companies that offer "measurable social or environmental benefits as well as [profits]" are becoming mainstream, with investment companies such as BlackRock, TIAA, PCCM, Goldman Sachs, Bain Capital, and Zurich actively increasing their portfolios. In recent years, several U.S. states (e.g. Oregon, California, and Colorado) have allowed corporations to register for "public benefit", rather than "internal profit only", purposes.

International Public Sector Accounting Standards Board (IPSASB). International "keepers" of public sector financial reporting standards and recommendations. The following image (IFAC, 2011) explains the role of this board.

Public Sector Laws and Regulations. A large number of laws and regulations govern the reporting standards required of public sector entities. For example, in the U.S., the "National Environmental Policy Act of 1969 (NEPA) requires Federal agencies to consider and disclose to the public the environmental effects of a proposed Federal action and alternatives before making a decision or taking action" (USDOI, 2015). Similar requirements, affecting both private and public entities, exist in the form of Environmental Impact Statements, Environmental Assessments, Food and Drug Safety Standards, and Health Technology Assessments.

D. Standards, or Work Breakdown Structures, for Resource Conservation Accounting

The Work Breakdown Structures tutorial explains the standards used in DevTreks for resource conservation accounting. In contrast to the previous section's emphasis on standards used only for corporate and public sector financial reporting, DevTreks stresses the need for standards for any cost, benefit, and performance content. The standards are needed, and must be used, for more than annual financial reporting (6*). They must be used to increase the accountability of any project, program, technology assessment, or financial report that presents scientific evidence of resource conservation.

The following image, from the WBS reference, demonstrates a prototype WBS developed by the author several decades ago for general conservation planning. It also includes a classification system for natural resources capital accounting and can be downloaded from the WBS tutorial. The natural resources capital elements correspond to "Conservation Practice Standards" that are defined more thoroughly for each U.S. state in separate publications (and may be found online by searching through the publications of USDA, Natural Resources Conservation Service state offices). The Antonopoulos et al. (2016) reference demonstrates how agricultural conservation practice standards have been developed and codified for Europe. The Sustainable Food Lab (2016) reference includes a WBS that represents a "customizable framework of indicators for measuring farm-level sustainability in smallholder agricultural supply chains".

The following image (Bloomberg, 2014) demonstrates that existing financial accounting reporting systems have much in common with WBSs. The following image (GSSB, 2013) reinforces the point that existing financial accounting reporting systems have much in common with WBSs and may serve the same purposes. The following image (Allison-Hope, 2016) demonstrates the use of Performance Indicators within a WBS for sustainability reporting. The use of indicators and WBSs is explained in the Resource Stock and Monitoring and Evaluation tutorials.
These indicators correspond to the "Disclosures" displayed in previous images. The following image (Polasky et al, 2015) highlights the major problem faced by any user of financial or resource conservation accounting standards systems, including Work Breakdown Structures. None of the ecosystem service reporting systems shown in this image has wide acceptance and multi-sectoral use. As the authors put it, "One impediment to rapid mainstreaming of ecosystem services stems from the proliferation of definitions, conceptual frameworks, approaches, datasets, and models". That statement holds true for all public services generated by all public capital stocks.

Appendix A contains a new WBS, the Public and Private Performance Risk Tradeoff WBS, developed as a starting place for the needed resource conservation accounting standards. This new WBS has been designed to illustrate how tradeoffs between public sector-related disclosures and private company-related disclosures can be understood and reported more easily. It helps public sector entities and companies to identify, understand, report, and reduce risks from internal threats they impose on themselves and from threats posed by external actors. Both parties can work in tandem to use their knowledge of risks and tradeoffs to advance each other's goals. For example, the UNFSS (2016) discusses how "Rather than taking individual action, governments can join forces with the private sector and civil society to amplify the sustainability benefits of VSS [Voluntary Sustainability Standards]". Appendices B and C demonstrate how the WBS can be integrated with international social and economic development goals, VSSs, and WBSs.

This WBS still accommodates Polasky et al.'s (2015) need for mainstream ecosystem service reporting standards, but within the larger context of protecting and improving all public services generated by all public capital stocks. Just as natural resources capital stocks generate ecosystem services, social capital stocks generate social equity services, institutional capital stocks generate political and judicial system services, cultural capital stocks generate spiritual meaning services, and physical capital stocks generate disaster resiliency services. Ecosystem services, or any other public service, can't be protected properly, or improved efficiently, without a clear understanding of their full social services context. Although this WBS can be downloaded from the Performance Analysis tutorial, Footnote 11 discusses DevTreks' preference for using digital (rather than bricks-and-mortar), science-oriented, online social networks to maintain and further develop these standards.

E. Conservation Technology Assessments (CTA) for Corporate and Public Sector Financial Reporting

Corporations, employees, investors, corporate regulators, and government auditors want transparent, standardized, scientific evidence that corporations and public entities are accurately reporting their progress in achieving their stated Social Performance goals. Specifically, evidence must be maintained that proves the accuracy of the numbers reported in the following types of "socially sound" financial reports (Bloomberg 2014, Microsoft 2016, and Canada 2016). The following image (UN, 2014) demonstrates that ecosystem service accounting uses techniques that are very similar to the CTA techniques introduced in the Resource Stock references.
Although this framework is used for government statistical reporting, the references explain that CTAs derive from the health care sector's Health Technology Assessments (HTA). HTAs can be, and are, used to support public and private sector investing in health technologies and, by extension, performance accountability. The following image (Boston College, 2010) demonstrates how some corporations provide background documentation to support the environmental component of their financial reports. This image confirms that some of these Corporate Social Responsibility (CSR) reports use the same techniques, such as Life Cycle Analysis, explained in the Resource Stock Calculation and Life Cycle tutorials.

The following image comes from the fundamental concepts section of the Integrated Reporting Framework (IIRC, 2013). These concepts are very similar to the CTA concepts introduced in the Technology Assessment tutorials (i.e. the stock of capitals and the flow of services used in the definition for CTA; the goal of contributing to balanced resource stocks). The six "capitals" addressed in this reporting are: financial, manufactured, human, social & relationship, intellectual, and natural. These capitals correspond closely to the seven "capitals" addressed by CTAs. CTAs assign equal weight to human capital, physical capital, economic capital, natural resources capital, social capital, cultural capital, and institutional capital. The second image (EPA, 2016) confirms that these capitals are similar to the capitals, or Scope, used by several international climate change resiliency assessment frameworks. The third image (MOVE, 2011) confirms that these are the same capitals used by some disaster risk assessment frameworks. The images displayed in the previous section, Section D. Standards, confirm that existing financial reporting systems, such as GRI and SASB, also use similar "categories", or capitals. The IPCC WG2 and IPBES (2016a) references explain how these capitals, including institutional capital and cultural capital, influence decision making processes aimed at protecting and improving resource stocks. The six capitals used in the Integrated Reporting Framework can be readily addressed by these more comprehensive capitals (i.e. financial = economic, manufactured = physical, intellectual = part of economic). They provide further evidence that CTAs can readily provide background, transparent proof for the claims reported using financial accounting systems.

The following image (IIRC, 2013) depicts the "value creation process" that some standards-setting groups use as the foundation for financial reporting. Note carefully the Business Model at the center of the image. The Inputs, Activities, Outputs, and Outcomes in the model coincide closely with the framework presented in the Monitoring and Evaluation (M&E) and Resource Stock tutorials. The M&E and Resource Stock frameworks extend this model further by emphasizing the need to measure Impacts. That is, what evidence exists that money has been, or is being, spent well? Have lives and livelihoods actually improved? How much value has really been created? In effect, this "value creation process" coincides with the "social value conservation process" explained in this reference for preserving and improving public capital stocks.
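As a purely illustrative aid, and not part of any standard, the following Python sketch encodes the capital correspondence described above, so that disclosures organized around the IIRC's six capitals could be re-grouped under the seven capitals used by CTAs. The mapping follows the text above (financial = economic, manufactured = physical, intellectual = part of economic); the function and example disclosures are hypothetical assumptions.

# Illustrative mapping from the IIRC Integrated Reporting Framework's six
# capitals to the seven capitals used by CTAs. Cultural and institutional
# capital have no direct IIRC counterpart.
IIRC_TO_CTA_CAPITALS = {
    "financial": "economic",
    "manufactured": "physical",
    "intellectual": "economic",
    "human": "human",
    "social & relationship": "social",
    "natural": "natural resources",
}

CTA_CAPITALS = [
    "human", "physical", "economic", "natural resources",
    "social", "cultural", "institutional",
]

def regroup_disclosures(disclosures: dict[str, list[str]]) -> dict[str, list[str]]:
    """Re-group disclosures keyed by IIRC capital under the seven CTA capitals."""
    regrouped: dict[str, list[str]] = {c: [] for c in CTA_CAPITALS}
    for iirc_capital, items in disclosures.items():
        regrouped[IIRC_TO_CTA_CAPITALS[iirc_capital]].extend(items)
    return regrouped

# Example: two hypothetical IIRC-style disclosures re-grouped under CTA capitals.
print(regroup_disclosures({
    "natural": ["GHG emissions", "water withdrawals"],
    "manufactured": ["factory energy efficiency"],
}))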
The following image (IASB, 2013) confirms that the CTA emphasis on using basic tools grounded in economics and resource conservation to measure the risk and uncertainty of resource conservation data is consistent with current efforts to improve international financial reporting. Allowances must be made for differences in how resource conservationists and accountants use terms such as "equity" and "net returns". For example, conservationists use the term "equity" to denote the fair distribution of benefits across society. They use amortization, rather than cash flows or double-entry bookkeeping, to define "net returns".

The following images (GSSB, 2016, SASB, 2016) demonstrate typical content found in "socially sound" accounting reports. The top image shows that standards for disclosures include both requirements and recommendations for reporting. Note that "item c" in the top box requires firms to report on the "calculation tools" used for the disclosure. The bottom image shows that, in many cases, much of a report's content involves quantitative measurements. CTAs offer a modern, flexible, transparent, open source, algorithm-driven set of online scientific tools for these calculations and measurements. Importantly, they also include tutorials demonstrating how to replicate the reported results. For example, many of DevTreks' tutorials demonstrate using a "Change by Time Analysis" to produce reports very similar to the second image (refer to the Change Analysis 1 tutorial).

The top image below (Allison-Hope, 2016) summarizes a recent proposal for the future advancement of these "socially sound" reporting techniques. CTAs support, in particular, the bottom segment of this triangle. For example, the Technology Assessment 2 tutorial demonstrates how CTAPs can be used to supply scientific evidence of how mitigation and adaptation technologies lessen the risk of damages caused by climate change. The bottom image below (IFAC, 2016) makes clear that the nature of public sector financial reporting means that conventional performance measurements, such as Net Returns and Return on Investment, must be supplemented with alternative assessments of public sector performance. The techniques introduced in the Technology Assessment 2 tutorial also demonstrate how CTAPs can be used to supply these alternative assessments, including Disaster Risk Indexes, Resiliency Indexes, Risk Management Indexes, Multi-Criteria Assessments, Cost-Benefit Analyses, and Cost-Effectiveness Analyses.

F. Social Performance Measures and Examples

Appendix B. Social Performance Measures introduces the algorithms used to measure specific Social Performance Measures, including the Total Social Risk Score for Companies and Communities, Life Cycle Assessments, and Life Cycle Benefit and Cost Assessments. Appendix C. Social Performance Examples presents online examples of these Measures. Future releases will include additional Social Performance Measures and examples.

Summary and Conclusions

Clubs using DevTreks can start to carry out basic analyses of the impacts that companies and public entities have on their community's stocks of human capital, physical capital, economic capital, natural resources capital, social capital, cultural capital, and institutional capital. Clubs can solicit help understanding social performance better and share structured evidence explaining social performance. Networks can build knowledge banks that explain social performance and pass that knowledge down to future generations.
Investors, customers, and informed citizens can use the information in the knowledge banks to take concrete action to support good actors and to chasten bad actors. Measuring social performance is a precursor to improving societal performance (7*). Society improves by understanding, protecting, and improving the public services generated by public capital stocks. Doing a better job of collecting, measuring, aggregating, analyzing, sharing, and explaining social performance data can help people to improve their lives and livelihoods.

Footnotes

1. Most of this reference derives from the author's experience working as an agricultural economist for the USDA, Natural Resources Conservation Service and as a rural credit lender for the USDA, Farmers Home Administration. The author spent more than 15 years working as a full time economist on natural resources conservation issues. An additional 10 years was spent as a rural lender reviewing agricultural firms' financial statements, income statements, and cash flows (as well as putting farmers and ranchers in and out of business based on that evidence). He made the connection between the conservation of scarce resources and financial reporting decades ago, but didn't have time to fully build tools that complement financial and scientific resource accounting until he founded DevTreks about a decade ago.

2. Readers should be aware that plenty of professionals and practitioners in the fields of "capital improvement" can do a better job explaining these subjects and presenting examples. The key value-added that DevTreks brings to the table is the actual information technology tools needed to carry out these practices. DevTreks would not be producing this reference unless the source code was "freely" available to back up the recommendations in this reference. [In fact, nothing is free with information technology; good IT requires hard work.] As usual, DevTreks encourages source code users to produce their own tutorials, references, and algorithms that target the needs of their own customers.

3. To bring this statement down to earth, particularly in 2017, in a very real sense it doesn't matter anymore who leads government agencies, like the US Environmental Protection Agency. Customers, investors, and informed citizens have far more power. That power is realized at scale and scope when they fully exploit the nascent online tools that are available to satisfy their demand for socially sound private and public sector behavior and then use their code, purses, wallets, and ingenuity to take appropriate actions.

4. Another key distinction from financial accountants is the mentality of the resource conservationists as expressed by their attire. They're usually the ones in the meetings wearing the tee shirts, flannel shirts, and jeans. They are also the ones more likely to understand the significance of "$28" donations, and the need to include farmworkers and factory workers in the corporate audience. As mentioned in several tutorials, DevTreks takes the long view about how attitudes and technologies evolve to accomplish these "best resource conservation accounting practices". That "long view" is measured in technological development and adoption time. Refer to the definition of Technology Development, Diffusion, and Adoption in Appendix D.

5. Importantly, the resource conservationists and their supporting technologists may be good examples of the "new economy" jobs that people seem so distressed about.
Rural residents should understand that, although many of the examples and references appear to focus on multi-national corporations, these tools, and jobs, are fully applicable to rural areas, including towns and villages, and rural companies, including farms and ranches. In the U.S., their local Conservation Districts and the District employees, with whom the author has worked extensively in the past, may be good examples. Because of the need to effectively gather and analyze objective, scientific evidence that also identifies and deals with bad actors, new types of resource conservationists and Conservation Districts may be needed (i.e. conservation technology assessors and Conservation Technology Assessment Districts based primarily on the cloud to avoid local pressure).

6. The existing standards-setting groups appear to miss this connection. A WBS may sound pedestrian, and not worthy of attracting money or being taken seriously, compared to "international sustainability reporting standards". This reference tries to make the case that the systems complement one another (and might contain a lot of the same elements).

7. The chairman of the Task Force on Climate Related Financial Disclosures (2016), Michael Bloomberg, phrases this as "What gets measured better gets managed better". Daniele Giovannucci of the Committee on Sustainability Assessment (COSA via UNFSS 2016) cites the adage "you cannot manage what you cannot measure". This reference would add "what gets measured well gets purchased more often and attracts more investments".

8. In other words, a group of bandits, or, as some rural friends the author has known might say, rural sociopaths, can't descend on public land and claim ownership because they don't personally like public ownership of goods, services, and assets or because they want to personally gain from the exploitation of public goods.

9. For example, Miller (NYT, 2016) quotes economists who have found that "Over the long haul, … automation has been much more important [to explain the loss of well-paying jobs than open trade and immigration] –it's not even close." That is, technological changes that lead to higher productivity drive much of economic development. New institutions are needed that can embrace, learn, and adapt open source code to improve local economic development.

10. Finding good online data for the drought case study presented in the CTAP reference, however, proved futile.

11. A reminder: it may be the role of a software development company to provide basic resource accounting value frameworks and applied algorithms, but that role is the full time job of social networks and their clubs. For example, the author has trained resource conservationists to use an applied version of this framework for conservation planning (i.e. USDA, NRCS: Soil, Water, Air, Plants, Animals, and Humans). The principal advantage of that approach was its comprehensiveness: eroded soil affected crop yields but also affected stream water quality, which, in turn, affected fisheries and fishing economies. The principal disadvantages included extensive and complicated paperwork, reliance on subject matter specialists' subjective knowledge in defining and applying indicator thresholds (i.e. plus 3 = highly positive impact; minus 3 = highly negative impact), difficulty fully understanding the interactions between local conservation and regional conservation, and difficulty fully applying the results to local farms and ranches.
These frameworks and applied algorithms require the full time commitment of the new technology-oriented and social network-based institutions advocated in this reference (i.e. refer to Footnote 12's institutional factors and to recent news reports introducing new AI startups).

12. As another example of "doing it right", interpret the following question: "Other than institutional factors, is there any technical reason that Social Performance Measures can't be completed online, stored uniformly online, and easily accessed online by people and machines, for every company and public entity in the world?". After all, if an investment rating company with a few hundred employees can complete E.S.G. ratings for thousands of companies, what can online social networks and their clubs accomplish?

13. And by the natural resource conservation sector: the IPBES (2016b) reference is the equivalent of a Health Technology Assessment (HTA) that "assess[es] animal pollination as a regulating ecosystem service underpinning food production in the context of its contribution to nature's gifts to people and supporting a good quality of life".

14. The specific reason that DevTreks relies on the generic terms "social networks and clubs" throughout its tutorials (i.e. Footnotes 11, 12, and the Summary) is so that no assumptions need to be made about the capacity for innovation and inclusiveness within conventional institutions (including highfalutin ones). Many have lost, and continue to lose, their credibility. This aligns with Ramnath and Child's (2017) conclusion that "traditional criteria for judging research merit such as - citations, peer reviews, and other bibliometric approaches, are not measuring up as useful measures of research effectiveness".

15. The danger of having one IT company dominate this field can be lessened by carefully following the directions suggested in this reference.

16. In other words, carried out by people in well-paying, local jobs focused on improving the public services generated by public capital stocks.

References

Allison-Hope, Dunstan. 2016. "A Proposal for the Future of Sustainability Reporting." BSR, San Francisco.

Antonopoulos, Ioannis-Sofoklis, Paolo Canfora, Marco Dri, Pierre Gaudillat, David Styles, Julie Williamson, Alice Jewer, Neal Haddaway, Martin Price. Best environmental management practice for the agriculture sector - crop and animal production. European Commission. Final Draft, 2016.

Bloomberg. Materiality Assessment Impact Report. 2014.

Boston College, Institute for Responsible Investing. How to Read a Corporate Social Responsibility Report – a user's guide. 2010.

Canada. Accounting Standards Oversight Council. Annual Report. 2016.

Committee on Sustainability Assessment (COSA). Principles and Characteristics of the COSA System. A Concise Introduction. 2014.

Deal, Robert; Fong, Lisa; Phelps, Erin, tech. eds. 2017. Integrating ecosystem services into national Forest Service policy and operations. Gen. Tech. Rep. PNW-GTR-943. Portland, OR: U.S. Department of Agriculture, Forest Service, Pacific Northwest Research Station. 87 p.

Eco-Management and Audit Scheme (EMAS). Users Guide. European Commission. http://ec.europa.eu/environment/emas/emas_publications/guidance_en.htm (last accessed April, 2017).

European Environment Agency (EEA). Climate change, impacts and vulnerability in Europe 2016: An indicator-based report. EEA Report No 1/2017.

European Environment Agency. Mapping and Assessment of Ecosystems and their Services. 3rd Report – Final, March 2016.
European Environment Agency. National monitoring, reporting and evaluation of climate change adaptation in Europe. EEA Technical report No 20/2015.

European Environment Agency. Report on phase 1 of the knowledge innovation project on an integrated system of natural capital and ecosystem services accounting in the EU (KIP-INCA Phase 1 report) – Final, September 2016.

European Environment Agency. Towards a transport and environment reporting mechanism (TERM) for the EU. Part 1: TERM concept and process. 1999.

Eurostat. Manuals and guidelines. Getting messages across using indicators. A handbook based on experiences from assessing Sustainable Development Indicators. 2014.

Global Sustainability Standards Board (GSSB). G4 Sustainability Reporting Guidelines. 2013.

Global Sustainability Standards Board (GSSB). GRI Reporting Standards (101 to 400 with supplemental publications). 2016.

Hammerl, Marion and Stefan Hörmann. EMAS and Biodiversity. How to address biodiversity protection through environmental management systems. Published by Lake Constance Foundation (LCF) and Global Nature Fund (GNF), Germany. 2016.

IPBES (2016a): Summary for policymakers of the methodological assessment report on scenarios and models of biodiversity and ecosystem services of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. S. Ferrier, K. N. Ninan, P. Leadley, R. Alkemade, L. A. Acosta, H. R. Akçakaya, L. Brotons, W. W. L. Cheung, V. Christensen, K. A. Harhash, J. Kabubo-Mariara, C. Lundquist, M. Obersteiner, H. M. Pereira, G. Peterson, R. Pichs-Madruga, N. H. Ravindranath, C. Rondinini, B. A. Wintle (eds.). Secretariat of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, Bonn, Germany. 32 pages. AND Individual chapters of the methodological assessment of scenarios and models of biodiversity and ecosystem services (deliverable 3 (c), 369 pages).

IPBES (2016b). The assessment report of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services on pollinators, pollination and food production. S. G. Potts, V. L. Imperatriz-Fonseca, and H. T. Ngo (eds.). Secretariat of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, Bonn, Germany. 552 pages.

IPBES (2016b): Summary for policymakers of the assessment report of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services on pollinators, pollination and food production. S. G. Potts, V. L. Imperatriz-Fonseca, H. T. Ngo, J. C. Biesmeijer, T. D. Breeze, L. V. Dicks, L. A. Garibaldi, R. Hill, J. Settele, A. J. Vanbergen, M. A. Aizen, S. A. Cunningham, C. Eardley, B. M. Freitas, N. Gallai, P. G. Kevan, A. Kovács-Hostyánszki, P. K. Kwapong, J. Li, X. Li, D. J. Martins, G. Nates-Parra, J. S. Pettis, R. Rader, and B. F. Viana (eds.). Secretariat of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services, Bonn, Germany. 36 pages.

International Accounting Standards Board (IASB). Conceptual Framework for Financial Reporting. Exposure Draft ED/2015/3. 2015.

IPCC WG2. Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Working Group 2 Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. [Field, Barnes, Barros, Dokken, Mach, Mastrandrea, Bilir, Chatterjee, Ebi, Estrada, Genova, Girma, Kissel, Levy, MacCracken, Mastrandrea, and White (eds.)].
International Federation of Accountants (IFAC). International Public Sector Accounting Standards Board Fact Sheet. 2011. http://www.ifac.org/system/files/downloads/IPSASB_Fact_Sheet.pdf (last accessed 1/2017).

International Federation of Accountants (IFAC). Handbook of International Public Sector Accounting Pronouncements. 2016 Edition, Volume 1. 2016.

The International Integrated Reporting Council (IIRC). The International Framework. 2013.

International Recovery Platform. Guidance Note on Recovery: Private Sector. 2016.

Jacobs, Sander, Nicolas Dendoncker, Berta Martín-López, David Nicholas Barton, Erik Gomez-Baggethun, Fanny Boeraeve, Francesca L. McGrath, Kati Vierikko, Davide Geneletti, Katharina J. Sevecke, Nathalie Pipart, Eeva Primmer, Peter Mederly, Stefan Schmidt, Alexandra Aragão, Himlal Baral, Rosalind H. Bark, Tania Briceno, Delphine Brogna, Pedro Cabral, Rik De Vreese, Camino Liquete, Hannah Mueller, Kelvin S.-H. Peh, Anna Phelan, Alexander R. Rincón, Shannon H. Rogers, Francis Turkelboom, Wouter Van Reeth, Boris T. van Zanten, Hilde Karine Wam, Carla-Leanne Washbourne. A new valuation school: Integrating diverse values of nature in resource and land use decisions. Ecosystem Services 22 (2016) 213–220.

Khazai, Bijan; Bendimerad, Fouad; Cardona, Omar Dario; Carreno, Martha-Lilliana; Barbat, Alex H.; Burton, Christopher G. A Guide to Measuring Urban Risk Resilience. Principle, Tools and Practice of Urban Indicators (Prerelease Draft). Earthquakes and Megacities Initiative - EMI. 2015.

KPMG, GRI, UNEP, Centre for Corporate Governance in Africa, University of Stellenbosch Business School. Carrots & Sticks. Global trends in sustainability reporting regulation and policy. 2016.

Kristensen, Peter. The DPSIR Framework. National Environmental Research Institute, Denmark, Department of Policy Analysis, European Topic Centre on Water, European Environment Agency. Paper presented at the 27-29 September 2004 workshop. 2004.

La Notte, Alessandra; Dalia D'Amato, Hanna Makinen, Maria Luisa Paracchini, Camino Liquete, Benis Egoh, Davide Geneletti, Neville D. Grossman. Ecosystem Service Classification: A systems ecology perspective of the cascade framework. Ecological Indicators 72 (2017) 392-402.

Loconto, Allison and Dankers, Cora (2014). Impact of International Voluntary Standards on Smallholder Market Participation in Developing Countries: A Review of Literature. Food and Agriculture Organization of the United Nations. Rome, 2014. http://www.fao.org/3/a-i3682e.pdf

Microsoft. Global Reporting Initiative Index. 2015 (last accessed 12/2016). https://www.microsoft.com/about/csr/transparencyhub/global-reporting-initiative-index/

MSCI. 2016 (last accessed 12/2016). https://www.msci.com/esg-indexes

Methods for the Improvement of Vulnerability Assessment in Europe (MOVE). Assessing vulnerability to natural hazards in Europe: From Principles to Practice. A manual on concept, methodology and tools. 2011.

National Ecosystem Services Partnership (NESP). Federal Resource Management and Ecosystem Services Guidebook. Section 3: Ecosystem Service Assessment Methods (last accessed 2/2017, nespguidebook.com).

Natural Capital Coalition (NCC). 2016. "Natural Capital Protocol". (Online) Available at: www.naturalcapitalcoalition.org/protocol (last accessed March, 2017).
Note: Newspaper reports, such as the following, are not usually included in scientific, technical reporting. But the fact that these occurred in just one seven-day period while this reference was being prepared should give informed citizens reason to pause.

New York Times. Levitsky, Steven and Ziblatt, Daniel. Is Our Democracy in Danger? December 18, 2016.

New York Times. Miller, Claire Cain. What's Really Killing Jobs? It's Automation, Not China. December 22, 2016.

New York Times. Smith, Randall. Investor Demand Leads Analysts to Focus on Stocks' Social and Environmental Risks. December 15, 2016.

New York Times. Tabuchi, Hiroko. Disclose Climate Risks, Companies Are Urged. December 15, 2016.

Olander, Lydia; Robert J. Johnston, Heather Tallis, Jimmy Kagan, Lynn Maguire, Steve Polasky, Dean Urban, James Boyd, Lisa Wainger, and Margaret Palmer. 2015. "Best Practices for Integrating Ecosystem Services into Federal Decision Making." Durham: National Ecosystem Services Partnership, Duke University. doi:10.13016/M2CH07.

OECDa. Guidelines for Multinational Enterprises. 2008.

OECDb. Handbook on Constructing Composite Indicators. Methodology and User Guide. 2008.

OECD (2015). National Climate Change Adaptation: Emerging Practices in Monitoring and Evaluation. OECD Publishing, Paris. http://dx.doi.org/10.1787/9789264229679-en (last accessed March, 2017).

de Olde, Evelien M.; Frank W. Oudshoorn, Claus A.G. Sørensen, Eddie A.M. Bokkers, Imke J.M. de Boer. Sustainability at farm-level: Lessons learned from a comparison of tools in practice. Ecological Indicators 66 (2016) 391-404.

O'Neill, Kathryn; Kavitha Viswanathan, Eduardo Celades, Ties Boerma. Strategizing national health in the 21st century: a handbook. Chapter 9: Monitoring, evaluation and review of national health policies, strategies and plans. World Health Organization. 2016.

Polasky, S.; H. Tallis, and B. Reyers. Setting the Bar: Standards for Ecosystem Services. Proceedings of the National Academy of Sciences of the United States of America. 2015; 112(24):7356–7361.

Ramnath, Gayatri; Keith Child. Sustainability research that counts. Lessons from implementation research for agriculture. COSA Issue Brief 01. 2017.

RAND. Developing a Risk Assessment Methodology for the National Aeronautics and Space Administration. 2016.

Santos-Martín F., García Llorente M., Quintas-Soriano C., Zorrilla-Miras P., Martín-López B., Loureiro M., Benayas J., Montes M. (2016). Spanish National Ecosystem Assessment: Socio-economic valuation of ecosystem services in Spain. Synthesis of the key findings. Biodiversity Foundation of the Spanish Ministry of Agriculture, Food and Environment. Madrid, Spain. 68 pp. ISBN: 978-84-608-8776-8.

Stevens, M., Demolder, H., Jacobs, S., Michels, H., Schneiders, A., Simoens, I., Spanhove, T., Van Gossum, P., Van Reeth, W., Peymen, J. (Eds.) (2015). Flanders Regional Ecosystem Assessment: State and trends of ecosystems and their services in Flanders. Synthesis. Communications of the Research Institute for Nature and Forest, INBO.M.2015.7842756, Brussels. 2015. AND Flanders case study, from project "Mapping of Ecosystems and their Services in the EU and its Member States (MESEU)". http://biodiversity.europa.eu/maes/maes-catalogue-of-case-studies (last accessed 3/2017).

Sustainability Accounting Standards Board (SASB). Letter to Secretary, United States Securities and Exchange Commission. Regarding: Concept Release on Business and Financial Disclosure Required by Regulation S-K. 2016.

Sustainable Food Lab. Towards a Shared Approach for Smallholder Performance Measurement: Common indicators and metrics. 2016.
Task Force on Climate Related Financial Disclosures (TFCRFD). Recommendations of the Task Force on Climate-related Financial Disclosures. December, 2016.

The Economist. Impact Investing. Coming of Age. January 7, 2017.

U.S. Council on Environmental Quality. Principles and Requirements for Federal Investments in Water Resources (PR&G). March, 2013.

U.S. Council on Environmental Quality. Interagency Guidelines for PR&G. December, 2014.

U.S. Environmental Protection Agency, Office of Science Policy, Science Policy Council. Risk Characterization Handbook. December 2000.

U.S. Environmental Protection Agency, National Center for Environmental Assessment, Office of Research and Development. Evaluating Urban Resilience to Climate Change: A Multi-Sector Approach. EPA/600/R-15/312. External Review Draft (marked DO NOT CITE OR QUOTE), June 2016. [DevTreks' focus on technology development, rather than academic reporting, explains the use of these types of reports.]

United States Environmental Protection Agency, Office of Water, Office of Research and Development. National Ecosystem Services Classification System (NESCS): Framework Design and Policy Application. September 2015.

USGCRP, 2016: The Impacts of Climate Change on Human Health in the United States: A Scientific Assessment. Crimmins, A., J. Balbus, J.L. Gamble, C.B. Beard, J.E. Bell, D. Dodgen, R.J. Eisen, N. Fann, M.D. Hawkins, S.C. Herring, L. Jantarasami, D.M. Mills, S. Saha, M.C. Sarofim, J. Trtanj, and L. Ziska, Eds. U.S. Global Change Research Program, Washington, DC, 312 pp. http://dx.doi.org/10.7930/J0R49NQX

US Department of the Interior. Agency Specific Procedures For Implementing the Council on Environmental Quality's Principles, Requirements, and Guidelines for Water and Land Related Resources Implementation Studies. 707 DM 1 Handbook. November, 2015.

U.S. National Aeronautics and Space Administration. NASA Risk Management Handbook. NASA/SP-2011-3422, Version 1.0, November, 2011.

United Nations Development Programme. UNDP Policy and Programme Brief. UNDP Support to the Implementation of the 2030 Agenda for Sustainable Development. 2016a.

United Nations Development Programme. From the MDGs to Sustainable Development For All. Lessons from 15 Years of Practice. 2016b.

United Nations Environment Programme (UNEP). Loss and Damage: The Role of Ecosystem Services. 2016.

UNEP DTU Partnership. Monitoring and Evaluation for Climate Change Adaptation. A summary of key challenges and emerging practice. UNEP DTU Partnership Working Papers series; Climate Resilient Development Programme, Working Paper 1: 2016.

UN. Guiding Principles Reporting Framework with implementation guidance. 2015.

UN. Sendai Framework for Disaster Risk Reduction 2015–2030. A/RES/69/283, General Assembly, 69th session. 2015.

UN. Report of the open-ended intergovernmental expert working group on indicators and terminology relating to disaster risk reduction. A/71/644. 2016.

UN. System of Environmental-Economic Accounting 2012 – Central Framework. 2014.

UN. System of Environmental-Economic Accounting 2012 Experimental Ecosystem Accounting. 2014.

UN Statistical Division. Inter-agency Expert Group on SDG Indicators. Tier Classification for Global SDG Indicators. 21 December, 2016.

United Nations Forum on Sustainability Standards (UNFSS). Meeting Sustainability Goals. Voluntary Sustainability Standards and the Role of Government. 2nd Flagship Report of the United Nations Forum on Sustainability Standards. 2016.
References Note

We try to use references that are open access or that do not charge fees.

Improvements, Errors, and New Features

Please notify DevTreks (devtrekkers@gmail.com) if you find errors in these references. Also please let us know about suggested improvements or recommended new features.

Video tutorials explaining this reference can be found in the following resourcepack (it contains two Social Analysis video tutorials): https://www.devtreks.org/commontreks/preview/commons/resourcepack/Performance Analysis 1/509/none

Appendix A. Resource Conservation Value Accounting Framework (11*)

This conceptual framework derives from techniques applied in the natural resources capital protection and improvement field, particularly climate change and biodiversity protection, because much of the most recent and best resource conservation accounting and risk assessment science is being carried out for that public capital stock. The framework is applied to assess all public capital stock services.

A. Principles

In the context of DevTreks, a good Resource Conservation Accounting reporting system follows the Financial Accounting principles already defined in this reference along with the following public capital accounting principles.

1. Supports Science-based Resource Allocation Decisions. Reporting must be grounded in science that supports decisions for allocating scarce resources well. Those resources comprise a community's stocks of human capital, physical capital, economic capital, natural resources capital, social capital, cultural capital, and institutional capital. Physical stock balances are mainly important to the degree that they generate services demanded by people. Changes in stock balances and the resultant public services must be measured and reported in terms of costs, benefits, productivity, tradeoffs, and performance.

2. Reports the Risk and Uncertainty of Stock Balances. Reporting must account for the risk and uncertainty of measuring resource conservation actions and valuing changes in stock balances. That is, it must present final qualitative and quantitative measurements, and price financial risk, in terms of scenarios, ranges, thresholds, and confidence intervals (a minimal numeric sketch follows this list).

3. Supports Transparent Corporate and Public Sector Materiality Reporting. Reporting must help investors, customers, and informed citizens to take concrete action to support good actors and to chasten bad actors. That is, it must use online tools and produce online results that support the financial accounting principle of materiality.

4. Improves Institutions Over Time. Reporting must use adaptive management, or "adaptive efficiency", applied over time, to improve formal and informal institutions. That is, it must use online tools and produce online results that help investors, customers, and citizens to become socially sound investors, consumers, and community participants.

5. Leads to the Rapid Adoption and Use of Online Knowledge Banks. Reporting must lead to the faster diffusion and adoption of its applied, online tools.

6. Promotes Experimentation and Gains Experience. These reporting techniques may turn out to be exactly what is not needed to conserve scarce resources. People need to be experimenting and gaining experience if global resources are to be conserved affordably and transparently.
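To make Principle 2 concrete, the following Python sketch reports a hypothetical indicator as a scenario range rather than a single point. The triangular distribution, the 5th and 95th percentile scenario bounds, and the example numbers are illustrative assumptions only, not part of the RCA Framework or of DevTreks' algorithms.

import random
import statistics

def indicator_range(low: float, mode: float, high: float,
                    runs: int = 10_000, seed: int = 1) -> dict:
    """Monte Carlo sketch: report an uncertain indicator as a range,
    not a single point (Principle 2). Uses a triangular distribution
    as an illustrative assumption about the indicator's uncertainty."""
    rng = random.Random(seed)
    draws = sorted(rng.triangular(low, high, mode) for _ in range(runs))
    return {
        "mean": statistics.fmean(draws),
        "low_scenario": draws[int(0.05 * runs)],   # 5th percentile
        "high_scenario": draws[int(0.95 * runs)],  # 95th percentile
    }

# Hypothetical example: an annual GHG reduction claimed as 1,000 to 1,500 tCO2e,
# with a most likely value of 1,250 tCO2e.
print(indicator_range(1_000, 1_250, 1_500))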
B. Summary, Resource Conservation Value Accounting Framework

This section summarizes the resource conservation accounting framework used in this reference for measuring social performance. The remaining sections of this Appendix explain this framework in greater detail.

The following image (European Environment Agency, 2016) summarizes a conceptual framework currently being used for ecosystem assessment in the European Union (EU). To summarize this framework, changes in ecosystem conditions, or state, are caused by drivers (of change) rooted in the ecosystem services generated by ecosystem capital stocks. Humans derive benefits from these ecosystem services, resulting in their demand for levels of service quantity and quality that will increase their quality of life. Changes in demand for services, or deterioration in the existing supply of highly demanded services, result in human responses, or mitigation and adaptation actions, designed to protect or improve the services. These responses change the ecosystem conditions, thereby changing their service flows.

The following image (IPBES, 2016a) summarizes a similar, but more nuanced, version of the conceptual framework currently being used for ecosystem assessment by a scientific organization affiliated with the UN. The primary nuance derives from their explicit acknowledgement that different stakeholders place different values on ecosystem services (i.e. the blue text). The nuances are particularly important to understand because they explain why many resource stocks have been overexploited or allowed to deteriorate: powerful interest groups placed greater value on what they could personally derive from the public stock rather than what was best for the common good. Further descriptions of the IPBES framework (i.e. Chapter 1 in their "scenarios and models" 2016a reference, Appendix 1 in their "pollinators" 36-page reference, and the Preface to their 552-page reference) distinguish between "indirect drivers of change", rooted in the socio-economic systems displayed in the EEA framework (e.g. government policy, societal demand for housing, company production processes causing air pollution), and "direct drivers of change" that directly change natural resource stock conditions (e.g. conversion of farmland to residential land uses; reduced air quality). The natural resource stock changes result in changes to the structure and function of ecosystems, which change the services they generate. The changes to services result in "nature's benefits" that can change the quality of life experienced by specific stakeholder groups.

The following image (EMAS, 2017) summarizes the framework employed by an environmental management and reporting system, the Eco-Management and Audit Scheme (EMAS), currently being used in Europe as "a management tool for companies and other organisations to evaluate, report and improve their environmental performance". Their definitions include: "Environmental aspect means an element of an organisation's activities, products or services that has or can have an impact on the environment. Environmental aspects may be input related (consumption of raw materials and energy, for instance) or output related (air emissions, waste generation, etc.)." Hammerl et al (2016) demonstrate how this reporting system works in the context of improving biodiversity and preserving ecosystem services. They elaborate on the latter term as follows: "The purpose of introducing the concept of 'ecosystem services' is to consider ecological services more readily in decision-making processes, estimate their (economic) value and motivate decision makers to reduce the excessive use and degradation of the natural basis providing those ecosystem services.
A loss of biodiversity results in the reduction of the quality of the assets and services provided by nature, thus impinging on businesses in almost every branch of industry. Major businesses have recognised the preservation and protection of biodiversity to be of utmost importance. For this reason, firm anchoring of the ecosystem service approach within the entrepreneurial goals of a business is an essential prerequisite for ensuring success.”

To implement the Hammerl et al (2016) guidelines, this RCA Framework inserts another box, Environmental Services, between Environmental Aspects and Environmental Impacts, similar to the ecosystem service causal chain, or social impact pathway, displayed in the following image (Olander et al, 2015: Activities = Actions, Aspects = Ecosystem, Environmental Services = Ecosystem Services, and Impacts = Societal Benefit). Antonopoulos et al (2016) demonstrate how to apply the reporting system in specific economic sectors, such as agriculture. These references recommend several performance indicators that can be used for reporting.

The following image (Natural Capital Coalition or NCC, 2016) summarizes part of an applied decision support framework, the Natural Capital Protocol, currently being used for natural capital assessments by private sector companies. The 3 steps highlighted in this image, or impact and dependency pathways, can be derived directly from the EEA, IPBES, and EMAS Conceptual Frameworks (i.e. impact and dependency drivers = indirect and direct drivers of change or activities/products/services; natural capital state = ecosystem state, direct drivers of change that impact nature, or environmental aspects; value impacts and dependencies = ecosystem service valuation, nature’s benefit measurement, or environmental impacts). The 9th step of this framework, Take Actions, coincides with the Responses displayed in the EEA framework and the “indirect drivers of change” found in the IPBES framework. The IPBES concern about stakeholder values is incorporated throughout this process (i.e. Step 1, Scoping). This 9 step process is further explained with the decision support processes introduced later in this Appendix.

These groups use systems of Indicators to implement their frameworks (i.e. see EEA 2016, IPBES 2016a Chapter 8, EMAS Section 2.3.2.1, and NCC Section 5.2.2). This reference’s Resource Conservation Value Accounting Framework adapts these frameworks by using qualitative and quantitative Indicators from those systems to establish relationships and scenarios for the following 4 indicator paths.

Resource Stock Pathway 1 (i.e. Actions: EEA “drivers of change” and “responses”; IPBES “indirect drivers of change”; EMAS “activities, products, and services”; and NCC “impact drivers or dependencies”). Non-performing and non-existent actions (i.e. current practice) address the causal factors and barriers that explain current stock conditions and why stocks fail to deliver desired stock service levels. They help to explain benchmark stock conditions and services. New mitigation and adaptation actions (i.e. responses) attempt to protect or improve stock conditions and services. They help to explain targeted stock conditions and services. Indicators measure the most significant actions as well as their expected trends.

Resource Stock Pathway 2 (i.e. Conditions: EEA “state of biodiversity and ecological functions”; IPBES “direct drivers of change that impact nature”; EMAS “environmental aspects”; NCC “state of natural capital”).
The state of an ecosystem corresponds to the condition of the 7 public capital stocks. The EEA (2017) describes ecosystem condition as follows: “the capacity of ecosystems to deliver ecosystem services for human well-being depends on the condition of ecosystems, i.e. the quality of their structure and functionality. There is growing understanding of the importance of biodiversity in ecosystem functioning and service delivery, which represents our natural capital.” That organization further defines conditions as “the effective capacity of an ecosystem to provide services, relative to its potential capacity.” Indicators measure the current condition of priority capital stocks as well as expected trends with those assets.

Resource Stock Pathway 3 (i.e. Services: EEA “ecosystem services”; Hammerl et al “ecosystem services”; IPBES “ecosystem services and nature’s benefits”). Demand to protect or improve resource stock conditions is driven by the services generated by the stocks (i.e. ecosystem services). The purpose of protecting or improving services is to generate benefits that coincide with the values and preferences held by diverse stakeholders. Indicators measure the most important services as well as expected trends with those services.

Resource Stock Pathway 4 (i.e. Impacts, Valuations, and Tradeoffs: EEA “ecosystem service values”; IPBES “quality of life”; EMAS “environmental impacts”; NCC “value of natural capital impacts and dependencies”). The impacts of improvements in services are measured as enhancements to the quality of life of stakeholders. Different stakeholders value the services differently. Measuring these different valuations allows tradeoffs to be considered when making decisions that affect stock services, thereby lessening the potential public conflicts, or changed consumer demand for a company’s products, arising from stakeholders who feel their values have been neglected. Indicators measure the impacts from changes in services as improvements to stakeholder quality of life.

Flexible Definitions of the 4 Pathways: This generic pathway is used because it accommodates a wide assortment of sustainability assessment tools. For example, Olde et al. (2016) review 48 sustainability assessment tools currently being used in the agricultural sector, most of which use 4 level indicator hierarchies. In addition, this type of Indicator combination can be used to support sustainability decision support tools that are based on “social impact pathways” (NCC 2016: Actions, Conditions, Services, and Impacts), “causal chains” (Olander et al 2015: Actions, Ecosystem State, Ecosystem Services, Societal Benefit), “exposure pathways” (USGCRP, 2016: Drivers, Exposure Pathways such as Dose-Response, Outcomes), “value creation process” (IIRC 2013: Inputs, Activities, Outputs, Outcomes), and “theory of change” or “results chains” (COSA, 2014 and Sustainable Food Lab, 2016: Activities, Outputs, Outcomes, Impacts). The individual levels within the hierarchy can also be defined flexibly. For example, Actions in Level 1 can be defined in several alternative ways: as company activities, as performance objectives, as the capacity to respond effectively to vulnerabilities, as drivers and pressures, or as the primary risks or stressors affecting stock conditions and services. Conditions in Level 2 can be defined in terms of biodiversity loss. If Services are defined for Level 3, they may focus on the supply of ecosystem services while Level 4’s Impacts focus on their demand. The term “ecosystem services” can be replaced with whatever public service is being investigated in the capital stock assessment, such as cultural and institutional services.
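For readers who prefer a concrete sketch, the following minimal example shows one way such a 4-level indicator path could be represented as a simple data structure. The class and field names are hypothetical illustrations for this reference, not the data structures used by the DevTreks tools.

```python
# Minimal sketch of one way to represent a 4-level indicator path
# (Actions -> Conditions -> Services -> Impacts). All names are
# hypothetical and not part of the DevTreks tools.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Indicator:
    level: int            # 1 = Actions, 2 = Conditions, 3 = Services, 4 = Impacts
    label: str            # e.g. "per capita public expenditure"
    low_threshold: float  # score assigned at the least resilient boundary
    high_threshold: float # score assigned at the most resilient boundary
    score: float = 0.0    # benchmark, target, or actual score for this iteration


@dataclass
class IndicatorPath:
    """One social impact pathway for a single public capital stock scenario."""
    stock: str                      # e.g. "natural resources capital"
    scenario: str                   # e.g. "acceptable quality of life"
    indicators: List[Indicator] = field(default_factory=list)

    def is_complete(self) -> bool:
        # A path should contain exactly one indicator per hierarchical level.
        return sorted(i.level for i in self.indicators) == [1, 2, 3, 4]
```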
Loconto et al (2014) summarize the practical importance of such Indicator “pathways” in product standards systems as follows: “In September 2010, the International Social and Environmental Accreditation and Labelling (ISEAL) Alliance published an Impacts Code with requirements for its members (standards systems) to help them understand the impact of their standards. Standards organizations are required to develop their theory of change or impact chain, i.e. the pathways through which the standard is likely to have certain impacts.”

Indicator Thresholds: The current state of knowledge and technology limits the application of the framework to loosely defined relationships, or linkages, between the 4 Indicators. This limitation is reinforced by this framework’s extension beyond ecosystem services to cover all public services. In other words, full mathematical functions or CTA algorithms (i.e. IPBES models and linkages) can’t yet be defined that, in the case of social impact pathways, jointly change Conditions, Services, and Impacts when Actions are taken. Subsequent sections of this Appendix address this reality, and the associated increase in the risk and uncertainty of stock assessment metrics, by introducing the use of qualitative and quantitative Indicator Threshold systems.

Quality of Life Scenarios: The current state of knowledge and technology limits the number of multi-stock Indicators, stakeholders, and spatial and temporal scales that can be used to define quality of life scenarios for realistic future “social impact pathways”. For example, the remaining sections of this Appendix explain that, in the United States, federal agencies are required to apply a similar framework when assessing water and land resource projects, many of which are limited to single watersheds with general cost effectiveness objectives. The EEA (2017) reference introduces 5 socio-economic scenarios (Shared Socioeconomic Pathways or SSPs) that have been developed to complement IPCC climate change scenarios. Those SSPs provide logical Indicators that can help to define alternative Quality of Life scenarios.

Given the caveats presented in this section, the following example, further explained in subsequent sections of this Appendix, illustrates how these 4 hierarchical levels can be applied in the tools introduced in Appendix B. The framework and planning process may become easier to understand and apply as more concrete examples are added to Appendix C (10*).

Illustrative Community Example 1. Georgia Heat Wave (introduced in Appendix A, Section E. Indicator Threshold Overview)

Although, for simplicity, the following example demonstrates how the framework works for one simplistic natural resource capital stock scenario, such as a 1.5 degree temperature increase, the same process works for all public capital stocks, whether the scenario deals with the severe erosion of civil rights, major meddling with judicial systems, or significant interference with public information access. This example demonstrates using a social impact pathway for the 4 hierarchical levels.
The remaining sections of this Appendix and some of Appendix C’s examples will address the deficiencies in this example, including the applied decision support processes, spatial and temporal scales, multiple service values held by multiple stakeholder groups for multiple stressor reductions, double counting, and the applied digital platform.

Indicator Thresholds
1. Low Action Resiliency Threshold = 1: low resiliency resulting from too little (<$2) per capita public expenditure per 100,000 vulnerable population
2. High Action Resiliency Threshold = 4: high resiliency resulting from appropriate (>$10) per capita public expenditure per 100,000 vulnerable population
3. Low Condition Resiliency Threshold = 1: 200% of target GHG levels result in unacceptable capacity to regulate climatic conditions
4. High Condition Resiliency Threshold = 4: 90% of target GHG levels result in acceptable capacity to regulate climatic conditions
5. Low Service Threshold = 1: 1 in 10 years with greater than 100% of the critical heat wave threshold results in a very unacceptable level of risk for regulating climatic conditions
6. High Service Threshold = 4: 1 in 10 years with less than 25% of the critical heat wave threshold results in a very acceptable level of risk for regulating climatic conditions
7. Low Impacts Resiliency Threshold = 1: 1 in 10 years with greater than 100 increased deaths per 100,000 vulnerable population, resulting in 40% QOL satisfaction by stakeholder groups
8. High Impacts Resiliency Threshold = 4: 1 in 10 years with less than 10 increased deaths per 100,000 vulnerable population, resulting in 70% QOL satisfaction by stakeholder groups

Scenario 1. Acceptable Quality of Life for Georgia citizens
9. Impact Pathway: Actions -> Conditions -> Services -> Impacts
10. Stressors: High GHG result in severe heat waves
11. Targeted Stakeholder Groups: Urban and rural residents of Georgia who can’t reduce health risks caused by heat waves
12. Performance Objective: reduce deaths caused by heat waves
13. Mitigation and Adaptation Actions: Improvement 1 consists of a) …, b) …, and c) … (i.e. BEMPs from publications like Antonopoulos et al, 2016)

Iteration 1. Resource Stock Assessment of Scenario 1
Indicator 1. Actions Scores (or Ratings)
14. Benchmark Action Score = 1: low resiliency resulting from too little (<$2) per capita public expenditure per 100,000 vulnerable population
15. Target Portfolio 1 Action Score = 4: high resiliency resulting from appropriate (>$10) per capita public expenditure per 100,000 vulnerable population
Indicator 2. Conditions Scores
16. Benchmark Condition Score = 1: 200% of target GHG levels result in unacceptable capacity to regulate climatic conditions
17. Target Portfolio 1 Condition Score = 4: 90% of target GHG levels result in acceptable capacity to regulate climatic conditions
Indicator 3. Services Scores
18. Benchmark Service Score = 1: 1 in 10 years with greater than 100% of the critical heat wave threshold results in a very unacceptable level of risk for regulating climatic services
19. Target Portfolio 1 Service Score = 4: 1 in 10 years with less than 25% of the critical heat wave threshold results in a very acceptable level of risk for regulating climatic services
Indicator 4. Impacts Scores
20. Benchmark Impact Score = 1: 1 in 10 years with greater than 100 increased deaths per 100,000 vulnerable population, resulting in 40% QOL satisfaction by stakeholder group 1
21. Target Portfolio 1 Impact Score = 4: 1 in 10 years with less than 10 increased deaths per 100,000 vulnerable population, resulting in 70% QOL satisfaction by stakeholder group 1

Iteration 1. Monitoring and Evaluation Assessment of Scenario 1
22. Actual Action Score = 2: moderate resiliency resulting from moderate ($2-$5) per capita public expenditure per 100,000 vulnerable population
23. Actual Condition Score = 2: 150% of target GHG levels result in acceptable capacity to regulate climatic conditions
24. Actual Service Score = 2: 1 in 10 years with 50 to 100% of the critical heat wave threshold results in an unacceptable level of risk for regulating climatic services
25. Actual Impact Score = 2: 1 in 10 years with 50 to 100 increased deaths per 100,000 vulnerable population, resulting in 50% QOL satisfaction by stakeholder group 1

Iteration 2: Benchmark Resource Stock Assessment Ratings = Iteration 1 Actual Ratings
Iteration n: Independent Institutional Improvement Score (by independent M&E specialists)

Illustrative Institutional Improvement Results
1. Fair, Affordable, and Transparent Tradeoffs and Synergies are Understood: Georgia uses Adaptive Management to fine tune the mitigation actions until deaths from heat waves are reduced to acceptable levels in a transparent and affordable manner. Some citizens might find this approach heartless, especially if their elderly relatives are the “deaths”, but the public funds spent on these mitigation actions might easily be spent on other public capital risks, or in other places, with much higher death rates. Every public and private sector expenditure is subject to tradeoffs and synergies. In effect, the values and preferences of some stakeholders and consumers must be traded off to accommodate the values and preferences of other stakeholders and consumers. The goal is to understand these tradeoffs and synergies in order to reduce the risk of increased deaths, or negative impacts to capital stocks, in fair, transparent, affordable, and socially inclusive ways.
2. Root Causes are Acted Upon: The heat wave deaths can’t be avoided without a good understanding of the “drivers of change”, or root causes, of the increased temperatures: what threshold levels has Georgia established for corporations to follow in reducing GHG? Which public sector policies and private sector behaviors contributed to these thresholds and deaths? Can these Institutional Capital Risks be mitigated now in a cost effective way rather than incur the potentially “disastrous” costs and deaths caused by the increased Natural Resources Capital risks? Can the free riders responsible for these increased Institutional Capital Risks and public expenditures be held accountable, now and in the future? Can the free riders be forced to pay their full share of the calamities when they occur (i.e. see EEA, 2017; USGCRP, 2016)? If policy makers’ short term policies prove too risky and expensive for long term social soundness, what independent actions can a community or company take to increase resilience to those risks?
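To make the scoring arithmetic in Iteration 1 concrete, the following minimal sketch compares the benchmark, target, and actual scores listed above and applies the stated Adaptive Management rule that Iteration 2 benchmarks equal Iteration 1 actuals. The structure and function names are hypothetical illustrations, not the DevTreks calculators; only the numbers come from the example.

```python
# Minimal, self-contained sketch of the Iteration 1 scoring comparison for
# Illustrative Community Example 1. Names and layout are hypothetical; the
# scores come from the example above.

LEVELS = ["Actions", "Conditions", "Services", "Impacts"]

# (benchmark score, target score, actual score) per hierarchical level
iteration_1 = {
    "Actions":    (1, 4, 2),
    "Conditions": (1, 4, 2),
    "Services":   (1, 4, 2),
    "Impacts":    (1, 4, 2),
}


def progress(benchmark: float, target: float, actual: float) -> float:
    """Share of the planned benchmark-to-target improvement actually achieved."""
    planned = target - benchmark
    return (actual - benchmark) / planned if planned else 0.0


for level in LEVELS:
    bench, target, actual = iteration_1[level]
    pct = progress(bench, target, actual)
    print(f"{level:<10} benchmark={bench} target={target} actual={actual} "
          f"progress={pct:.0%}")

# Adaptive Management: Iteration 2 benchmarks are set to Iteration 1 actuals,
# as stated in the example, so the next assessment starts from observed conditions.
iteration_2_benchmarks = {level: iteration_1[level][2] for level in LEVELS}
print("Iteration 2 benchmarks:", iteration_2_benchmarks)
```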
C. Vulnerability, Resiliency, and Risk Reduction

The following images (EPA, 2016) summarize a comprehensive framework for evaluating urban resiliency to climate change. This framework measures “the ability of a city to reduce exposure and sensitivity to, and recover and learn from gradual climatic changes or extreme climate events”. This framework adapts these types of climate change resiliency definitions in the definition for “Public capital stock resiliency” found in Appendix D.

The EPA framework supplements this reference’s resource accounting framework in several useful ways:
* Resiliency to climate change is critically important for any resource conservation accounting framework. In the context of this reference, the desired resiliency is to corporate and public sector actions that negatively impact the services generated by public capital stocks. In the domain of climate change, important stressors to public capital stocks are efforts by firms and public entities to undermine, skirt, or not transparently support (i.e. via their financial reports and agency reports) efforts to improve a community’s resilience to climate change-related disasters.
* The framework employs quantitative and qualitative Indicators and Indicator Thresholds to measure resiliency in a way that complements the Indicator measurements used in CTAs.
* The mitigation and adaptation Responses for reducing the negative risks associated with climate change are applied with Adaptive Management, or Learning, to produce actual Risk Reduction results. The Risk Reductions coincide with the Disaster Risk Reduction techniques, including Vulnerability Assessment, introduced in the Technology Assessment 2 tutorial.

The following changes describe how this framework has been adapted to the more general Resource Accounting Framework needed for this reference. The next section, Indicator Threshold Overview, demonstrates how to use Indicators to measure resilience.
* Public Capital Stressors (or Drivers and Conditions). The Climate Stressors and Drivers of Change in the framework are supplemented with more general “Public Capital Stressors”. In effect, the “Stressors and Drivers” coincide with “Corporate Stressors” and “Public Sector Management Stressors”. The DRR algorithms introduced in the Technology Assessment 2 tutorial include specific Hazard Indicators that correspond to these Stressors. Although climate stressors must still be addressed using this framework, the desired measurements involve the positive and negative impacts that private firms and public sector management have on public services, such as the damages caused by climate change-induced disasters, or the inability of communities to effectively respond to changing climatic conditions. In general, climate change is investigated as 1 stock characteristic in “quality of life” scenarios containing several characteristics taken from all of the 7 public capital stocks.
* Public Capital Stock Services. The boxes labelled “Economy, Natural Environment, and Infrastructure” on the left hand side of the image are replaced with the more comprehensive 7 Public Capital Stocks used throughout this reference. In addition, the Responses taken to improve those stocks result from changes in the supply of highly demanded stock services, not from the stocks themselves.
* Public Capital Stock Vulnerabilities and Risks. The three elements of urban vulnerability, “Exposure, Sensitivity, and Response Capacity”, listed on the left hand side, and defined above, are applied in the context of corporate and public sector actions that impact the public services generated by the 7 public capital stocks. The DRR algorithms introduced in the Technology Assessment 2 tutorial include specific Indicators for measuring Exposure and Sensitivity (called Vulnerability Distribution in that reference).
Section B’s Overview explains that the third element of Vulnerability Assessment, Response/Adaptive Capacity, can be addressed by carefully defining the use of Condition Indicators (i.e. as their capacity to deliver services) and Action Indicators (i.e. as their capacity to respond effectively to vulnerabilities). Risks can be addressed through Indicator Threshold systems. The impact, or valuation, of these services, as defined by multiple stakeholder interests, will be addressed in subsequent sections of this Appendix.
* Mitigation and Adaptation Actions (or Responses and Impacts), Monitoring and Evaluation, and Institutional Improvement through Adaptive Management. The bridges, learning loops, Responses, and “Risk Reduction Capacity” in the image support the “Improves Institutions” component introduced in Section A. Principles, and reinforce the need to monitor and evaluate mitigation actions applied using Adaptive Management until risks to vulnerable public capital stocks are reduced to manageable levels. The DRR algorithms introduced in the Technology Assessment 2 tutorial demonstrate the use of specific Social Fragility and Lack of Resiliency Indicators to reduce risks to vulnerable populations.
* Sector-specific WBSs. This framework can be extended beyond urban areas to any area needing to conserve scarce resources. In order to do so, the actual analysis has to be customized for industries and areas (i.e. agriculture and rural). Concretely, this can be done by fully following the guidance presented in the Social Budgeting tutorial for employing sector-specific social networks to administer the WBSs.

D. Indicator Threshold Overview

This section introduces the use of Indicators and Thresholds for measuring the risk and uncertainty associated with assessing public capital stocks and their service flows. Although this applied approach may, over time, prove practical, any accounting framework based on Indicators must be used cautiously. The OECDb (2008), Eurostat (2014), OECD (2015), and EEA (2017) references highlight important conceptual, statistical, and quality control problems that must be considered when using Indicators to measure the riskiness of outcomes and impacts. An important take home message from those references is that Indicator systems must be tied as closely as possible to real world measurements with real world companies and communities, implying that: 1) Monitoring and Evaluation frameworks that learn from mistakes play a key role in any Resource Conservation Value Accounting Framework (refer to the EEA 2015 and OECD 2015 references); 2) Indicators that can’t be qualitatively or quantitatively measured should have limited use; and 3) risk and uncertainty techniques, such as thresholds and sensitivity analysis, must be used with Indicator measurements.

The following images (EPA 2016) demonstrate how resource conservation frameworks use quantitative and qualitative Indicators and Indicator Thresholds to measure climate change-related risks. That reference explains their use: “threshold values were established for each indicator that defined the upper and lower boundaries of the four resilience categories (i.e. lowest to highest)”. Rather than apply these measurements to specific loss-exceedance events (i.e. 25 year, 50 year, 100 year, 250 year, flood events), they apply them to individual, multi-causal, “gradual changes and/or extreme events” (i.e. prolonged drought, extreme hurricane).
The CTAP reference also introduces algorithms (i.e. 11 and 12) that indirectly apply similar techniques.

The following image (RAND 2016) demonstrates how risk assessment frameworks used by national organizations, such as US NASA, also use Indicator Thresholds to measure project and program risks. The authors explain the use of thresholds as follows: “Requiring that the analysis of each risk factor and its components, indicators, and mitigations begin with an identification of boundaries provides a clear articulation of the limits, acceptable and unacceptable, involved in the analysis. This development process allows senior leaders to articulate boundary conditions upon which a risk is no longer acceptable, providing guidance to the analysis.” Those authors mention that more quantitative risk assessment techniques, such as the probabilistic risk assessments introduced in the CTAP reference, are not appropriate when the probabilities are not known.

The following image (Khazai et al, 2015) demonstrates how risk assessment frameworks used by international organizations also use “Target Levels of Attainment”, or qualitative Indicator Thresholds, to monitor and evaluate progress in reducing risks associated with natural resource disasters. Subalgorithm 12 in the Technology Assessment 2 reference demonstrates using this system to compute Disaster Resiliency Indexes.

The following image (EEA, 2017) demonstrates the use of Indicators and Thresholds for assessing the impacts of climate change. The following image (Eurostat, 2014) compares how several national and international government statistical agencies measure Indicators and use Thresholds to define performance for the measurements.

The NESP Section 3, Using Indicators Effectively (2017) reference provides the following guidelines on how to use Indicator Thresholds to assess the risk and uncertainty about performance. “Imprecise estimates of performance for one or more measures are typical. A common but undesirable way of dealing with uncertainty about performance is to create measurement scales that lump quantitative results into “bins,” such as 0–10 breeding pairs of a particular bird species, 11–20 breeding pairs, and so on. The problem with this tactic is that to unambiguously assign a particular result to the correct bin, i.e. to know that it belongs to the 0–10 bin and not to the 11–20 bin, the evaluator must know whether the number of breeding pairs is 10 or 11. A better way to handle uncertainty about the number of breeding pairs is to express performance as a range of values in cells of the alternatives/attribute matrix. Instead of describing performance as falling into a predefined “bin,” such as 0–10, express performance as a range considered likely to encompass the true performance (e.g., 5–8 breeding pairs) or as a probability distribution (e.g., a mean of 6.5 with a standard deviation of 2). Then carry out the rest of the analysis by using the extremes of the range (or by sampling from the probability distribution) to see if that uncertainty affects the overall rating of alternatives.”

Applying these guidelines to the EPA 4 Level Resiliency system, the Low Resiliency Score might establish a realistic benchmark for the current condition of the resource or for a realistic condition that could be described as severely degraded. The High Resiliency Score reflects plausible boundaries for extreme positive conditions away from the benchmark (i.e. upper confidence interval or 3 standard deviations), such as fully sustainable.
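To make the NESP guideline concrete, the following minimal sketch expresses a measurement as a range and carries both extremes through the same 4-level rating to see whether the uncertainty changes the result. The category boundaries and the example range are illustrative assumptions, not values from NESP, EPA, or the DevTreks tools.

```python
# Minimal sketch of the NESP guideline: express performance as a range and
# rate both extremes against the same thresholds to see whether uncertainty
# affects the rating. Thresholds and ranges below are illustrative only.

# Lower bounds of each resiliency category (EPA-style 4-level system):
# a measurement rates as the highest category whose bound it meets.
CATEGORY_BOUNDS = [(4, 30), (3, 20), (2, 10), (1, 0)]  # (score, lower bound)


def rate(value: float) -> int:
    """Map a measured value (e.g. breeding pairs) onto a 1-4 resiliency score."""
    for score, bound in CATEGORY_BOUNDS:
        if value >= bound:
            return score
    return 1


def rate_range(low: float, high: float) -> tuple:
    """Rate both extremes of an uncertain estimate."""
    return rate(low), rate(high)


low_score, high_score = rate_range(5.0, 8.0)   # e.g. 5-8 breeding pairs
if low_score == high_score:
    print(f"Rating is robust to the uncertainty: score {low_score}")
else:
    print(f"Uncertainty spans scores {low_score}-{high_score}; "
          "report the range or tighten the estimate before ranking alternatives")
```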
E. Social Performance Indicator Threshold Overview

In order to support the Section A principles, “Supports … Resource Allocation Decisions” and “Supports … Materiality Reporting”, the desired Indicator Thresholds must measure private and public sector investments, services, and assets in terms of costs, benefits, productivity, tradeoffs, and performance. The following two images (TFCRFD, 2016) demonstrate using a standard set of materiality indicators (i.e. the Financial Impacts to Firms column) to measure risks to private sector firms. These materiality indicators demonstrate the use of the desired cost, benefit, and performance impact metrics. Although the images highlight Climate-Related Risks and Opportunities, the same materiality indicators are fully applicable to any financial risk faced by a private sector company.

The following image (Hammerl, 2016) demonstrates a similar set of private sector risks, and similar surmised materiality indicators, associated with ecosystems and biodiversity. The authors cite recent surveys that have found that “more than three quarters of Europeans [i.e. consumers] believe that mankind has a responsibility to look after nature and that it is important to stop biodiversity loss.” Recent surveys in the US confirm that 64% of US citizens [i.e. consumers] have similar sentiments about the dangers they face from climate change (http://www.gallup.com, March 2017 poll). Protest marches held throughout the world in 2017 confirm that many citizens (i.e. consumers) feel the same way about potential losses to institutional, social, and human capital stock services, including gender and civil rights.

Although these materiality indicators highlight the specific impacts on a firm’s financial “gross performance”, they must be further defined in terms of productivity (i.e. input per unit output), efficiency (i.e. movement along the scale of benefit and cost Indicator Thresholds that can act as a proxy for marginal benefits and costs), performance (i.e. quantity of inputs consumed as a percent of total inputs available for consumption), and tradeoffs (i.e. quality of service 1 can be increased but quantity of service 2 must be reduced). Examples include (i.e. also see Section 2.3.2.4 of EMAS 2017):
* Capital Investment Budget Indicator: Threshold 1 = 1% capital investment budget; Threshold 2 = 10% capital investment budget; Threshold 7 = 100% capital investment budget.
* Cost per unit Emission Reduction Indicator: Threshold 1 = 1% of operating costs per unit emission reduction; Threshold 2 = 10% of operating costs per unit emission reduction; Threshold 7 = 100% of operating costs per unit emission reduction.
* Input as Percent Total Available Capacity: Threshold 1 = 50 m3 water consumed per 1,000 m3 water available; Threshold 2 = 25 m3 water consumed per 1,000 m3 water available; Threshold 7 = 5 m3 water consumed per 1,000 m3 water available.
* Emission per Unit Output Indicator: Threshold 1 = 1% of emission standard; Threshold 2 = 10% of emission standard; Threshold 7 = 100% of emission standard.
* Targeted Customer 1 ESG Satisfaction Index: Threshold 1 = 1% of target index; Threshold 2 = 10% of target index; Threshold 7 = 125% of target index.
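The following minimal sketch shows one way a measured materiality indicator could be mapped onto a 7-level Indicator Threshold scale like the examples above. The interpolation rule and the intermediate level boundaries are hypothetical assumptions introduced for this sketch (only levels 1, 2, and 7 are listed in the text); it is not DevTreks or EMAS logic.

```python
# Minimal sketch of mapping a measured materiality indicator onto a 7-level
# Indicator Threshold scale like the examples above. The rule and the
# intermediate boundaries are illustrative assumptions only.
import bisect

# Capital Investment Budget Indicator: percent of the capital investment
# budget committed to the mitigation and adaptation actions, by level.
# Levels 3-6 are hypothetical placeholders for this sketch.
THRESHOLDS = {1: 1.0, 2: 10.0, 3: 25.0, 4: 40.0, 5: 60.0, 6: 80.0, 7: 100.0}


def threshold_level(measured_pct: float) -> int:
    """Return the highest threshold level whose boundary the measurement meets."""
    levels = sorted(THRESHOLDS)                       # [1, 2, ..., 7]
    bounds = [THRESHOLDS[lv] for lv in levels]        # ascending boundaries
    idx = bisect.bisect_right(bounds, measured_pct)   # count of boundaries met
    return levels[max(idx - 1, 0)]


print(threshold_level(0.5))    # below the level 1 boundary -> 1
print(threshold_level(12.0))   # meets level 2 but not level 3 -> 2
print(threshold_level(100.0))  # meets the level 7 boundary -> 7
```

For indicators where lower measured values indicate better performance, such as Input as Percent Total Available Capacity, the boundary ordering would simply be reversed.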
The following image (EMAS, 2017) documents the benefits that European companies have experienced from adopting similar types of financial reporting systems.

Given that the public sector also makes capital investments, collects annual revenues, incurs operating costs, manages workforces, owns assets with market values, and strives to deliver services using more productive business practices, a similar set of materiality impact indicators can be adapted to measure Social Risks to Communities. The major difference is to account for the public sector’s primary responsibility for increasing the public’s returns from the services generated by public goods. As the USDOI (2015) puts it, “[federal] investments should maximize the present value of net public benefits”. In the context of the ecosystem services generated by natural resources capital stocks, PR&G (2014) defines public benefits and costs as “Ecosystems provide services to people. Thus, Federal investment impacts on the environment or ecosystem may be understood in terms of changes in service flows. The process of identifying, evaluating, and comparing these changes provides a useful organizing framework to produce a complete accounting [i.e. this RCA Framework]. Reduced service flows over time amount to costs, and increased services flows over time amount to benefits.”

The following definition and image (NESP, Section 3, 2017) demonstrate the use of Benefit-Relevant Indicators and Causal Chains for measuring the public returns, or benefits, generated by public services (i.e. in this case, the ecosystem services generated by natural resources capital stocks). The same principles hold for any public service generated by any public capital stock. Note that the “performance objective” mentioned in the image’s caption coincides with the expected performance of the Mitigation and Adaptation Actions that are applied in this RCA Framework. This reference uses the terms “causal chains”, “social impact pathways”, “results chains”, “exposure pathways”, and “theory of change” interchangeably (i.e. the Indicators change depending on the term and the purpose of the assessment). “Benefit-relevant indicators (BRIs) are measurable indicators that capture this connection by considering whether there is demand for the service, how much it is used (for use values) or enjoyed/valued (for nonuse values), and whether the site provides the access necessary for people to benefit from the service, among other considerations. An ecological measure can become a BRI if it is tied directly and causally to something important to people, e.g., the presence of bald eagles, which are clearly identified as important to the American public.”

Importantly, the UN-SETAC (2016) reference in Appendix C, Example 3, refers to this type of Indicator valuation technique as being based on “instrumentally valued systems” that derive their benefits from their utility to humans. In contrast, “intrinsically valued systems” derive Indicator benefits from their existence. “Culturally valued systems” derive value to humans based on aesthetic, artistic, recreational, or spiritual qualities. For example, the following image shows how some international organizations (IPBES, 2016a) relate ecosystem services to human benefits based on instrumental, intrinsic, and cultural values. Note the implied overall performance objective of public services: improve human quality of life.

The 1st of the following 3 images (Santos-Martín, 2016) demonstrates the use of ecosystem valuation methods to measure the benefits arising from these provisioning, regulating/supporting, and cultural ecosystem services.
The 2nd and 3rd images illustrate the meta-analysis results derived from a review of those valuation techniques carried out throughout all of Spain. This reference provides good conceptual overviews of how these different valuation methods can be used to value ecosystem services. The IPBES (2016) references cover alternative valuation methods in depth.

Although the Spanish study highlights the advantages of documenting the monetary benefits associated with ecosystem services, most of the comprehensive references used in this Appendix (i.e. IPBES, EEA, OECD, IPCC, USEPA, and USGCRP) discuss the importance of using a mix of quantitative and qualitative measures of performance in assessing the value of public services. For example, OECD (2015) describes this mix in the following statement. “[Climate change adaptation indicator sets] may entail a mix of qualitative outcome indicators and quantitative process indicators. On their own, any category of indicator may not be enough. For instance, a [quantitative] process indicator specifying whether a policy framework has been developed does not shed light on whether the policy has been implemented and what the corresponding outcomes are. It is useful to complement this type of indicator with qualitative indicators to assess how the policy may have contributed to changes observed.”

In the context of Monitoring, Reporting, and Evaluation (MRE, or M&E) indicator systems, the EEA (2015) describes this mix in the following statement. The EPA Indicator Threshold system displayed throughout this Appendix is an example of a “mixed methods” approach. “A mixed methods approach to MRE makes use of multiple sources of information and combines both the quantitative and qualitative methods (for example using a range of indicators, alongside stakeholder perspectives gained through self-assessments, surveys and consultations with experts). This allows for more effective triangulation of information gathered through MRE processes as different data sources can be checked against each other to ensure that the overall narrative of adaptation progress is robust, consistent and contextualised.”

In the context of this RCA Framework, IITA and COSA (2016 in Appendix C, Example 1) explain the relation between “results chains”, “causal chains”, “theory of change”, or “social impact pathways”, and mixed methods assessment techniques: “Explanatory mixed methods use a structured qualitative investigation to determine if a chain of causation, consistent with the theory of change [or social impact pathway], was implemented and could have plausibly explained changes in performance pointed to by the quantitative evidence.”

Social scientists, business managers, public sector analysts, and financial accountants have spent decades developing similar Indicators and Indicator Threshold systems for measuring costs, benefits, productivity, tradeoffs, and performance. The following image (EPA, 2016) demonstrates the use of Indicator Thresholds for socioeconomic variables, which include Social Risks. In this example, several of these indicators measure input use productivity, such as energy consumption per capita or BTU use per dollar. The following images (EEA, 2017) demonstrate the use of Indicator Thresholds for socioeconomic variables, which include ecosystem services and materiality impacts (top image) and socioeconomic scenarios (bottom image).
In relation to this RCA Framework, the top image’s Level 1 = Stressors, Level 2 = Threshold levels, Level 3 = Ecosystem services, and Level 4 = Impacts/sensitivities. The second image includes socioeconomic scenarios, specifically SSP3 and SSP4, which have become more pronounced, and worrisome, in some countries in recent years.

The following image (USGCRP, 2016) uses geographic vulnerability mapping to develop a Composite Heat Vulnerability Index that identifies communities vulnerable to heat waves. The authors describe the public sector performance advantages offered by this technique as follows: “By linking together census data, data on the determinants of health (social, environmental, preexisting health conditions), measures of adaptive capacity (such as health care access), and climate data, GIS mapping helps identify and position resources for at-risk populations”. In this example, public sector performance is enhanced by using the Indicator, Vulnerability Index, to target where public funding should be directed. In the context of Social Performance measurement, the authors mention that these approaches have been extended to measure the actual health outcomes associated with the public expenditures.

The USGCRP (2016) authors use the following image to demonstrate the modeled relationship (i.e. a dose-response mathematical relation) between urban deaths and temperature increases. Continuing with the Georgia heat stress example, a logical extension of this technique is to develop Indicator Thresholds that define cost, benefit, productivity, tradeoff, and performance levels for reducing the risks of increased deaths from increased temperatures, as illustrated in Section B. Implementation Steps, Community Example 1 (i.e. High Resiliency Score = per capita cost to reduce deaths by 90% from a 1.5 degree temperature increase = $1; Low Resiliency Score = per capita cost to reduce deaths by 100% from a 1.5 degree temperature increase = $1 million).

The desired outcome from the applied use of this framework is for private companies and public entities to comply with consistent resource conservation accounting frameworks, standards, indicator threshold systems, risk and uncertainty measurements, time horizons, reporting requirements, and effective risk reduction actions (i.e. the KPMG (2016) advice to “align and harmonize” these Indicator systems). The social risks of bad actor behavior, such as air pollution costs caused by managing emissions badly, are measured and reported using the same set of metrics, whether the bad actor is a local government utility, corrupt politician, or local factory.

F. Indicator Thresholds and Work Breakdown Structure

The following stylized Private and Public Performance Indicator Threshold tables illustrate how each of the WBS Risks might be measured. Section C’s tools introduce the actual TEXT datasets used to apply this framework. The same materiality indicators are used in both systems, but the threshold levels differ, with the focus switching from private sector risks in the top table to public sector risks in the bottom table. The generic indicators shown in the Indicator Threshold tables are refined by social networks when applied to the specific risks listed in this WBS for companies and communities. For example, rather than using the generic indicator, cost per unit output, a refined indicator might be ‘per capita cost per unit renewable energy production’.
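As a rough illustration of how such a stylized table might travel as a TEXT dataset, the following minimal sketch defines two refined indicators, one private and one public, and parses them with the standard library. The column layout, names, and values are hypothetical assumptions for this sketch, not the actual TEXT dataset schema used by the DevTreks tools.

```python
# Minimal sketch of carrying a stylized Indicator Threshold table as a TEXT
# (CSV) dataset. Columns, names, and values are illustrative only and are not
# the TEXT dataset schema used by the DevTreks tools.
import csv
import io

TEXT_DATASET = """\
sector,wbs_risk,indicator,threshold1,threshold2,threshold7
private,emissions,cost per unit emission reduction (% of operating costs),1,10,100
public,renewable energy,per capita cost per unit renewable energy production (% of budget),1,10,100
"""

rows = list(csv.DictReader(io.StringIO(TEXT_DATASET)))
for row in rows:
    print(f"{row['sector']:>7} | {row['wbs_risk']:<17} | {row['indicator']} | "
          f"levels 1/2/7 = {row['threshold1']}/{row['threshold2']}/{row['threshold7']}")
```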
The following WBS illustrates typical Social and Private Risk and Indicator elements used in this Framework. This WBS illustrates how tradeoffs between public sector-related disclosures and private company-related disclosures can be understood and applied more easily. The WBS helps public entities and private companies to work in tandem, using their knowledge of risks and tradeoffs to advance each other’s goals. This WBS can be downloaded from the Performance Analysis tutorial, but refer to Footnote 11.

The KPMG (2016) reference points out that the majority of international mandatory reporting requirements are coming from governments. These mandatory reports provide logical starting places for defining additional WBS Risks, Indicators, and Thresholds. For example, Appendix 3 in the TFCRFD (2016) reference (see the Select Disclosures - Governments image in Section B) shows that Australia requires corporations to report compliance with specific thresholds for GHG emissions. The EEA (2017) points out the importance of developing Indicator systems that have widespread acceptance, similar to a WBS, so that comparisons can be made between countries, industries, and companies. As an example of the importance of such indicators, they cite the UN’s (2016) development of a uniform set of Disaster Risk Reduction (DRR) Indicators that member countries have agreed to use for reporting the results of disasters. Many of those Indicators relate directly to private sector materiality indicators, such as economic losses associated with property, employee, and business disruption. The next section of this Appendix, Public and Private Sector Damage and Loss from Disasters, further discusses the role of climate change-related disaster risk information in public and private sector financial reporting and risk management.

The KPMG (2016) reference also points out that the UN’s Sustainable Development Goals (SDGs) (UNSD, 2016) contain a specific target (12.6) for private sector companies to integrate sustainability [Indicators] into their reporting cycles. The social and economic development nature of those goals means that the SDG Indicators, as well as the Sendai DRR Indicators, are, in effect, materiality Indicators. Although targeted towards developing countries, many of those goals and Indicators, such as the targets for poverty reduction, can be adjusted to apply to most public sector governments. In addition, many of these targets, such as the targets for environmental improvement, can be adjusted to apply to most private sector companies. Given the large number of reporting instruments found throughout the world (and throughout the references), KPMG (2016) recommends: “Alignment and harmonization must be a key goal for governments, market regulators, stock exchanges, industry associations, standard setters and all those responsible for developing reporting instruments.” This RCA WBS demonstrates how to refine Indicators, such as the SDGs, so that they can be applied, or “aligned and harmonized”, for both public and private sector reporting. Appendices B and C will demonstrate “reporting instruments” that integrate widely accepted Indicator systems, such as the UN’s SDGs and climate change-related Sendai DRR Indicators, into this framework’s WBS, tools, and reports.

The following image (USGCRP, 2016) presents a concrete example of how several of the capitals used in this WBS interact and are applied in resource conservation assessment tools. In the context of this reference, the image’s “Climate Drivers” are replaced by “Corporate and Public Entity Drivers [that impact public capital stocks]”.
The outcome, Improved Physical Health, is replaced by the more general “Improved Quality of Life”. Unlike the USGCRP Health Outcome Indicators, appropriate Social Performance Indicators for this RCA Framework must relate more transparently to benefit and cost metrics for impacts to human capital stocks, such as Cost per Quality Adjusted Life Year (QALY) and Change in Disability Adjusted Life Years (DALYs) per Change in Stock Services. The following images (Kristensen, 2004) demonstrate the linkages between indicators for the capitals. The EEA Conceptual Framework for Ecosystem Assessment introduced in Appendix A derives from this framework.

G. Public and Private Sector Damage and Loss from Disasters

Several international efforts are underway to improve how ecosystem service assessments, or, in the context of this reference, public service assessments, can be used to mitigate and adapt to climate change. The EEA (2017) demonstrates the use of indicators, including ecosystem service indicators, to measure the impacts of climate change across the EU. They stress the importance of improving the “knowledge bank” of climate change-related disasters and losses throughout the EU, noting that “economic costs can potentially be high, even for modest levels of climate change”. They find “Climate-related extreme events accounted for almost EUR 400 billion of economic losses in the EEA member countries over the period 1980–2013. This accounts for 82% of the total reported losses due to extreme events over this period, […]. Policies and actions would be facilitated by better collection of data concerning the economic, social and environmental impacts of weather and climate-related extremes.” The main body of this reference relates these public sector economic, social, and environmental impacts directly to private sector materiality indicators, including “ESG Factors”.

The United Nations Environment Programme (2016) reference introduces 5 case studies that demonstrate how ecosystem based assessment approaches also can be used to assess the damages and losses to humans caused by climate change and why all 7 public capital stocks must be considered in the assessments. The stated purpose of their reference is to “[try] to advance understanding of climatic stressor effects on ecosystems and possible correlations and implications for societal losses and damages.” Their approach, further defined in the following statement, helps to explain why Appendices B and C introduce Social Performance Measures and Examples that demonstrate disaster loss assessment. “The case studies show that causal links between climate change and a specific event, with subsequent loss and damage, are often complicated. Oversimplification must be avoided and the role of different factors, such as governance or management of natural resources [i.e. institutional capital], should be explored further. For example, lack of investment in water related infrastructure [i.e. physical capital], improved agricultural technology [i.e. economic capital], or health care services [i.e. human capital] also influences the risk of loss and damage.”

The International Recovery Platform (2016) discusses why private sector companies have, in the past, depended on the public sector to make information accessible to them about the risks they face from climate change-related disasters: they simply didn’t have the capacity to independently assess these risks.
They cite research that has concluded “accountability among the public and private actors requires transparent and accessible disaster risk information”. Their recommended solutions include government support for data platforms for collecting and sharing disaster risk information (i.e. see the last section of this Appendix). In addition, this reference begins to show private sector companies how to independently assess these risks to help them carry out better business continuity plans (BCP) (i.e. so that their businesses don’t fail and jobs disappear). The authors use the following statement to describe the advantages of this approach. “Through the development of a BCP, businesses will have analyzed the manner in which identified threats stand to impact their business processes, their people, their facilities and property, and their data. Using the knowledge gained through this exercise, they become better equipped to identify suitable risk reduction options or mechanisms to work around the problems that arise.”

H. Public and Private Sector Monitoring and Evaluation (M&E)

DevTreks employs software objects, or Base Elements, in its applied, online resource conservation tools. Section F, Indicator Thresholds and Work Breakdown Structure, explained how to use Work Breakdown Structures to classify and communicate Resource Conservation Accounting Disclosures, or Indicators. Those Indicators are calculated and analyzed using calculators and analyzers linked to these base elements. The Work Breakdown Structure tutorial explains how to classify and communicate information contained in the base elements. For example, the M&E calculators can be linked to all base elements, including Outcomes and Outputs. In the context of public sector performance reporting, a public sector environmental agency might use the U.S. Department of Energy’s WBS to classify Outcome and Output base elements associated with mitigation actions designed to reduce energy consumption. The previous section’s Social and Private Risk WBS and Indicator Threshold tables can then be used to report on the performance of the mitigation and adaptation actions.

The OECD (2015) provides an international perspective about how countries are using monitoring and evaluation (M&E) “for the twin objectives of learning and accountability” when implementing public sector policies, programs, and projects, particularly related to climate change. As they put it, “Learning aims to enhance stakeholders’ understanding of the country’s climate change risks and vulnerabilities that in turn can help to identify approaches that are effective in reducing those risks. Accountability aims to ensure that resources allocated for adaptation are effective in achieving set objectives.” The following image (EEA, 2015) illustrates how most EU members have developed, or are developing, Indicator systems that can be used to monitor, report, and evaluate (MRE or M&E) climate change adaptation actions. Appendices B and C will begin to demonstrate how to follow the KPMG (2016) advice to “align and harmonize” these Indicator systems. EEA (2015) cites individual countries that have engaged private sector stakeholders to participate in these MRE systems. Several of those private sector stakeholders have expressed their desire for better risk management information and tools that permit them to take independent action to mitigate and adapt to these risks (i.e. so that their businesses don’t fail and jobs disappear). For example, the following image (EMAS, 2017) shows how firms use M&E with the EMAS system of environmental reporting.
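The following minimal sketch illustrates the kind of planned-versus-actual comparison an M&E calculation might make for the energy consumption example above, for the twin objectives of learning and accountability. The element and field names are hypothetical illustrations, not the actual DevTreks base element schema or calculators.

```python
# Minimal sketch of an M&E comparison for an Outcome element associated with
# an energy consumption mitigation action. Names are hypothetical; not the
# DevTreks base element schema.
from dataclasses import dataclass


@dataclass
class OutcomeElement:
    wbs_code: str          # e.g. a U.S. Department of Energy WBS code
    label: str
    planned: float         # planned annual energy reduction (MWh)
    actual: float          # measured annual energy reduction (MWh)

    def variance(self) -> float:
        """Actual minus planned; negative values flag underperformance."""
        return self.actual - self.planned

    def attainment(self) -> float:
        """Share of the planned reduction achieved (an accountability metric)."""
        return self.actual / self.planned if self.planned else 0.0


outcome = OutcomeElement("1.2.3", "building retrofit program", planned=500.0, actual=410.0)
print(f"{outcome.label}: variance = {outcome.variance():.0f} MWh, "
      f"attainment = {outcome.attainment():.0%}")
```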
I. Decision Making Processes and Valuations

The IPBES (2016a, chapter 5) reference introduces several decision contexts that can be used to assess ecosystem and public services. The CTA and CTAP tutorials introduce examples demonstrating how to implement resource conservation decision support frameworks with communities and organizations. For example, the CTA reference includes a 10 step process used by the US NASA to assess project risks. The CTAP reference explains that subalgorithm 12, Resiliency Index (RI), uses the following 5 step process to assess urban resiliency to climate change: 1) Stakeholder Participation, 2) Stakeholder Consultations, 3) Initial Indicator Development, 4) Validation of the RI in Workshops, and 5) Participatory Evaluation of the RI. That reference also points out that beneficiaries take these approaches seriously because they are experiencing the heightened effects of climate change-related disasters, such as drought and typhoons, first hand. In the context of this reference, many of the beneficiaries have extensive experience dealing with deficient public sector institutions and private sector companies.

Because of the danger of attempting to measure everything while actually reducing risks for nothing, this reference recommends an adaptation of the Resiliency Index’s 5 step process for this framework, similar to the RAND (2016) use of a limited number of indicators for measuring priority project risks. For example, the following image (OECD, 2015) demonstrates how Australia prioritizes climate change risks. The prioritized social risks, or “drivers of change”, and most important indicators, then follow the decision-making processes introduced next. Footnote 11 points out that, although the author has practical and extensive experience applying a similar framework, modern times require modern online social networks to carry this out.

Federal agencies in the United States must follow uniform principles, requirements, and guidelines (PR&G) when planning sizeable projects that can impact water resource stocks (US CEQ, 2013; US CEQ, 2014; USDOI, 2015; Deal et al, 2017). PR&G uses the following general planning guidelines for evaluating changes to the services generated by water and land resources in the U.S.
a. Scope the Level of Analysis
b. Define the Purpose and Need
c. Formulate a Range of Alternatives
d. Project Future Conditions of the Study Area and Associated Impacts on the Affected Environment
e. Evaluate Alternatives
f. Display the effects/comparison of alternatives

The following image (NESP, Section 3, 2017) summarizes how this planning process works in the context of protecting and improving ecosystem services. This reference substitutes the more general term, public services, for ecosystem services. The Natural Capital Coalition (2016) introduces the following 9 step process that private sector firms can use to improve how they make decisions related to their management of natural capital and their impact drivers and dependencies on biodiversity. Section B in this Appendix explains how this process relates directly to ecosystem assessment decision support processes used by federal and international agencies.
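To illustrate how PR&G steps e and f might be expressed in the terms of this framework, the following minimal sketch evaluates a small set of alternatives against the 4 hierarchical levels and displays the comparison. The alternatives, scores, and the simple averaging rule are hypothetical illustrations, not part of PR&G or the DevTreks tools.

```python
# Minimal sketch of evaluating and displaying a comparison of alternatives
# (PR&G steps e and f) using 1-4 Indicator Threshold scores for the four
# hierarchical levels. Alternatives and scores are illustrative only.

LEVELS = ["Actions", "Conditions", "Services", "Impacts"]

alternatives = {
    "No Action":     {"Actions": 1, "Conditions": 1, "Services": 1, "Impacts": 1},
    "Alternative A": {"Actions": 3, "Conditions": 2, "Services": 2, "Impacts": 2},
    "Alternative B": {"Actions": 4, "Conditions": 3, "Services": 3, "Impacts": 3},
}

# Display the effects/comparison of alternatives as a simple table (step f).
print(f"{'Alternative':<14}" + "".join(f"{level:>12}" for level in LEVELS) + f"{'Average':>10}")
for name, scores in alternatives.items():
    average = sum(scores.values()) / len(scores)
    print(f"{name:<14}" + "".join(f"{scores[level]:>12}" for level in LEVELS) + f"{average:>10.2f}")
```

In practice, the unweighted average used here would be replaced by the stakeholder-weighted, pluralistic valuations discussed next, so that the comparison reflects whose values are being traded off.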
The USGCRP (2016) relates these types of decision making processes to stakeholder valuations with the following statement: “Understanding costs and benefits of different decisions requires understanding people’s preferences and developing ways to measure outcomes of those decisions relative to preferences. This “valuation” process is used to help rank alternative actions, illuminate tradeoffs, and enlighten public discourse.”

The following image (Jacobs et al, 2016) summarizes scientific efforts to use an “Integrated Valuation” approach, rather than historical single valuation techniques, to assign values to ecosystem services when making decisions that change resource stocks. Examples of single valuation techniques include step 07 in the previous image and the Santos-Martín (2016) images displayed in the Social Performance Indicator Threshold Overview section. The IPBES (2016b) reference uses the term “Pluralistic Valuation” (Chapter 5) in a similar manner. These decision making approaches recognize that multiple groups of stakeholders value ecosystem services differently, based largely on their perceptions of which public capital stock characteristics are important to them. Some groups place higher value on the indigenous knowledge pertaining to cultural capital stocks, others on the social equity associated with social capital stocks, while others believe that natural capital stocks have intrinsic values that escape measurement. The authors advocate these multi-stakeholder, multi-capital, multi-valuation approaches for assessing service value. They believe these approaches will lessen the severe conflict, inequity, and dissatisfied customers that can arise from making decisions and taking actions that don’t account for the tradeoffs and synergies needed to balance the interests of diverse stakeholders.

Chapter 6 in the IPBES (2016b) reference gives concrete examples of tradeoffs and synergies between ecosystem services, stakeholders, and mitigation and adaptation responses. The following image (Antonopoulos et al, 2016) illustrates tradeoffs between ecosystem services, species abundance, and land use intensity. In this example, provisioning services can be increased (i.e. food or timber production), but other services must be reduced, resulting in an overall decline of ecosystem service values, or, in the context of this reference, Social Performance Measures. One historical explanation for this tradeoff is that certain powerful stakeholders, and the social norms at the time, placed high value on economic production. An elite group of owners may have captured most provisioning service values because formal institutions that protected social equity were immature or compromised by corrupt politicians captured by special interest groups. Less powerful stakeholders, such as bird watchers or tourists [or in the case of the author, sunset watchers], placed higher value on enjoying diverse wildlife, but had much less social popularity and political influence. Ecosystem services, and any other public service, can’t be protected or improved properly without thoroughly addressing the economic, social, cultural, and institutional “drivers” that explain service management.

La Notte et al (2017) summarize prominent ambiguities in how researchers define the terms used in these types of ecosystem frameworks and their applied WBSs, including ecosystem structure, processes, functions, services, and benefits.
They point out that such ambiguities can result in double counting the services generated by ecosystems. For example, if provisioning services include crop biomass, and a regulating service includes pollination, the resultant valuation assessment can easily double count the crop production benefits generated by the two service flows. They employ a systems ecology perspective to propose the following definitions for the terms used in ecosystem assessments. Olander et al (2015) discuss how to use experts trained in valuation assessment to avoid double counting when using "ecosystem service causal chains", which, in the context of this reference, include "social impact pathways" and "results chains". The Indicator Threshold and Scoring systems used in Appendices B and C must employ Indicators in a way that avoids double counting of services and subsequent valuations. They must also evolve as these scientific issues become resolved (11*).

J. Communicate Results

The following images (EPA 2016, RAND 2016) demonstrate how the results of risk reduction assessments can be communicated to decision makers. A third dimension, or additional point estimates, that distinguishes the difference in importance to a company versus a community can identify tradeoffs and enhance communication further. The following 2 images demonstrate how the IPCC recommends communicating information about the risk and uncertainty of resource stock measurements. The following image (IPBES, 2016a) demonstrates how the IPBES recommends using these types of qualitative rankings to further categorize "the qualitative communication of confidence".

GIS

This reference treats GIS mapping as a useful tool for communicating the results of this framework (and possibly the most useful communication aid). But, unlike most of the background references, it doesn't treat GIS as an essential tool, because GIS adds further complexity to applying the framework and achieving results, and because not every capital stock assessment needs geographic analysis. Specifically:

1. Social Networks, Clubs, and TEXT and HTML First (14*). The applied tools introduced in Appendix B only require access to a web browser, the ability to use URIs and build TEXT files, and the ability to "click" on, or touch, web pages. Two of the most important principles explained in Section A involve "transparent reporting" and "adoption of applied online tools". Those principles can be achieved by using URIs to make all of the input and output data available as TEXT datasets, with results that get transparently displayed as standard, 2-dimensional HTML.

2. Secondary Data Second. Most GIS mapping simplifies reality, often shakily, by relying on readily available secondary data, such as census data. Most of the serious data needed to achieve results in this framework, such as standardized public service data, multi-stakeholder valuations, or standardized mitigation and adaptation impact data, isn't available. It's not clear that the satellites, or the public censuses, relied on by organizations such as the EEA can deeply change the availability of the needed data (i.e. for all 7 capitals). That probably explains the need for Footnote 11, the likelihood of Footnote 16, and most of this reference.

3. GIS Data Third. GIS requires the skills of GIS specialists who have access to specialized GIS data. It represents another black box hurdle that professionals who may be capable of applying this framework should not have to immediately overcome.
If the Locational Indexes introduced in Section C work, even primitively, that data can be "upgraded" to GIS data in future releases (i.e. refer to Appendix C in the Technology Assessment 2 reference).

K. Applied Resource Conservation Accounting (RCA) Algorithms and Platforms (12* and 16*)

The CTA reference explains that the primary characteristic of CTAs is their use of online algorithms to measure the risk and uncertainty of Indicators. The Indicator threshold and boundary metrics, introduced in Section D, account for risk and uncertainty, thereby supporting Section A's principle, Reports the Risk and Uncertainty of Stock Balances. Additional CTA techniques, including scenario analysis, spatial and temporal scale trends, multi-stakeholder service valuations, vulnerability assessment, and more advanced mathematical algorithms, are needed in order to use CTAs as an overall digital decision support platform for implementing this RCA Framework.

The following image (IPBES, 2016a) introduces an international ecosystem assessment framework currently being developed as such a digital platform. The image shows that more comprehensive decision support platforms integrate natural resource/human system modeling (i.e. Multiple System Components), multiple scenario types, multiple spatial scales, and multiple temporal scales. Although the image does not make this transparent, the framework's overall performance objective, Good Quality of Life, must be defined in terms of the values of diverse stakeholder groups. In relation to this RCA Framework, that performance objective is an integral part of the decision making processes, introduced in Section H, that apply the framework.

Appendix B. Social Performance Measures, introduces new or upgraded algorithms that apply this framework. These algorithms also begin to demonstrate how to link the models, stakeholder valuations, scenario types, spatial scales, and temporal scales needed when using CTAs as a digital decision support platform, similar to the linkages displayed in the following image (IPBES, 2016a). Additional algorithms will be introduced in future releases that use much more technology-intensive approaches, such as recent Artificial Intelligence approaches (15*).

L. Mainstreamed Resource Stock Assessment Platforms

The OECD (2015) inadvertently makes the following point about achieving economies of scale in the collection, management, and delivery of public and private sector performance data: "The data constraints countries face in the context of [climate change] adaptation are similar to the constraints they face when monitoring and evaluating other development priorities. Lessons learned from development practice can, therefore, inform development support targeted at enhancing data availability for [climate change] adaptation." UNEP DTU (2016) points out that Monitoring and Evaluation of climate change adaptation, like many Indicator Assessment approaches, is too complex and expensive unless the approaches are "mainstreamed". By mainstreaming, they mean applied in multi-stock, multi-sectoral, multi-scale, multi-valuation, integrated, digital platforms. In the context of this reference, they also mean multi-model, or "multi-algorithm".
In practice, that means this RCA Framework must demonstrate, in Appendices B and C, practical ways to "align and harmonize" (KPMG, 2016; UN-SETAC, 2016), or "mainstream", the immense number of Indicators, Indicator systems, and stock assessment techniques found in the references. The World Health Organization (O'Neill et al, 2016) and UNDP (2016a, 2016b) demonstrate the process. Biologists, climate scientists, ecologists, malnutrition M&E experts, disaster risk assessors, stock valuation experts, investment advisors, company CFOs, sustainability officers, auditors, and public sector managers must find "mainstreamed platforms" to jump on board, or each group will continue reinventing the same wheel. These professions should develop Indicators and "algorithms" customized for their purposes, but they should not develop thousands of competing, single sector, single capital, single country, single valuation, single model, expensive IT platforms for assessing resource stocks. Just because funding sources outside of Silicon Valley may not understand "mainstreamed platforms" very well, and therefore bestow large sums of money to build "special interest group platforms", that's not an excuse for harmful IT policy (12*). The risk of deterioration to public services may lead to "unwise" citizen quality of life –with a downward spiral that benefits no one.

This reference demonstrates how to develop generic tools and platforms that use widely accepted Indicator systems to protect and improve generic public capital stocks and services by using generic public and private sector performance assessments. Mainstreaming, as applied in generic, online knowledge platforms, allows economies of scale and scope to be realized, public service risk reductions to be widespread, and companies to be admired or admonished.

Appendix B. Social Performance Measures

The following metrics provide proof of corporate and public entity claims made in financial reports about the risk and uncertainty associated with their impacts on their community's human capital, physical capital, economic capital, natural resources capital, social capital, cultural capital, and institutional capital. In order to support the Section B principle, Supports Transparent Corporate and Public Sector Materiality Reporting, the story behind these Measures must be told thoroughly and well. These types of measures assume that social networks (even if currently nonexistent) share the development burden because they understand what "unwise" citizen quality of life means for them and their descendants and what economies of scale and scope imply for risk reductions (11*).

A. Total Social Performance Score for Companies and Communities

All Indicator Threshold systems must integrate widely accepted Risk Reduction Indicator systems, such as the UN's SDGs and climate change-related Sendai DRR Indicators or the EU's EMAS Indicators, with this framework's RCA WBS. Individual Indicators are customized for specific sectors, specific capital stock assessments, and public versus private sector purposes, but they must be applied using the same dataset structures and algorithms. These systems must allow the Indicators to be used in a wide assortment of assessments, including RCAs, M&Es, DRRs, and S-LCAs. If not defined carefully, the Indicators used to define the "impact pathways" in these systems can easily double count the benefits, or Impacts.
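As a simple illustration of the double counting issue, the following minimal sketch (hypothetical Indicator labels and benefit streams, not an actual DevTreks algorithm) tags each Indicator with the underlying benefit streams it values and flags any stream claimed by more than one Indicator.

# Minimal sketch: flag potential double counting when two Indicators in an
# impact pathway claim value from the same underlying benefit stream. The
# Indicator labels and benefit streams are hypothetical and illustrative only.
from collections import defaultdict

# Hypothetical mapping of Indicators to the benefit streams they value.
indicator_benefit_streams = {
    "SF1A": {"crop production"},            # provisioning service (crop biomass)
    "SF1B": {"crop production", "honey"},   # regulating service (pollination)
    "SF1C": {"flood protection"},           # regulating service (water retention)
}

def find_double_counts(mapping: dict) -> dict:
    """Return benefit streams claimed by more than one Indicator."""
    claims = defaultdict(list)
    for indicator, streams in mapping.items():
        for stream in streams:
            claims[stream].append(indicator)
    return {stream: labels for stream, labels in claims.items() if len(labels) > 1}

print(find_double_counts(indicator_benefit_streams))
# {'crop production': ['SF1A', 'SF1B']}

A check of this kind is only a bookkeeping aid; deciding which Indicator legitimately carries a shared benefit stream still requires the scientific judgment discussed above.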
Example 1 will illustrate how the value of biodiversity improvements can double count crop production service values arising from mitigating the Freshwater Supply and Pollination risks, or "stressors". The best way to avoid double counting is to fully understand the underlying science and then "proof" these systems with extensive field work (i.e. which this nonconventional NGO's resources don't allow).

Generic Threshold Levels. Up to 7 levels of Indicator Thresholds are defined in a generic way that allows them to be used for all Indicators within an Indicator Threshold system, or within the system's Indicator Factors. The following examples employ the RAND scoring technique, which converts qualitative indicators to quantitative high and low values that will be normalized when used in scoring systems.

Indicator Threshold Systems. Indicator Threshold TEXT datasets must be defined in a general enough way to allow them to be used for systems of Indicators applicable in a wide array of quality of life scenarios. Characteristics of these systems include:

1. Social Networks and Clubs. Objective, science-based, online, social networks build and maintain these Threshold Systems for priority resource stocks. The initial Indicators used in these systems should correspond directly to established Indicator systems, such as the SDG, Sendai DRR, and ISO 14001. Individual clubs customize them when applied for their own purposes.

2. Customer-focused Thresholds. The degree of effort spent on developing Threshold systems should be proportionate to the issues being addressed. RAND (2016) demonstrates how some companies and organizations develop very applied and prioritized systems with a small number of Indicators. At the other end of the scale, USEPA (2016) demonstrates how these systems may require hundreds of Indicators to deal with major issues with severe consequences, such as increasing the resiliency of large cities to climate change.

3. Driving Factor, or Impact Pathway, Indexes. The driving factors, or Indexes, used with each Indicator derive from the 7 capitals.

4. 4 Hierarchical Indicator Levels Rooted in the Capitals. Separate Indicator Thresholds must be defined for "impact pathways", "causal chains", "exposure pathways", or "results chains" rooted in the capitals, whether this reference's 7 capitals or the 3 capitals employed by many of the third party references (i.e. Environmental, Socioeconomic, and Corporate Governance). In the example of social impact pathways, the 4 levels are translated into Actions (AF), Conditions (CF), Services (SF), and Impacts (IF) in specific industries (i.e. coffee production).

5. Location-based Thresholds. Separate Thresholds can be defined for locations or ecosystems (i.e. locationid column).

6. Multiple Rating Systems. Multiple threshold ratings can be used within these systems (i.e. by inserting new rows with the new rating system above the appropriate factors).

7. Public and Private Sector Uses. Indicators are defined in general terms that can be used for private and public sector purposes.

8. Quantitative Relative Scores. Their corresponding scoring systems will convert the qualitative Threshold values to quantitative scores. The Thresholds and Scoring systems must guard against double counting service values.

9. TEXT Datasets. The raw scoring data result is returned as comma-separated-value TEXT datasets. Be careful not to use any commas in the original Indicator and Index names. A minimal sketch of this type of dataset follows below.
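The following minimal sketch illustrates item 9. The column names and score values are hypothetical (they are not DevTreks' actual dataset schema); the sketch simply shows how a comma-separated-value TEXT dataset of Indicator scores might be read and turned into the kind of percent-change table, rendered as plain 2-dimensional HTML, described next.

# Minimal sketch: read a hypothetical Indicator score TEXT (CSV) dataset and
# render a plain, 2-dimensional HTML table showing percent change between two
# trend periods. Column names and values are illustrative only.
import csv
import io

# Hypothetical CSV; note that Indicator and Index labels contain no commas.
TEXT_DATASET = """label,name,period1score,period2score
NC,Natural Capital Index,3.2,3.6
NCA,Freshwater Supply,2.8,3.1
AF1A,Irrigation Upgrade Action,2.5,3.4
"""

def percent_change(old: float, new: float) -> float:
    """Percent change between two period scores (undefined when old == 0)."""
    return (new - old) / old * 100.0

def build_html_table(text_dataset: str) -> str:
    """Parse the CSV TEXT dataset and return a plain HTML table."""
    rows = list(csv.DictReader(io.StringIO(text_dataset)))
    html = ["<table>",
            "<tr><th>Label</th><th>Name</th><th>Period 1</th>"
            "<th>Period 2</th><th>% Change</th></tr>"]
    for row in rows:
        p1 = float(row["period1score"])
        p2 = float(row["period2score"])
        html.append(
            "<tr><td>{label}</td><td>{name}</td><td>{p1:.1f}</td>"
            "<td>{p2:.1f}</td><td>{pc:+.1f}%</td></tr>".format(
                label=row["label"], name=row["name"], p1=p1, p2=p2,
                pc=percent_change(p1, p2)))
    html.append("</table>")
    return "\n".join(html)

if __name__ == "__main__":
    print(build_html_table(TEXT_DATASET))

Because the dataset stays plain TEXT and the output stays plain HTML, this workflow is consistent with the TEXT-and-HTML-first principle discussed in the GIS section.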
This TEXT file is then used to build whatever multimedia support is desired, such as tables showing percent change between periods, or EPA-style graphs displaying the areas of lowest urban resiliency. The following examples illustrate these characteristics. A critical part of these Thresholds is to understand and correctly identify the linkages and interactions in the social impact pathways or results chains. Automating these linkages will continue to be a future goal, or "release".

Indicators use the following labeling conventions. The references used in this tutorial suggest that the main limit introduced in this labeling system, the 26 letters of the alphabet, will not impose a serious constraint.

Total Risk Indexes must start with the hardcoded characters "TR" and contain no more than 3 characters.

Locational Indexes must contain no more than 2 characters. This reference recommends using the following labeling conventions: NC = Natural Capital, IC = Institutional Capital, HC = Human Capital, EC = Economic Capital, PC = Physical Capital, SC = Social Capital, CC = Cultural Capital. Large scale assessments may want to customize these labels for large scale issues.

Categorical Indexes must contain no more than 3 characters. This reference recommends using the following labeling conventions: NCA = Natural Capital Index1 … NCZ = Natural Capital Index26.

Indicators must contain at least 4 characters. This reference recommends using the following labeling conventions: AF1A = Action Indicator1 … AF1Z = Action Indicator26, CF1A = Condition Indicators, SF1A = Service Indicators, IF1A = Impact Indicators.

Version 2.1.2 moved all examples into Appendix C. Note how the UN SDG Indicators (Indicators such as 6.5.1) have been integrated into these systems. The "Action Indicators" (i.e. AF1A = Action Indicator 1) can be defined as SDG risks, as industry or company risks and activities, or as the "climate and ecosystem stressors" found in several of the ecosystem assessment techniques introduced in Appendix A. Although these thresholds are fictitious, it's possible that some scientific organization may have already established thresholds for the SDGs, DRRs, and other established systems. If not, a new "Resource Conservation Accounting Standards Board" may be needed (i.e. none of the existing standards organizations introduced in this Appendix use this type of risk assessment approach) (5*). Customers should not have to independently complete unique Threshold Systems when they use Social Performance Measures. In practice, these types of Thresholds, as well as their applied Scoring systems, are usually developed by interdisciplinary teams, such as online social networks that include biologists, ecologists, engineers, and social scientists. That helps to explain deficiencies in these tables (11*).

Indicator Scoring Algorithms. Algorithms convert Qualitative Threshold levels, as defined by a corresponding quantitative Threshold score, to quantitative scoring systems (i.e. similar to the RAND, Eurostat, OECD, FAO, and Technology Assessment 2 techniques). In this example, the quantitative thresholds defined in the previous Indicator Threshold tables are used to score each trend period. Algorithms use these values to run calculations and fill in the final scores, Index rows, and parent Indicator properties. The 4 level Indicator hierarchy used with the Scores offers flexibility in how Scores are defined. A minimal scoring sketch follows below, before the hierarchy's characteristics are described.
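The sketch below is illustrative only: it assumes hypothetical threshold levels, raw bounds, and an unweighted mean roll-up (DevTreks' actual scoring algorithms are more sophisticated). It shows how qualitative Threshold levels might be converted to quantitative high and low values, normalized to a common 1-to-5 ordinal scale in the RAND style, and rolled up from Indicators into their parent Categorical and Locational Index rows. The hierarchy those labels belong to is described next.

# Minimal sketch: convert qualitative Threshold levels to quantitative high and
# low values, normalize their midpoints to a common 1-to-5 ordinal scale, and
# roll Indicator scores up into hypothetical parent Categorical and Locational
# Indexes with an unweighted mean. Names, levels, and bounds are illustrative.
from statistics import mean

# Hypothetical qualitative Threshold levels mapped to (low, high) raw values.
THRESHOLD_BOUNDS = {
    "low": (0.0, 20.0),
    "moderate": (20.0, 60.0),
    "high": (60.0, 100.0),
}
RAW_MIN, RAW_MAX = 0.0, 100.0

def score_level(level: str) -> float:
    """Midpoint of the level's raw bounds, normalized to a 1-to-5 scale."""
    low, high = THRESHOLD_BOUNDS[level]
    midpoint = (low + high) / 2.0
    return 1.0 + (midpoint - RAW_MIN) * 4.0 / (RAW_MAX - RAW_MIN)

# Hypothetical 4 level hierarchy for one trend period: a Locational Index (NC)
# contains a Categorical Index (NCA), which contains Action, Condition,
# Service, and Impact Indicators with assigned qualitative Threshold levels.
hierarchy = {
    "NC": {
        "NCA": {"AF1A": "moderate", "CF1A": "low",
                "SF1A": "high", "IF1A": "moderate"},
    },
}

# Fill in Indicator scores, then the parent Index rows (unweighted means).
for loc_label, categories in hierarchy.items():
    category_scores = []
    for cat_label, indicators in categories.items():
        indicator_scores = [score_level(level) for level in indicators.values()]
        cat_score = mean(indicator_scores)
        category_scores.append(cat_score)
        print(f"{cat_label} Categorical Index score: {cat_score:.2f}")
    print(f"{loc_label} Locational Index score: {mean(category_scores):.2f}")

As the RAND quotation in item 5 below cautions, once scores are normalized to a common ordinal scale they support relative comparisons between risk areas rather than absolute, classical risk statements.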
The characteristics of the hierarchy used in the following table include (refer to the Technology Assessment 2 tutorial for further details about these 4 levels):

1. 4 Indicators and a Score. Using the example of social impact pathways, each base element uses 4 separate Indicators to score the RCA Framework: Indicator 1 = Actions, Indicator 2 = Conditions, Indicator 3 = Services, and Indicator 4 = Impacts. The Score is defined flexibly, such as actual impacts as a percentage of target impacts or actual impacts as a percentage of benchmark impacts.

2. Locational Indexes. These Indexes are the principal "drivers of change" that are being addressed, or fixed. Example 1 includes Natural Capital, Physical Capital, and Economic Capital as Locational Indexes. Example 2 includes Habitat Change, Climate Change, and Overexploitation as Locational Indexes. The Locational Indexes are based on the purpose of the capital stock assessment and can include issues such as Civil Rights and Gender Equality.

3. Categorical Indexes. The main factors, or explanatory factors, influencing the Locational Index are defined by multiple Categorical Indexes, such as Natural and Economic Capital in Example 1, and Economic Development and Land Use in Example 2. The Categorical Indexes come directly from the background Indicator Threshold systems. Multiple stocks from the 7 capitals can be, and usually are, used as explanatory factors, or Categorical Indexes. Although not demonstrated yet in this reference, sophisticated algorithms are available (i.e. see OECDb, 2008) for generating scores using this type of Indicator system.

4. Indicators. Each Categorical Index contains one or more Indicators used for scoring. Each separate Indicator can include 3 types of Indicators in separate 4 level hierarchies: benchmarks, targets, and actuals. Appendix A, Example 1 demonstrates how these Indicators are used for monitoring and evaluation, or "learning and accountability".

5. Quantitative Relative Scores. Their corresponding scoring systems will convert the qualitative Threshold values to quantitative scores. The Thresholds and Scoring systems must guard against double counting service values. RAND (2016) describes the use of these scores as follows: "In such an [normalized and weighted Indicator] analysis, absolute values are far less important than relative comparisons of those values. Such normalization allows one to assess where among the [] risk areas where [] leadership should have the most concerns and ultimately where resources could or should be allocated to mitigate risks and improve chances of mission success. In the classical risk assessment format, when risks are represented by the likelihood of some outcome occurring (e.g., 10 percent [chance] of five fatalities), absolute values are very important. Once these risk values have been normalized to a common ordinal scale (e.g., 1-to-5 scale), the values become less descriptive and therefore lack the actionable aspect of their classical counterpart." The last statement explains why these Threshold Systems will be supplemented with new quantitative and qualitative algorithms.

6. Optional M&E. If using this algorithm for M&E, Mitigation and Adaptation Alternatives must contain an underscore followed by at least 1 character. We suggest the following convention: "_A" … "_Z". These Indicators act as targets, and the initial Indicators serve as benchmarks. This reference recommends using the following convention for Indicators acting as actuals: "_AA" … "_AZ". A minimal sketch of these conventions follows below.
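To make the M&E conventions in item 6 concrete, the following minimal sketch (hypothetical labels and score values, not DevTreks' actual algorithm) pairs a benchmark Indicator with its target ("_A") and actual ("_AA") counterparts and computes the flexible Scores mentioned in item 1, such as actual impacts as a percentage of target or benchmark impacts.

# Minimal sketch: apply the optional M&E labeling conventions by pairing a
# benchmark Indicator (base label) with its target ("_A") and actual ("_AA")
# counterparts, then computing Scores such as actual as a percentage of the
# target and of the benchmark. Labels and score values are illustrative only.

# Hypothetical scores for one Impact Indicator (IF1A).
indicator_scores = {
    "IF1A": 2.0,      # benchmark (initial Indicator)
    "IF1A_A": 4.0,    # target (Mitigation and Adaptation Alternative)
    "IF1A_AA": 3.4,   # actual (monitored result)
}

def me_scores(scores: dict, base: str) -> dict:
    """Return actual-vs-target and actual-vs-benchmark percentages."""
    benchmark = scores[base]
    target = scores[base + "_A"]
    actual = scores[base + "_AA"]
    return {
        "actual_pct_of_target": actual / target * 100.0,
        "actual_pct_of_benchmark": actual / benchmark * 100.0,
    }

print(me_scores(indicator_scores, "IF1A"))
# {'actual_pct_of_target': 85.0, 'actual_pct_of_benchmark': 170.0}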
The examples in Appendix D document alternative, simpler ways to conduct M&E with this system.

7. Adaptive Efficiency. This reference acknowledges the difficulty, or social engineering hubris, of establishing a complete science that fully quantifies the socioeconomic factors influencing public services. An argument can be made that no one has fully achieved that yet. Nevertheless, most of the references, as well as the author's direct experience, suggest a) the importance of starting to establish a scientific basis for these measurements, b) that the learning, and increasing stock of knowledge, that takes place allows the science and supporting institutions to evolve, and c) that the collaborative process used to complete Social Performance Measures at least helps to identify "socially acceptable paths forward".

Appendix C. Examples (15*)

Appendix C has been moved to the sibling reference, Social Performance Analysis 2.

Appendix D. Definitions of Social Performance Terms

This Appendix defines the principal terms used in this reference. These informal definitions derive from the author's experience and current work in technology development. Readers are encouraged to think more deeply about these terms, and to produce their own tutorials, by referring to dictionaries, encyclopedias, and scientific publications (i.e. see the Glossaries and Annexes in the UN 2014, TFCRFD 2016, GRI Standards Glossaries, EU MAES 2016, US CEQ 2013, and IPBES 2016 references).

Public Goods or Public Capital Stocks. Any asset that is not privately owned and generates services demanded by people. Private ownership must be established through widely enforced laws (8*). The laws get established by a community's formal, rather than informal, institutions (i.e. courts). The formal institutions must not be corrupt, in the process of being corrupted, or in a serious state of decline.

Public Services. Public services, such as ecosystem services, are the direct or indirect contributions, including economic, environmental, and social effects, which public capital stocks, such as ecosystems, make to the environment and human populations.

Capital Improvements. Increases in the quantity or quality of human capital, physical capital, economic capital, natural resources capital, social capital, cultural capital, and institutional capital.

Socially Sound. The social benefits of firms and public entities outweigh their social costs. Social Performance Measures are used to rate the social productivity and performance of firms and public entities. In the context of DevTreks, social soundness has to be defined in terms of costs, benefits, productivity, tradeoffs, and performance. When benefits can't be monetized, they should be expressed in alternative ways –such as cost per QALY (quality adjusted life year) or per capita cost per unit decrease in emissions.

The Community. This is a general term used to loosely define the boundaries where Social Performance measurements take place. In the natural resources conservation area, these boundaries can vary from a local watershed to an entire ecosystem. In the case of climate change, "the community" is defined as the planet. In the context of this reference, this term reflects the reality that most resource conservation planning occurs within boundaries defined by political jurisdictions, including cities, counties, and states. In the case of a private sector firm, it's often defined in terms of market segments of consumers.
In the former case, the public sector entity carrying out a Resource Conservation Value Accounting assessment is not a specific government agency –it's a team of specialists working for various government agencies. In the latter case, the public sector entity carrying out a Resource Conservation Accounting assessment can be a specific government entity, such as an environmental protection agency. It can also be a political entity –a mayor's task force or a governor's establishment of a conservation technology assessment district.

Good Actors. Firms that care deeply about investor, customer, and informed citizen demand for socially sound behavior. It's not unusual for these firms to go out of their way to attain 3rd party confirmation of the social soundness of their actions, goods, and services.

Bad Actors. Firms that don't privately care about investor, customer, or informed citizen demand for socially sound behavior. It's not unusual for these firms to use their marketing departments and political lobbyists to try to mask the social unsoundness of their actions, goods, and services.

Good and Bad Actor Groups. Good and bad actors need not be individual firms or public sector entities. They can be defined as groups of good or bad actors by defining general characteristics that fit multiple firms or public entities. Their characteristics are defined in a way that allows citizens to easily identify the individual actors. Unlike many existing resource conservation reporting systems, a major reporting goal is to transparently "point fingers" at these actors.

Informed Citizens. Voters who doubt the veracity of tabloid, television, and Internet reporting and take concrete action to educate themselves to discern good from bad, real from fake, and important from contrived. Their goal is to become fully productive members of society who can make life better for their children and their children's descendants. They are as concerned about having a well-paying job that can support their children as about having a planet and community that their children's descendants actually want to live on and work in.

Conserving Scarce Resources. In the context of this tutorial, the term "conserving scarce resources" has two meanings. In the Performance Analysis 1 reference, it primarily means presenting proof, in the form of traditional economic and technical measurements, such as Net Returns, of economically sound performance and productivity. In the Performance Analysis 2 reference, it primarily means presenting public proof of a firm or public entity's claims about their impacts on the public services generated by public goods. Firms use the Social Performance Measures in this reference to gather and present public evidence supporting their claims of social soundness. Social Performance Measures deal with public capital stock measurements that prove socially sound performance and productivity.

Social Performance Measures. Indicators, or Disclosures, used to quantify a firm or public entity's impacts on the public services generated by public goods. Firms and public entities use Social Performance Measures that follow a Resource Conservation Value Accounting Framework as background evidence, or materiality, to support financial reporting claims about conserving scarce resources. Their goal is to actively contribute to balanced lives and a balanced planet.
Investors, customers, and informed citizens use Social Performance Measures to take concrete action to support good actors and to chasten bad actors.

Resource Conservation Value Accounting (RCA). An accounting framework that uses Social Performance Measures to report on a firm or public entity's impacts on public goods. The measures supply background evidence to support corporate and public entity financial reporting claims about the conservation of scarce resources. They help companies and public entities to actively contribute to balanced lives and a balanced planet.

Conservation Technology Assessment (CTA). CTA is the analysis of resource stock flows and balances, and of the conservation technologies that are designed to prevent or correct imbalances in the stocks. The CTA definition uses the term Conservation in a general microeconomic sense –firms, households, and governments can improve lives and livelihoods by allocating scarce resources well. The term Technology is also used in a general way –management practices, firm performance, projects, and policies are also assessed using CTAs.

CTA-Prevention (CTAP). CTAP is the numeric assessment of the costs and benefits of a portfolio of mitigation and adaptation interventions that prevent or correct resource stock damages. Assessments use relevant Conservation Technology Assessment (CTA) algorithms to quantify the risk and uncertainty associated with resource stock measurement and valuation.

Materiality. The following definition derives from the FASB: "Information is material if omitting it or misstating it could influence decisions that users make on the basis of the financial information of a specific reporting entity".

Indicator. Eurostat (2014) definition: "An indicator is defined as a statistical variable presented in the form of a time series and chosen to monitor the evolution of a specific aspect of the issue dealt with (e.g. the aspects of health, economic efficiency or natural resources for the issue of sustainable development) towards a desired direction or a target value. This understanding of the term 'indicator' corresponds to what is also called 'performance' or 'normative' indicators."

Indicator Assessment. Eurostat (2014) definition: "Indicator-based assessment refers to the positive, negative, neutral (or any intermediate class between 'positive' and 'negative') qualification of an indicator based on the comparison between its observed evolution (and/or status), and the desired evolution set for the indicator by means of a frame of reference."

Indicator Thresholds. Eurostat (2014) definition: "Thresholds are needed to delimit the positive, negative and neutral qualification classes attributed to indicators by the indicator-based assessment process (comparison of the observed and desired evolutions). The definition of the thresholds depends on the type of assessment method chosen, in particular on how the desired and observed evolution are compared."

Transparent Proof. Scientific evidence that is completed online, stored uniformly online, and easily accessed online by people and machines. This proof is maintained in online knowledge banks and passed down to future generations.

Technology Development, Diffusion, and Adoption. Technological advances that improve worker and organization productivity result in better economic performance. These advances are the principal way that economic development takes place (9*).
In the context of corporate financial risks from climate change, the TFCRFD (2016) uses the following explanation to help define this term: "Technological improvements or innovations that support the transition to a low-carbon, energy-efficient economic system can have a significant impact on organizations. For example, the development and use of emerging technologies such as renewable energy, battery storage, energy efficiency, and carbon capture and storage will affect the competitiveness of certain organizations, their production and distribution costs, and ultimately the demand for their products and services from end users. To the extent that new technology displaces old systems and disrupts some parts of the existing economic system, winners and losers will emerge from this 'creative destruction' process. The timing of technology development [, diffusion, and adoption], however, is a key uncertainty in assessing technology risk." That organization uses the following timeline, or technology diffusion and adoption path, to illustrate goals for corporate disclosure of the financial risks associated with climate change. A similar path is appropriate for the adoption and diffusion of the types of applied Social Performance tools introduced in this reference.

Public Capital Stock Resiliency. Measures the ability of a community to:

1. Identify, support, and learn from, the positive impacts to public capital stock services caused by sound private sector firm behavior and/or good public sector management.

2. Reduce exposure and sensitivity to, and recover and learn from, the negative impacts to public capital stock services caused by unsound private sector firm behavior and/or bad public sector management.

Please use Wikipedia's definitions of the following terms associated with public capital stocks. Feel free to contribute to those definitions. The Technology Assessment tutorials have examples and datasets that further explain these concepts. Institutional Capital is addressed separately because Wikipedia does not cover the topic.

Natural Resources Capital, Human Capital, Physical Capital, Social Capital, Cultural Capital, Economic Capital.

Institutional Capital. In the context of Performance Analysis, the following image provides the intended context for this term. The author built this image about a decade ago to summarize Douglass North's theory of "adaptive efficiency", as defined next, and to understand economic performance and development better. That Nobel laureate's publications, such as "Transaction costs, institutions, and economic performance" and "Understanding the Process of Economic Change", cover this topic in greater depth. Summaries can be found on Amazon. Examples of formal institutions include laws and regulations. Examples of informal institutions include social norms and shared beliefs (refer to the Levitsky and Ziblatt reference (NYT, 2016) for a concrete example of the importance of informal institutions). In the context of the applied RCA Framework introduced in Appendix A, the IPCC WG2 reference provides several good contextual overviews of the fundamental role of Institutions in protecting and improving resource stocks and their services (i.e. Sections 2.2.2, Institutional Context, and 20.4.2, Assuring Effective Institutions in Developing, Implementing, and Sustaining Resilient Strategies).
Appendix 1 in the IPBES (2016b) reference also gives a formal definition of the term and demonstrates the central role played by this capital stock in conducting thorough capital stock assessments. A key contribution this reference makes to this definition is to put cloud-based IT companies at the forefront of institutional capital improvement.

Adaptive Efficiency. A theory of societal change, coined by Douglass C. North, that attributes successful economic change to human effectiveness in adapting the formal rules and informal norms (i.e. institutions) that constrain behavior towards productive outcomes. Adaptive efficiency describes society's ability to create institutions that have the responsiveness to change course (i.e. beliefs and created institutions) when outcomes deviate from intentions. History is full of examples where good human intention led to unexpected, and disastrous, outcomes. Good institutions reduce the gap between intention and outcome. They do so by increasing the stock of knowledge that helps people to predict outcomes and make willful decisions to use this knowledge to improve their lot. Adaptive efficiency explains the manner by which humans develop rules, beliefs, behaviors, and customs that support productive outcomes. Good institutions come about by adapting to solve new problems, learning from past mistakes, filtering out bad information, and reducing the uncertainty of outcomes. They tend to evolve slowly, learning step-by-step, making incremental progress, adapting to changing circumstances, gaining credibility, and eventually improving society.

Adaptive Management. PR&G (2013) definition: "Adaptive Management is a deliberate, iterative, and scientific based process of designing, implementing, monitoring, and adjusting an action, measure, or project to address changing circumstances and outcomes, reduce uncertainty, and maximize one or more goals over time."

DevTreks –social budgeting that improves lives and livelihoods